Here’s an interesting thought to chew on: as efforts increase to minimize visual and manual dashboard “stimuli” that can turn into all sorts of distractions for vehicle operators, could that help spur demand for more “interactive voice commands” designed to perform tasks secondary to driving?
That’s not as easy as it sounds, noted Thomas Schalk (at right), vice president of voice technology for connected-vehicle technology provider Agero, who recently argued that even a “voice command” structure can create distractions for drivers.
“While a growing body of evidence from research is pointing to the importance of interactive speech systems in vehicles to keep drivers’ eyes on the road and hands on the wheel, research also reveals the need to avoid voice menus and minimize the amount of speech interaction for drivers,” he explained. “Both actions tend to extend the duration of non-driving tasks, thereby increasing the risk of driver distraction.”
Schalk believes that finding the right combination of interdependent interfaces is where the cutting edge of in-vehicle, human-machine interface (HMI) research is leading.
[Below is an example of how Ford Motor Co. is integrating voice commands with its “My Touch” onboard control system. This is from a few years ago but gives you an idea of how voice command systems are designed to work.]
"Reducing distraction will require matching the right blend of natural interfaces that can successfully and quickly perform specific, independent actions—such as task selection, list management, entering text strings, understanding warnings, interrupting or pausing a task, resuming a task, and completing a task,” he explained. “That’s what is required to perform a growing assortment of in-vehicle, non-driving tasks.”
From where he sits, Schalk noted, a common problem with in-vehicle speech-only interfaces is that a driver often doesn't know what to say in response to the talk button's "please say a command" voice prompt, which confuses the speech system as it listens for a response.
“Unexpected sounds within the vehicle during this listening mode also can confound the system,” he added. “And both issues can trigger the system to produce seemingly inaccurate results, generating driver frustration, which in turn can result in driver distraction or early abandonment of using the system.”
One approach Schalk said Agero is testing is to integrate the vehicle's talk button – commonly found on the steering wheel – with the vehicle's touch screen, providing the driver with what’s called a “Tap-or-Say” prompt.
"With the ‘Tap-or-Say’ command, the user instinctively glances and taps from a list of results displayed on a touch screen without the need to contemplate a spoken response,” he explained. “No extra prompting and no extra dialog steps are required, dramatically reducing the task completion time and the risk of distraction."
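For readers who like to see the logic spelled out, here’s a rough sketch of how a “Tap-or-Say” prompt might resolve a single driver input – either a tap on a displayed list row or a spoken phrase matching one. This is purely illustrative; the function and matching rule are my own assumptions, not Agero’s actual implementation.

```python
# Illustrative "Tap-or-Say" resolver: after the talk button is pressed,
# the system shows a short result list and accepts EITHER a tap on a row
# OR a spoken phrase matching a row -- one step, no extra voice menu.
# All names here are hypothetical, not Agero's actual API.

def resolve_tap_or_say(results, tap_index=None, spoken=None):
    """Return the selected result from a tap or an utterance, else None."""
    if tap_index is not None and 0 <= tap_index < len(results):
        return results[tap_index]          # driver glanced and tapped
    if spoken:
        spoken = spoken.strip().lower()
        for item in results:
            if spoken in item.lower():     # loose match against a row
                return item                # driver spoke the row instead
    return None                            # neither input resolved

destinations = ["Main Street Diner", "Maple Avenue Garage", "Market Plaza"]
assert resolve_tap_or_say(destinations, tap_index=1) == "Maple Avenue Garage"
assert resolve_tap_or_say(destinations, spoken="Market Plaza") == "Market Plaza"
```

Either input path lands on the same result in a single step, which is the point Schalk makes about cutting dialog turns and task time.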
And time constraints are central to the case for such speech interfaces, Schalk stressed, largely due to guidelines crafted by the National Highway Traffic Safety Administration (NHTSA) that seek to ensure in-vehicle navigation, infotainment and other communications systems do not divert drivers' attention away from the roadway for more than two seconds at a time – or 12 seconds in total.
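To make those NHTSA numbers concrete, here’s a small illustrative check – the function name and structure are my own, not anything NHTSA publishes – that flags a task whose off-road glances break either the two-second single-glance limit or the 12-second cumulative limit:

```python
# Illustrative check of the NHTSA glance-time guideline cited above:
# no single off-road glance over 2.0 s, and no more than 12.0 s of
# cumulative off-road glance time per task. Names are hypothetical.

PER_GLANCE_LIMIT_S = 2.0
CUMULATIVE_LIMIT_S = 12.0

def meets_nhtsa_guideline(glances):
    """glances: list of off-road glance durations (seconds) for one task."""
    return (all(g <= PER_GLANCE_LIMIT_S for g in glances)
            and sum(glances) <= CUMULATIVE_LIMIT_S)

assert meets_nhtsa_guideline([1.2, 0.8, 1.5])   # short glances: OK
assert not meets_nhtsa_guideline([2.4, 1.0])    # one glance too long
assert not meets_nhtsa_guideline([1.9] * 7)     # 13.3 s total: over the cap
```

The third case shows why both limits matter: every glance is individually legal, but the task as a whole still runs over.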
The big question is whether such voice command technology can work reliably and consistently in the rough-and-tumble world of trucking. Only further testing will tell, methinks.