Helping drivers use smart cars smarter

This conversational in-vehicle digital assistant can respond to drivers’ questions and commands in natural language

Adasa Interface

Cars are getting smarter all the time, and most new vehicles come fully equipped with Advanced Driver Assistance Systems (ADAS) like lane keeping and adaptive cruise control. But most drivers don’t put these features to use – they’re either uncomfortable with the new technology or unsure how it’s even supposed to help them.

To bridge the gap between consumer and tech, Profs. Jason Mars and Lingjia Tang, CSE students Shih-Chieh Lin, Chang-Hong Hsu, and Yunqi Zhang, and Ford Motor Company have developed Adasa. This conversational in-vehicle digital assistant can respond to drivers’ questions and commands in natural language, helping them get to know the tools their cars have to offer. The paper, titled “Adasa: A Conversational In-Vehicle Digital Assistant for Advanced Driver Assistance Features,” earned an Honorable Mention Award in the Best Paper competition at this year’s ACM User Interface Software and Technology Symposium (UIST’18).

ADAS features are designed to either keep drivers alert or assist them in controlling their vehicle, helping to reduce the effects of human error while driving. A lane keeping system, for example, vibrates the steering wheel to alert drivers when their vehicles drift out of the lane, while adaptive cruise control adjusts a vehicle’s speed to maintain a certain distance from other vehicles.
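As a rough illustration of the adaptive cruise control behavior described above (not Ford’s actual controller, and with made-up numbers), the core idea is simply to back off from the driver’s set speed whenever the measured gap to the vehicle ahead falls below the desired gap:

```python
# Minimal sketch of the adaptive cruise control idea: slow down when the gap
# to the lead vehicle shrinks, return to the set speed when the gap is safe.
# The proportional rule and all values here are illustrative assumptions.

def acc_target_speed(set_speed_mps: float, gap_m: float, desired_gap_m: float) -> float:
    """Return a target speed given the driver's set speed and the measured gap."""
    if gap_m < desired_gap_m:
        # Reduce speed in proportion to how far inside the desired gap we are.
        return set_speed_mps * max(gap_m / desired_gap_m, 0.0)
    return set_speed_mps

# Example: set speed 30 m/s, desired gap 40 m, measured gap 20 m -> slow to 15 m/s.
print(acc_target_speed(30.0, 20.0, 40.0))
```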

These features, which now come prepackaged in the fleets of nearly all major automakers, are still unfamiliar territory for most drivers. Tools that affect the way a car behaves while on the road can be especially daunting for drivers to try out – as many videos of early self-driving tests demonstrate.

But studies have shown that these ADAS features have the potential to have a positive impact on the driving experience, with widespread use leading to an estimated 28% reduction in all vehicle crashes. Manufacturers are doing their part to get this technology on the road – research has shown the ADAS market will likely double in size in the next three years, reaching $35 billion in annual revenue.

Unfortunately, an estimated 73% of drivers with ADAS-enabled vehicles have not even attempted to use these features. Beyond the uncertainty that comes with a new driving experience, many consumers have difficulty even interfacing with the new tech. According to the U-M researchers, the current interfaces being deployed leave drivers with “gulfs of evaluation” (inability to tell if the ADAS systems are turned on and what their settings are) and “gulfs of execution” (uncertainty about how to activate and use ADAS).

Building an effective interface for ADAS is challenging for several reasons, the researchers say. First, it is unclear what information drivers need to effectively use ADAS features. Second, the interface cannot require any complex visual-manual interactions, because asking the driver to perform these while driving is unsafe. Making matters more difficult, these systems are complex, and their behavior depends on a combination of system states and vehicle contexts that somehow have to be made clear to the driver. Adaptive cruise control can appear to be deactivated because the feature is malfunctioning, because the vehicle is in a state where the feature cannot engage, or because the feature is simply turned off – and the driver has to be able to distinguish between these three states. Finally, drivers are often unclear about what they still need to do themselves while ADAS features are active.
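To make the three-way ambiguity concrete, here is a hypothetical sketch (not taken from the paper) of the adaptive cruise control states an interface would need to tell apart and explain to the driver; the state names and wording are assumptions for illustration:

```python
# Hypothetical model of the ACC states a driver must be able to distinguish:
# switched off, unable to engage in the current context, or malfunctioning.

from enum import Enum, auto

class AccState(Enum):
    OFF = auto()          # driver has not turned the feature on
    UNAVAILABLE = auto()  # turned on, but the current vehicle context prevents use
    FAULT = auto()        # feature is malfunctioning (e.g., a blocked sensor)
    ACTIVE = auto()       # engaged and controlling speed

def explain(state: AccState) -> str:
    """Map an internal state to a plain-language explanation for the driver."""
    return {
        AccState.OFF: "Adaptive cruise control is turned off. Press the ACC button to enable it.",
        AccState.UNAVAILABLE: "Adaptive cruise control is on but cannot engage right now.",
        AccState.FAULT: "Adaptive cruise control is unavailable due to a system fault.",
        AccState.ACTIVE: "Adaptive cruise control is active and maintaining your gap.",
    }[state]

print(explain(AccState.UNAVAILABLE))
```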

Adasa is the first speech-based approach to solving these problems. Its features are based on the researchers’ analysis of over 9,000 conversations between drivers and Ford’s customer service division, supplemented by additional training data generated by crowd workers. Adasa was built on Lucida, a conversational machine learning platform, which allows drivers to interact with the assistant in unconstrained natural language in real time. Drivers can simply ask questions or issue commands after enabling Adasa by pressing a single button on the steering wheel.
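The paper does not publish its code, but a minimal sketch of the general approach – mapping a driver’s transcribed utterance to an intent before generating a response – might look like the following. The intents, training phrases, and choice of scikit-learn are illustrative assumptions, not the actual Lucida/Adasa pipeline:

```python
# A minimal sketch, not the actual Adasa system: classify a driver's utterance
# into an intent using a simple bag-of-words model trained on example phrases.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    ("why is my steering wheel vibrating", "explain_lane_keeping_alert"),
    ("what does this icon on the dashboard mean", "explain_dashboard_symbol"),
    ("is adaptive cruise control on", "query_acc_state"),
    ("turn on adaptive cruise control", "set_acc_on"),
    ("increase my following distance", "set_acc_gap"),
]

texts, intents = zip(*training_phrases)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

# After the driver presses the steering-wheel button, the transcribed speech
# would be routed through a classifier like this one.
print(model.predict(["why does the wheel keep shaking"])[0])
```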

Drivers can ask Adasa the meaning of a symbol on the dashboard, ask system diagnostic questions such as why their steering wheel might be vibrating, and command Adasa to control different ADAS features. The system was integrated into a commercial vehicle and tested with 15 drivers in a real-world driving environment. The study showed that Adasa correctly identified, understood, and responded to over 77% of participants’ unconstrained natural language commands about ADAS, which included questions about the current ADAS state and commands to precisely control individual features.
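Once an intent is recognized, the three kinds of requests described above (symbol explanations, diagnostic questions, and control commands) would be routed to different handlers. The sketch below is a hypothetical illustration of that routing step; the intent names, responses, and Vehicle type are assumptions, not part of the published system:

```python
# Hypothetical routing of recognized intents to explanations or vehicle commands.

from dataclasses import dataclass

@dataclass
class Vehicle:
    acc_on: bool = False  # toy stand-in for real vehicle state

def handle(intent: str, vehicle: Vehicle) -> str:
    if intent == "explain_dashboard_symbol":
        return "That icon means lane keeping assist is active."
    if intent == "explain_lane_keeping_alert":
        return "The wheel vibrates when the car drifts out of its lane."
    if intent == "query_acc_state":
        return "Adaptive cruise control is on." if vehicle.acc_on else "Adaptive cruise control is off."
    if intent == "set_acc_on":
        vehicle.acc_on = True  # a command changes vehicle state and is confirmed aloud
        return "Adaptive cruise control is now on."
    return "Sorry, I didn't catch that."

car = Vehicle()
print(handle("set_acc_on", car))
print(handle("query_acc_state", car))
```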

The system also earned a rating of 8.9 out of 10 in user feedback, indicating improved user understanding and a better overall driving experience. Proposed extensions to the work include building on Adasa’s capabilities to include other ADAS features like forward collision warning (FCW) and automatic parallel parking.

“We believe that in the future more ADAS features will continue to be introduced to improve the driving experience and drivers will be able to utilize an Adasa-like system to interact with the vehicle,” the researchers conclude. “We hope Adasa will encourage discussion and excite more in-vehicle interface design within the HCI community.”