Multimodal Car Dashboard

Reimagining the car dashboard



Project Details

This is the final project for the course INST711: Interaction Design Studio. We were a team of three, and everyone contributed equally to every aspect of the project. Our brief was to complete the project in two distinct phases, each covering one of two related modules within the chosen product.


Team

Ashrith Shetty | Aravind JR | Sharan Hegde


Problem

Cars are getting better and better: more spacious, more powerful, faster, and now intelligent too. But car interfaces, or infotainment systems, have largely remained unchanged and clunky. With cars getting more advanced every day, and with the advent of smaller and more powerful computing devices, we feel car interfaces haven't kept up in terms of innovation.

Many car manufacturers have adopted the strategy of simply porting a phone- or tablet-style interface to the dashboard, but the two cannot be treated the same. The key difference between car interfaces and other handheld interfaces is that in the former, the main task is driving the car, not operating the interface. The infotainment system has to be designed so that it doesn't distract drivers from their main task.


Product Design Process


Initial Research

According to the annual Traffic Safety Culture Index, a survey conducted by the AAA Foundation for Traffic Safety, 88% of drivers believe distracted driving is on the rise, topping other risky behaviors like aggressive driving (68%), driving under the influence of drugs (55%), and driving under the influence of alcohol (45%).

Additionally, AAA released a study stating that drivers who look away from the road for just two seconds double their chances of being in an accident.

The study monitored 120 drivers, aged 21 to 36, as they drove 30 different models along a 2-mile stretch of road at 25 mph. It aimed to answer two key questions:

  • Which task is the most demanding to complete while driving: calling/dialing, sending a text message, tuning the radio, or programming navigation?
  • What level of cognitive demand is associated with completing these tasks using voice commands, touch screens, or technologies such as buttons, rotary dial, or writing pad?


Contextual Inquiry

We interviewed a total of 6 users (2 per teammate) using the contextual inquiry methodology, which follows a master-apprentice model. We sat in the users' cars and engaged them in conversation about their experiences and interactions with car dashboards and, when required, asked probing questions to draw out more meaningful insights. We also observed and noted each user's actions. Following the interviews, the team met to build a shared understanding and interpret the raw data into meaningful insights.


Competitive Analysis


Feature Prioritization

Due to time constraints and the large scope of our chosen project, we decided to sort and prioritize features so that we could focus on designing the crucial ones. We made an exhaustive list of all the features and used card sorting to categorize them. We then used a prioritization grid to weed out the less important or least frequently used features and focus on the absolute must-haves.
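A prioritization grid like the one we used can be sketched as a score over two axes, importance and frequency of use. The feature names, scores, and cutoff below are hypothetical illustrations, not the project's actual data:

```python
# Minimal sketch of a prioritization grid: each feature is scored on
# importance and frequency of use, then bucketed into quadrants.
# All feature names and scores here are hypothetical examples.

FEATURES = {
    # name: (importance 1-5, frequency 1-5)
    "navigation": (5, 5),
    "music": (4, 5),
    "phone calls": (5, 4),
    "ambient lighting": (2, 1),
    "seat massage": (1, 1),
}

def quadrant(importance, frequency, cutoff=3):
    """Bucket a feature by whether it clears the cutoff on each axis."""
    if importance >= cutoff and frequency >= cutoff:
        return "must-have"
    if importance >= cutoff:
        return "important but rare"
    if frequency >= cutoff:
        return "frequent but minor"
    return "deprioritize"

must_haves = sorted(
    name for name, (imp, freq) in FEATURES.items()
    if quadrant(imp, freq) == "must-have"
)
print(must_haves)  # ['music', 'navigation', 'phone calls']
```

Only the features landing in the must-have quadrant move forward to design; everything else is parked for later.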


Scenarios and Tasks

Using our list of features, and all the data and insights we gathered, we started creating scenarios and tasks to get an all-round understanding of every facet of a feature and the problem it is trying to solve.

  1. Every time Arya gets into the car, she wants to know if the car is good to go.

    What she needs:

    • Some sort of diagnostic screen that gives her the car status on demand.
    • A summarized view of crucial car parameters, such as air pressure, oil level, and fuel level.
    • When the car is started, the interface opens with the diagnostic view.
    • Highlight any detected faults and possibly state remedial actions.

  2. Kunal is driving along the highway with moderate traffic on a rainy day. He wants to call his home to check if his family needs supplies before the storm intensifies.

    What he needs:

    • A quick way to call home with minimal distraction and without taking his eyes off the road.
    • Tactile controls so he can avoid looking at the screen (with voice feedback).
    • Recent/favorite contacts that are easily accessible.
    • A voice interface so he can call without taking his hands off the wheel.

  3. Rick loves to listen to music while driving. He usually listens to the radio station or Spotify. He sometimes wants a seamless way to switch between the two and find songs he likes. He would also like to find radio stations easily. Sometimes he wants to find out what song is playing.

    What he needs:

    • Easy-to-use music interface
    • Tactile controls so he can avoid looking at the screen (with voice feedback).
    • Multi-channel support for easy switching between music sources like Spotify and radio.
    • What’s playing on the driver’s dashboard. (Nice to have)
    • Voice assistant to play, pause, and change songs (and possibly an option to specify a song by name).

  4. Cersei is travelling around a place she is not familiar with and is looking for an exit so she can grab a bite.

    What she needs:

    • Quick and effortless navigation.
    • Find nearby restaurants.
    • Exit indicators.
    • Add stops to existing route.
    • Voice assistant based navigation

Goals

Based on the research phase, we came up with the following goals:

  • Design a system that minimizes distraction while still serving user needs.
  • Implement a multimodal system to minimize the pitfalls of each mode of interaction.
  • Make every part of the system accessible via all three modes: touch, tactile controls, and a voice interface.

Brainstorming

Once we had all our scenarios and insights, it was time to brainstorm solutions. We created many sketches and paper prototypes and went through multiple rounds of iteration for every feature.


Lo-Fi prototyping

Once we received feedback on our paper prototypes, we started creating low- to medium-fidelity screens so that we could begin usability testing our designs for additional feedback and iteration. Here are a few of the many screens we created:


Visual Design

After more rounds of usability testing and iteration, we started work on the visual design by first defining a design language for our product. We created a mood-board that would help us shape our vision, and prepared a style guide.



Tactile Control

This set of buttons and a dial was 3D-modeled in Fusion 360 and 3D-printed as a prototype for usability testing. It provides an auxiliary way of controlling the interface, based on tactile controls and voice feedback, without the need to look at the screen. The tactile knob also functions like a joystick to move the selection box around and can be pressed down for selection.
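The knob-as-joystick behavior can be sketched as a small state machine: nudging the knob moves a selection box across a grid of on-screen tiles, and pressing it activates the highlighted tile. The 2x3 tile layout and tile names below are illustrative assumptions, not the actual UI:

```python
# Sketch of the tactile knob's joystick behavior: nudging moves a
# selection box across a grid of tiles, pressing selects the tile.
# The 2x3 tile layout is an illustrative assumption, not the real UI.

GRID = [
    ["music", "phone", "navigation"],
    ["climate", "diagnostics", "settings"],
]

class KnobSelector:
    def __init__(self, grid):
        self.grid = grid
        self.row, self.col = 0, 0  # selection box starts at top-left

    def nudge(self, direction):
        """Move the selection box one tile; clamp at the grid edges."""
        dr, dc = {"up": (-1, 0), "down": (1, 0),
                  "left": (0, -1), "right": (0, 1)}[direction]
        self.row = min(max(self.row + dr, 0), len(self.grid) - 1)
        self.col = min(max(self.col + dc, 0), len(self.grid[0]) - 1)

    def press(self):
        """Pressing the knob down activates the highlighted tile."""
        return self.grid[self.row][self.col]

knob = KnobSelector(GRID)
knob.nudge("right")
knob.nudge("down")
print(knob.press())  # diagnostics
```

Clamping at the edges, rather than wrapping around, keeps the mapping between knob movement and screen position predictable, which matters when the driver is operating it by feel.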


Voice Interface Design

We felt that voice is the best way to feel truly connected to the car. Here are a few of the scenarios where voice could be of use. We used Adobe XD for the voice prototyping, trying and testing multiple commands and responses to make it a conversational experience.

  • Here is a scenario which shows how a user can place a call without using the touch interface.


  • We can ask the assistant to change the AC settings in a conversational manner.


  • This scenario shows how the assistant can help you find places around you and start navigation.


  • The "Surprise Me" feature shows how the assistant learns your behavioral patterns to anticipate what you might want.
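The command-and-response pairs behind these scenarios can be sketched as a tiny keyword-based intent matcher. The phrases and intent names below are illustrative; the real prototype was built with Adobe XD's voice features, not code:

```python
# Tiny sketch of matching voice commands to intents, in the spirit of
# the scenarios above. Keyword patterns and intent names are
# illustrative; the actual prototype was built in Adobe XD.

INTENT_KEYWORDS = {
    "call": "place_call",
    "ac": "adjust_climate",
    "temperature": "adjust_climate",
    "navigate": "start_navigation",
    "find": "search_nearby",
    "surprise": "surprise_me",
}

def match_intent(utterance):
    """Return the first intent whose keyword appears in the utterance."""
    for word in utterance.lower().split():
        if word in INTENT_KEYWORDS:
            return INTENT_KEYWORDS[word]
    return "unknown"

print(match_intent("Call home"))               # place_call
print(match_intent("Set the AC to 21"))        # adjust_climate
print(match_intent("Find restaurants nearby"))  # search_nearby
```

A production assistant would use proper natural-language understanding rather than keyword spotting, but even this crude mapping shows why a fallback "unknown" path is needed to keep the conversation going.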


Feature Highlights

Due to the sheer scale of the project, we picked a few interesting features that we wanted to highlight:

Add Stop


While navigation is underway, if the user searches for another location, we don't immediately replace the current navigation with the new location like most navigation apps do. Instead, we offer a choice: add the new location as a "pit-stop" on the existing route, or replace the route and start navigating to the new destination.
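The branching above, pit-stop versus replace, can be sketched over a simple route model. Representing a route as an ordered list of waypoints is a hypothetical simplification for illustration:

```python
# Sketch of the "Add Stop" choice: when a new destination is searched
# mid-navigation, the user picks between inserting it as a pit-stop on
# the current route or replacing the route entirely.
# The list-of-waypoints route model is a hypothetical simplification.

def update_route(route, new_place, choice):
    """route: ordered waypoints; the last entry is the destination."""
    if choice == "add_stop":
        # insert the new place as a pit-stop before the final destination
        return route[:-1] + [new_place, route[-1]]
    if choice == "replace":
        # discard the old route and head to the new place
        return [route[0], new_place]
    raise ValueError(f"unknown choice: {choice}")

route = ["current position", "home"]
print(update_route(route, "gas station", "add_stop"))
# ['current position', 'gas station', 'home']
print(update_route(route, "airport", "replace"))
# ['current position', 'airport']
```

Defaulting to a choice, instead of silently replacing the route, protects the user from losing an in-progress navigation with one mistaken tap.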


Around Me


This feature lets the user find useful amenities such as restaurants, hospitals, and ATMs in a convenient manner. We also make it easy to switch between searching the nearby area and searching in a specific location.


Home screen diagnostics


Get notified about any faults in the car directly on the home interface. We show malfunctions at two levels of severity: a yellow signifier for moderate faults and a red signifier for critical ones. These signifiers are placed at the corresponding location on the image of the car, as shown. Upon interaction, we reveal more details about the malfunction and possible remedial actions.
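The two-level severity scheme amounts to a mapping from detected faults to colored signifiers anchored on the car image. The fault names and positions below are illustrative, not real sensor data:

```python
# Sketch of home-screen diagnostics: each detected fault carries a
# severity that maps to a colored signifier, anchored to the fault's
# location on the car image. Fault data here is illustrative.

SEVERITY_COLOR = {"moderate": "yellow", "critical": "red"}

def signifiers(faults):
    """faults: list of (name, severity, (x, y) position on the car image)."""
    return [
        {"fault": name,
         "color": SEVERITY_COLOR[severity],
         "position": pos}
        for name, severity, pos in faults
    ]

detected = [
    ("low tire pressure", "moderate", (0.8, 0.7)),
    ("engine overheating", "critical", (0.5, 0.2)),
]
for s in signifiers(detected):
    print(s["color"], "-", s["fault"])
# yellow - low tire pressure
# red - engine overheating
```

Keeping the severity-to-color mapping in one table makes it easy to extend later, for example with a third level, without touching the rendering logic.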


Music mini-player and Quick-Nav on Home


The default home screen shows a minimized version of the music player and the next navigation step so that the user doesn't have to scramble across screens to reach the navigation and music screens every time.


Driver Profiles


Based on our interviews, we found that users who share a household car find it inconvenient to readjust seats, mirrors, and other settings when switching drivers. With Driver Profiles, each user can save their preferences, and by simply switching profiles, the car settings are customized to their saved preference.
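Driver Profiles amount to a keyed store of per-driver settings that the car applies on switch. A minimal sketch, where the specific setting names and values are illustrative assumptions:

```python
# Sketch of Driver Profiles: each driver's seat, mirror, and climate
# preferences are saved under their name; switching profiles applies
# the saved settings in one step. Setting names are illustrative.

class DriverProfiles:
    def __init__(self):
        self.profiles = {}
        self.active = None

    def save(self, name, settings):
        """Store a driver's preferences (e.g. seat and mirror positions)."""
        self.profiles[name] = dict(settings)

    def switch(self, name):
        """Activate a profile and return the settings the car should apply."""
        self.active = name
        return self.profiles[name]

car = DriverProfiles()
car.save("Ashrith", {"seat_height": 4, "mirror_tilt": -2, "ac_temp": 21})
car.save("Aravind", {"seat_height": 7, "mirror_tilt": 1, "ac_temp": 24})
print(car.switch("Aravind"))  # the car applies Aravind's saved settings
```

The point of the abstraction is that switching is a single action: the profile bundles every adjustment, so no individual setting has to be redone by hand.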


Challenges

  • Because we picked a project with a large scope, time was a major constraint, and we could only focus on a subset of features.
  • We could not get access to many types of cars, so we couldn't do an exhaustive competitive analysis.
  • We could not test our prototypes in a real-world driving scenario, for the obvious reasons of risk and cost.
  • Voice prototypes were restricted to the capabilities of Adobe XD's voice feature at the time of the project.

Future Work

  • Work on the remaining features that were sidelined.
  • Implement a stand-alone, natural and responsive voice interface.
  • Find ways to test usage in an actual car (stationary).

THE END