In part one of this series, I introduced several technologies that will shape the way we use and interact with our vehicles in 2030 and beyond. But what does mobility look like once these technologies are put into practice? Let us follow Dave, Mei, and Robert on their individual trips through a future Shanghai.
But first, let’s take a glimpse at how transportation might be organized in the cities of the future. Mobility hubs will be commonplace in larger cities: here, hyperloops, drones, shared cars, cycle highways, public transit, and more converge, enabling swift changes between transport modes. These hubs will let users choose the mode of transport best suited to their individual commute. As a result, new mobility concepts arise, such as:
- Urban Shuttles – shared modes of individual transport – which will become the ultimate tool for exploring the environment
- Car-a-Homes, room-car hybrids that will mark the exclusive, privacy-oriented end of 2030’s mobility spectrum
- Executive Saloons that will enable people to fulfill a variety of tasks in the car – from productivity to leisure
Urban Shuttles – shared yet individual transport
Let us have a look at Dave, who has booked an Xbnb trip to Shanghai, with full accommodation and a mobility subscription. Once he arrives in town, he checks the mobility options at the multimodal mobility hub and chooses an urban shuttle to start exploring the city. Ambient display technologies combined with augmentation could be the key factors that make the urban shuttle the best choice. This trend, already visible in the entertainment industry today, has huge potential to enhance the in-car experience. The Internet of Things will create a secondary space, a virtual layer of information, especially in cities. To get the most out of the urban experience, people will want access to this space, with the necessary mobility services directly connected. A mixture of interactive materials, transparent displays, and multisensory input via gaze and voice will make that possible. When Dave looks at things and places outside the car, the gaze detection feature recognizes the movement of his eyes and dynamically projects information about his surroundings into his field of view. He can also interact with these places and take action, like booking a table at a restaurant.
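As a rough illustration, the gaze-to-information step described above could be reduced to matching the rider’s gaze direction against nearby points of interest. The sketch below is a hypothetical simplification: the names (`PointOfInterest`, `poi_in_gaze`), the bearing representation, and the angular tolerance are all assumptions for illustration, not a description of any real in-car system.

```python
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    name: str
    bearing_deg: float  # direction from the vehicle, 0 = straight ahead
    info: str           # the card the shuttle would overlay

def poi_in_gaze(gaze_bearing_deg, pois, tolerance_deg=10.0):
    """Return the point of interest closest to the rider's gaze direction,
    or None if nothing lies within the angular tolerance."""
    def angular_diff(a, b):
        # smallest absolute difference between two bearings, in degrees
        return abs((a - b + 180) % 360 - 180)
    candidates = [p for p in pois
                  if angular_diff(p.bearing_deg, gaze_bearing_deg) <= tolerance_deg]
    if not candidates:
        return None
    return min(candidates,
               key=lambda p: angular_diff(p.bearing_deg, gaze_bearing_deg))

pois = [
    PointOfInterest("Noodle House", 42.0, "Open until 22:00 – tables bookable"),
    PointOfInterest("City Museum", 95.0, "Exhibition on 2020s Shanghai"),
]
hit = poi_in_gaze(40.0, pois)  # matches the Noodle House
```

Once a match is found, the shuttle could project `hit.info` into the rider’s field of view and offer follow-up actions such as the restaurant booking mentioned above.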
Although Dave is inside his vehicle, he can experience the street life, as the interior materials emulate the ambient lighting and atmospheric sounds from outside. Because eye-movement analysis tells the car about Dave’s psychological state, the car assistant proposes leaving the location: Dave seems a bit tense. Dave decides to end the day’s journey and head to his booked room. The biometric profile of his voice will be the key that grants him access to this space. We expect voice biometrics to become as established as fingerprint sensors are today and to be the driving factor for seamless user experiences between physical and virtual services.
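Voice-biometric access control of this kind is typically built on comparing a stored voice print against a live sample. Here is a minimal sketch, assuming the voice prints already exist as fixed-length feature vectors (the feature extraction itself is a hard machine-learning problem and is omitted); the function names and the similarity threshold are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def grant_access(enrolled_print, live_print, threshold=0.95):
    """Unlock only when the live voice print closely matches the enrolled one."""
    return cosine_similarity(enrolled_print, live_print) >= threshold

enrolled = [0.2, 0.8, 0.1, 0.5]    # stored at enrollment
live_ok  = [0.21, 0.79, 0.12, 0.5]  # same speaker, slight variation
live_bad = [0.9, 0.1, 0.4, 0.0]     # different speaker
grant_access(enrolled, live_ok)   # True
grant_access(enrolled, live_bad)  # False
```

The threshold trades convenience against security: a real deployment would tune it against false-accept and false-reject rates rather than hard-code it.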
The Car-a-home – more than a mobile home
Dave’s booked room turns out to be a car-a-home module: a room on wheels parked in the city. Rented out by a Shanghai citizen, it gives Dave the most embedded urban experience possible. Features like directional sensory technology—supersonic air bursts that form tangible, virtual buttons in mid-air—and light-field projection, which creates holograms in mid-air, will enable an outstanding user experience. In addition, biometrics and adaptive materials that can sense the mechanical and chemical properties of their surroundings have already found their way into performance and health wearables today. In car-a-homes, the quantification of this data will create a subtle layer of interaction that goes almost unnoticed by the user. Combined with machine learning, it can produce accurate, personalized settings for individual customers.
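In its simplest form, the personalization described above could be a model that learns a rider’s preferred cabin settings per context from their past adjustments. The sketch below is a deliberately minimal assumption—a running average per context standing in for the far richer machine-learning models the article envisions; the class and method names are invented for illustration.

```python
from collections import defaultdict

class ComfortModel:
    """Learns a rider's preferred cabin temperature per context
    (e.g. time of day) as a running average of past adjustments."""
    def __init__(self):
        self.totals = defaultdict(lambda: [0.0, 0])  # context -> [sum, count]

    def record(self, context, chosen_temp_c):
        """Log a manual adjustment the rider made in this context."""
        entry = self.totals[context]
        entry[0] += chosen_temp_c
        entry[1] += 1

    def suggest(self, context, default_temp_c=21.0):
        """Propose a setting: the learned average, or the default if unseen."""
        total, count = self.totals[context]
        return total / count if count else default_temp_c

model = ComfortModel()
model.record("evening", 23.0)
model.record("evening", 23.5)
model.suggest("evening")  # 23.25 – learned preference
model.suggest("morning")  # 21.0 – no data yet, falls back to default
```

The "subtle layer of interaction" comes from the fact that `record` is fed by sensors rather than explicit input: the rider simply adjusts the cabin, and the module quietly adapts.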
The car-home synergy will also serve as an interactive space in commuting and working scenarios. The interior of the car-a-home is free of specifically purposed HMI; instead, multimodal interaction is realized through a combination of gaze, gestures, and voice. Mei, the owner who rents the module out, uses this mobile living space to work on her architecture projects while on the go, for example on her way to a business appointment.
The executive saloon – office on wheels for improved productivity
Executive saloons will be the main means of transportation for business people like Robert, a sales manager who uses his to drive home from a customer meeting. The vehicle enables Robert to stay in touch with his business partners and his family while commuting. The interior of his car lacks most of the HMI components we are used to today: the conventional dashboard is swapped for a sleek, shapeshifting surface, and the steering wheel becomes a joystick used for making selections rather than for driving. The rise of Level 4 autonomy and advanced driver-assistance systems will remove the need for elaborate interfaces for demanding driving dynamics; people will choose between directions and options instead of steering the vehicle. Robert can use the vehicle as a mobile office, for example for a conference call: overhead cameras capture him and transmit the footage to his colleagues, where he appears in the meeting as a hologram. Robert can essentially turn his commute into part of his work day. The shapeshifting dashboard in front of him can adapt its appearance to a variety of applications by transforming its haptic properties; it even lets Robert practice the piano with the help of projected guides. In addition, his daughter can join him on the windshield the same way his colleagues did during the conference call.
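"Choosing between directions instead of navigating" can be pictured as the joystick selecting among the routes the Level 4 system offers at each junction. The sketch below is purely illustrative—the function, the tilt normalization, and the option list are all assumptions, not a real vehicle API.

```python
def choose_route(joystick_x, options):
    """Map a left/centre/right joystick tilt (-1.0 .. 1.0) onto the
    routes the autonomous driving system offers at the next junction."""
    if not options:
        raise ValueError("no route options offered")
    if len(options) == 1:
        return options[0]  # nothing to choose; the system drives on
    # Normalise the tilt onto the list of offered routes.
    index = round((joystick_x + 1.0) / 2.0 * (len(options) - 1))
    return options[index]

options = ["turn left", "straight on", "turn right"]
choose_route(-1.0, options)  # "turn left"
choose_route(0.0, options)   # "straight on"
```

The point is the inversion of roles: the vehicle proposes safe, drivable options, and the human interface shrinks to a discrete choice among them.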
These examples show that transportation in the future will be more diverse and more needs-based than it is today. There will be individual solutions for different purposes, all of them focused on making us more comfortable and productive while on the go. Cars will become the most powerful devices people own. By giving access to this power through new interaction modes, we can enable users to spend meaningful time with their vehicles. In terms of interaction, voice, as a natural means of expression, will play a major role in the multimodal mix, while different mobility concepts, user preferences, and situational circumstances will shape individual modality choices.