CES 2024 Becomes the Event of AI in Cars
Everything from Nvidia GPUs to AR heads-up displays — all in the car
Each year, automakers take the stage at the Consumer Electronics Show in Las Vegas to showcase the latest technologies they have packed into their cars — whether it be electrification, new infotainment systems, or, in recent history, artificial intelligence. This year, however, a new trend emerged: ChatGPT-like voice assistants integrated into the historically atrocious software of famously technology-conservative legacy automakers. While the “Big Three” U.S.-based automakers — General Motors, Stellantis, and Ford — were notably absent from the annual wintertime trade show this week in the aftermath of last year’s United Auto Workers strike, European and Asian automakers seized the opportunity to show off new AI assistants in their vehicles — and American semiconductor companies like Intel and Nvidia announced new initiatives to bring AI on wheels.
The announcements come as the technology industry increasingly focuses on generative artificial intelligence, like the aforementioned ChatGPT, made by the Silicon Valley start-up OpenAI. As I discussed yesterday, the three major semiconductor manufacturers — Advanced Micro Devices, Intel, and Nvidia — are already hard at work optimizing their processors for AI development. (Intel announced its plans to bring AI to the car on Tuesday.) Legacy automakers this week announced new collaborations with those processor makers to run large language models and vision models on-device — or rather, on-car — as sending queries to the cloud would take time, require speedy internet connections, and pose a privacy risk to already increasingly skeptical customers. The automakers — namely Bayerische Motoren Werke, the German luxury vehicle maker commonly abbreviated to BMW; Mercedes-Benz, the German luxury automaker; German manufacturer Volkswagen’s Cariad software division; Hyundai, the South Korean conglomerate that owns a controlling stake in Kia; and Honda, the Japanese automaker, among others — have all announced new software innovations coming to their cars soon, alongside new electric and concept vehicles.
BMW’s ‘iDrive 9’ Infotainment Software Brings AR, LLMs, and Video
BMW, the German automaker known for its iconic “ultimate driving machines,” announced new features coming to version nine of its infotainment system, called iDrive 9, as well as some extra features for version 8.5 of its BMW Operating System. The company first began its announcements on Monday, debuting a video gaming experience and video streaming applications on the main display of new 2024 model year vehicles.
With a software update coming later this year, owners of 2024 X2, Mini, i7, 5 Series, and other models will soon be able to access entertainment applications powered by Amazon Fire TV, Amazon’s television streaming software. The company is also working on a service called “DTS AutoStage Video Service,” powered by TiVo, the digital video recorder company. BMW says the service — which first became available in 5 Series vehicles last year — will expand to other luxury sedans this year and will include both live channels and on-demand video libraries. No other streaming services, such as Netflix or Disney+, were announced — which most likely means BMW is favoring the DTS AutoStage Video Service over them.
Video gaming-wise, customers who purchase the “BMW Digital Premium” upgrade package will be able to access a range of third-party applications from BMW’s ConnectedDrive application store. The applications include video games, which can be played either on the primary display of the vehicle — such as the i7’s 32-inch one — or smaller back-seat monitors. The company also says customers will be able to pair video game controllers via Bluetooth to play games on the cars’ screens; the functionality is coming via an over-the-air software update to select models later this year.
Both video streaming and video gaming will only be available while cars are parked or charging at an EV charging station — even in the back seats — to comply with regulatory requirements, similar to Tesla vehicles. I assume the new features will prove handy during extended charging sessions, such as on road trips.
However, gaming and video streaming applications are nothing new for most EV customers — the innovation comes in BMW’s work on augmented and mixed reality. The struggle with AR and MR products in automobiles is latency: For years, automakers have struggled to make AR and MR graphics that keep up with the fast speeds of moving cars in a way that feels natural and does not induce motion sickness. For the first time, BMW says it has gotten that right. The company described a demonstration with a pair of Xreal Air 2 glasses, which project high-quality, low-latency images onto the lenses using high-definition micro-OLED panels. The glasses display real-time navigation instructions, media information, charging station previews, and hazard warnings while driving — projected onto the road ahead.
The demonstrations are simply proofs of concept — BMW does not expect most drivers to wear specially modified AR glasses while driving purely for AR graphics — but the technology seems to work, according to the company. The AR experience acts as a highly advanced heads-up display, a technology where mission-critical information is projected onto the windshield on the driver’s side, above the steering column. I can see how such a technology would prove useful in the real world: Car infotainment systems are already filled with distracting clutter, a multitude of irrelevant symbols, and complicated buttons and menus — much of which is unneeded and irritating while driving. Instead, a streamlined, simplified interface could be projected onto the road in front of drivers so that they do not have to look down and hunt for information amid the chaos. Moreover, the interface changes with and reacts to current driving conditions — a godsend for directions, where spoken guidance is confusing at complex intersections. Visually indicating where to make turns and take highway exits could prove incredibly useful.
To achieve this level of precision at high speeds, the AR software gathers data from the vehicle’s sensors to map exactly where the car is and how it is being operated, according to the company. BMW also said in its press release that it wants to be an industry leader in this kind of technology so that other automakers can use it as a benchmark, as the industry did with BMW’s work on digital car keys — a technology the company pioneered and that is now widely available across the market. The AR graphics animate cleanly and update in near real time, says BMW, though the claims could not be tested, as demonstrations were not given to the press at publishing time — they were only described in press conferences and releases. It will be interesting to see how the technology fares in the real world, however — not in a controlled experience — as road conditions change every second, and the software must keep up accurately and promptly to prevent confusion or distraction.
Lastly, BMW announced progress on its Alexa LLM-powered custom voice assistant, first announced in 2022. The company only provided updates, saying the technology is still under development and that a final release will not become available any time soon — including on the CES show floor. The announcement is indicative of a broader theme at this year’s trade show: automakers are announcing new LLM-powered voice assistants but steering clear of actually releasing the technology, presumably because it is not tested or ready for production.
BMW says it is using Alexa’s LLM — announced by Amazon last autumn — as the foundation for the new voice assistant, aptly named BMW Intelligent Personal Assistant. The company says the assistant will be able to answer general knowledge questions and questions about “vehicle functions” more precisely and in a more natural, human-like manner, whereas the current voice assistant can only control certain vehicle functions and speaks robotically, using canned responses. The new assistant will have complex processing abilities, according to the company, and will be much more useful than BMW’s current offerings, with deep integration into core system functions such as climate control and navigation.
BMW did not elaborate on whether the LLM runs locally — on a computer processor inside the car — or whether prompts are sent to servers, but it is almost certain that processing will be done on-device, as the LLM accesses sensitive controls that require fast response times and privacy assurances. BMW also did not announce new partners for the project other than Amazon; the company is not working with Google, which makes Android Automotive, the operating system underlying iDrive.
All of the new announcements are fascinating and signal a shift in automobile software. Previously, software in cars was notoriously slow, unsophisticated, and poor-performing, acting more like a hindrance than a nicety. I am deeply skeptical that automakers will be able to improve their software experiences so drastically in the coming years, however. For AI to feel “at home” on current car infotainment systems, user interfaces must be entirely reworked to be simple, speedy, and responsive. So far, automakers have not made any strides in that regard.
The new announcements also raise concerns about hallucinations, a phenomenon where LLMs unintentionally produce misinformation, a byproduct of how they are built. LLMs have no way to check whether what they are producing is factual, so if an LLM provides incorrect information while a driver is focused on the road, it could be distracting and irritating. BMW and other manufacturers must take care in their software and ensure LLMs are not used for mission-critical information that the assistants could fabricate.
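Mitigating that risk does not require solving hallucination itself; one common pattern is to route safety-relevant queries to deterministic, sensor-backed handlers so generated text never answers them. The sketch below illustrates the idea under my own assumptions: the intent names, the `sensors` dictionary, and the `llm` callable are all hypothetical, not any automaker's actual API.

```python
# Sketch: keep safety-critical queries away from free-form LLM output.
# All names here are illustrative assumptions, not a real automaker API.

SAFETY_CRITICAL_INTENTS = {"tire_pressure", "brake_status", "fuel_range"}

def answer(intent: str, query: str, sensors: dict, llm=None) -> str:
    """Route safety-critical intents to sensor readouts; let the LLM
    handle only general-knowledge chat, where a hallucination is an
    annoyance rather than a hazard."""
    if intent in SAFETY_CRITICAL_INTENTS:
        # Deterministic, sensor-backed answer, never generated text.
        value = sensors.get(intent)
        if value is None:
            return "Sensor data unavailable."
        return f"{intent.replace('_', ' ')}: {value}"
    if llm is not None:
        return llm(query)  # free-form generation for low-stakes questions
    return "Assistant unavailable."
```

The design choice is simply that a wrong tire-pressure reading is a hazard while a wrong trivia answer is an annoyance, so the two classes of query should never share a code path.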
Mercedes-Benz Brings Generative AI to ‘MB.OS’
Mercedes-Benz, the German luxury automaker whose voice assistant is already hailed by critics as best-in-class, announced that it, like BMW, is integrating generative AI into the Mercedes-Benz Operating System that runs on its latest vehicles’ infotainment systems. The voice assistant, called the Mercedes-Benz User Experience (MBUX) Virtual Assistant, will run on the next version of MB.OS, arriving in 2025 model year cars built on the upcoming Mercedes Modular Architecture. Mercedes says the improvements will create “the most human-like interface with a Mercedes-Benz yet,” according to a press release published by the company on Tuesday.
The new voice assistant, similar to ChatGPT, will learn from drivers’ usage patterns to create a customized experience. For example, the company highlighted the assistant reading headlines from the morning news or offering to call into a virtual meeting based on a user’s calendar events. The current version of the assistant — which users activate with the wake word “Hey, Mercedes” — is restricted to system commands, like controlling climate settings and media. The new version can attend to these queries more naturally and precisely, the company claims.
In addition to controlling system options more accurately and naturally, Mercedes claims that the assistant will be able to answer general knowledge queries as a human would, using “natural dialogue.” The company also says the assistant will gain the ability to ask follow-up questions to help users understand more and clarify any remaining concerns. It is unclear if the generative AI-powered assistant will be able to search the internet for real-time answers, though the real-time news features imply that it will in some capacity. The company also did not elaborate on which companies it is partnering with for the generative AI functionality — and neither did BMW. Mercedes previously stated that it was working with OpenAI to develop generative AI technologies, but the latest CES 2024 press release does not name the company.
The company, however, did announce that it is collaborating with Nvidia to manufacture custom graphics processing units with tensor cores to run the MB.OS operating system and the LLM behind the MBUX Virtual Assistant. The custom-made processors, while not named explicitly, use water cooling to prevent overheating and thermal throttling, and also render on-screen graphics and process information for driver assistance features. The collaboration is a testament to Nvidia’s engineering prowess and stranglehold on the generative AI market, and it demonstrates Nvidia’s mission of bringing generative AI everywhere, as highlighted during the company’s Monday press conference.
A unique feature of Mercedes’ generative AI is that the MBUX assistant will have multiple “personality traits,” as the company calls them: predictive, natural, personal, and empathetic. In practice, however, these “traits” are more like conversational modes that tailor responses to the user’s preferences. Mercedes highlighted the assistant adopting a more empathetic personality when a user is stuck in traffic, for example. The MBUX Virtual Assistant also tailors its responses to each driver, mimicking the driver’s preferences and speaking style, the company claims, remembering information the driver provides and using information from their connected smartphone to offer helpful knowledge.
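Mercedes has not said how the traits are implemented, but conceptually they resemble swapping the system prompt that frames each LLM call based on vehicle context. The following is a speculative sketch under that assumption: the four trait names come from the announcement, while the prompts and the context logic are invented for illustration.

```python
# Speculative sketch of "personality traits" as conversational modes:
# choose a system prompt from vehicle context before each LLM call.
# Trait names are from Mercedes' announcement; all logic is a guess.

TRAIT_PROMPTS = {
    "empathetic": "Respond with patience and reassurance.",
    "predictive": "Proactively suggest the next likely action.",
    "natural":    "Respond conversationally and concisely.",
    "personal":   "Mirror the driver's phrasing and preferences.",
}

def pick_trait(context: dict) -> str:
    """Map driving context to a conversational mode."""
    if context.get("heavy_traffic"):
        return "empathetic"   # the traffic-jam example Mercedes gave
    if context.get("upcoming_calendar_event"):
        return "predictive"   # e.g., offering to dial into a meeting
    return "natural"

def system_prompt(context: dict) -> str:
    return TRAIT_PROMPTS[pick_trait(context)]
```

If the traits really are prompt-level modes rather than distinct models, switching between them would be cheap, which may explain how one assistant can exhibit all four.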
Mercedes touts the virtual assistant as minimizing distractions while on the road, providing a source of entertainment while idle, and decreasing reliance on a smartphone while driving, similar to other automakers that have announced their uses of generative AI. The new features were demonstrated by the company in its 2025 Concept CLA car, which was on the CES show floor Tuesday.
I am not sure I am entirely sold on the concept of generative AI minimizing distractions on the road. As I said, if LLMs hallucinate, it could be irritating, which is not ideal — and if the MBUX Virtual Assistant asks a follow-up question when one is not required, it adds complexity to the service, which is confusing and unnecessary while driving. However, the precision and speed of an on-device (on-car) LLM may prove handy and less distracting, say, when a general knowledge question pops up during a car ride. The assistant might also prove useful for owner’s manual guidance, or for toggling quick settings while driving.
The use cases for such a technology are endless, though, again, it will be intriguing to observe how the voice assistant integrates with information visually displayed on the dashboard and throughout the vehicle’s interior. Mercedes has done a lot in recent history to improve its infotainment software, but I still do not have much confidence in the company to create a useful, practical, and less frustrating software experience. I mainly think smartphone software and technology companies like Apple and Google should flex their muscles in this sector instead of leaving it to the automakers, adding generative AI features to Android Auto and CarPlay that seamlessly tie in with users’ personal data in a privacy-preserving, intuitive, and less distracting way.
Volkswagen Adds ChatGPT to Its Ida Voice Assistant
Similar to Mercedes-Benz and BMW, Volkswagen — another German automaker — is adding generative AI to its cars. Its software division, Cariad, is working alongside Cerence, a U.S.-based AI company specializing in automotive technologies, to develop the new features. Volkswagen’s current voice assistant, Ida, can only perform rudimentary tasks, like changing climate controls and adjusting media, but the automaker is adding ChatGPT to the assistant later this year.
Volkswagen, in the company’s press release on Monday, stated that the addition will be available on cars built on the “MEB” and “MQB Evo” platforms, including the ID.3, ID.4, ID.5, ID.7, and 2024 models of the Tiguan, Passat, and Golf. The feature will not be available to U.S. buyers immediately — the company said it is still testing the assistant in other languages — though demonstrations were provided to select press in Las Vegas.
Volkswagen highlighted the fact that no vehicle data is sent to OpenAI’s servers and that queries are deleted immediately. “ChatGPT does not gain any access to vehicle data; questions and answers are deleted immediately to ensure the highest possible level of data protection,” the company noted. It is unclear how Volkswagen will anonymize the data, and what kind of licensing deals it has with OpenAI to ensure users’ privacy and security. However, it is obvious that processing is done in the cloud, not in the car, and it will be interesting to see how this affects response times.
Basic tasks, such as system-level controls, will be handled as they are now, by the current version of Ida, says the automaker. General knowledge questions and queries that require more thought and “sources,” however, will be handled by the LLM. The same concerns I outlined in the sections about the other automakers’ generative AI features apply here — but again, latency might be a hindrance in the Volkswagen system.
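The two-tier split Volkswagen describes can be sketched as a simple router: recognized system commands stay with the on-board assistant, and everything else is forwarded to the cloud model. A minimal sketch, assuming a tiny prefix matcher stands in for Ida; the command table and handler behavior are my own illustrations, not Volkswagen's implementation.

```python
# Illustrative sketch of Volkswagen's described split: simple system
# commands are handled on-car (a prefix matcher standing in for Ida),
# while everything else is forwarded to a cloud LLM. The command table
# and handlers are assumptions for illustration only.

LOCAL_COMMANDS = {
    "set temperature": lambda arg: f"Climate set {arg}",
    "play": lambda arg: f"Now playing {arg}",
}

def handle(utterance: str, cloud_llm) -> str:
    text = utterance.lower()
    for command, handler in LOCAL_COMMANDS.items():
        if text.startswith(command):
            # Deterministic and fully on-car: low latency, and no data
            # leaves the vehicle.
            return handler(utterance[len(command):].strip())
    # Everything else goes to the cloud model; per Volkswagen, queries
    # are anonymized and deleted after the response is returned.
    return cloud_llm(utterance)
```

The upshot of this design is that the latency and privacy costs of the cloud round trip are only paid for the queries that actually need a large model.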
Cariad — which handles other components of the software stack for Volkswagen vehicles — also mentioned that it is testing an autonomous parking system for electric vehicles that automatically parks a vehicle in a designated spot and plugs it in. The technology is similar to what Tesla prototyped many years ago, which involved a robot of some kind plugging the charging cable into the car’s charge port.
The autonomous charging features are only being tested in two parking garages in Germany, and the companies did not release more details on when they may be slated for a broader release or testing phase. I think this is a much cleaner, more elegant solution than “wireless charging” — placing a wireless charging receiver on the bottom of EVs so they charge autonomously while parked in an EV charging space. The latter idea demands much more infrastructure than simply equipping existing charging stations with robotic arms, in my opinion, and would require specialized hardware on the cars themselves.
Hyundai Goes All-in on Hydrogen Power
Hyundai, the South Korean automobile conglomerate, announced new initiatives to become the industry leader in hydrogen energy on Monday during a press conference in Las Vegas. While the company did not explicitly announce new generative AI features in its cars, it did describe a new initiative called the “software-defined strategy”: Hyundai’s presenters said the company is increasingly prioritizing software across its cars, having correctly realized that software has become a vital part of the customer experience in automobiles. Hyundai says it is writing more in-house software for the first time to “improve the user experience,” but did not detail its plans and operations further.
Hyundai’s main focus during its press conference was hydrogen energy and its claim that hydrogen is the future of automotive fuels. The company is more bullish on hydrogen than electricity because it has developed strategies to extract the element from waste and compost, guaranteeing “clean” production — whereas electricity might be generated from dirty sources, like coal and natural gas, Hyundai claims. The company said on stage that it has patented technologies to extract hydrogen from plastic bottles and other garbage, which it says is better for the environment than developing clean electricity sources. Hyundai wants to become the leader in hydrogen fuel, said the spokespeople, and has already begun its efforts in its home country, South Korea. Hyundai officials and C-suite executives on stage did acknowledge that the effort is ambitious; the automaker, however, is confident in its abilities.
In addition to the hydrogen plans, Hyundai elaborated on the “software-defined strategy,” aimed at making Hyundai infotainment software friendlier for users and developers alike. The company did not announce new features, but said it is exploring adding LLMs to its vehicles starting in 2026 or 2027 — quite far away, and frankly, much later than the competition, which plans on bringing generative AI to its cars this year, as outlined above. The strategy involves creating LLM-powered technologies that reduce distractions, connect with popular applications and services, and provide a friendlier, more natural, humanlike experience. In addition, Hyundai described tweaking its software to be easier to navigate, less distracting, and less cluttered, contributing to an improved experience.
The company said that hardware is important to support the initiative but did not elaborate further or announce additional partnerships. It also did not explain whether the LLMs would take the form of an intelligent voice assistant, akin to Volkswagen’s, BMW’s, and Mercedes-Benz’s respective technologies, but it is almost certain that they will, since purely visual software in a car is unhelpful and would almost certainly draw regulatory scrutiny. Hyundai, however, did say that it is working on a software development kit for developers to craft useful applications and plug-ins for the software, enhancing its utility. It did not provide a timeline for the SDK, though it did say new fleet solutions for enterprise customers would become available later this year.
I am glad to see a large automaker such as Hyundai announce a new focus on software development, but again, I think the genuine utility will come when Apple and Google add LLMs to their in-car infotainment software more broadly, unlocking new developer application programming interfaces and SDKs and giving users more control over their data. Additionally, the fact that Hyundai is so late to the LLM party is potentially concerning, as the market may simply have moved on from this technology by then. And I think the hydrogen initiatives are more lofty than nifty, since the rest of the world is focusing on clean electricity rather than hydrogen energy. If Hyundai ends up being the only car manufacturer using hydrogen, the effort could backfire, creating more emissions to build the specialized vehicles and their accompanying infrastructure.
Honda Announces Concept EVs
Finally, on Tuesday, Honda — the hugely successful legacy automaker from Japan — announced two new concept electric vehicles as part of its new EV series, “Honda Zero.” The new models, the Saloon and the Space-Hub, are purely concepts for now — as many vehicles at CES are — but Honda said during its press conference in Las Vegas that the Saloon would be the first to arrive in customers’ driveways, in 2026. The two vehicles are built atop the “thin” architecture Honda is using for its electric vehicles, with a focus on aerodynamics. The company says three design philosophies dictate the production of the new Honda Zero EVs: “thin,” “light,” and “wise.” Aside from the fact that those three adjectives sound like they were put through Google Translate, they signal a from-scratch strategy for developing the new cars.
Honda is working with Sony to develop the software for the automobiles; the two companies have been working in tandem on the Afeela concept EV, which debuted last year at CES. The vehicles are designed to be sporty, futuristic, and technologically advanced, using AI to remember driver preferences for “personalized experiences,” as well as route suggestions based on user data. The company did not make clear whether these features will be powered by generative AI, or whether the processing will be done on-device, on a computer inside the cars.
The Saloon is a mid-size sedan with a distinct sloping roofline extending from the back of the car to the windshield. The design is interesting, to say the least, with curvy lines and slanted windows adding to the commanding look. The company says the design is derived from Honda’s work on Formula 1 race cars, and I can see the resemblance. The Space-Hub is a minivan-like vehicle with a boxy shape and plenty of cargo room, an expansive panoramic moonroof that spills over to the sides to blend seamlessly with the side windows, and bench seating inside. The back of the minivan is distinctive, with no rear window, though the vehicle is almost certain to have a plethora of cameras to let drivers observe their surroundings. The rear end also features an illuminated “Honda” word mark.
Honda did not release specifications for either vehicle, as they are strictly concepts for now — but the company did note that the vehicles will be autonomous, as they are built on Honda’s previously announced “Honda Sensing” platform, a “Level 2” autonomous driving system. Drivers will be required to stay alert on the road while using the software, and they are legally responsible for the system’s actions in its current state. By 2026, however, the technology could be improved and upgraded to a “Level 3” autonomous driving system, which would require less human intervention and involvement.
As this is CES, it is important to remember that no matter how many assurances exhibitors provide, there is a strong likelihood that none of these vehicles will come to market — or that some of the features will be neutered or removed entirely. I am also not much of a fan of the design of these concept vehicles; they look like eyesores and stand out too much on the road for me to enjoy looking at, or to feel comfortable driving. I guess that is just the world of design we live in now, but not everything has to be “futuristic.”
There was a broader theme in cars this week at CES: artificial intelligence. Like everything else in the world, cars are getting smarter, and legacy automakers are swiftly announcing initiatives to improve automotive software. We’ll have to see how the software performs in the real world in the coming weeks, months, and years when it all (hopefully) ships to consumers in production vehicles.
This article is part of my CES 2024 coverage.