Nvidia unveiled the Drive Hyperion 8 platform at this year’s GPU Technology Conference (GTC); it will be made available to automakers and other OEMs for 2024 vehicle models. Since 2015, the technology company has been supplying computing platforms that enable deep-learning-powered autonomous driving and driver-assistance features.
The latest Hyperion 8 represents another step forward for the company, as it is an all-in-one solution developed specifically for fully autonomous driving systems. While previous Nvidia offerings featured only a compute architecture, the new platform also includes a set of sensors from leading vendors such as Continental, Hella, Luminar, Sony and Valeo.
According to Nvidia, the production platform is designed to be open and modular, allowing customers to choose what they need – from core computing and middleware to NCAP, Level 3 driving, Level 4 parking and cockpit functionality.
Inside, Hyperion 8 is powered by two Drive Orin system-on-chip (SoC) units to provide redundancy and failover safety, with the SoC compliant with functional safety standards such as ISO 26262 ASIL-D. According to Nvidia, Lotus, QCraft, Human Horizons and WM Motor have already chosen Drive Orin for their future vehicles, joining others such as Mercedes-Benz, Volvo, Nio and VinFast, to name a few.
The Drive Orin SoC is equipped with GPUs based on the company’s 7 nm Ampere architecture and provides the computing power (254 trillion operations per second) needed for autonomous functions and deep neural networks. Thanks to this scalability, the platform can be upgraded to the newer (and upcoming) Drive Atlan SoC in the future, keeping it relevant for the businesses adopting it.
In its current form, Hyperion 8 comes with the DriveWorks Sensor Abstraction Layer, which simplifies sensor configuration with easy-to-use plug-ins. Nvidia says the platform includes 12 cameras, nine radars, 12 ultrasonic sensors and a front lidar from the aforementioned partners, although the openness and flexibility of the ecosystem allow vehicle manufacturers to customise the platform to meet their needs, supported by a complete toolset.
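The actual DriveWorks plug-in API is not shown here, but the general idea of a sensor abstraction layer – registering sensor types as interchangeable plug-ins and building a rig from a declarative configuration – can be sketched generically in Python. All class and function names below are hypothetical illustrations, not part of the DriveWorks SDK:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of a sensor abstraction layer: each sensor type is a
# plug-in registered under a name, and a sensor rig is assembled from plain
# configuration data, hiding vendor-specific details behind a common interface.

class SensorPlugin(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return one frame of sensor data in a vendor-neutral format."""

PLUGIN_REGISTRY: dict = {}

def register(name):
    """Decorator that registers a plug-in class under a sensor-type name."""
    def decorator(cls):
        PLUGIN_REGISTRY[name] = cls
        return cls
    return decorator

@register("camera")
class CameraPlugin(SensorPlugin):
    def __init__(self, position):
        self.position = position

    def read(self) -> dict:
        return {"type": "camera", "position": self.position, "pixels": []}

@register("radar")
class RadarPlugin(SensorPlugin):
    def __init__(self, position):
        self.position = position

    def read(self) -> dict:
        return {"type": "radar", "position": self.position, "detections": []}

def build_rig(config):
    """Instantiate sensors from a declarative config list."""
    return [PLUGIN_REGISTRY[entry["type"]](entry["position"]) for entry in config]

rig = build_rig([
    {"type": "camera", "position": "front"},
    {"type": "radar", "position": "rear"},
])
print([sensor.read()["type"] for sensor in rig])  # ['camera', 'radar']
```

The point of the pattern is that application code only ever talks to the abstract interface, so swapping one vendor's radar for another is a configuration change rather than a code change.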
This includes Omniverse Replicator for Nvidia’s Drive Sim simulation platform, which is a synthetic ground truth data generation engine for training artificial intelligence networks. Put simply, Omniverse Replicator aims to bridge the gap between simulated and real situations by generating scenarios with high fidelity and realism.
Most of the deep neural networks that power an autonomous vehicle’s perception are composed of two parts: an algorithmic model, and the data used to train that model. Engineers have spent a lot of time perfecting the former, but the data side sometimes falls short due to the limitations of real-world data, which is incomplete as well as time-consuming and expensive to collect.
By augmenting real-world data collection with synthetic data generated within Omniverse Replicator, engineers can quickly manipulate scenes in a kind of detailed sandbox, repeating them as often as needed. This helps accelerate the development of autonomous vehicles while making them safer and more efficient, readying them for large-scale deployment.
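The augmentation idea above can be illustrated with a minimal sketch, assuming a toy scene representation. This is not the Omniverse Replicator API; it simply shows how a handful of real scenes can be multiplied into many synthetic variants covering conditions that are rare or expensive to capture on real roads:

```python
import random

# Illustrative only: procedurally vary real driving scenes to build a larger
# training set. Scene fields and condition lists are invented for this sketch.

def generate_synthetic(scene, n, rng):
    """Produce n variations of a real scene with randomised conditions."""
    weathers = ["clear", "rain", "fog", "snow"]
    times_of_day = ["day", "dusk", "night"]
    return [
        {**scene,
         "weather": rng.choice(weathers),
         "time_of_day": rng.choice(times_of_day),
         "synthetic": True}
        for _ in range(n)
    ]

rng = random.Random(0)  # fixed seed so the run is reproducible
real_scenes = [
    {"scene_id": 1, "weather": "clear", "time_of_day": "day", "synthetic": False},
]

# One real scene becomes six training samples: the original plus five variants.
dataset = real_scenes + [
    variant for scene in real_scenes
    for variant in generate_synthetic(scene, 5, rng)
]
print(len(dataset))  # 6
```

In a real pipeline the synthetic samples would carry rendered images and automatically generated ground-truth labels, which is what makes simulation-based data generation cheaper than collecting and annotating equivalent real-world footage.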
Mapping is also a key pillar of autonomous driving, so Nvidia additionally introduced Drive Mapping, which combines its Drive ecosystem with DeepMap technology. Map data is collected by dedicated survey cars or data-collection vehicles running Hyperion 8, then fed into the Drive AGX AI computing platform to create and update maps in real time, yielding a scalable solution for autonomous driving globally.
In addition to these new technologies, Nvidia also unveiled its Drive Concierge and Drive Chauffeur software platforms, demonstrated in an S-Class equipped with Hyperion 8. These allow hands-free driving from one address to another, with self-parking also available on arrival.
These are joined by Drive IX and Omniverse Avatar. The latter combines voice AI, computer vision, natural language understanding, recommendation engines and simulation, allowing users to hold real-time conversations with an AI rendered as an avatar on the vehicle’s infotainment display, and to issue commands without needing physical controls or touchscreens.