The concept of the software-defined car was put forward by Baidu and reflects a typical Internet-industry way of thinking. The software-defined model is familiar to Internet enterprises, but for traditional automobile manufacturers it means an industrial transformation, and that transformation will take time.
The core idea of the software-defined car is to dynamically change the aggregation relationships among network nodes in the system through software, thereby producing new functions. This has several preconditions: a unified system architecture, software system, application development framework, and so on.
To put it bluntly, the software-defined car needs a unified architecture specification or platform. But its starting point should not depend on specific hardware, operating systems, chips, or vehicles. It therefore requires hardware abstraction or software middleware: a unified encapsulation of different chips, hardware, and system architectures that makes it easy for software modules to call, communicate, and produce output.
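The hardware-abstraction idea can be sketched as follows. This is only an illustration, not any real automotive middleware API; the interface and class names are invented for the example:

```python
from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    """Abstract interface that hides the specific chip from application software."""
    @abstractmethod
    def run_inference(self, tensor: list) -> list: ...

class GpuBackend(ComputeBackend):
    def run_inference(self, tensor):
        # Placeholder: a real backend would dispatch to GPU kernels.
        return [x * 2.0 for x in tensor]

class AsicBackend(ComputeBackend):
    def run_inference(self, tensor):
        # Placeholder: a real backend would call the ASIC vendor's SDK.
        return [x * 2.0 for x in tensor]

def perception_module(backend: ComputeBackend, frame: list) -> list:
    # The application algorithm is written once against the abstract
    # interface, so it runs unchanged on different chips.
    return backend.run_inference(frame)
```

Because the perception module depends only on the abstract interface, swapping the GPU backend for the ASIC backend requires no change to the application code.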
For conventional cars, unifying the hundreds of ECUs in the current distributed architecture is impossible. Software-defined vehicles cannot be implemented under the current distributed automotive E/E architecture, but the ongoing shift to domain controller architectures provides a good foundation.
Electrical/Electronic Architecture (EEA) was first proposed by Aptiv. According to the EEA development roadmaps published by Bosch and other Tier 1 suppliers, the industry currently uses a distributed architecture, is gradually transitioning to a domain controller architecture, and will ultimately arrive at a central computing platform architecture. The iteration of automotive E/E architecture therefore provides a good path for software-defined cars: the current distributed architecture cannot meet their needs, while the domain controller architecture and the future central computing architecture are good carriers. New functions can then be delivered through OTA upgrades to the DCU or the central computing platform.
Implementing this architecture places high demands on domain controller hardware and system design. Given the many sensors on intelligent cars and their large computing needs, the requirements on the chips chosen for a domain controller rise significantly. The development of AI chips will therefore play an important role in the development of the software-defined vehicle.
This paper focuses on the design of the domain controller and the development of AI chips against the background of the software-defined vehicle.
Currently, only a few domain controllers or computing platforms are available in the autonomous driving industry, including Huawei MDC, the NVIDIA DRIVE platform, Tesla FSD, ZF ProAI, the Horizon Matrix platform, and the recently released Desay SV IPU03. These basically represent the major AI chip manufacturers or camps, and they embody different architectures based on GPU, FPGA, and ASIC. Tesla FSD, Mobileye, Huawei, and Horizon all follow the ASIC route; NVIDIA is the representative of the typical GPU route; FPGAs are currently dominated by Xilinx and Altera.
AI chip requirements for domain controller design
Development Trends of AI Chips from the Design Requirements of Domain Controller Hardware Platform
The domain controller must fully consider computing power and performance requirements. Its hardware layer is therefore divided into three parts, or three important chips:
The controversy over technical routes mainly concerns the first part, the AI computing chip.
An automatic driving system includes both complex logic operations and large amounts of data-parallel processing. A chip that meets both kinds of computing task must adopt heterogeneous computing: the CPU performs the complex logic operations, while the AI chip is responsible for the parallel computation on large amounts of data.
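The division of labor can be illustrated schematically. Here a thread pool merely stands in for the accelerator's parallel lanes, and the functions and thresholds are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def control_logic(obstacle_distance_m: float) -> str:
    # Branch-heavy decision logic: the kind of task suited to a CPU.
    if obstacle_distance_m < 5.0:
        return "brake"
    elif obstacle_distance_m < 20.0:
        return "slow"
    return "cruise"

def convolve_patch(patch: list) -> float:
    # Uniform arithmetic over a data tile: the kind of task
    # offloaded to an AI chip's parallel compute units.
    return sum(p * 0.5 for p in patch)

def process_frame(patches: list) -> list:
    # A thread pool stands in for the accelerator's parallel lanes.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(convolve_patch, patches))
```

The point of the sketch is the split itself: irregular branching stays on the CPU, while the same small kernel applied across many data tiles maps naturally onto a parallel accelerator.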
It is generally believed that L2 requires less than 10 TOPS of AI computing power, L3 requires 30-60 TOPS, L4 requires more than 100 TOPS, and L5 requires 500-1000 TOPS. The computing platforms available today can only meet the requirements of some L3 and L4 levels of autonomous driving.
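These figures can be encoded as a simple lookup for checking whether a candidate platform covers a given autonomy level. The thresholds are just the rough industry estimates quoted above, not a formal requirement:

```python
# Minimum AI computing power (TOPS) per autonomy level,
# using the rough estimates quoted in the text.
REQUIRED_TOPS = {"L2": 10, "L3": 30, "L4": 100, "L5": 500}

def meets_level(platform_tops: float, level: str) -> bool:
    """Return True if a platform's AI compute covers the given level."""
    return platform_tops >= REQUIRED_TOPS[level]
```

For example, a 30 TOPS platform clears the L3 floor but falls well short of L4.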
The CPU alone cannot complete such heavy computing, and the traditional MCU is far from enough. The alternatives are the GPU, the FPGA, or a purpose-designed ASIC, and various manufacturers have launched a variety of such computing units. The GPU currently has a market advantage thanks to its long accumulation.
Analysis from the Perspective of Software Platform
The software platform includes the operating system, middleware, application-layer algorithms, and so on.
For domain controllers, chips provide the computing power. The core functions of the operating system are task scheduling, device management, and moving data from external sensors into the chip for subsequent processing. Middleware is responsible for standardizing software interfaces and protocols; it sits on top of the operating system and beneath the application algorithms. Application algorithms process the data: from the perceptual data obtained at the input interface, they complete perception of the external world, driving-path planning, and vehicle control. The software level that has a direct relationship with the chip is therefore the operating system.
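The layering described here (OS driver moves sensor data in, middleware standardizes the interface, the application algorithm consumes it) might be sketched like this. All class and field names are illustrative, not taken from any real stack:

```python
from queue import Queue

class SensorDriver:
    """OS layer: moves raw data from an external sensor into the system."""
    def read(self) -> dict:
        # A real driver would read from hardware; here we return a dummy frame.
        return {"sensor": "camera", "frame": [0.1, 0.9]}

class Middleware:
    """Middleware layer: a standardized message interface between
    the OS layer and the application algorithms."""
    def __init__(self):
        self._queue = Queue()
    def publish(self, message: dict) -> None:
        self._queue.put(message)
    def take(self) -> dict:
        return self._queue.get()

def perception_app(bus: Middleware) -> str:
    """Application layer: consumes standardized data, produces a decision."""
    msg = bus.take()
    return "object_ahead" if max(msg["frame"]) > 0.5 else "clear"
```

The application never touches the driver directly; it only sees the middleware's standardized messages, which is what makes it portable across hardware.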
An operating system is a computer program that manages hardware and software resources, and its kernel architecture determines the stability and security of the system. Vehicle regulations require the operating system to be an RTOS, a real-time operating system. QNX, which we often mention, is an RTOS kernel.
Different chip architecture designs call for different operating systems: ARM and x86 architectures, for example, run operating systems such as Windows, Linux, Android, and macOS.
A software-defined car needs a basic software platform that standardizes the hardware and basic-software interfaces to ease the deployment of application algorithms; this does not mean a single operating system.
The core of autonomous driving is algorithm design and data accumulation, but application software algorithms should not be tied to one operating system. The design must target cross-platform, mature, and stable RTOSes; the three mainstream options today are RT-Linux, QNX, and VxWorks. In addition, the particularity of vehicle architecture makes it impossible to achieve all functions with a single operating system, so multiple operating systems will coexist in parallel for a long time.
However, autonomous driving software still lacks a real-time distributed development framework that can work across these RTOS platforms. Adaptive AUTOSAR may fill this role, given AUTOSAR's advantages in the traditional automotive sector.
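Adaptive AUTOSAR's communication management is service-oriented. A minimal publish/subscribe sketch of that style, which is not the real ara::com API and omits its service discovery, quality-of-service, and real-time guarantees, could look like:

```python
from collections import defaultdict

class ServiceRegistry:
    """Minimal service-oriented pub/sub, loosely inspired by the
    service-oriented communication in Adaptive AUTOSAR (not ara::com)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        # Register a callback for a named service/topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every subscriber of that topic.
        for handler in self._subscribers[topic]:
            handler(event)
```

A service-oriented layer like this is what would let a planning application subscribe to, say, a lidar service without knowing which RTOS or chip hosts it.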
Therefore, at the software level there are few constraints on the AI chip, but a unified middleware architecture that conforms to RTOS requirements is needed.
AI chips in the industry
NVIDIA is the absolute leader in autonomous driving. Its chips for the field iterate rapidly, with constantly rising computing power and obvious performance advantages. NVIDIA Xavier is currently the most widely used AI chip in autonomous driving and the first such chip to reach mass production.
Xavier contains 9 billion transistors. Its CPU is NVIDIA's own 8-core ARM64 design (code name Carmel), and its GPU is a 512-core Volta supporting FP32/FP16/INT8. At 20 W it delivers 1.3 TFLOPS of single-precision floating-point performance and 20 TOPS from the Tensor Cores, rising to 30 TOPS when unlocked to 30 W. Xavier integrates six different processors: the Volta Tensor Core GPU, the 8-core ARM64 CPU, dual NVDLA deep-learning accelerators, an image processor, a vision processor, and a video processor.
TÜV SÜD has confirmed that the NVIDIA Xavier system-on-chip meets the ISO 26262 random hardware integrity requirements for ASIL C, and that it meets the systematic capability requirements for ASIL D, the strictest functional safety level.
At NVIDIA GTC 2021, Jensen Huang said that starting next year the Jetson AGX Orin will serve as NVIDIA's flagship SoC for on-board and edge computing, and that NVIDIA will offer Jetson to non-automotive customers.
The flagship Jetson AGX Orin, based on the Orin SoC, is scheduled for early 2022. Orin will have 12 Arm Cortex-A78AE CPU cores and, with the integration of an Ampere-architecture GPU, 2048 CUDA cores and 17 billion transistors. Orin's clock frequencies are expected to be conservative, however, given that it is designed for mobile devices: the Jetson AGX Orin tops out at a 2 GHz CPU and a 1 GHz GPU.
The Jetson AGX Orin will contain a pair of deep-learning accelerators (DLAs) as well as a vision accelerator. The package will also include 32 GB of LPDDR5 RAM on a 256-bit memory bus for 204 GB/s of memory bandwidth, plus 64 GB of eMMC 5.1 storage.
Intel acquired Mobileye in 2017. Mobileye is No. 1 in the global visual ADAS market, holding about 80% of it, with a wealth of visual ADAS products. Mobileye's proprietary software algorithms and EyeQ chips analyze visual information in detail and predict possible collisions with other vehicles, pedestrians, cyclists, and other obstacles; they can also detect road markings, traffic signs, and traffic lights.
The Mobileye line represents a typical ASIC route. The EyeQ4 delivers 2.5 TOPS and the EyeQ5 12 TOPS, which gives them no significant computing-power advantage over competitors right now.
From the perspective of application scenarios, an autonomous driving system involves both complex logical operations and the parallel processing of large amounts of data; a single processor cannot handle both alone.
In terms of technical architecture, GPU, FPGA, ASIC and other chips have their own advantages.
Some also call certain ASICs brain-like chips. The human brain transmits data through neurons, and simulating it in hardware introduces many redundant elements, so building a chip that truly fits the role of the human brain is very difficult. The chip strategy of IBM, Qualcomm, Intel, and others is to use hardware to mimic the synapses of the human brain.
In conclusion, the main direction of the GPU is advanced, complex algorithms and general artificial intelligence platforms: its strong versatility lets it serve large-scale AI platforms and meet varied requirements efficiently. The FPGA is better suited to a variety of segmented industries. ASICs are fully custom chips, suitable in the long run for the mass production of autonomous driving. The more complex an AI algorithm becomes, the more it needs a dedicated chip architecture to match it; the ASIC is customized around the AI algorithm, but it will take time to mature.
At present, artificial intelligence and intelligent driving algorithms have not been finalized. As a general-purpose accelerator, the GPU is expected to keep its mainstream position in automotive main control chips for a long time, with the FPGA serving as an effective hardware-acceleration complement. In the future, once intelligent driving algorithms are wholly or partly solidified, ASICs will become the ultimate choice for optimal cost-performance.
Ecotron specializes in automotive electronics. Our company manufactures controllers for electric and autonomous vehicles, bringing high-tech, automotive-grade controllers to small manufacturers at affordable prices. By using state-of-the-art technologies such as scalable semiconductors from NVIDIA, Infineon, or NXP and model-based design with MathWorks tools, Ecotron can dramatically speed up development and significantly reduce cost for customers. Our goal is to build future mobility together with our customers.
The Ecotron ADCU is a cutting-edge platform with the powerful computational capability an autonomous driving system demands. It receives data from multiple sensors, and its output can be used for driving-status feedback, vehicle control, and various autonomous driving features. The ADCU can be customized according to the customer's vehicle parameters and component input specifications. It is worth mentioning that our newest generation of ADCU, the EAXVA05, is based on dual NVIDIA Jetson Xavier modules and has now been launched. Come and try it; we are ready for your next EV project!