Author: Xiangyu Shangrui, Beijing Huaxing Wanbang Management Consulting Co., LTD
The accelerated arrival of the intelligent vehicle era has brought unprecedented computing demands to in-vehicle intelligent systems. As more vehicle models adopt innovations such as centralized electronic and electrical architectures, multi-sensor fusion for intelligent driving, multi-modal interaction in intelligent cockpits, and generative-AI-driven virtual assistants, the main chips used in vehicles must handle graphics rendering, AI inference, and secure computing simultaneously. At present, the "four core elements" of functional safety, efficient and highly flexible computing power, product life cycle, and software-ecosystem compatibility have become the core yardsticks for measuring the innovation and market competitiveness of AI chips in smart vehicles.
Traditional automotive computing architectures typically form heterogeneous computing models from units such as CPUs, GPUs, and NPUs. As autonomous driving algorithms evolve rapidly from L1 to L5, software-adaptability requirements keep rising, and new sensors and infotainment devices are continually added to vehicles, different architectures are beginning to show different development trajectories. At the same time, new challenges are emerging: rapidly growing system complexity, massive data movement, increasingly difficult resource scheduling and coordination, and fast software iteration. Together they make an inflexible hardware architecture a bottleneck that constrains intelligent automobiles in technology, safety, and cost.
The market urgently needs an automotive chip architecture that can flexibly deliver high-performance, efficient graphics and AI acceleration, meet functional safety standards, remain compatible with a broad software ecosystem, and keep total cost down.
Functional safety is not only about safety; it is also the key to controlling chip cost
Functional safety has always been a top concern for consumers, automakers, Tier-1 suppliers, and chip vendors, and it is not a problem that only emerged with the rapid development of intelligent driving. The industry has established the complete ISO 26262 standard and ASIL certification system, along with solutions such as lockstep execution and dual-hardware result comparison. However, as the CPUs, GPUs, and NPUs in automotive chips grow more complex and occupy more silicon, the area cost and complexity of these functional safety solutions, originally devised for MCUs, have risen significantly. The market therefore needs innovation in functional safety solutions for automotive chips.

As an innovative product for automotive electronics, the Imagination DXS GPU IP focuses on deep optimization for intelligent cockpit and autonomous driving scenarios. Its design reflects the industry's recent breakthroughs in functional safety while delivering higher performance, stronger safety, and effective control of chip cost. Peak performance of the Imagination DXS GPU is 50% higher than the previous generation; single-core compute has grown from 0.25 TFLOPS to 1.5 TFLOPS, scaling to a maximum of 6 TFLOPS and 24 TOPS, and it supports a graphics rendering rate of up to 192 GPixel/s.
For functional safety, the Imagination DXS GPU adopts a Distributed Safety Mechanism (DSM) designed for GPU computing and achieves ISO 26262 ASIL-B functional safety certification with only 10% area overhead. DSM exploits the processor's parallelism to run safety tests during idle cycles, guaranteeing safety without sacrificing performance and breaking free of the limitations of traditional safety designs such as lockstep and redundant backup. This design demonstrates, at the architectural level, that there is still ample room for innovation in functional safety, helping increasingly capable GPUs create value in safety assurance, system complexity, cost control, and downstream vendor certification.
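The idle-cycle idea behind DSM can be illustrated with a toy scheduling model. This sketch is purely illustrative: the function names, the workload trace, and the scheduling policy are all assumptions for exposition, not Imagination's actual implementation.

```python
# Toy model of a distributed safety mechanism: self-test jobs are
# scheduled only into idle cycles of a parallel processor, so the
# primary workload is never delayed. Purely illustrative; this is
# not Imagination's actual DSM design.

def schedule(workload, num_test_jobs):
    """workload: one boolean per cycle (True = cycle busy with real work).
    Returns (timeline, tests_completed)."""
    timeline = []
    tests_done = 0
    for busy in workload:
        if busy:
            timeline.append("work")          # real job keeps its cycle
        elif tests_done < num_test_jobs:
            timeline.append("self-test")     # test fills an idle cycle
            tests_done += 1
        else:
            timeline.append("idle")
    return timeline, tests_done

# A bursty workload trace: busy and idle cycles interleaved.
workload = [True, False, True, True, False, False, True, False]
timeline, done = schedule(workload, num_test_jobs=3)

# Every cycle that carried real work still does; tests ran "for free".
assert [t for t, b in zip(timeline, workload) if b] == ["work"] * 4
print(done, timeline)
```

The contrast with lockstep is that lockstep duplicates the hardware (roughly 100% area overhead) to compare results every cycle, whereas the idle-cycle approach reuses existing execution units, which is consistent with the ~10% overhead figure cited above.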
It is precisely because the Imagination DXS GPU achieved pioneering breakthroughs in functional safety and computing performance that, at the 12th Automotive Electronics Innovation Conference and Automotive Chip Industry Ecosystem Development Forum (AEIF 2025) held in Shanghai in May 2025, the Imagination DXS GPU IP was honored with the "2025 Automotive Electronics Golden Core Award - Emerging Product" for its innovative and advanced GPU technology. Reportedly, starting from the company's D-Series GPU IP products, and including the automotive GPU IPs in the recently announced E-Series, all will adopt this distributed functional safety mechanism, which has clear advantages in cost and complexity. The DXS GPU IP has already been integrated into Renesas R-Car Gen 5 series SoCs, facilitating the commercial deployment of intelligent driving technology across everything from entry-level to flagship models.
Highly flexible, efficient computing power is the core capability of automotive processors
For automotive chip designers, adopting parallel processors such as GPUs or NPUs has become unavoidable for handling increasingly complex AI computing and graphics rendering. However, as the automotive electronic and electrical architecture shifts from domain control to central control, automotive core processors need not only higher computing power but also architectures that can be flexibly optimized for different applications. In other words, they need a flexible, higher-performance architecture that is not confined to either the NPU or the GPU mold. This has become the development direction for high-performance parallel computing in automobiles and many other edge AI applications.
For a parallel computing architecture that can be optimized and defined per application, it is again Imagination that has redefined the industry standard for high-performance parallel computing in automotive and edge AI, drawing on its technological foresight and capabilities. In May 2025, Imagination launched its new-generation E-Series GPU IP architecture, designed specifically for edge intelligence scenarios. With an efficient parallel processing architecture, these GPUs deliver outstanding graphics performance together with flexible compute scaling for artificial intelligence workloads.

The E-Series GPU architecture integrates Neural Cores and Burst Processors, supporting flexible compute scaling from 2 TOPS to 200 TOPS (INT8/FP8) and meeting requirements ranging from basic edge computing to advanced intelligent driving. This design gives the Imagination E-Series GPU both the high performance of an NPU and the flexibility of a GPU. By optimizing instruction scheduling and the data-reuse mechanism, average energy efficiency for edge computing has improved by 35%, and at the same compute level, on-board system power consumption has dropped by 20%.
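The INT8/FP8 figures refer to low-precision inference arithmetic, in which floating-point tensors are quantized to narrow integers before multiplication. The following minimal sketch of symmetric INT8 quantization for a dot product is a generic illustration of that technique, not a description of any specific hardware pipeline; the scale factors and input values are made up.

```python
# Minimal sketch of symmetric INT8 quantization for a dot product,
# the kind of low-precision arithmetic behind INT8 TOPS ratings.
# Generic illustration; unrelated to any particular GPU or NPU.

def quantize(values, scale):
    """Map floats to int8 codes with a shared scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(a, b, scale_a, scale_b):
    """Dot product computed in integer arithmetic, then dequantized."""
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)
    acc = sum(x * y for x, y in zip(qa, qb))   # wide integer accumulator
    return acc * scale_a * scale_b             # back to float scale

a = [0.5, -1.2, 0.8, 0.1]
b = [1.0, 0.4, -0.6, 2.0]
exact = sum(x * y for x, y in zip(a, b))
approx = int8_dot(a, b, scale_a=1.2 / 127, scale_b=2.0 / 127)
assert abs(exact - approx) < 0.05   # small quantization error
```

The integer multiplies are what the TOPS figure counts; the accuracy cost is the small quantization error checked at the end.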
This efficient, flexible allocation of AI compute and graphics capability fits edge AI scenarios, automobiles included, where demand for both AI computing and graphics processing is heavy, optimizing the performance and energy consumption of core automotive chips that must serve both driving and cockpit workloads under a central-compute model. In addition, through upgraded hardware-level virtualization, the E-Series GPU IP supports task isolation across up to 16 virtual machines, enabling asynchronous parallel processing of AI, graphics, UI, and other tasks, and meeting the automotive requirements of multi-system collaboration in intelligent cockpits and multi-task parallelism in autonomous driving. With "highly flexible and efficient computing power" at its core, the E-Series GPU not only meets the compute requirements of future intelligent vehicle processors but also pushes the intelligent experience of automobiles further forward.
A longer product life cycle: GPU programmability breaks the total-cost dilemma
As intelligent driving and intelligent cockpit technologies spread from flagship models to mid- and low-end models, automakers' strict control of chip costs is pushing the industry beyond the limits of the traditional development model. According to industry data, developing a traditional automotive computing chip costs 200 to 300 million US dollars, with an R&D cycle of 3 to 5 years. When an algorithm must be iterated, an NPU chip with a fixed functional architecture must be re-taped out, and each iteration costs 40% to 50% of the initial development. This "high investment, long cycle, low flexibility" model runs into obvious cost limits against automakers' goal of "covering a 10-year life cycle across multiple vehicle models with one chip".
Therefore, compared with NPU-based automotive chips whose architecture and functions are essentially fixed, automotive chips built on GPU IP can weather this total-cost pressure more comfortably: a GPU-based hardware design can, through its higher programmability, serve across algorithms, vendors, and vehicle models. By spreading high chip R&D costs over a larger number of market applications, GPU-based automotive chips achieve a longer product life cycle and greater application flexibility than NPU-based ones.
Of course, if a chip's GPU can itself flexibly apportion AI computing and graphics processing capability, the cost savings and product life cycle improve further. Take the programmable, generalized architecture that Imagination promotes in its GPU IP products, and the E-Series GPU built on it: through software-defined hardware design, the chip's hardware life cycle can be extended beyond 10 years. When an AI model is upgraded, only a few months of software adaptation are needed to complete the iteration, a dramatic reduction compared with the traditional NPU re-tape-out path. This unified compute-unit design can also be reused across major scenarios such as automotive and industrial, reducing hardware design costs by 40% and significantly shortening the payback period on R&D investment.
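The cost argument in the preceding paragraphs can be made concrete with back-of-envelope arithmetic using the figures quoted in the text (initial development of 200-300 million dollars, each NPU re-tape-out at 40-50% of that). The iteration count and the per-upgrade software cost below are illustrative assumptions, not vendor data.

```python
# Back-of-envelope total-cost comparison over a ~10-year life cycle.
# Figures in $M; the initial cost and re-tape-out fraction come from
# the industry numbers quoted in the text, everything else is assumed.

initial_dev = 250        # midpoint of the quoted $200-300M development cost
iterations = 3           # assumed major algorithm upgrades over 10 years

# Fixed-function NPU: each upgrade needs a re-tape-out at ~45% of initial
npu_total = initial_dev + iterations * 0.45 * initial_dev

# Programmable GPU: upgrades land as software adaptation (assumed $10M each)
gpu_total = initial_dev + iterations * 10

print(f"fixed NPU : ${npu_total:.0f}M")
print(f"prog. GPU : ${gpu_total:.0f}M")
```

Under these assumptions the fixed-architecture path roughly doubles the initial outlay over the life cycle, while the programmable path adds only a small software increment, which is the amortization effect the text describes.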
Open software ecosystem: One-time development, multi-scenario deployment
Software ecosystems and adaptability are becoming an important threshold for high-performance computing of every kind. Nvidia's CUDA ecosystem has given it a significant edge in AI computing and automotive chips; for other automotive and edge AI chip makers to win a larger market share, their core computing-unit IP providers need to build a more open software ecosystem. As standards organizations established for this purpose continue to form and grow, the major players among them are helping the new generation of automotive chip developers solve the software-ecosystem problem.
For instance, Imagination has built an open, future-facing system whose core advantage is "one-time development, multi-scenario deployment". Its compute can be invoked directly through mainstream APIs such as OpenCL and Vulkan, and developers can migrate workloads to the Neural Cores in E-Series GPUs using toolchains such as oneAPI and Apache TVM. This programmability not only cuts cross-platform development costs significantly but also gives devices the flexibility to adapt to future algorithmic change. Facing the rapid iteration of cutting-edge applications such as generative AI and multimodal interaction, the E-Series GPU needs no hardware iteration: it can adapt quickly through software upgrades alone, ensuring the product keeps meeting emerging demands.
In autonomous driving, Imagination's GPU IP likewise follows "one-time development, multi-scenario deployment". By combining an FP16 pipeline with efficient compute libraries such as imgBLAS and imgNN, processing of sensor data such as radar point clouds and visual SLAM is significantly accelerated, effectively offloading the CPU and NPU. Meanwhile, compatibility with open standards such as OpenCL and Vulkan, together with CoreAVI safety drivers, ensures real-time response and stable operation in complex scenarios, letting the same technical capability reach intelligent cockpits, autonomous driving, and beyond, and consolidating the technical foundation for cross-scenario deployment.
Conclusion
Judging from the architectural innovations in Imagination's E-Series GPUs and related products, AI chips for edge applications such as smart cars are reshaping their technical logic and have arguably kicked off an AI computing revolution at the edge. In intelligent vehicles, the technological moat built from four key elements (functional safety, efficient and flexible computing power, life-cycle management, and an open software ecosystem) is driving the industry to shift from "hardware stacking" to "intelligent evolution". In broader edge AI scenarios, this "software-defined hardware" concept is helping fields such as consumer electronics, the industrial Internet of Things, smart cities, and intelligent business fully embrace AI technology.
The deeper significance of this transformation lies in breaking down the technological barriers between cloud, edge, and device. When a smart car's GPU architecture can support a smart city's traffic-dispatching algorithms through a software upgrade, and when the computing units of industrial equipment can be reused in consumer-electronics AI interaction scenarios, edge AI is evolving from a single-function module into a "growable intelligent agent". Evolvable edge chips will become the "digital brain" of all smart devices, just as CPUs are today.