Unlocking the potential of AI: the Arm computing platform builds the future of computing and storage
Source: Arm. Author: Ma Jian, Vice President of Business Development, Internet of Things.

We are in the early stages of an exciting technological revolution in artificial intelligence (AI). With the accelerated evolution of natural language processing, multimodal large models, and generative AI, AI is reshaping industries at an unprecedented pace. IDC forecasts that global data volume will grow from 159.2 zettabytes in 2024 to more than 384.6 zettabytes in 2028, a compound annual growth rate of 24.4%. By 2028, 37% of data is expected to be generated directly in the cloud, with the remainder generated at the edge and on end devices. Faced with this proliferation of data at the edge, efficient data processing, low-latency transmission, and intelligent, secure storage are becoming the industry's focus. Future computing architectures will not only have to deliver more computing power; they will also have to integrate more closely with storage systems so that AI models run efficiently while data management and access are optimized.

Two directions are visible in current AI development. On the one hand, large models are evolving toward artificial general intelligence (AGI), exploring new areas such as multimodal and physical AI and continuing to push the limits of computing power. On the other hand, to drive full-scale deployment of large models, the industry has begun pursuing deep optimization and customization for vertical domains, so that large models can enter thousands of industries and adapt to scenarios as different as mobile devices, edge computing, and cloud deployment.
The launch of DeepSeek has had a profound impact on the global AI market. As an open, innovative technology, it not only demonstrates the optimization potential of AI in training and inference, but also greatly improves the efficiency of large-scale deployment, proving that large models can run stably in lower-cost, higher-performance environments. This achievement is significant for promoting the large-scale adoption of AI in enterprise applications and edge computing.

Arm computing platform: facilitating optimized AI deployment from cloud to edge
In the early stages of AI development, the data center, as the core venue for model training and early inference, faces unprecedented challenges. Traditional general-purpose chips struggle with compute-intensive AI workloads and cannot meet the AI era's urgent needs for high performance, low power consumption, and flexible scalability. Against this backdrop, the Arm computing platform, with its advanced technology, has opened a new paradigm for next-generation AI cloud infrastructure. From the Arm Neoverse Compute Subsystems (CSS) and the Arm Total Design ecosystem initiative to the Chiplet System Architecture (CSA), Arm has built an integrated approach spanning technology and ecosystem, providing efficient, flexible, and scalable solutions for AI data center workloads while helping partners focus on product differentiation and accelerate time to market.
AI inference is the key to unlocking value from AI, and it is rapidly expanding from the cloud to the edge, reaching every corner of the world. In edge AI, Arm leverages its distinctive technological and ecosystem strengths to keep innovating, so that the smart IoT and consumer electronics ecosystems can run the right workloads in the right place at the right time. To meet growing edge AI workload demands, Arm recently announced an edge AI computing platform based on the new Armv9 ultra-efficient Cortex-A320 CPU and the Ethos-U85 AI accelerator, which natively supports Transformer networks. The platform deeply integrates the CPU and the AI accelerator, delivering an eightfold improvement in machine learning (ML) compute performance over last year's Cortex-M85 and Ethos-U85 based platform, a breakthrough in AI compute that enables edge AI devices to run models with more than 1 billion parameters.
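To see why running a 1-billion-parameter model at the edge is primarily a memory problem, a rough back-of-envelope calculation helps (purely illustrative arithmetic, not figures from Arm; the quantization widths are common industry choices, not platform specifications):

```python
# Approximate memory needed just to hold the weights of a 1B-parameter
# model at several common quantization widths. This illustrates why edge
# memory capacity and bandwidth are the limiting factors.

def weight_footprint_gib(params: int, bits_per_weight: int) -> float:
    """Return the weight storage footprint in GiB."""
    return params * bits_per_weight / 8 / 2**30

PARAMS = 1_000_000_000  # a 1B-parameter model, per the article's claim

for bits, label in [(32, "FP32"), (16, "FP16"), (8, "INT8"), (4, "INT4")]:
    print(f"{label}: {weight_footprint_gib(PARAMS, bits):.2f} GiB")
```

Even at 4-bit quantization the weights alone approach half a gibibyte, which is why the Cortex-A320's larger addressable memory and higher bandwidth for the Ethos-U85 matter.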

Figure: The Arm edge AI computing platform supports running on-device AI models with more than 1 billion parameters
The newly released ultra-efficient Cortex-A320 not only provides the Ethos-U85 with higher memory capacity and bandwidth, enabling large models to execute on the Ethos-U85, but also supports a larger addressable memory space and more flexible management of multiple levels of memory access latency. The combination of the Cortex-A320 and Ethos-U85 is therefore well suited to running large models and to addressing the memory capacity and bandwidth challenges posed by edge AI tasks. In addition, the Cortex-A320 benefits from Armv9's enhanced AI compute capabilities and security features, including Secure EL2, Pointer Authentication and Branch Target Identification (PACBTI), and the Memory Tagging Extension (MTE). These features have already been widely adopted in other markets, and Arm now brings them to the IoT and edge AI, providing outstanding, flexible AI performance while better isolating software workloads and protecting against memory-safety violations, improving overall system security.
Storage in the age of AI: a comprehensive upgrade of storage, computing, and security capabilities

As demand for AI computing continues to grow, the cloud and the edge place higher requirements not only on compute power but also on the performance, density, real-time behavior, and power consumption of the storage system. In traditional computing architectures, storage and computing are largely separate, and storage devices serve only as data repositories: data must move frequently between storage and compute nodes, creating a bottleneck between the two. In the AI era, to meet the needs of real-time data analysis, intelligent management, and efficient access, it has become critical to place storage closer to the compute unit, or to give the storage itself compute capability, ensuring that AI tasks execute efficiently in the most appropriate location. From cloud to edge, AI computing places different requirements on storage throughput, latency, power consumption, security, and host manageability (for example, Open Channel). The storage controller, and the firmware running on the Arm CPU inside it, play an essential role in supporting these differentiated AI storage needs.
Figure: Arm's rich IP platform solutions deliver leading performance and power efficiency for AI storage
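The data-movement bottleneck described above can be made concrete with a toy calculation (an illustration only, not Arm's implementation; the record size, record count, and query selectivity are assumed values):

```python
# Compare bytes moved across the storage interface when a filtering query
# runs on the host versus inside the storage device (near-data processing).

RECORD_SIZE = 128          # bytes per record (assumed)
NUM_RECORDS = 1_000_000    # records scanned by an analytics query (assumed)
SELECTIVITY = 0.01         # fraction of records the query needs (assumed)

# Host-side filtering: every scanned record crosses the storage link.
host_bytes = NUM_RECORDS * RECORD_SIZE

# Near-storage filtering: a controller CPU (e.g. an Arm core inside the
# device) applies the predicate locally and returns only matching records.
near_bytes = int(NUM_RECORDS * SELECTIVITY) * RECORD_SIZE

print(f"host filtering: {host_bytes / 2**20:.1f} MiB transferred")
print(f"near-storage:   {near_bytes / 2**20:.1f} MiB transferred")
print(f"reduction:      {host_bytes // near_bytes}x")
```

With a 1% selectivity, filtering inside the device cuts transferred data by two orders of magnitude, which is the basic motivation for giving storage controllers compute capability.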
In fact, as a cornerstone of data storage and network control, Arm has long provided high-performance, low-power, secure, and reliable solutions for storage controllers and devices worldwide, including:
Arm Cortex-R series real-time processors offer extremely low interrupt latency and fast, deterministic real-time response, and are widely used in storage devices;
Arm Cortex-M series embedded processors are a popular choice for back-end flash and media control, and their support for custom instructions lets customers differentiate through deep optimization for their particular NAND media;
Arm Cortex-A series application processors are pipelined for high throughput and maximum processing performance, backed by a solid ecosystem of ML and data-processing software and a rich choice of operating systems;
The Arm Ethos-U AI accelerator natively accelerates Transformer networks and scales up to 2,048 MACs per cycle, helping the storage controller itself become smarter.
In addition, Neoverse is tailored for data centers. We are beginning to see innovative designs around CXL (Compute Express Link) that combine the Arm Coherent Mesh Network (CMN) with Neoverse to enable composable memory expansion and incorporate near-storage computing concepts, reducing data movement.
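A back-of-envelope sketch shows what a 2,048-MAC-per-cycle NPU configuration implies for peak throughput (the clock frequency below is an assumed example value, not a figure from the article; each MAC is conventionally counted as two operations):

```python
# Peak-throughput arithmetic for an NPU issuing 2048 MACs per clock cycle.

MACS_PER_CYCLE = 2048      # widest Ethos-U configuration named in the text
CLOCK_HZ = 1_000_000_000   # assumed 1 GHz clock, for illustration only
OPS_PER_MAC = 2            # one multiply plus one accumulate

peak_tops = MACS_PER_CYCLE * CLOCK_HZ * OPS_PER_MAC / 1e12
print(f"peak throughput: {peak_tops:.3f} TOPS")
```

At an assumed 1 GHz this works out to roughly 4 TOPS of peak integer throughput; actual delivered performance depends on the clock, memory bandwidth, and operator mix.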
Working with the ecosystem to build the future of AI computing and storage

While focusing on delivering leading technologies and products, Arm is also committed to working with ecosystem partners to advance the storage industry, and platforms based on the Arm architecture are being widely adopted by industry-leading storage companies to optimize their storage solutions. For example, Solidigm's newly released 122TB PCIe SSD, the D5-P5336, significantly improves the energy efficiency, storage density, and performance of AI data centers; its storage controller uses an Arm Cortex-R CPU to improve the real-time behavior and latency determinism of reads and writes. Silicon Motion's SM2508 controller for AI PCs uses the Arm Cortex-R8 and Cortex-M0 to achieve breakthroughs in energy efficiency and data throughput, and its SM2264XT-AT is the industry's first automotive PCIe Gen4 controller, enabling data access for mixed-criticality workloads with enhanced virtualization and a 30% energy savings. The XP2300, ORCA 4836, and UNCIA 3836 SSDs, built on Arm Cortex-R CPUs, are widely used in AI PCs, servers, cloud computing, distributed storage, edge computing, and other scenarios thanks to their large capacity and high performance, meeting the requirements for local deployment of AI. In the domestic storage market, leading companies such as DUP Micro, Lianyun Technology, Yixin Technology, Turner Fei, Deyi Microelectronics, and Yingren Technology have also widely adopted Arm technology in their SSD controller and device solutions.
To date, nearly 20 billion storage devices have been built on Arm architectures and platforms, including cloud and enterprise SSDs, in-vehicle SSDs, consumer SSDs, hard drives, and embedded flash devices. Storage devices powered by Arm technology continue to ship at a rate of roughly 3 million units per day.
With cutting-edge technical strength, a rich ecosystem, and deep experience in the storage industry, Arm continues to lead technological innovation and enable the development of computing and storage in the AI era. Arm will keep working with its partners to build a new future for AI-era computing and storage through the secure and efficient Arm computing platform.
Disclaimer: This article is reproduced from other platforms and does not represent the views or position of this website. If there is any infringement or objection, please contact us to remove it. Thank you! ChipSourceTek (矽源特科技)