

Arm Technology Forecast: Technology Trends for 2025 and Beyond
Date: February 6, 2025

Source: Arm Author: Arm
    Arm is constantly thinking about the future of computing. Whether it is the capabilities of the latest architectures or new technologies for chip solutions, everything Arm creates and designs is oriented toward how future technologies will be used and experienced.
    With its unique position in the technology ecosystem, Arm has a thorough understanding of the highly specialized, interconnected global semiconductor supply chain, spanning markets such as data centers, IoT, automotive, and smart devices. As a result, Arm has broad and deep insight into the future direction of technology and the major trends likely to emerge in the coming years.
    Based on this, Arm has made the following predictions for technology development in 2025 and beyond, covering all aspects of technology, from the future development of AI to chip design, to the major trends in different technology markets.


Rethinking chip design: chiplets will become a key component of solutions
    Traditional monolithic chip design is becoming increasingly difficult to sustain from both a cost and a physics perspective. The industry needs to rethink chip design and break away from traditional approaches. For example, there is a growing realization that not everything needs to be integrated on a single die, and new approaches such as chiplets are coming to the forefront as foundries and packaging companies explore new ways to push the limits of Moore's Law in new dimensions.
    Different technologies for implementing chiplets are gaining attention and are having a profound impact on chip architectures and microarchitectures. Architects need to understand the benefits of the different implementation technologies behind chiplets, including process nodes and packaging techniques, and leverage their features to improve performance and efficiency.
    Chiplet technology can already effectively address specific market needs and challenges, and it is expected to continue to evolve in the coming years. In the automotive market, chiplets help companies achieve automotive-grade certification during chip development, while helping to scale and differentiate chip solutions through different computing components. For example, compute-focused chiplets may differ in the number of CPU cores, while memory-focused chiplets differ in the size and type of memory. System integrators can therefore combine and package different chiplets to develop a large number of highly differentiated products.


"Recalibrate" Moore's Law
    Under the classic formulation of Moore's Law, the number of transistors on a single chip has reached the billions, with performance doubling and power consumption halving on a regular cadence. However, this constant pursuit of more transistors, higher performance, and lower power consumption on a single chip is no longer sustainable. The semiconductor industry needs to rethink and recalibrate Moore's Law and what it means for the industry.
    One change is that performance is no longer the only key metric in chip design; performance per watt, performance per unit area, and total cost of ownership (TCO) are becoming core metrics. In addition, new metrics should be introduced that focus on system integration challenges (the biggest challenges for development teams) to ensure that performance does not degrade when IP is integrated into the system-on-chip (SoC) and the overall system. This will require continuous performance optimization throughout chip development and deployment. As the tech industry moves massively toward more efficient computing for AI workloads, these metrics will only grow in importance.
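These efficiency-centric metrics are simple ratios, which a short sketch can make concrete. All figures below are hypothetical, chosen only to show how a design that loses on raw performance can still win on efficiency and TCO:

```python
from dataclasses import dataclass

@dataclass
class ChipMetrics:
    """Hypothetical figures for comparing designs beyond raw performance."""
    perf_tops: float             # throughput, tera-operations per second
    power_w: float               # power draw, watts
    area_mm2: float              # die area, square millimeters
    unit_cost: float             # cost per chip
    lifetime_energy_cost: float  # energy cost over the deployment lifetime

    @property
    def perf_per_watt(self) -> float:
        return self.perf_tops / self.power_w

    @property
    def perf_per_mm2(self) -> float:
        return self.perf_tops / self.area_mm2

    @property
    def total_cost_of_ownership(self) -> float:
        return self.unit_cost + self.lifetime_energy_cost

a = ChipMetrics(perf_tops=40, power_w=10, area_mm2=80, unit_cost=50, lifetime_energy_cost=30)
b = ChipMetrics(perf_tops=60, power_w=25, area_mm2=90, unit_cost=45, lifetime_energy_cost=70)

# Chip B wins on raw performance, but A wins on efficiency and TCO.
print(a.perf_per_watt, b.perf_per_watt)                          # 4.0 2.4
print(a.total_cost_of_ownership, b.total_cost_of_ownership)      # 80 115
```

The point of the sketch is only that ranking designs by `perf_tops` alone and ranking them by the efficiency ratios can give opposite answers.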


Chip solutions enable true business differentiation
    To achieve true business differentiation with chip solutions, enterprises are pursuing increasingly specialized chips. This is reflected in the growing popularity of compute subsystems: core computing components that enable companies of all sizes to differentiate and personalize their solutions, with each subsystem configured to perform or support a specific computing task or specialized function.


The importance of standardization is increasing
    Standardized platforms and frameworks are critical to ensuring that the ecosystem can deliver differentiated products and services that not only add real business value but also save time and cost. With the advent of chiplets that integrate different computing components, standardization has never been more important, as it allows different hardware from different vendors to work together seamlessly. Arm has worked with more than 50 technology partners to develop the Arm Chiplet System Architecture (CSA) to date, and as more partners join, Arm and its partners will work together to standardize the chiplet market. In the automotive industry, this aligns with SOAFEE's founding goal of decoupling hardware from software in software-defined vehicles (SDVs), thereby increasing flexibility and interoperability.


The ecosystem will collaborate more closely around chips and software than ever before
    As the complexity of chips and software continues to increase, no single company can handle every aspect of chip and software design, development, and integration. Deep cooperation within the ecosystem is therefore essential. Such partnerships give companies of all sizes a unique opportunity to offer different computing components and solutions based on their core competencies. This is particularly important for the automotive industry, which needs to bring together the entire supply chain, including chip suppliers, Tier 1 suppliers, OEMs, and software suppliers, to share their expertise, technologies, and products to define the future of AI-powered SDVs so that end users can enjoy the true potential of AI.


The rise of AI-enhanced hardware design
    The semiconductor industry will increasingly adopt AI-assisted chip design tools, using AI to optimize chip layout, power distribution, and timing closure. This approach not only improves performance results but also accelerates the development cycle of optimized chip solutions, enabling smaller companies to enter the market with specialized chips. AI will not replace human engineers, but it will become an important tool for dealing with the increasing complexity of modern chip design, especially in the design of energy-efficient AI accelerators and edge devices.


AI inference continues to evolve
    In the coming year, AI inference workloads will continue to increase, which will help ensure the widespread and lasting adoption of AI. This trend is driven by the growing number of AI-enabled devices and services. In fact, much everyday AI inference, such as text generation and summarization, can be done on smartphones and laptops, providing users with a faster and safer AI experience. To support this growth, such devices need technologies that enable faster processing, lower latency, and efficient power management. Two key features of the Armv9 architecture, SVE2 and SME2, work together on the Arm CPU to enable it to execute AI workloads quickly and efficiently.
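Whether a given device's CPU actually exposes these features can be checked at runtime; on Linux, the kernel lists per-CPU feature flags in /proc/cpuinfo. A minimal parsing sketch, assuming the lowercase flag names `sve2` and `sme2` as reported by recent arm64 kernels (the sample text below is fabricated for illustration):

```python
def parse_arm_features(cpuinfo_text: str) -> set[str]:
    """Collect the feature flags from a Linux /proc/cpuinfo dump."""
    features: set[str] = set()
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            # e.g. "Features : fp asimd sve sve2 sme sme2"
            features.update(line.split(":", 1)[1].split())
    return features

# Fabricated sample; on a real device, read the text from /proc/cpuinfo.
sample = """processor : 0
Features : fp asimd aes sha2 sve sve2 sme sme2
"""
feats = parse_arm_features(sample)
print("sve2" in feats, "sme2" in feats)  # True True
```

Production code would more commonly query the ELF hwcaps (e.g. via `getauxval` in C) rather than parse text, but the flag names are the same.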


Edge-side AI is emerging
    In 2024, many AI workloads shifted to running at the edge (that is, on end devices) rather than being processed in large data centers. This shift not only saves businesses electricity and cost, but also brings privacy and security benefits to consumers.
    By 2025, we are likely to see advanced hybrid AI architectures that effectively distribute AI tasks between edge devices and the cloud. In these systems, AI algorithms on edge devices identify important events first, and cloud models then step in to provide additional information. The decision to run an AI workload on the device or in the cloud will depend on considerations such as available energy, latency requirements, privacy concerns, and computational complexity.
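The placement decision described above can be sketched as a simple routing policy. The branch order, the `hybrid` escalation path, and the parameter names are illustrative assumptions, not any production scheduler:

```python
def route_ai_task(complexity: float, edge_capacity: float,
                  privacy_sensitive: bool, needs_deep_analysis: bool) -> str:
    """Toy policy for placing an AI task in a hybrid edge/cloud system."""
    if privacy_sensitive:
        # Data must stay local (possibly served by a smaller on-device model).
        return "edge"
    if complexity > edge_capacity:
        # Model exceeds the device's compute/memory budget.
        return "cloud"
    if needs_deep_analysis:
        # Edge model flags the event; a cloud model adds context afterwards.
        return "hybrid"
    return "edge"  # default: lower latency, cost, and energy

print(route_ai_task(0.5, 1.0, False, True))   # hybrid
print(route_ai_task(5.0, 1.0, False, False))  # cloud
```

The `hybrid` branch mirrors the event-detection-then-enrichment pattern from the paragraph above: the edge does the always-on work, the cloud is invoked only when it adds value.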
    Edge AI workloads represent a trend toward AI decentralization that enables smarter, faster, and more secure processing on devices near the data source, which is particularly critical for markets that require higher performance and localized decision-making.


Small Language Model (SLM) accelerated evolution
    As technology advances, smaller, more compact models with greater compression, more aggressive quantization, and fewer parameters are evolving rapidly. Typical examples include Llama, Gemma, and Phi-3, which are not only more cost-effective and efficient but also easier to deploy on devices with limited computing resources. Arm expects the number of such models to continue to increase in 2025. Such models can run directly on edge devices, which not only improves performance but also enhances privacy.
    Arm expects that SLMs will increasingly be used for on-device language and interaction tasks, as well as vision-based tasks such as event interpretation and scanning. In the future, SLMs will distill more experience and knowledge from large models in order to develop local expert systems.
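Extracting knowledge from a large model into a small one is commonly done via knowledge distillation, where the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of the standard distillation objective (KL divergence between softened distributions) on toy logits; the numbers are fabricated for illustration:

```python
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Numerically stable softmax with a temperature knob."""
    z = [x / temperature for x in logits]
    m = max(z)
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits: list[float],
                      student_logits: list[float],
                      temperature: float = 2.0) -> float:
    """KL(teacher || student) over temperature-softened distributions --
    the core term of the classic knowledge-distillation loss."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs zero loss; a mismatched one does not.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # True
```

In practice this term is combined with an ordinary cross-entropy loss on ground-truth labels, and the temperature softens both distributions so the student also learns from the teacher's relative rankings of wrong answers.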


Multimodal AI models that can hear, see, and understand more are emerging
    Currently, large language models (LLMs) like GPT-4 are trained primarily on human text. When these models are asked to describe a scene, they respond only with text. But multimodal AI models that span text, images, audio, sensor data, and more are now beginning to emerge. These multimodal models will perform more complex AI tasks through audio models that can hear, vision models that can see, and interaction models that can understand relationships between people and between people and objects. This will give AI the ability to perceive the world much as humans do: to hear, see, and experience.


The application of agents continues to expand
    Today, when a user interacts with an AI, it is usually a single AI trying to complete the requested tasks on its own. With agentic AI, when the user specifies a task, an orchestrating agent delegates it to a network of many agents or AI bots, similar to a gig economy for AI. Industries such as customer support and programming assistance have already begun to use agents. As AI continues to become more connected and intelligent, Arm expects agents to see significant growth across more industries in the coming year. This will lay the foundation for the next stage of the AI revolution, making our lives and work more productive.
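The delegation pattern described above can be sketched as an orchestrator routing tasks to specialist agents. The task types and agent names below are hypothetical placeholders; real agentic frameworks add planning, tool use, and feedback loops on top of this routing step:

```python
# Hypothetical registry mapping task types to specialist agents.
AGENT_REGISTRY = {
    "code_review": "coding_agent",
    "refund_request": "support_agent",
    "meeting_summary": "writing_agent",
}

def delegate(tasks: list[str]) -> dict[str, list[str]]:
    """An orchestrator assigns each task to the specialist agent
    registered for its type, falling back to a generalist agent."""
    assignments: dict[str, list[str]] = {}
    for task in tasks:
        agent = AGENT_REGISTRY.get(task, "generalist_agent")
        assignments.setdefault(agent, []).append(task)
    return assignments

print(delegate(["code_review", "meeting_summary", "book_travel"]))
```

The "gig economy" analogy from the paragraph above corresponds to the registry: work is farmed out to whichever agent advertises the matching skill, with a generalist absorbing everything unclaimed.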


AI enables hyper-personalization, enabling more powerful, intuitive, and intelligent applications
    Driven by AI, more powerful and personalized applications will emerge on devices, such as smarter, more intuitive personal assistants and even personal doctors. Applications will shift from simply responding to user requests to proactively making recommendations based on the user and their environment, enabling AI hyper-personalization. This will lead to an exponential increase in the amount of data being used, processed, and stored, so industry and government will need to adopt stricter security measures and provide regulatory guidance.


Healthcare services will be a key AI use case
    Healthcare has become one of the major use cases for AI, and this trend will accelerate in 2025. Use cases for AI in healthcare include predictive healthcare, digital record keeping, digital pathology, vaccine development, and gene therapy to help treat diseases. In 2024, DeepMind's founder and a fellow researcher were awarded the Nobel Prize in Chemistry, alongside other scientists, for using AI to predict complex protein structures with roughly 90 percent accuracy. Meanwhile, research has shown that AI can shorten drug discovery cycles by 50 percent. These AI innovations bring significant benefits to society, accelerating the development and production of life-saving drugs. In addition, by combining mobile devices, sensors, and AI, users will have access to better health data and can make more informed decisions about their personal health.


Push for "green AI"
    AI will accelerate its integration into sustainable practices. In addition to the use of energy-efficient technologies, "green AI" strategies will also receive increasing attention. For example, in response to increasing energy demand, AI model training may increasingly be conducted in regions with lower carbon emissions and during periods of low grid load, which may become standard operations in the future. By balancing the energy load on the grid, this approach will help ease peak demand pressure and reduce overall carbon emissions. As a result, Arm expects more cloud service providers to launch model training scheduling services for energy efficiency optimization.
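A carbon-aware training scheduler of the kind described above reduces, at its core, to choosing the region and time window with the lowest grid carbon intensity. A toy sketch with fabricated region data (the region names, numbers, and tie-breaking rule are illustrative assumptions):

```python
def pick_training_slot(slots: list[dict]) -> dict:
    """Choose the (region, hour) slot with the lowest grid carbon
    intensity, breaking ties toward the lower grid load."""
    return min(slots, key=lambda s: (s["carbon_g_per_kwh"], s["grid_load"]))

# Fabricated candidate slots for a training job.
slots = [
    {"region": "region-a", "hour": 14, "carbon_g_per_kwh": 420, "grid_load": 0.9},
    {"region": "region-b", "hour": 3,  "carbon_g_per_kwh": 90,  "grid_load": 0.4},
    {"region": "region-c", "hour": 3,  "carbon_g_per_kwh": 90,  "grid_load": 0.6},
]
best = pick_training_slot(slots)
print(best["region"], best["hour"])  # region-b 3
```

A real scheduler would weigh data-residency rules, transfer costs, and job deadlines as well, but the selection step is exactly this kind of minimization over forecast carbon and load data.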
    Other approaches include: optimizing existing AI models to improve energy efficiency, reusing or repositioning pre-trained AI models, and adopting "green coding" to minimize energy consumption. In the "green AI" wave, we may also see the introduction of voluntary standards, followed by the gradual formation of formal standards to promote sustainable development of AI.


The integration of renewable energy and AI
    The combination of renewable energy and AI is expected to drive innovation across the energy industry. Currently, renewable energy lacks reliability and flexibility to balance peak loads, which limits grid decarbonization. Arm expects AI to help solve these problems by being able to more accurately predict energy demand, optimize grid operations in real time, and improve the efficiency of renewable energy sources. Energy storage solutions will also benefit from AI's ability to optimize battery performance and life, which is critical to balancing the intermittent nature of renewable energy.
    The introduction of AI will not only help solve the difficult problem of forecasting and balancing peak demand, but will also identify maintenance needs predictively, thereby reducing energy supply disruptions. Smart grids can use AI to manage power flows in real time, effectively reducing energy consumption. The deep integration of AI with renewable energy is expected to greatly improve the efficiency and sustainability of the energy system.


Heterogeneous computing meets diverse AI needs
    In a wide range of AI applications, especially in the Internet of Things, different AI needs will require multiple computing engines. To maximize the deployment of AI workloads, CPUs will remain key to deployment on existing devices. New IoT devices will feature larger memory and higher-performance Cortex-A CPUs to enhance AI performance. Embedded accelerators such as the newly launched Ethos-U NPU will be used to accelerate low-power machine learning (ML) tasks and provide energy-efficient edge inference for a wider range of use cases, such as industrial machine vision and consumer robotics.
    Essentially, in the near term, we will see multiple computing components being used to meet the needs of specific AI applications. This trend will continue to emphasize the need to develop common tools, software technology libraries, and frameworks so that application developers can take full advantage of the capabilities of the underlying hardware. There is no "one size fits all" solution for edge AI workloads, so it is important to provide a flexible computing platform for the ecosystem.
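The "right engine for each workload" idea can be sketched as a simple dispatch rule on a hypothetical heterogeneous SoC. The operator names, the power threshold, and the two-engine split are illustrative assumptions, not any vendor's actual scheduler:

```python
def pick_engine(op: str, power_budget_mw: float) -> str:
    """Map a workload to a compute engine on a hypothetical SoC with a
    CPU and an embedded NPU: the NPU handles supported ML operators under
    tight power budgets; the CPU is the universal fallback."""
    npu_supported_ops = {"conv2d", "matmul", "attention"}
    if op in npu_supported_ops and power_budget_mw < 500:
        return "npu"  # embedded accelerator: energy-efficient ML inference
    return "cpu"      # general-purpose code, or ops the NPU cannot run

print(pick_engine("conv2d", 200))      # npu
print(pick_engine("parse_json", 200))  # cpu
```

The "no one size fits all" point from the paragraph above shows up here as the fallback path: the same operator can land on different engines depending on the device's capabilities and power budget, which is why common tools and frameworks across engines matter.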


The growing popularity of virtual prototyping is revolutionizing chip and software development processes in the automotive industry
    Virtual prototyping accelerates chip and software development by allowing companies to start developing and testing software before physical chips are ready. This is particularly important for the automotive industry, where vehicle development cycles can be shortened by up to two years once virtual platforms are introduced.
    In 2025, amid the continued transformation of chip and software development processes, Arm expects more companies to launch their own virtual platforms. These virtual platforms will operate seamlessly, leveraging the ISA parity provided by the Arm architecture to ensure consistency between cloud and edge architectures. With ISA parity, the ecosystem can build virtual prototypes in the cloud and then seamlessly deploy them at the edge.
    This will result in significant time and cost savings while giving developers more time to use software to improve performance. In 2024, Arm introduced the Armv9 architecture to the automotive market for the first time, and Arm expects more developers to take advantage of ISA parity in the automotive space.


End-to-end AI enhances the performance of autonomous driving systems
    Generative AI technology is rapidly being applied to end-to-end models, promising to solve scalability problems faced by traditional autonomous driving (AD) software architectures. Thanks to end-to-end self-supervised learning, autonomous driving systems will be able to generalize to situations they have never encountered before. This new approach will effectively accelerate the expansion of the Operational Design Domain (ODD) to deploy autonomous driving technology to diverse environments such as highways and urban traffic at a faster speed and lower cost.


More hands-free driving, but more monitoring of the driver
    As vehicle regulations for L2+ Driver Control Assistance Systems (DCAS) and Level 3 Automated Lane Keeping Systems (ALKS) are harmonized around the world, these advanced DCAS and ALKS features will be deployed faster and more widely. Leading automakers are investing in the necessary hardware so that these features can be offered through subscription services throughout the vehicle's life cycle.
    To prevent drivers from misusing autonomous driving systems, regulations and New Car Assessment Programs (NCAP) are increasingly focusing on more sophisticated in-vehicle monitoring, such as Driver Monitoring Systems (DMS). In Europe, for example, Euro NCAP's 2026 rating scheme will encourage deep integration of direct-sensing (such as camera-based) DMS with advanced driver assistance systems (ADAS) and autonomous driving functions, so that the vehicle can respond appropriately to varying degrees of driver disengagement.


Smartphones remain the dominant consumer electronics device for decades to come
    Smartphones will continue to be the dominant consumer electronics device for the foreseeable future. In fact, the smartphone is likely to remain the device of choice for consumers for decades to come, and it will be difficult for other devices to challenge it meaningfully. With the widespread adoption of Armv9 in mainstream smartphones, the new flagship smartphones of 2025 are expected to offer stronger computing power and better application experiences, further consolidating the smartphone's position as the device of choice. That said, consumers will clearly use different devices for different needs, with smartphones used primarily for apps, web browsing, and communication, while laptops are still seen as the go-to device for productivity and work tasks.
    It is also worth noting that AR wearables such as smart glasses are gradually becoming the ideal partner for smartphones. The key to the continued popularity of smartphones is their ability to evolve, from apps to cameras to games, and now the industry is witnessing the emergence of new applications for AR, and smartphones are beginning to support the AR experience of wearable devices.


The continuous evolution of technology miniaturization
    Across the technology industry, devices are becoming smaller and sleeker, from AR smart glasses to increasingly compact wearables. This trend is the result of several factors. First, energy-efficient technologies give devices the performance required to support key functions and experiences. Second, lightweight technologies make smaller devices possible; AR smart glasses, for example, use ultra-thin silicon carbide optics that not only achieve high-definition display but also greatly reduce the thickness and weight of the device. In addition, new small language models are enhancing the AI experience on these small devices, making them more immersive and interactive. Looking ahead to next year, the combination of energy-efficient, lightweight hardware and small AI models will accelerate, driving the development of smaller, more powerful consumer electronics devices.


Windows on Arm continues to heat up
    In 2024, the Windows on Arm (WoA) ecosystem made significant progress, with major applications launching Arm-native versions. In fact, the average Windows user now spends 90% of their time in Arm-native apps. A recent example is Google Drive, which was released as an Arm-native app in late 2024. Arm expects this momentum to continue in 2025, with WoA becoming increasingly attractive to developers and consumers as Arm-native apps that are critical to the everyday user experience, including Google Chrome, achieve significant performance improvements.





    Disclaimer: This article is reproduced from another platform and does not represent the views or positions of this website. If there is any infringement or objection, please contact us to have it removed. Thank you!
    矽源特科技ChipSourceTek

Copyright © 2017 ShenZhen ChipSourceTek Technology Co., Ltd. All Rights Reserved. 粤ICP备17060179号