Top 10 Technology Trends in the Semiconductor Industry
Release time: 2023-06-26
1. The "scale" commercial use of 5G will drive the development of 5G mobile phones, base stations, VR/AR equipment, and emerging applications such as Industry 4.0, autonomous driving, and medical care
Compared with 4G, 5G is to mobile networks what China's high-speed rail is to conventional railways: high speed, low latency, and large capacity are its defining characteristics. 3GPP defines three major 5G technology and application scenarios: enhanced mobile broadband (eMBB), mainly for 3D/UHD video, VR/AR, and similar applications; massive machine-type communication (mMTC), mainly for smart wearables, smart homes, smart cities, the Internet of Vehicles, and the industrial IoT; and ultra-reliable low-latency communication (uRLLC), mainly for mission-critical applications such as autonomous driving, industrial automation, and mobile healthcare. As 5G technology matures and 5G networks reach large-scale commercial deployment, technologies such as AI, big data, and cloud computing will be developed and popularized in emerging areas including video games, VR/AR, AIoT, autonomous driving, smart cities, Industry 4.0, and medical imaging.
In 2020, 5G enters the large-scale commercial stage, which will first drive rapid development, technological innovation, and mass shipment of chips and electronic components for 5G phones, wireless base stations, and communication network systems, including modem and baseband chips for 5G phones, application processors, GPUs and AI accelerators, RF devices and filters, image sensors/cameras, and components such as antennas. Because 5G networks operate at higher frequencies (sub-6GHz and above), base station signals attenuate faster and travel shorter distances than 4G signals, forcing operators to deploy at least three times as many base stations as 4G to achieve full coverage. This will drive demand growth for baseband digital signal processing devices, RF devices, power amplifiers, antennas, and power management devices (a 5G base station consumes two to three times the power of a 4G one).
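The roughly threefold increase in site count follows directly from how path loss scales with carrier frequency. A minimal back-of-the-envelope sketch, assuming a typical 1.8 GHz 4G band against a typical 3.5 GHz sub-6GHz 5G band and pure free-space propagation (real deployments also depend on antenna gain, beamforming, and terrain):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard FSPL formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Assumed bands: a typical 4G carrier (1.8 GHz) vs. a typical 5G one (3.5 GHz).
f_4g, f_5g = 1800.0, 3500.0

# At equal allowable path loss, cell radius scales as 1/frequency, so the
# required site count (proportional to 1/radius^2) scales as (f2/f1)^2.
site_ratio = (f_5g / f_4g) ** 2
print(f"Approx. site-count ratio 5G/4G: {site_ratio:.1f}x")  # ~3.8x

# Sanity check: the same path loss at distances scaled by f_4g/f_5g.
assert abs(fspl_db(1.0, f_4g) - fspl_db(f_4g / f_5g, f_5g)) < 1e-9
```

Under these assumptions the cell radius shrinks by 1.8/3.5 and the site count grows by (3.5/1.8)² ≈ 3.8, consistent with the "at least three times" figure above.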
The high speed and low latency of 5G networks can solve the long-standing pain point of poor VR/AR user experience. "Electronic Engineering Seminar" predicts that this will drive a new round of VR/AR/MR enthusiasm: Facebook will invest heavily in head-mounted devices, and may even acquire chip companies to develop its own integrated hardware platform; industries such as electric power and manufacturing will deploy AR applications for inspection work, pairing them with 5G networks to better support remote information interaction; more than 30% of exhibitions and similar venues will provide VR/AR equipment, software, content, and services; and universities and training institutions will make greater use of VR/AR for course training.
5G can also be used to build dedicated private networks for manufacturing enterprises, boosting the development of Industry 4.0, the industrial IoT, and industrial big data while keeping enterprise data secure. In addition, 5G will accelerate the rollout of the Internet of Vehicles and ADAS/autonomous driving, and provide high-speed, stable, and secure data transmission for emerging applications such as telemedicine and medical imaging.
2. The shift of computing to the "edge" endows edge devices with more AI and computing capability, creating opportunities for SoC design companies while imposing higher PPA requirements
The decentralization and fragmentation of IoT application scenarios puts heavy pressure on transmission bandwidth and cloud computing capacity, forcing IoT terminal devices to process data locally. This demand has driven the rise of edge computing, and the performance of the microprocessors at the heart of edge devices has been enhanced accordingly for AI processing. Edge computing collects and analyzes data on IoT devices, makes fast inferences (or decisions), and then sends only a small amount of useful data to the cloud. This reduces latency, bandwidth consumption, and cost, and enables quick decision-making based on data analysis. Even when the system is offline, edge computing continues to operate, processing data in real time and determining which data should be sent to the cloud for further analysis.
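The data flow just described (infer locally, upload only what matters, keep working offline) reduces to a simple loop. A minimal sketch, where read_sensor, local_inference, and send_to_cloud are hypothetical stand-ins for a real device's sensor driver, on-device model, and uplink:

```python
import random
import time

ANOMALY_THRESHOLD = 0.9  # hypothetical confidence cutoff

def read_sensor() -> float:
    """Stand-in for a real sensor driver (hypothetical)."""
    return random.random()

def local_inference(sample: float) -> float:
    """Stand-in for an on-device model; returns an anomaly score."""
    return sample  # a real device would run a quantized NN here

def send_to_cloud(payload: dict) -> None:
    """Stand-in for an uplink call; invoked only for useful data."""
    print("uploading:", payload)

for _ in range(100):                        # bounded here; a device loops forever
    score = local_inference(read_sensor())  # decide locally, at the edge
    if score > ANOMALY_THRESHOLD:           # keep only the small useful subset
        send_to_cloud({"score": round(score, 3), "ts": time.time()})
    # Ordinary samples never leave the device, so the loop keeps
    # working in real time even with no network connection.
```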
As the heart of an IoT edge or terminal device, the system-on-chip (SoC) must not only deliver better performance but also keep power consumption and footprint as low as possible; in other words, it must achieve the best PPA (performance, power, area). Traditional general-purpose MCUs/MPUs/CPUs struggle to meet the PPA requirements of different application scenarios, while innovation in technology and business models in edge computing can unlock the potential of AI and computing power. Different application scenarios also have different requirements for software and AI algorithms: although adding AI inference on the edge side is technically feasible, customized chips are still needed to implement processors with AI-enhanced performance. Small and medium-sized enterprises and startups focus more on application software and AI algorithms, while large enterprises pay more attention to building the edge computing ecosystem; Huawei, for example, participates in or leads the formulation of industry standard protocols and hardware/software development environments.
In terms of IoT communication protocols, telecom operators around the world are pushing NB-IoT, especially in the Chinese market. LoRa, Zigbee, Bluetooth, and other protocols also have their own development paths and main application areas, but the coexistence of multiple standards and protocols will remain the norm in the IoT market. SoC design engineers and microprocessor developers must therefore account for compatibility with, and support for, multiple protocols.
3. "Heterogeneous" integration of wafer manufacturing. Dies of different process nodes are packaged together through 2.5D/3D stacking technology. Chiplet may become a new IP for chip design and manufacturing in the post-Moore era.
High-performance CPUs, smartphone application processors, GPUs, and FPGAs have always been the "early adopters" of the most advanced process nodes below 14nm. TSMC's 7nm process is currently the most advanced in mass production, and the 5nm process is expected to replace it as the highest-end node in 2020. In this process race, more expensive than building an aircraft carrier, only three companies in the world are still competing: TSMC, Samsung, and Intel. Are 3nm, 2nm, and 1nm next? Even with unlimited R&D money, Moore's Law is approaching its physical limits, so where is the future of semiconductor manufacturing?
2.5D and 3D stacked packaging has become the "heterogeneous integration" solution generally recognized by wafer foundries, IDMs, and packaging and test houses, because it can integrate bare dies from different process nodes and serve the device requirements of high-end, mid-range, and low-end markets alike. Through-silicon via (TSV) is one of the earliest stacking technologies, and the packaging competition from TSV to wafer-level stacking is currently concentrated between "TSV" and "TSV-less" approaches. For high-performance devices, the most popular 2.5D and 3D integration technologies are 3D-stacked memory TSVs and heterogeneous stacking on TSV interposers. Foundries such as TSMC, UMC, and GlobalFoundries are leading technological development in this area; the Foveros technology developed by IDM Intel is based on an "active" TSV interposer and 3D SoC technology; and the memory "big three" of Samsung, SK Hynix, and Micron dominate the competition in 3D-stacked memory.
The bare dies "heterogeneously integrated" into one chip through stacked packaging have different functions and come from different process nodes, but if a unified interface standard is used for data communication and transmission, chip design, manufacturing, and packaging can be greatly simplified. Thus the concept of the chiplet was born and has begun to be accepted by the semiconductor industry. DARPA in the United States set up the CHIPS (Common Heterogeneous Integration and IP Reuse Strategies) program to promote chiplet research and development, and Intel opened its AIB (Advanced Interface Bus) interface to support a broad chiplet ecosystem. TSMC worked with Arm to develop a 7nm chiplet system using Chip-on-Wafer-on-Substrate (CoWoS) packaging technology, consisting of two chiplets, each containing four Arm Cortex-A72 processors and an on-chip interconnect bus. As heterogeneous integration in wafer manufacturing and packaging develops, chiplets may evolve from a concept into a general-purpose technology and die form, and even become a new type of IP in the post-Moore era.
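To make the "common interface as the glue" idea concrete, here is a minimal conceptual sketch in Python. The Chiplet class, the mix of nodes, and the use of "AIB" as a plain label are all hypothetical illustrations of the concept, not any vendor's actual integration flow:

```python
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    process_node_nm: int   # dies in one package may come from different nodes
    d2d_interface: str     # shared die-to-die interface, e.g. the label "AIB"

def package(chiplets: list[Chiplet]) -> None:
    """Check that all dies speak one interface before 'integrating' them."""
    interfaces = {c.d2d_interface for c in chiplets}
    if len(interfaces) != 1:
        raise ValueError(f"incompatible die-to-die interfaces: {interfaces}")
    print("heterogeneous package:",
          ", ".join(f"{c.name}@{c.process_node_nm}nm" for c in chiplets))

# A hypothetical mix echoing the article: compute on 7nm, I/O on 14nm, RF on 28nm.
package([
    Chiplet("cpu-cluster", 7, "AIB"),
    Chiplet("io-die", 14, "AIB"),
    Chiplet("rf-frontend", 28, "AIB"),
])
```

The point of the check is the same as the article's: once every die conforms to one interface standard, dies become reusable, composable building blocks, which is exactly what makes a chiplet behave like IP.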
4. "Specialization" of chips opens up application-oriented customized chip design ideas, and AI chips will become massive data processing accelerators for data centers, terminal equipment and autonomous driving
Google's TPU is a concrete embodiment of the "domain-specific architecture" advocated by 2017 Turing Award winners John Hennessy and David Patterson: an AI chip development paradigm led by software, algorithms, and applications, built around Google's specific requirements for its cloud platform. The shift from general-purpose CPUs, GPUs, and FPGAs to dedicated SoCs and AI accelerator chips addresses the massive data processing challenges of emerging applications, including high-performance computing in data centers, the broad and fragmented application scenarios of the IoT, and the real-time processing and decision-making required by autonomous driving and Industry 4.0.
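The workload a DSA like the TPU targets is narrow but dominant: dense matrix multiplication. A minimal NumPy sketch of the single operation a TPU-style matrix unit spends its silicon on; the shapes here are arbitrary illustrations:

```python
import numpy as np

# A dense neural-network layer is just y = xW + b: one big matrix multiply.
# Domain-specific chips such as the TPU devote most of their area to exactly
# this operation (a systolic matrix-multiply unit) rather than to the
# general-purpose control logic and caches of a CPU.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 256))    # batch of 8 activation vectors
W = rng.standard_normal((256, 128))  # layer weights
b = rng.standard_normal(128)         # bias

y = x @ W + b                        # the op a matrix unit accelerates
print(y.shape)                       # (8, 128)
```

Because inference and training spend most of their cycles inside this one kernel, hard-wiring it buys large gains in performance per watt, which is the whole argument for domain-specific architectures.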
Not only are internet giants such as Google, Amazon, and Alibaba and hyperscale cloud service providers developing their own dedicated chips; Tesla is also developing its own "Full Self-Driving (FSD)" chip. These non-standard chips, never sold on the open market, are custom-developed to meet these companies' specific application needs, because they cannot buy the chips they want from traditional chip manufacturers. Even the traditional FPGA maker Xilinx has begun transforming from a chip company into a platform company: its focus is shifting to high-performance data centers and application fields with strict yet flexible computing requirements, and its product offering is expanding from FPGA chips to software, AI computing power, and platform services.
VC investment in the semiconductor industry has grown rapidly since 2017, and AI chip startups are the most favored by VCs. Over the next two to three years, however, these heavily financed AI unicorns will have to take their chips and search for application scenarios. Horizon Robotics, with financing of up to 600 million US dollars, has begun cultivating the autonomous driving and AIoT markets, while Graphcore, which champions the concept of computational graphs (graphs represent knowledge models and applications, and all machine learning models can be expressed as graphs), has found uses for its IPUs with its investors Dell EMC and Microsoft. Many other AI chip startups are still looking for their own "sweet spot".
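The graph framing Graphcore promotes is easy to see in miniature: a model is a directed graph of operations whose edges carry values. A toy sketch of that idea (not Graphcore's actual Poplar API):

```python
class Node:
    """One vertex in a computation graph: an op plus its input nodes."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self):
        """Recursively evaluate inputs, then apply this node's op."""
        vals = [n.evaluate() for n in self.inputs]
        return self.op(*vals)

class Const(Node):
    """A leaf node holding a fixed value."""
    def __init__(self, value):
        self.value = value

    def evaluate(self):
        return self.value

# y = (a * b) + c expressed as a graph, the form graph processors target.
a, b, c = Const(2.0), Const(3.0), Const(4.0)
y = Node(lambda p, q: p + q, Node(lambda p, q: p * q, a, b), c)
print(y.evaluate())  # 10.0
```

A chip built around this representation can schedule independent subgraphs in parallel, which is the design premise behind the IPU.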
5. The "open" computing architecture stimulates open source hardware innovation, and the rapid development of the RISC-V ecosystem impacts the global chip design community and the Arm ecosystem
From the perspective of computer instruction set architecture (ISA), x86 and Arm were historical choices; what follows is the "golden decade of computer architecture" proclaimed by UC Berkeley professor David Patterson. Moore's Law, which has governed integrated circuit development for decades, is coming to an end, and the von Neumann architecture that has underpinned computing for just as long is beginning to show its limitations. General-purpose CPUs, GPUs, FPGAs, and ASICs each have their own strengths and limits, and the growing complexity of computing compounds the problem. Meeting the challenges raised by emerging applications demands fundamental architectural innovation.
RISC-V has set off a wave of open-source hardware and open chip design, and is now supported by many large and medium-sized enterprises, research institutions, and startups around the world. The ecosystem and community around RISC-V are developing rapidly: from the base RISC-V ISA to core IP, development environments, software tools, and venture capital, everything is pushing the ecosystem to expand further. Against the backdrop of the Sino-US technology cold war, China's chip design industry urgently needs an independent and open computing architecture; RISC-V meets exactly this demand, as its rapid development in China over just two years seems to prove.
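Part of RISC-V's appeal to chip designers is how regular and open the base ISA's instruction encoding is. A minimal Python sketch that unpacks the fixed fields of an R-type instruction, using the standard RV32I encoding of `add x3, x1, x2`:

```python
def decode_rtype(instr: int) -> dict:
    """Split a 32-bit RISC-V R-type instruction into its fixed fields."""
    return {
        "opcode": instr         & 0x7F,   # bits 6:0
        "rd":     (instr >> 7)  & 0x1F,   # bits 11:7
        "funct3": (instr >> 12) & 0x07,   # bits 14:12
        "rs1":    (instr >> 15) & 0x1F,   # bits 19:15
        "rs2":    (instr >> 20) & 0x1F,   # bits 24:20
        "funct7": (instr >> 25) & 0x7F,   # bits 31:25
    }

# 0x002081B3 encodes `add x3, x1, x2` in base RV32I.
fields = decode_rtype(0x002081B3)
assert fields == {"opcode": 0x33, "rd": 3, "funct3": 0,
                  "rs1": 1, "rs2": 2, "funct7": 0}
print(fields)
```

Every base instruction is 32 bits with register fields in fixed positions, which keeps decoders small and leaves opcode space free for the custom extensions the ecosystem is built on.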
If the PC and server created x86, and the smartphone created Arm, then the coming AIoT era may carry RISC-V into the mainstream of computing architecture, and even of chip design and development. Arm has felt the pressure and begun to change, for example by opening up custom instructions and collaborating more openly with industry partners in IoT and autonomous driving. And even as RISC-V erodes the Arm camp, Arm is entering the PC and server markets: we will see more Arm processors from companies such as Amazon, Huawei, and Qualcomm penetrate traditional x86 territory such as PCs and servers.
6. EDA moves to the "cloud" and embraces AI, extending the design scope from chips to systems and improving the consistency of whole-system design
TSMC collaborated with Cadence, Synopsys, Mentor, Amazon AWS, and Microsoft Azure to establish a cloud-based virtual design environment (OIP VDE), successfully taped out SiFive's 64-bit multi-core RISC-V CPU, the first designed in the cloud, and completed physical verification of a 7nm chip on AMD EPYC processors in as little as 10 hours. Arm is also cooperating with EDA vendors to provide its ecosystem partners with cloud design platforms for its latest processors, now supporting TSMC's 7nm process node. EDA's move to the cloud is a general trend, and it will fundamentally change the chip design process and model.
Applying machine learning to chip design has made significant progress. From signal and power integrity to system analysis, chip layout, and trusted platform design, AI can tune the dozens of options in EDA tools to help accelerate automation. In the first stage of its AI applications, Cadence uses data analytics to build machine learning models for parasitic extraction, accelerating long-running calculations. The next stage of bringing AI into EDA tools will target place-and-route, letting the AI learn from human designers and recommend optimizations that shorten runtimes. The EDA industry has many opportunities to use machine learning to automate decision-making and optimize the overall design process.
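As a toy illustration of the parasitic-extraction idea (not Cadence's actual models), one can fit a fast surrogate that maps simple interconnect geometry to capacitance instead of re-running a slow field solver for every net. Everything below, including the synthetic "golden" data, is hypothetical:

```python
import numpy as np

# Learn a mapping from interconnect geometry (length, width, spacing)
# to coupling capacitance, as a cheap stand-in for a field solver.
rng = np.random.default_rng(1)
n = 500
length = rng.uniform(1, 100, n)     # wire length, um
width = rng.uniform(0.05, 1.0, n)   # wire width, um
spacing = rng.uniform(0.05, 1.0, n) # spacing to neighbor, um

# Synthetic "golden" capacitance values with noise (hypothetical ground truth).
cap = 0.02 * length * width / spacing + rng.normal(0, 0.05, n)

# Fit a linear surrogate on a physically motivated feature.
X = np.column_stack([length * width / spacing, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, cap, rcond=None)

pred = X @ coef
print(f"mean abs error: {np.mean(np.abs(pred - cap)):.3f}")
```

The trade the EDA vendors are making is the same one this sketch makes: accept a small, characterized error in exchange for predictions that are orders of magnitude faster than re-extraction.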
Siemens' acquisition of Mentor marks the beginning of EDA's expansion from chip design to system design, a trend accelerated as internet giants and system makers buy EDA tools to design chips and systems themselves. As digital twins and cyber-physical systems (CPS) move from concept to implementation, traditional EDA tools are becoming an integral part of product lifecycle management for intelligent manufacturing, with a design scope covering electronic, electrical, mechanical, and thermal characteristics. At the same time, signal and power integrity, functional safety and information security, verification and synthesis, and design for manufacturability (DFM) are shifting left from the system end of the flow, making the design of increasingly complex systems more coordinated: many design defects can be found and fixed early, greatly reducing design cycles and development costs.
7. MEMS/sensor "fusion" combined with AI and edge computing will make phones, cars, factories, cities and homes smarter
Sensors and MEMS play a key role in bridging the analog and digital worlds. Their perception of the surrounding environment and the data they collect give us an objective and comprehensive picture of how devices and systems are operating. With the penetration of AI into the IoT and edge computing, sensor fusion will help make phones, cars, factories, cities, and homes smarter.
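Sensor fusion in its simplest form combines two imperfect sensors into one better estimate. A minimal sketch of a complementary filter fusing a MEMS gyroscope (smooth but drifting) with an accelerometer (noisy but drift-free) to track tilt; the data stream and the 0.98 blend factor are hypothetical:

```python
def complementary_filter(angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Blend gyro integration (smooth, drifts over time) with the
    accelerometer angle (noisy, but drift-free): the simplest fusion."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Hypothetical stream: the device tilts to 30 degrees, then holds steady.
angle, dt = 0.0, 0.01
for step in range(300):
    gyro = 10.0 if step < 100 else 0.0            # deg/s while tilting
    accel = step * 0.3 if step < 100 else 30.0    # absolute tilt estimate
    angle = complementary_filter(angle, gyro, accel, dt)
print(f"fused tilt estimate: {angle:.1f} deg")    # converges near 30
```

The same pattern, scaled up with Kalman filters and learned models, is what lets an edge device turn raw MEMS readings into decisions without round-tripping to the cloud.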