Considered the successor to AMD's Graphics Core Next (GCN) microarchitecture, RDNA changes not only the number and organization of the compute cores but also the memory hierarchy and the interconnects between them. The instruction set and hardware are used by AMD to build its latest GPUs for personal computers, handheld game consoles and other markets.
According to AMD's white paper, GPUs built on the RDNA architecture will span a wide range of devices, from notebooks and smartphones to some of the world's largest supercomputers.
Will AMD's GPUs Meet Smartphones' Requirements?
While it is difficult to predict AMD GPU performance from the technical descriptions in the whitepaper, we can see that RDNA offers optimizations suited to mobile devices. According to the whitepaper, the new GPUs have an L1 cache shared by a group of Dual Compute Units (DCUs), which helps reduce power consumption by cutting memory read and write traffic.
The L2 cache is also configurable, with sizes ranging from 64KB to 512KB depending on the performance, power, and silicon-area targets of the product.
AMD's Graphics Architecture Roadmap.
AMD's mobile GPU architecture moves from the 64-work-item wavefronts of GCN to more efficient 32-work-item wavefronts with RDNA. In other words, each instruction is issued across 32 parallel operations per wavefront.
The benefit, AMD says, is that workloads are distributed across more cores, improving performance and energy efficiency. This is especially suitable for bandwidth-limited devices such as cell phones, where transferring large amounts of data consumes a lot of energy.
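To see why narrower wavefronts can help efficiency, consider that a wavefront issues all of its lanes whether or not every lane has a work item. The sketch below is purely illustrative (the `wave_stats` helper and the 70-item workload are hypothetical, not from AMD's whitepaper), but it shows how a 32-wide wavefront can waste fewer idle lanes than a 64-wide one on small or irregular workloads:

```python
import math

def wave_stats(items: int, wave_width: int):
    """Return (wavefronts, lanes issued, lane utilization) for a workload.

    Every wavefront issues `wave_width` lanes even when some lanes are
    empty, so narrower wavefronts can waste fewer lanes on small jobs.
    """
    waves = math.ceil(items / wave_width)
    lanes_issued = waves * wave_width
    utilization = items / lanes_issued
    return waves, lanes_issued, utilization

# A 70-item workload: GCN-style wave64 vs RDNA-style wave32
print(wave_stats(70, 64))  # 2 wavefronts, 128 lanes issued, ~55% utilized
print(wave_stats(70, 32))  # 3 wavefronts,  96 lanes issued, ~73% utilized
```

The effect only appears when the workload size is not a multiple of the wider wavefront; real-world gains also depend on scheduling and divergence, which this toy model ignores.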
This shows that AMD is paying close attention to memory and power consumption - the two most important areas for any GPU's success on smartphones.
Radeon's advantage in AI tasks
AMD's GCN architecture, the predecessor to RDNA, already had a particular advantage in machine learning and AI workloads. AI is playing an increasingly important role in smartphone processors and will only become more common over the next five years.
RDNA supports up to eight parallel 4-bit operations and FMA calculations for machine learning tasks.
With the new architecture, RDNA retains these high-performance machine learning components, with support for 64-bit, 32-bit, 16-bit, 8-bit, and even 4-bit parallel arithmetic. RDNA's vector ALUs are also twice as wide as the previous generation's, delivering faster processing and Fused Multiply-Accumulate (FMA) operations at lower energy cost than before.
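The low-precision modes mentioned above generally work by packing several narrow values into one 32-bit register lane and operating on them together. The sketch below models this idea in Python with four 8-bit values per word; the function names are made up for illustration and do not reflect AMD's actual instruction semantics:

```python
def pack_u8x4(vals):
    """Pack four unsigned 8-bit values into one 32-bit word."""
    assert len(vals) == 4 and all(0 <= v < 256 for v in vals)
    return vals[0] | (vals[1] << 8) | (vals[2] << 16) | (vals[3] << 24)

def unpack_u8x4(word):
    """Unpack a 32-bit word back into four unsigned 8-bit values."""
    return [(word >> shift) & 0xFF for shift in (0, 8, 16, 24)]

def dot_u8x4(word_a, word_b, acc=0):
    """Dot product of two packed u8x4 words added to an accumulator,
    mimicking a packed dot-product-and-accumulate operation."""
    for a, b in zip(unpack_u8x4(word_a), unpack_u8x4(word_b)):
        acc += a * b
    return acc

a = pack_u8x4([1, 2, 3, 4])
b = pack_u8x4([5, 6, 7, 8])
print(dot_u8x4(a, b))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```

On real hardware, one such packed instruction replaces four separate multiplies and adds, which is where the throughput and energy savings for neural-network inference come from.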
FMA math is so common in machine learning applications that ARM's Mali-G77 GPU includes a dedicated hardware block to handle those calculations.
In addition, RDNA introduces Asynchronous Compute Tunneling, handled by its Asynchronous Compute Engines (ACEs), to manage shader workloads. AMD claims this "allows compute and graphics workloads to coexist harmoniously on the GPU". In other words, RDNA handles parallel graphics and machine learning workloads much more efficiently, reducing the need for a separate AI processor.
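The idea of graphics and compute "coexisting" can be pictured as a scheduler that interleaves two work queues so neither starves the other. The round-robin toy below is only a conceptual model (the function and task names are invented here); real ACE scheduling is hardware-managed and far more sophisticated:

```python
from collections import deque

def interleave_workloads(graphics, compute):
    """Round-robin scheduler sketch: drain two work queues so that
    neither workload starves the other, loosely modeling graphics
    and compute shaders sharing one GPU."""
    gq, cq = deque(graphics), deque(compute)
    order = []
    while gq or cq:
        if gq:
            order.append(("gfx", gq.popleft()))
        if cq:
            order.append(("cmp", cq.popleft()))
    return order

# Three draw calls and two ML inference tasks end up alternating
# instead of one workload running back-to-back and blocking the other.
sched = interleave_workloads(["draw0", "draw1", "draw2"],
                             ["infer0", "infer1"])
```

The practical consequence described in the article is that a machine-learning task queued alongside rendering still makes steady progress, rather than waiting for the frame to finish.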
RDNA is designed to be more flexible
Besides the advantages above, the AMD whitepaper mentions a series of other improvements in the new microarchitecture. The most interesting is probably RDNA's new Shader Engine and Shader Array organization.
Block diagram of the Radeon RX 5700 XT GPU, one of the first GPUs to use the RDNA architecture.
" To increase performance from low to high, other GPUs can increase the number of shader arrays and change the resource balance within each of these arrays," the AMD whitepaper says . So this will depend on what your target platform is, as well as the number of DCU processors, L1 and L2 cache sizes, even the number of changes in the render backend.
Nvidia and Arm do something similar with their CUDA-core and Mali GPUs, scaling the number of processing cores up or down to match performance and power requirements. RDNA goes a step further: rather than only adjusting the compute-core count like its competitors, AMD can tune the number of shader arrays and render backends, as well as the cache sizes, within each Shader Array, allowing performance and therefore power consumption to be fine-tuned.
This should provide a more flexible platform, with a design that scales up or down better than AMD's past products. Even so, these advantages are still theoretical, so the performance achievable on a constrained platform like a smartphone remains to be seen.
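The scaling knobs the whitepaper describes can be sketched as a simple configuration model. The field names below are illustrative rather than AMD's terminology, but the arithmetic (2 compute units per DCU, 64 stream processors per CU) matches RDNA's published structure, and the example configuration roughly reproduces the Radeon RX 5700 XT's 40 CUs and 2,560 stream processors:

```python
from dataclasses import dataclass

@dataclass
class RDNAConfig:
    """Hypothetical sketch of RDNA's scaling knobs."""
    shader_arrays: int
    dcus_per_array: int   # Dual Compute Units per shader array
    l2_slice_kb: int      # configurable cache sizing (illustrative value)
    render_backends: int

    def compute_units(self):
        return self.shader_arrays * self.dcus_per_array * 2  # 2 CUs per DCU

    def stream_processors(self):
        return self.compute_units() * 64  # 64 stream processors per CU

# Roughly the RX 5700 XT's shape: 4 shader arrays of 5 DCUs each
rx5700xt = RDNAConfig(shader_arrays=4, dcus_per_array=5,
                      l2_slice_kb=256, render_backends=16)
print(rx5700xt.compute_units())      # 40
print(rx5700xt.stream_processors())  # 2560
```

A smartphone-class part would presumably shrink several of these knobs at once (fewer arrays, fewer DCUs per array, smaller caches), which is the flexibility the article is pointing at.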
When will the GPUs from the Samsung and AMD collaboration launch?
As Samsung noted in its most recent earnings report, we are still "about two years away" from the company releasing a new GPU based on the RDNA architecture, suggesting a 2021 debut. By then, this GPU will likely have further tweaks and changes compared with the current RX 5700, especially as AMD continues to optimize for energy consumption.
However, the details in the RDNA whitepaper give us a glimpse of AMD's plan to bring its famous GPU architecture to low-power devices and smartphones. The key takeaways are a more energy-efficient architecture, optimized mixed compute workloads, and a highly flexible design suited to a wide variety of applications.
Source: Android Authority