The time of monolithic processors and graphics processing units has come to an end. By “monolithic” we mean a single, uniform chip design. These designs now suffer from high power consumption and, above all, serious manufacturing complexity.
AMD was the first to abandon this approach, adopting a chiplet system for its Ryzen processors. Intel instead opted for a hybrid design modeled on ARM’s big.LITTLE, combining two types of cores. Two different solutions to two different technical problems.
NVIDIA switches to “LEGO” GPUs
This is not the first time we have heard about modular GPUs. AMD has reportedly been planning or developing a chiplet-based GPU, in the purest Ryzen style, for some time. For now, though, these are just rumors.
Apparently, NVIDIA will soon have its first fully modular solution: the first chiplet-design GPU on the market. It is not intended for gaming graphics cards but for data centers; specifically, it would be a solution for high-load computing and artificial intelligence.
The main strength of modular solutions is reduced manufacturing complexity. We already saw how Intel stagnated for five years on 14 nm because of its monolithic processor design. “Chunking” a processor has worked very well for AMD, allowing it to create very flexible solutions.
Hopper and Ada Lovelace, NVIDIA’s last two architectures, are monolithic designs, and they have turned out well. They perform very well and carry large profit margins for the company, although their rather excessive power consumption is the biggest complaint from users.
Blackwell will be the NVIDIA architecture that changes this design, creating the company’s first chiplet-based GPU. These solutions are mainly aimed at data centers and artificial intelligence. In theory, they should be presented toward mid-2024.
Kopite (who leaks the information) insists these are solutions for data centers and AI; the Ada-Next solutions for gaming graphics cards will not use this design.
“After the dramas of GA100 and GH100, it seems that GB100 is finally going to use MCM.” — Kopite, September 18, 2023
Chiplets: the “packaging” problem
The idea is good, but it would be like assembling a LEGO set, although more complex. The chips that make up a chiplet design must communicate with each other, and not just in any way: high-speed communication routes are needed to avoid latency, or at least reduce it to a minimum.
TSMC, which manufactures chips for both NVIDIA and AMD, offers its CoWoS packaging technology: the base of communication routes on which the chiplets are assembled. The problem is that both companies are fighting for priority access to this solution.
We can reduce it to a bidding war, with whoever offers the most money winning preference. Currently that is obviously NVIDIA, which is making shovelfuls of money thanks to AI.
You should know that AMD is working on its own packaging technology using several unique IPs. Intel, for its part, has a completely in-house solution that it designs and manufactures itself. NVIDIA, meanwhile, would be working on a custom design for this layer, since it needs something very specific that does not hurt AI performance.
We don’t know much more at this time, since it’s all rumors. It is said that Blackwell could have a greater number of CUDA Cores, Tensor Cores and RT Cores, as well as changes to NVLink. But of course, there is still a long way to go before we have more information.