One of the functions of a modern GPU is hardware video encoding and decoding. Dedicated decode units let a graphics card play common video formats smoothly and without a heavy processing load. So you may have wondered whether a GPU could be used in a complementary way, that is, whether you can use a second graphics card for streaming.
The problem is that encoding a video stream is far more expensive than decoding one, and if we want to do it while playing and streaming at the same time, the technical demands skyrocket and the fixed-function video encoder's capacity becomes a limit. The solution? Squeeze the rest of the graphics card's resources, which means taking them away from the game itself. This brings us to a very simple question.
Can I use a second graphics card for streaming?
If you have wondered about this, unfortunately it is not possible, and there is a reason, which we are going to explain. Each graphics card can access two different memory pools: its own video memory and system memory. It cannot directly access the RAM of other devices that share the PCI Express interface, since it has neither an access path nor coherence mechanisms for it. In other words, the second graphics card has no way of reading the first one's memory, and even if it somehow could, it would not know about the most recent changes.
Which brings us to another question: can you combine a dedicated graphics card with an integrated one? Yes, but the synchronization between the two GPUs must be exact, and that can only be achieved when both the iGPU and the dGPU come from the same manufacturer and share an architecture. This is something Intel intends to exploit with its ARC graphics cards combined with its Intel Core processors, and AMD with the Ryzen-Radeon duo. The idea is simply for the graphics unit integrated into the processor to take charge of video encoding, or to serve as support for it.
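To make the offloading idea concrete: tools such as FFmpeg already let you choose which device's hardware encoder to use. Below is a minimal sketch that builds an FFmpeg command line targeting an Intel iGPU's Quick Sync encoder while the discrete card is left free to render. The render node path, display, and capture input are illustrative assumptions; whether this works depends on your drivers and hardware.

```python
# Sketch: build an FFmpeg command that captures the screen and encodes it
# on the integrated GPU's fixed-function encoder (Intel Quick Sync here),
# leaving the discrete GPU free to render the game.
# Device path and input source are illustrative assumptions.

def build_igpu_encode_cmd(render_node="/dev/dri/renderD128",
                          display=":0", output="stream.mp4"):
    return [
        "ffmpeg",
        # Initialise a QSV device on the iGPU's DRM render node.
        "-init_hw_device", f"qsv=igpu:{render_node}",
        # Capture the desktop (Linux X11 screen grab as an example input).
        "-f", "x11grab", "-i", display,
        # Encode with the iGPU's H.264 Quick Sync encoder.
        "-c:v", "h264_qsv",
        output,
    ]

cmd = build_igpu_encode_cmd()
print(" ".join(cmd))
```

The same pattern works with other vendors' encoders (for example `h264_nvenc` on NVIDIA cards); the point is that the encode job is pinned to one device while rendering happens on another.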
Unfortunately, this approach is very new, and more than 85% of PC graphics card users use NVIDIA, so it is clear that only a small share of users could benefit from such a scenario. What's more, with the disappearance of dual-GPU configurations such as SLI and CrossFire, communication between two graphics cards is no longer possible.
The situation could change in the future, or not
The reason is the CXL standard, which among other benefits adds memory coherence across PCI Express devices, which would allow several graphics cards to communicate at the same time. That is, you could have one GPU generating the game frame and a group of add-in cards dividing up the encoding of that frame at high speed.
Let's not forget that video codecs encode the image in blocks, so the work can be split up among several encoders. What's more, this will be common in the most advanced cloud gaming systems, if it is not already. The objective? To minimize the latency of the game broadcast without sacrificing image quality. The problem is that everything indicates that, for now, CXL will be reserved for workstations and servers rather than the home market.
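The block-based division can be sketched with a toy model: split a frame's macroblock rows into horizontal strips and hand each strip to one encoder. This is an illustration of the partitioning idea only, not a real codec; the numbers are assumptions (real H.264 encoders use slices or tiles for this, and pad 1080 lines up to 1088).

```python
# Toy model: split a frame into horizontal strips of macroblock rows and
# assign each strip to one of several encoders (identified here by an id).
# Real codecs partition work with slices/tiles; this only shows the split.

MB_SIZE = 16  # H.264-style macroblock height in pixels

def split_frame_rows(frame_height, num_encoders):
    """Return (encoder_id, first_row, last_row) strips covering the frame."""
    mb_rows = frame_height // MB_SIZE       # macroblock rows in the frame
    per_enc = -(-mb_rows // num_encoders)   # ceiling division
    strips = []
    for enc in range(num_encoders):
        first = enc * per_enc
        last = min(first + per_enc, mb_rows) - 1
        if first > last:
            break  # more encoders than strips: leave the rest idle
        strips.append((enc, first, last))
    return strips

# A 1088-line frame (1080p padded) split among 4 encoders: 68 macroblock
# rows, 17 rows per encoder.
print(split_frame_rows(1088, 4))
```

Each strip is independent, which is why a pool of encoders behind a coherent interconnect could chew through one frame in parallel.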