AMD won't have users 'paying for features they never use' when it comes to AI in GPUs

AMD's David Wang on stage. (Image credit: AMD)

While Nvidia's liberal use of AI in its graphics card architecture could put HAL 9000 to shame, AMD's relationship with artificial intelligence has been somewhat more tentative. According to David Wang, Senior Vice President of Engineering for AMD's Radeon Technologies Group, the company is looking into building more AI acceleration into its GPUs than they offer today, though you can expect a much more sparing application than from the green team.

With AMD's latest graphics cards, such as the RX 7900 XTX, packing AI acceleration for the first time, it's clear the red team has begun adopting artificial intelligence. In the other corner, Nvidia is now on the fourth iteration of its AI acceleration cores, called Tensor Cores, and has been using AI to improve frame rates for some time.

Making use of those impressive Tensor Cores, Nvidia's DLSS 3 upscaler creates entirely new frames by processing the current and previous frames with the Optical Flow Accelerator that every new RTX 40-series card touts. Wang reckons that, now all these new GPUs come with large-scale AI accelerators, the green team is stuck jamming AI into all of its processes just to make effective use of them.
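For a rough idea of what that frame generation step involves, here's a minimal, non-AI sketch of the same concept: warp the previous frame along per-pixel motion vectors and blend it with the current frame to synthesize an in-between image. This is purely illustrative; DLSS 3 does the real thing with a trained network and dedicated hardware, and the function and dummy data below (interpolate_frame, the zeroed flow field) are assumptions made for the example's sake.

```python
import numpy as np

def interpolate_frame(prev_frame, curr_frame, flow, t=0.5):
    """Synthesize an intermediate frame by warping the previous frame along
    per-pixel motion vectors (optical flow) and blending with the current
    frame. A toy analogue of frame generation, not DLSS 3 itself."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Pull each pixel part-way back along its motion vector
    # (flow[..., 0] = horizontal motion, flow[..., 1] = vertical motion).
    src_x = np.clip((xs - t * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip((ys - t * flow[..., 1]).astype(int), 0, h - 1)
    warped_prev = prev_frame[src_y, src_x]

    # Blend the warped previous frame with the current frame.
    return ((1 - t) * warped_prev + t * curr_frame).astype(prev_frame.dtype)

# Two 1080p RGB frames and a motion field, filled with dummy data here.
prev_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
curr_frame = np.full((1080, 1920, 3), 255, dtype=np.uint8)
flow = np.zeros((1080, 1920, 2), dtype=np.float32)

middle = interpolate_frame(prev_frame, curr_frame, flow)
print(middle.shape)  # (1080, 1920, 3)
```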

"That's their GPU strategy, which is great, but I don't think we should have the same strategy", says Wang in a 4gamer interview late last year, as he mulls over Nvidia's liberal approach to AI (machine translated).

"Nvidia is actively trying to use AI technology even for applications that can be done without using AI technology."

But Wang goes on to reveal that AMD is looking into implementing AI into its next-generation 3D graphics pipeline.

AMD made it clear a while back that its own DLSS alternative doesn't need machine learning to work, though I do wonder if AI will play a pivotal role in FidelityFX Super Resolution 3 (FSR 3), the company's upcoming DLSS-alike upscaler with "Fluid Motion Frames" frame generation tech.

However, the new approach with RDNA 3 has been to implement AI, via the new AI Matrix Accelerators inside the Navi 31 GPU, but only where it's really needed.

"We are focused on including the specs that users want and need to give them enjoyment in consumer GPUs. Otherwise, users are paying for features they never use."

"Even if AI is used for image processing, AI should be in charge of more advanced processing," says Wang. The plan is to ensure any AI tech AMD brings to the table isn't limited to image processing. 


Nvidia also uses AI for a plethora of applications, such as intelligent noise cancellation and uncanny-valley levels of simulated eye contact. Instead of future AMD graphics card adopters creeping us out with AI-powered eye contact algorithms, Wang hints that one main focus for AMD's AI applications could be "the movement and behaviour of enemy characters and NPCs" in the future.

The use of AI to empower game NPCs is something we hear a lot about right now, and it admittedly sounds like a good use for AI acceleration beyond just enhancing game visuals.


Beyond AI, Wang notes that AMD and one of its partners have been researching "self-contained drawing" technology for upcoming graphics cards. It's an approach that's gaining momentum across the industry, one which "generates graphics processing tasks only on the GPU and consumes them on the GPU itself without the help of the CPU."

This self-contained drawing process is "a new technology that not only eliminates the data transmission between the system memory on the CPU side and the graphics memory on the GPU side as much as possible, but also eliminates the mechanism for transmitting drawing commands from the CPU, so it can achieve considerably high performance."
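To make that a little more concrete, here's a toy model (in Python, with entirely hypothetical classes and counters, not any real graphics API) of where the CPU-to-GPU traffic goes. In the conventional path the CPU culls objects and submits a draw command for each one every frame; in the "self-contained" path the CPU submits a single trigger and the GPU culls, generates its own draw arguments, and consumes them itself, which is the gist of GPU-driven rendering techniques such as indirect draws.

```python
from dataclasses import dataclass

@dataclass
class Gpu:
    """Toy GPU that counts how many commands cross the CPU-GPU boundary."""
    commands_from_cpu: int = 0
    draws_executed: int = 0

    def submit(self, command):
        # Anything passed through here models CPU -> GPU traffic.
        self.commands_from_cpu += 1
        command(self)

def cpu_driven_frame(gpu, objects):
    # Conventional path: the CPU culls and issues one draw command per visible object.
    for obj in objects:
        if obj["visible"]:
            gpu.submit(lambda g: setattr(g, "draws_executed", g.draws_executed + 1))

def gpu_driven_frame(gpu, objects):
    # "Self-contained" path: the CPU issues one command; a GPU-side pass culls the
    # objects, writes indirect draw arguments, and the GPU consumes them itself.
    def cull_and_draw(g):
        indirect_args = [o for o in objects if o["visible"]]  # culling on the GPU
        g.draws_executed += len(indirect_args)                # drawn without a CPU round trip
    gpu.submit(cull_and_draw)

objects = [{"visible": i % 2 == 0} for i in range(10_000)]

a, b = Gpu(), Gpu()
cpu_driven_frame(a, objects)
gpu_driven_frame(b, objects)
print(a.commands_from_cpu, a.draws_executed)  # 5000 5000: one CPU command per visible object
print(b.commands_from_cpu, b.draws_executed)  # 1 5000: same draws, a single CPU command
```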

If this technology lives up to its promises, the CPU could become even less important to gaming performance than it already is today.

That the red team is already dominating our best graphics card guide, even without masses of artificially intelligent processes, says a lot. Though that's largely down to AMD's RDNA 2 cards sitting at all-time-low prices rather than to their overall performance. Either way, it'll be interesting to see how AMD's deeper dive into AI improves the next round of Radeon graphics cards.

Katie Wickens
Hardware Writer

Screw sports, Katie would rather watch Intel, AMD and Nvidia go at it. Having been obsessed with computers and graphics for three long decades, she took Game Art and Design up to Masters level at uni, and has been rambling about games, tech and science—rather sarcastically—for four years since. She can be found admiring technological advancements, scrambling for scintillating Raspberry Pi projects, preaching cybersecurity awareness, sighing over semiconductors, and gawping at the latest GPU upgrades. Right now she's waiting patiently for her chance to upload her consciousness into the cloud.