You can run Nvidia CUDA applications natively on Radeon GPUs thanks to ZLUDA, the open-source project that AMD once funded

The classic Classroom developed by Christophe Seux for Blender
(Image credit: Christophe Seux | Blender)

You don't need to be a professional content creator or an expert in data analysis to know that when it comes to GPUs for these roles, one manufacturer dominates the market: Nvidia. That's almost entirely down to CUDA, a programming platform that's been around for years. AMD has its own system, called ROCm, but it's nowhere near as popular. Well, thanks to the tireless work of one person, you can now enjoy all the benefits of CUDA applications on a Radeon graphics card, without changing a single line of code.

The work in question is called ZLUDA and the person is Andrzej Janik. As explained by Phoronix, he first created the system while working at Intel, where it was used to let Intel GPUs run CUDA applications. After leaving that company, he was contracted by AMD to do the same thing with its ROCm platform, but the chip giant ultimately shelved the whole project after a few years.

However, there was a nice ending to it all, as Janik was contractually permitted to continue the work as an open-source project, and that's where it stands right now (you can grab it off GitHub). Phoronix tested the latest version of ZLUDA on a ROCm platform, using the classic CUDA-based Blender benchmark (as shown at the top of this story).

And it works really well. For example, in the Classroom benchmark for Blender, it took 20.89 seconds for a Radeon RX 7900 XTX to render the scene using AMD's standard HIP software platform, whereas using ZLUDA (with Blender running in CUDA mode), the render time dropped to 18.44 seconds.

While a 12% reduction in render time doesn't sound like a lot, the fact that it was achieved simply by using a software layer to translate everything is really impressive. No part of Blender's code needed to be altered to make this happen, and ZLUDA has the potential to open up the compute and content creation market for AMD's GPUs.
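The "no code changes" trick comes down to how dynamic linkers resolve shared libraries: if a replacement library with the same name appears earlier in the search path, the application loads it instead of Nvidia's driver library. As a rough sketch of what that looks like on Linux (the install path and exact Blender invocation here are placeholder assumptions on my part, not the project's documented steps; check ZLUDA's GitHub README for the real instructions for your platform):

```shell
# Hypothetical example of running a CUDA application through a
# drop-in translation layer. ZLUDA_DIR is a placeholder location.
ZLUDA_DIR="$HOME/zluda"

# Prepending the layer's directory to LD_LIBRARY_PATH means the
# dynamic linker finds ZLUDA's libcuda.so before Nvidia's, so the
# unmodified application ends up talking to the Radeon GPU instead.
LD_LIBRARY_PATH="$ZLUDA_DIR:$LD_LIBRARY_PATH" \
    blender --background classroom.blend --engine CYCLES --render-frame 1
```

The key point is that Blender itself is untouched: the substitution happens at library-loading time, before the application's CUDA calls are ever made.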

It's not a 100% perfect system, of course, and some Geekbench tests show that ZLUDA can be significantly faster or slower than the OpenCL path the software normally uses. ZLUDA can only 'translate' standard CUDA applications, too, so anything written using OptiX, for example, just won't work.

Phoronix reports that Janik isn't particularly hopeful about maintaining the pace of ZLUDA's development without financial backing from the industry, but since it's open source, he may well get the support needed.

It is a little odd that AMD decided to abandon the project, and I can only assume that it wanted to focus entirely on raising the status and uptake of ROCm, rather than letting CUDA continue to dominate.

For now, though, professionals with Radeon graphics cards might want to give ZLUDA a thorough test because in that sector, time is money. Or less time is more money, as is the case here. Something like that, anyway.

Nick Evanson
Hardware Writer

Nick, gaming, and computers all first met in 1981, with the love affair starting on a Sinclair ZX81 in kit form and a book on ZX Basic. He ended up becoming a physics and IT teacher, but by the late 1990s decided it was time to cut his teeth writing for a long defunct UK tech site. He went on to do the same at MadOnion, helping to write the help files for 3DMark and PCMark. After a short stint working at Beyond3D.com, Nick joined Futuremark (MadOnion rebranded) full-time, as editor-in-chief for its gaming and hardware section, YouGamers. After the site shut down, he became an engineering and computing lecturer for many years, but missed the writing bug. Cue four years at TechSpot.com and over 100 long articles on anything and everything. He freely admits to being far too obsessed with GPUs and open world grindy RPGs, but who isn't these days?