The AMD Radeon RX Vega 56 and Vega 64 preview

Vega. After two years of waiting, AMD's renewed shot at the fastest of graphics cards is finally here. Let's be frank, this is what we wanted last year, right around the time that AMD was busy launching Polaris for the mainstream market. But Vega wasn't ready, not even close. AMD had to come to terms with 14nm FinFET, and Polaris was the chosen vessel for that. Still, Nvidia managed to launch a full suite of new GPUs using 14nm and 16nm FinFET over the course of around six months. Why did it take so much longer for AMD to get a high-end replacement for Fiji and the R9 Fury X out the door? I have my suspicions (HBM2, necessary architectural tweaks, lack of resources, and more), but the bottom line is that the RX Vega launch feels late to the party.

Or perhaps the Vega architecture is merely ahead of its time? AMD has talked about numerous architectural updates that have been made with Vega, including DSBR, HBCC, 40 new instructions, and more. You can read more about the architectural updates in our Vega deep dive and preview earlier this month. In the professional world, some of these could be massively useful—professional graphics can be a very different beast than gaming. But we're PC Gamer, so what we really care about is gaming performance—and maybe computational performance to a lesser degree.

Today, finally, we get to taste the Vega pudding. Will it be rich and creamy, with a full flavor that takes you back to your childhood… or will it be a lumpy, overcooked mess? That's what I aim to find out. And rather than beating around the bush, let's just dive right into the benchmarks. I'll have additional thoughts below on ways that Vega could improve over time, but this is what you get, here and now. Here are the specs for AMD's current generation of GPUs, including the Vega and the 500 series. I don't have the liquid cooled Vega 64 available for testing, but it should be up to eight percent faster than the air-cooled Vega 64, based on boost clocks.

If you look at the raw numbers, Vega holds promise—the Vega 64 has higher theoretical computational performance and more memory bandwidth than the GTX 1080 Ti—but AMD's GCN architecture has traditionally trailed behind Nvidia GPUs with similar TFLOPS. DX12 games meanwhile tend to track a bit closer to the theoretical performance, and certain tasks (like cryptocurrency mining) can do very well on AMD architectures. But the power requirements of AMD GPUs have also been higher than the Nvidia equivalents, and that remains true with Vega.

Based on pricing, the direct competition for the Vega 64 is Nvidia's GTX 1080, while the Vega 56 will take on the GTX 1070. For the mainstream cards, if we ignore the currently inflated prices due to miners, the RX 580 8GB takes on the 1060 6GB, and the RX 570 4GB takes on the 1060 3GB. At nearly every major price point, AMD now has a direct competitor to Nvidia's GPUs. And at every price point, AMD's cards use more power. The one exception is the top of the stack, where the GTX 1080 Ti (and Titan Xp) remain effectively unchallenged. And while there were hopes that AMD was going to blow our socks off with 1080 Ti performance at 1080 pricing, that unfortunately isn't going to happen.

Before we hit the benchmarks, let's talk about the testbed for a moment. You can see the full list of hardware on the right, and I still haven't really found a need to upgrade from my over-two-years-old Haswell-E rig. No surprise there, since I'm running overclocked and it was a top-of-the-line enthusiast rig when we put it together. I have made one change for 2017, and that's an overclock of the CPU to 4.5GHz, instead of the previous 4.2GHz. That's well within reach of my i7-5930K, and performance remains highly competitive at those clocks. I had considered swapping to the i9-7900X, but time hasn't permitted me to retest everything—and honestly, the 7900X is hardly any faster for gaming purposes.

Let that be a lesson to anyone looking to build a long-term gaming PC: start at the top, and while you'll probably need to upgrade the graphics card more often, the rest of your build is likely to keep you ahead of the game for many years. We're still at the point where Core i5 chips work well enough for the vast majority of games, though that's slowly starting to change now, and a good 6-core or 8-core processor will likely keep you happy until well into the 2020s.

One other thing to note is that I haven't retested the GTX 1060 or Polaris cards lately (the numbers are a few months old), but I did retest the 1080 Ti, 1080, 1070, and Fury X using the latest drivers (384.94 for Nvidia, 17.7.2 for AMD) running the current versions of all the games—performance has improved, sometimes significantly, over the past year or more since the hardware was released. Some of that comes from driver tuning, but equally important have been software updates from the game developers, where many titles have been tweaked to work better on Ryzen 5/7 systems, and/or to run better with DX12. That's had a knock-on effect for the 6-core i7-5930K, so thanks, AMD!

RX Vega vs. the World: a 15-round heavyweight battle

I tend to mix up the games I use for testing periodically, but we haven't really had anything recent that makes for a good benchmark (ie, reliable and repeatable results, in a game that lots of people are playing—and I know, several of the games I already benchmark fail to meet those criteria, but adding a new questionable game to replace an existing one doesn't really help). As such, the gaming benchmark suite remains largely unchanged.

I have switched from Ashes of the Singularity to the 'improved' Ashes of the Singularity: Escalation, which is supposed to be more multi-core CPU friendly. I've also added Mass Effect: Andromeda, simply because it's relatively demanding in terms of both CPUs and GPUs. Finally, I've dropped both Doom (Vulkan) and Shadow Warrior 2, but for different reasons. For Doom, the utility I use to collect frametimes (OCAT) broke with the Windows 10 Creators Update—it might work now, but it didn't for several months so I couldn't test newer cards. Shadow Warrior 2 meanwhile has some strange variability in framerates that I've noticed over time, where the dynamic weather can sometimes drop performance 10 percent or so—not good for repeatable results.

What I'm left with is 15 games, released between 2015 and 2017. There are multiple AMD Gaming Evolved titles (Ashes, BF1, Civ6, DXMD, Hitman, and TW: Warhammer) and also quite a few Nvidia TWIMTBP games (Division, GR: Wildlands, RotTR, and some technologies in Dishonored 2 and Fallout 4). There are also DX12-enabled games, quite a few of which I test in DX12 on all GPUs. The two DX12 games that I don't test using DX12 on Nvidia hardware are Battlefield 1 (slightly worse performance) and Warhammer (significantly worse performance), but Ashes, Deus Ex, Division, Hitman, and Tomb Raider are all tested in DX12 mode—it doesn't necessarily help Nvidia in many cases, but it doesn't substantially hurt Nvidia either. Here are the results of testing:

We've been building up to this fight for a while, and many were hoping Vega would be able to take down the heavyweight champion of the world. Clearly, that didn't happen, as the GTX 1080 Ti remains out of reach, though it also remains in a higher price bracket. Stepping down a notch to the $500 market, things start to look better for AMD. Two years after Team Green defeated Team Red in the 980 Ti vs. Fury X matchup, and with the GTX 1080 increasing the performance gap last year, the Vega 64 makes a valiant effort and lands some solid blows. Ultimately, the GTX 1080 remains the faster card—and it does so while using less power—but at least AMD is showing up to the fight now.

Boxing analogies aside, Vega 64 runs games fine—more than fine, as it's now the third fastest consumer graphics card (discounting the $1,200 Titan Xp and Titan X (Pascal)). We'd expect nothing less of a $500 graphics card. If you've been holding out for a high-end AMD card, Vega is a worthy upgrade from the Fury X. But that's part of the problem, because this feels like a one generation upgrade in many respects, since AMD skipped updates to the high-end market last year.

I often recommend skipping a generation or two of hardware between upgrades, which means upgrading every 2-3 years on graphics cards. Each generation will typically improve performance by 30 percent or so, and that's not usually enough to warrant an upgrade for most gamers. If you skip a generation, you end up with potentially a 60-70 percent jump in performance. Well, Vega 64 ends up being right around 30 percent faster than Fury X, but where Fury X came up just short of the 980 Ti, Vega 64 drops down to the vanilla 1080.
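To put numbers on that skip-a-generation logic, here's a quick back-of-envelope sketch; the 30 percent per-generation figure is a rough rule of thumb rather than a measurement:

```python
# Back-of-envelope: how typical per-generation GPU gains compound.
# The 30 percent per-generation uplift is a rough rule of thumb, not a measurement.
per_gen_gain = 0.30

one_gen = (1 + per_gen_gain) - 1        # 30% faster after one generation
two_gens = (1 + per_gen_gain) ** 2 - 1  # ~69% faster after skipping a generation

print(f"One generation:  {one_gen:.0%} uplift")
print(f"Two generations: {two_gens:.0%} uplift")
```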

Looking at individual games, at best Vega 64 is just a hair faster than the GTX 1080, and at worst it can be about 30 percent slower (GTAV), but on average the GTX 1080 leads by just a bit less than 10 percent. And what about power use? At idle it's a wash, but while gaming the Vega 64 used 478W at the outlet on the test system compared to 370W on the GTX 1080—only the much faster 1080 Ti matches the Vega 64 by using 475W!

For reference, the power was checked using the same scene in Rise of the Tomb Raider for each card. Note that the CPU and the rest of the platform contribute substantially to the total power use here, around 150-225W depending on what the CPU is doing. A faster graphics card also means the CPU works harder to keep the GPU fed with data. So while the 1080 Ti and Vega 64 draw roughly the same total power, more of the 1080 Ti's figure comes from the CPU and the rest of the system, and if anything the card-level power delta between Vega 64 and the 1080/1080 Ti is even larger than the wall numbers suggest.
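If you want to estimate what the cards themselves are drawing, here's a rough sketch of that subtraction; the wall figures are the measurements above, but the platform figures are my own ballpark assumptions from the 150-225W range, not measured per-card values:

```python
# Rough card-only power estimate: wall power minus an assumed platform draw.
# Wall figures are the measured numbers above; the platform estimates (CPU,
# motherboard, RAM, storage, PSU losses) are assumptions, with the fastest
# card assigned a higher figure because it keeps the CPU busier.
wall_power = {"Vega 64": 478, "GTX 1080": 370, "GTX 1080 Ti": 475}
platform_estimate = {"Vega 64": 180, "GTX 1080": 180, "GTX 1080 Ti": 210}

for card, watts in wall_power.items():
    card_only = watts - platform_estimate[card]
    print(f"{card}: roughly {card_only}W at the card (estimated)")
```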

This is similar to the story we've had with R9 390/390X vs. GTX 980, 980 Ti vs. Fury X, and more recently 1060 vs. RX 580/570 (480/470). Despite all the architectural changes in Vega 10, Nvidia's GPUs are still well ahead when it comes to overall efficiency.

What about the RX Vega 56?

Vega 64 was clearly designed to go after Nvidia's top parts. It came up short of the 1080 Ti but is relatively close to the 1080. The difficulty in pushing clockspeeds to the limit in pursuit of performance is that it often means increasing the chip voltage, which in turn causes an even greater ramp in power requirements. AMD has already demonstrated what can be done by backing off from maximum performance a bit with the R9 Nano, which took the same GPU as the Fury X but capped it at 175W instead of 275W. Performance was around 10 percent lower, but it dropped power requirements by over 35 percent—or in other words, by not chasing every last ounce of performance, AMD was able to greatly improve the efficiency of the Fiji architecture. The R9 Nano delivers around 35 percent better performance per watt than the R9 Fury X.

I bring all of that up because AMD is going at least partway there with the RX Vega 56—at stock clocks, anyway. Never mind the dual 8-pin power connectors; power draw is still a bit high, as I measured 400W at the outlet while playing games, compared to 314W for the GTX 1070. Looking at just the Vega cards, the Vega 64 is about nine percent faster than the Vega 56 overall, but it uses around 80W (20 percent) more power. In terms of system performance per watt, the Vega 56 is ten percent better than the Vega 64, and if we look just at the graphics card, it's around 25 percent better.
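Here's how those performance-per-watt figures shake out from the measurements above; the 180W platform estimate used to isolate card-only power is my assumption, not an official figure:

```python
# Perf per watt, Vega 56 vs Vega 64, using the measurements above.
# Performance is normalized to Vega 56 = 1.0 (Vega 64 is ~9% faster);
# the 180W platform draw used to isolate the card is an assumption.
perf = {"Vega 56": 1.00, "Vega 64": 1.09}
wall = {"Vega 56": 400, "Vega 64": 480}
platform_estimate = 180  # watts, assumed CPU/board/PSU overhead

system_ppw = {k: perf[k] / wall[k] for k in perf}
card_ppw = {k: perf[k] / (wall[k] - platform_estimate) for k in perf}

print(f"System perf/W edge for Vega 56: "
      f"{system_ppw['Vega 56'] / system_ppw['Vega 64'] - 1:.0%}")
print(f"Card-only perf/W edge for Vega 56: "
      f"{card_ppw['Vega 56'] / card_ppw['Vega 64'] - 1:.0%}")
```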

And AMD isn't done with Vega just yet, as we know there will be some sort of Vega Nano card in the future, probably within the next month or two. I expect it will target a 175W TDP similar to the R9 Nano's, and performance will probably be only slightly worse than the Vega 56—though if it's like the R9 Nano, the price may end up closer to the Vega 64. But you don't have to wait for a Nano if you just want better efficiency from Vega. AMD has included a BIOS switch along with a power saving profile that can reduce the power use quite a bit—at the cost of performance, of course. There are also features like Chill that can improve efficiency.

Vega 56 isn't just more efficient than the Vega 64, though—it's also currently cheaper than the GTX 1070, while delivering nearly the same performance overall. It comes out a few percent ahead, winning some games by double-digit percentages and losing others by similarly large margins, but basically it's at parity. The Vega 56 still uses more power, but now we're talking about 85W. Playing games four hours a day, every day of the year, the added cost in power would be around $12 per year—basically nothing to worry about.
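If you want to sanity-check that electricity figure, the math is simple enough; the $0.10 per kWh rate is an assumed ballpark, so substitute your local rate:

```python
# Annual cost of an extra 85W while gaming, four hours a day.
# The $0.10/kWh electricity rate is an assumed ballpark; adjust to taste.
extra_watts = 85
hours_per_day = 4
rate_per_kwh = 0.10  # dollars, assumed

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.0f} kWh per year, or about ${cost_per_year:.0f} per year")
```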

I mentioned in the unboxing of the RX Vega cards that AMD requested reviewers focus on the Vega 56 first if we were short on time, and the reason is obvious: it's the better bargain. And if you want to overclock it, you still can, though that will of course affect the overall efficiency in a negative way. The Vega 56 ends up being a good alternative to the GTX 1070, and an even better one if it's actually readily available at $400, considering the 1070 currently goes for $450 or more due to mining demand. Which brings up another topic.

What about cryptocurrency mining?

Given the shortages on AMD's RX 570/580 cards, which are typically selling at prices 50 percent or more above the MSRP, many have feared—and miners have hoped—that the RX Vega would be another excellent mining option. I poked around a bit to see what sort of performance the Vega 56/64 delivered in Ethereum mining, and so far it's been pretty lackluster, considering the price and power use. The Vega 56 manages around 31MH/s for Ethereum, and the Vega 64 does 33MH/s. Overclocking the VRAM helps boost both cards closer to 40MH/s right now. So at launch, that's not super promising.

The problem is that most of the mining software (whether for Ethereum, Zcash, or some other coin) has been finely tuned to run on AMD's existing Polaris architecture. Given time, we could see substantially higher hashrates out of Vega, and there are rumors that the right combination of VBIOS, drivers, and mining software can hit mining speeds more than double what I measured. Perhaps AMD is intentionally holding back drivers or other tweaks that would boost mining performance, and long-term AMD remains committed to gaming. All we can do is hold our breath and hope that mining doesn't drive prices of Vega into the stratosphere in the future.

Initial overclocking results

I'm not going to include a complete chart of overclocking results for RX Vega right now, mostly because I don't have them. I did some preliminary testing, and things are far more interesting on the Vega 56 card, as you'd expect. Simply raising the HBM2 clockspeed to 930MHz base (close to the Vega 64's 945MHz) and cranking the power limit up by 50 percent improved performance to where Vega 56 is effectively tied with Vega 64. Going further, I toyed with the percentage overclock slider and managed to push it 20 percent higher. In both cases, system power draw while gaming basically matched the Vega 64—480W at the outlet.

Here's the weird thing: in limited testing, all that added clockspeed did very little for performance. And yes, the clockspeed did go up (along with power draw), as Radeon Settings showed sustained clockspeeds of around 1900MHz. But again, this is only limited testing, so I need to go through the full game suite and verify that my clockspeeds are stable, and see if things are better in some other games.

One thing that didn't go so well with overclocking on the Vega 56 was pushing the HBM2 clocks higher. Maybe I have a card that doesn't OC as well, or maybe I need to find the right dials to fiddle with, but 950MHz resulted in screen corruption and a crash to desktop, and 940MHz showed some flickering textures. I suspect Vega 64 might have slightly higher voltages for the HBM2, or just better binning, but I don't expect VRAM clocks to be all that flexible.

As for Vega 64 overclocking, I ran out of time. I'll be looking at that later this week, and I suspect there's at least a bit of headroom if you're willing to crank up the fan speeds. I did that for Vega 56 as well, where 1500 RPM base fan speed and a maximum of 4500 RPM kept the GPU below 70C during my initial tests. But while raising the power limit of Vega 56 by 50 percent went without a hitch, I suspect I'll need to be more cautious with Vega 64—295W plus 50 percent would put the power draw potentially in the 400W or higher range. Yeah, probably not going to work so well.

An architecture for the future

I've talked about the Vega architecture before, and really for most gamers the important thing will be how a graphics card performs. But looking at the Vega design, there are clearly elements that aren't being fully utilized just yet. Just to quickly recap a few elements of the architecture that have changed relative to Polaris, here's what we know.

AMD notes five key areas where the Vega architecture has changed significantly from the previous AMD GCN architectures: High-Bandwidth Cache Controller, next generation geometry engine, rapid packed math, a revised pixel engine, and an overall design that is built for higher clockspeeds. I've covered all but the last of these previously, so let me focus on the higher clockspeed aspect for a moment.

In order to improve clockspeeds, everything has to become 'tighter'—the chip layout needs to be optimized, and in some cases work has to be split up differently. AMD talked about adding a few extra pipeline stages to Vega as one solution. I've discussed pipelines in CPU architectures before, and the same basic principles apply to GPU pipelines. Shorter pipelines are more efficient in terms of getting work done, but the tradeoff is that short pipelines generally don't clock as high. AMD didn't disclose full details, but while the main ALU (Arithmetic Logic Unit) remains a 4-stage pipeline, other areas like the texture decompression pipeline have a couple of extra stages. The result of all the efforts to optimize for higher clockspeeds is that Vega can hit 1600MHz pretty easily, whereas Fiji topped out in the 1000MHz range.

Other changes are more forward looking. Take the HBCC, the High-Bandwidth Cache Controller, which is supposed to help optimize memory use so that games and other applications can work with datasets many times larger than the actual VRAM. AMD showed some demonstrations of what the HBCC can do in professional apps, including real-time 8K video scrubbing on the Radeon Pro SSG. For the RX Vega, however, by default the HBCC is actually disabled in the drivers. Maybe that will change with further testing, but AMD says the HBCC will become more useful over time. What about now?

I turned on the HBCC to see what it does, or doesn't, do for current gaming performance. (Yes, I literally reran every single gaming benchmark, just in case there was an edge case where it helped.) The results were in line with what AMD said: HBCC didn't help performance in most games, with results being within the margin of error. Looking to the future, though, if AMD can get developers to build around its hardware, we could see games pushing substantially larger data sets. It's also still early days for Vega, so the drivers could be tuned over the coming months. That's true for Nvidia as well, but the GTX 1080/1070 have been out for over a year, so most of the gains have already been found. We'll have to wait and see if Vega performance improves given more time.

The Vega block diagram.

The changes to the geometry and shader pipelines are even more of a future benefit possibility, in part because a lot of the goodness will only be realized if developers actually use the new instructions and features. AMD gives an example of a new primitive shader type, where polygon throughput can be greatly increased. Primitive shaders work by skipping the old Direct3D API rendering pipeline approach (eg, input assembly, vertex shading, hull shading, tessellation, domain shading, and geometry shading) and giving more control to the developers. Many modern rendering techniques don't require all of those specific stages, and with the new primitive shaders Vega can deliver around four times the primitive throughput. But since primitive shaders are a new feature, no existing games utilize them.

Rapid packed math isn't quite so new a concept. Instead of the normal FP32 single-precision floating point operations, packed math can do two FP16 (half-precision) operations in their place. For some graphics effects, the loss of precision is acceptable, and AMD gave some examples of rendering engines selectively using FP16 where it made sense to realize an overall performance improvement of 10-20 percent. The pixel engine also features a new draw stream binning rasterizer, which takes a few cues from tiling graphics architectures to improve bandwidth utilization. Unlike primitive shaders, DSBR can work with existing applications, though the benefit varies—AMD points to the SPECviewperf 12 energy01 test as an example of a case where DSBR doubled the performance.
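Back to rapid packed math for a moment: to show what 'acceptable loss of precision' looks like, here's a minimal sketch using numpy's float16 on the CPU as a stand-in for GPU half-precision (real shader code would use FP16 types in HLSL or GLSL instead):

```python
import numpy as np

# FP16 keeps roughly three decimal digits of precision versus about seven
# for FP32, so values round more coarsely. For many color and lighting
# calculations the difference is invisible on screen, which is why an
# engine can selectively drop to FP16 and pack two operations into the
# slot one FP32 operation would occupy.
value = 0.123456789
fp32 = np.float32(value)
fp16 = np.float16(value)

print("FP32 value:", fp32)
print("FP16 value:", fp16)
print("Relative error: {:.3%}".format(abs(float(fp16) - float(fp32)) / float(fp32)))
```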

One other area I haven't talked about yet with Vega is that it also uses AMD's new Infinity Fabric, an SoC-like interconnect that has been talked about a lot with AMD's Ryzen CPUs. Vega uses the Infinity Fabric to connect the GPU cores to the memory controller, PCIe controller, display engine, and video acceleration blocks. But where Infinity Fabric will potentially prove most beneficial is in future APUs, allowing AMD to easily integrate Ryzen CPUs with Vega GPUs. Vega 10 is currently the high-end Vega part, and we'll certainly see lower tier offerings at some point, though maybe only in APUs since Polaris continues to do well as a mainstream and budget part.

What all of this means together is that Vega could improve in relative performance over time—and that's typical of all new graphics architectures. But in the here and now, Vega performs as shown in our benchmarks. Buy for the present, but hope for the future.

Vega is fast, but is it fast enough?

Considering the significantly higher clockspeeds, I had hoped Vega would perform far better than what I'm seeing today. It's not a bad GPU by any means, but it's not going to dethrone Nvidia, at least not right now. Maybe in the future, as more DX12 games become common and developers start using larger data sets and primitive shaders, Vega 64 could come out ahead of the GTX 1080. I wouldn't bet on that happening before we're onto the next generation Navi and Volta architectures, however.

What Vega does have going for it are reasonable prices, and AMD users will certainly appreciate having something that's clearly faster than both the RX 580 and the R9 Fury X. But even AMD's own numbers show Vega 64 as only being a modest upgrade over the R9 Fury X. The real target would be gamers who are still running R9 290/290X (or older) hardware. And for Nvidia users, unless you're unhappy with Team Green, there's no compelling reason to switch over to AMD. Similar performance, same price, much higher power use. Déjà vu.

AMD would love you to buy a FreeSync monitor to go with RX Vega.

Perhaps the biggest selling point for Vega is for people who prefer buying into the AMD ecosystem as a whole, rather than going with Intel and Nvidia. To that end, AMD has Vega 'packs' available that include one of the new Vega graphics cards, along with some games, a $200 discount on a FreeSync monitor, and a $100 discount on a Ryzen CPU and motherboard. Besides the GPU, FreeSync is the biggest draw for gamers, as it helps to improve the overall gaming experience. AMD likes to point out that there are over 200 FreeSync displays now available, compared to fewer than 40 G-Sync displays. And more importantly, FreeSync displays typically cost less than G-Sync offerings, though we have to be careful and compare only similar-quality displays.

If you already own a G-Sync display, none of this really matters. But if you're running an older display and are thinking it's about time to upgrade, AMD makes a compelling case for buying a Vega card with a FreeSync display over the competition's GeForce and G-Sync. If you're in the market for both a display and a graphics card, and go with our favorite option of a 27-inch 144Hz 2560x1440 IPS display, with G-Sync you're looking at the Acer XB271HU for $700. There's a very similar Acer XF270HU FreeSync display that sells for $548. So basically, AMD is right: you can save around $150 for that sort of setup by going with Vega and FreeSync instead of GTX 1070/1080 and G-Sync. Too bad the $200 Vega Pack discount doesn't apply to any FreeSync display, as I'm not as keen on the ultrawidescreen Samsung panel.

For those who aren't part of the AMD ecosystem, and who aren't interested in becoming part, Vega is far less interesting. Its direct competitor, the GTX 1080, is over a year old now, GTX 1080 Ti is available for those who want even higher performance, and we all know Nvidia isn't standing still. The question at this point is how far away Nvidia's Volta GPUs are from public launch, and how fast they might end up being. If Vega had been able to catch the 1080 Ti, Volta GPUs might have arrived as early as this fall. Having now seen Vega, however, it could be spring 2018 before we get a GTX 1180 (or whatever Nvidia ends up calling it).

The good news is that when it comes to high-end GPUs, AMD now shows up in the discussion again. It hasn't really done that in a meaningful way for over a year. Considering the high-end GPU market reportedly moves over a million units a quarter, Vega is sure to get at least some piece of the pie, where previously AMD wasn't even in the conversation. The RX Vega 56 is the more compelling choice, given the lower price and otherwise similar performance. Now we just need to see if the cards stay in stock, or if demand from non-gaming users snaps up all the cards in short order.

The RX Vega 64 is available starting today. As discussed previously, there are Vega Packs with games and some other goodies, including the Aqua Pack that has a liquid-cooled Vega 64. The Vega 56 meanwhile will go on sale August 28, so you might want to wait a couple of weeks. [Update: No surprise here, all the Vega 64 cards sold out in rapid fashion. This is typical of new hardware launches, as we've seen the same with GTX 1080, 1080 Ti, 1070, RX 580, 570, etc. Hopefully supply improves and demand subsides, but it will likely depend in part on whether mining performance improves.]

Jarred Walton

Jarred's love of computers dates back to the dark ages when his dad brought home a DOS 2.3 PC and he left his C-64 behind. He eventually built his first custom PC in 1990 with a 286 12MHz, only to discover it was already woefully outdated when Wing Commander was released a few months later. He holds a BS in Computer Science from Brigham Young University and has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance. 
