Radeon VII review: AMD's cutting-edge return to enthusiast gaming
- 08 February, 2019 01:00
Radeon VII breaks new ground for AMD, and for graphics cards in general. It’s the company’s first truly high-end 4K GPU, capable of surpassing 60 frames per second at High or Ultra settings. It’s the first-ever consumer graphics card built using the next-gen 7nm manufacturing process, and the first to ship with a massive 16GB of ultra-fast high-bandwidth memory (HBM). Radeon VII is even the first AMD graphics card that shifts away from reporting the GPU temperature alone to monitoring a more holistic array of 64 thermal sensors spread across the die. This is impressive hardware, the likes of which gamers haven’t seen before.
It’s no GeForce killer, though. The $700 Radeon VII trades performance blows with the similarly priced Nvidia GeForce RTX 2080 and even the two-year-old GTX 1080 Ti. Nvidia’s recent embrace of adaptive sync monitors eliminates AMD’s FreeSync monitor pricing advantage. And AMD’s graphics card lacks the real-time ray tracing hardware offered by GeForce RTX GPUs, though very few games take advantage of those capabilities at this point.
But don’t let those trade-offs deter you. Nvidia’s offerings have plenty of their own limitations, and AMD’s Radeon VII is a very competitive bleeding-edge beast of a graphics card. Let’s dig into why.
AMD Radeon VII specs and features
AMD’s name for this card contains several clever nods. Not only is the Radeon VII the first 7nm consumer graphics card, it’s the second generation of the company’s Vega architecture, following in the footsteps of the Radeon RX Vega 56 and 64. Here’s how the three GPUs compare in raw under-the-hood specs:
Even though the Radeon VII harbors fewer streaming processors than Vega 64, it demolishes its predecessor in sheer performance, as you’ll see in our benchmarks later. There are several reasons for that. First off, AMD tuned Radeon VII to run at much higher clock speeds than Vega, with maximum boost clocks roaring ahead by more than 200MHz—no small feat. (Note: The “peak engine clock” specification listed in the chart above refers to “the highest achievable frequency” in certain content creation workloads, while the traditional “boost clock” specification is for games.)
AMD also optimized the second-generation Vega architecture to provide lower latency, as well as more bandwidth to the render output units (ROPS). Those tweaks help improve gaming performance, while “increased floating point and integer accumulators” help boost results in compute workloads, a big focus for AMD with Radeon VII.
AMD also tweaked temperature monitoring significantly in Radeon VII. Traditionally, AMD graphics cards reported and adjusted performance based on a GPU temperature taken from a single sensor near a thermal diode. Modern GPUs, by contrast, come laden with temperature sensors: Radeon VII contains a whopping 64 spread across the chip—twice the number on the Vega 64.
AMD’s graphics card takes advantage of all that hardware with a new “Junction Temperature” reading that handles thermal throttling and fan control using all the available data. AMD claims the switch offers more dependable throttling behavior and slightly increased performance in thermally limited scenarios, like many (but not all) gaming workloads.
You can have your cake and eat it too, though, as Radeon Software’s Wattman overclocking tool reports both the new Junction Temperature as well as the standard GPU temperature.
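To make the difference concrete, here's a minimal Python sketch of how a hottest-sensor throttling scheme might work. This is purely illustrative: it assumes the Junction Temperature is simply the hottest of the 64 readings and uses a made-up throttle point, since AMD hasn't published the firmware's exact logic.

```python
# Hypothetical sketch of "junction temperature" throttling. AMD's real
# firmware logic is not public; this assumes the junction reading is
# simply the hottest of all on-die sensors.

JUNCTION_THROTTLE_C = 110  # assumed throttle point, illustrative only

def junction_temperature(sensor_readings_c):
    """Return the hottest reading across all on-die sensors."""
    return max(sensor_readings_c)

def should_throttle(sensor_readings_c, limit_c=JUNCTION_THROTTLE_C):
    """Throttle on the hottest spot rather than one legacy sensor."""
    return junction_temperature(sensor_readings_c) >= limit_c

# 64 sensors: a single hot spot drives the junction reading up even
# though a lone legacy "GPU temperature" sensor elsewhere on the die
# would report a cooler, less representative number.
readings = [75.0] * 63 + [95.0]
print(junction_temperature(readings))  # 95.0
print(should_throttle(readings))       # False
```

The point of the scheme is that throttling reacts to the actual hottest spot on the die, which is why AMD can claim more dependable behavior than steering by one sensor's reading.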
The shift from a 14nm to 7nm manufacturing process didn’t just improve GPU performance. AMD managed to shrink the GPU die from 495 square millimeters in Vega 64 to 331 in Radeon VII. As a result, the company crammed two more 4GB stacks of HBM memory onto the chip, bringing the total number of stacks up to four and the total memory capacity to 16GB. That’s twice as much as you’ll find in Nvidia’s RTX 2080, and even 5GB more than you’ll find in the lofty $1,200 GeForce RTX 2080 Ti.
Just as impressive: Radeon VII features a 4,096-bit memory interface, compared to Vega 64’s 2,048-bit interface, giving the card an astonishing overall memory bandwidth of 1 terabyte per second. Sweet holy moly. By comparison, Vega 64 offers 484GBps of memory bandwidth; the GeForce RTX 2080 offers 448GBps; and the RTX 2080 Ti offers 616GBps.
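A quick back-of-the-envelope check shows how those bus widths produce the quoted totals. The per-pin data rates below are inferred from the published bandwidth figures, not official spec-sheet numbers:

```python
# Sanity-check the quoted memory bandwidth figures.
# bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps per pin)
def bandwidth_gbps(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# Radeon VII: 4,096-bit HBM2 at ~2.0Gbps per pin -> 1TB/s
print(bandwidth_gbps(4096, 2.0))          # 1024.0 GB/s
# Vega 64: 2,048-bit HBM2 at ~1.89Gbps per pin -> ~484GB/s
print(round(bandwidth_gbps(2048, 1.89)))  # 484 GB/s
```

In other words, doubling the stack count doubled the bus width, and a modest per-pin speed bump did the rest.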
Such lofty memory capabilities offer benefits to gamers and content creators alike. Radeon VII shines brightest as a 4K gaming GPU, and games that offer 4K textures will often gobble up all the memory you can throw at it. A 16GB frame buffer offers abundant future-proofing if memory demands continue to expand, and it could also prove advantageous today if a 4K game exceeds the 8GB buffer offered by the RTX 2080. When a game surpasses the onboard memory total of your video card, it needs to tap into your much slower overall system memory instead, which can result in stutter-inducing frame time lag. For content creators, editing 4K or 8K videos can monopolize tremendous amounts of memory. Radeon VII can handle those workloads without breaking a sweat.
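For a rough sense of why high-resolution assets eat VRAM so quickly, consider the raw math on a single uncompressed texture. Real games use compressed, mipmapped formats, so these numbers are illustrative only:

```python
# Rough illustration of why 4K assets devour VRAM: one uncompressed
# 4096x4096 RGBA texture (4 bytes per pixel) is 64MiB before mipmaps
# or compression are taken into account.
def texture_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 ** 2)

print(texture_mib(4096, 4096))  # 64.0 MiB

# Roughly 128 such textures resident at once would already fill an 8GB
# card's memory, while a 16GB frame buffer leaves twice the headroom.
```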
Radeon VII also comes loaded with connectivity options, in the form of an HDMI port and three DisplayPorts. It lacks the VirtualLink USB-C connector that debuted in Nvidia’s RTX 20-series GPUs, but virtual reality headsets that support the newly created standard don’t exist yet, anyway. The card requires a pair of 8-pin power connectors to supply the 300 watts of energy needed to fuel it—a mere 5W increase over the Vega 64, despite Radeon VII’s significant performance uptick.
The card itself looks absolutely stunning from top to bottom, returning to the stark brushed aluminum design introduced in the woefully rare Radeon RX Vega 64 Limited Edition. One key difference: While the Vega 64 Limited Edition included a single blower-style fan on its shroud that helped expel air out of the back of your PC, the Radeon VII follows in the footsteps of Nvidia’s GeForce RTX Founders Edition cards by switching to a more traditional multi-fan setup that pushes the heat dissipated by your GPU into your case instead. Three black fans adorn the shroud to assist in the endeavor.
A red cube with a “Radeon” R lights up the outer corner of the graphics card when it’s running, an aesthetic matched by an illuminated red Radeon logo on the edge of the card. You can’t change the color of the LEDs. Normally, that’s not a big deal, but custom third-party Radeon VII graphics cards aren’t expected to be available when the card launches on February 7, so RGB fiends probably won’t be able to get their fix in the near-term.
As a modern Radeon graphics card, Radeon VII also supports FreeSync 2 HDR, virtual super resolution, the Radeon Overlay, per-game overclocking, and all the other nifty features baked into AMD’s superb Radeon Software Adrenalin 2019 edition. For a limited time, AMD will also toss in three free games—The Division 2, Devil May Cry 5, and Resident Evil 2—when you buy the Radeon VII.
But enough about the technical details. On to the games!
Next page: Test system configuration, gaming benchmarks begin
Our test system
Our dedicated graphics card test system is packed with some of the fastest complementary components available, to put any potential performance bottlenecks squarely on the GPU. Most of the hardware was provided by the manufacturers, but we purchased the cooler and storage ourselves.
- Intel Core i7-8700K processor ($360 on Amazon)
- EVGA CLC 240 closed-loop liquid cooler ($120 on Amazon)
- Asus Maximus X Hero motherboard ($260 on Amazon)
- 64GB HyperX Predator RGB DDR4/2933 ($416 for 32GB on Amazon)
- EVGA 1200W SuperNova P2 power supply ($180 on Amazon)
- Corsair Crystal 570X RGB case, with front and top panels removed and an extra rear fan installed for improved airflow ($170 on Amazon)
- 2x 500GB Samsung 860 EVO SSDs ($100 on Amazon)
To see how the $700 Radeon VII stacks up against the current competition, we’re comparing it to Nvidia’s $500 GeForce RTX 2070, $800 GeForce RTX 2080, and $1,200 GeForce RTX 2080 Ti Founders Edition graphics cards. We’re also including benchmarks for the $740 PNY GeForce GTX 1080 Ti and AMD’s $500 Radeon RX Vega 64 reference card.
Each game is tested using its in-game benchmark at the highest possible graphics presets. We disable VSync, frame rate caps, and all GPU vendor-specific technologies—like AMD TressFX, Nvidia GameWorks options, and FreeSync/G-Sync—and we enable temporal anti-aliasing (TAA) to push these high-end cards to their limits. If any setting differs from that, we’ll mention it.
AMD Radeon VII gaming benchmarks
Let’s kick things off with Strange Brigade ($50 on Humble), a cooperative third-person shooter where a team of adventurers blasts through hordes of mythological enemies. It’s a technological showcase, built around the next-gen Vulkan and DirectX 12 technologies and infused with features like HDR support and the ability to toggle asynchronous compute on and off. It uses Rebellion’s custom Asura engine. We test with async compute off.
Spoiler alert: Radeon VII puts in its strongest performance by far here, easily outclassing both the PNY GTX 1080 Ti and the Nvidia RTX 2080 FE—two similarly priced graphics cards—by more than 10 frames per second across all resolutions, and toppling the older Radeon RX Vega 64 by over 40 percent at 4K resolution.
Shadow of the Tomb Raider
Shadow of the Tomb Raider ($60 on Humble) concludes the reboot trilogy, and it’s utterly gorgeous—even the state-of-the-art GeForce RTX 2080 Ti barely manages to average 60 fps with all the bells and whistles turned on at 4K resolution. Square Enix optimized this game for DX12, and recommends DX11 only if you’re using older hardware or Windows 7, so we test with DX12. Shadow of the Tomb Raider uses an enhanced version of the Foundation engine that also powered Rise of the Tomb Raider.
The three $700 graphics cards turn in virtually identical performances, including the Radeon VII. Again, the newer card outclasses Vega 64 by just shy of 40 percent.
Far Cry 5
Finally, a DirectX 11 game! Far Cry 5 ($60 on Humble) is powered by Ubisoft’s long-established Dunia engine. It’s just as gorgeous as its predecessors were, and even more fun.
Radeon VII once again manages to hang tough with Nvidia’s powerful pair of $700 GPUs, flirting with 60 frames per second even with everything cranked at 4K resolution. Its lead over Vega 64 diminishes greatly in this game, though, to just over 26 percent.
Next page: Gaming benchmarks continue
Ghost Recon Wildlands
Move over, Crysis. If you crank all the graphics options up to 11, like we do for these tests, Ghost Recon Wildlands ($50 on Humble) and its AnvilNext 2.0 engine absolutely melt GPUs.
Ghost Recon Wildlands also prefers Nvidia’s GPU architecture in general, putting AMD’s new card very slightly behind the GTX 1080 Ti and RTX 2080 in raw frame rates. In terms of real-world experience it’s effectively a dead heat, though. Radeon VII once again claims a roughly 26 percent victory over AMD’s Vega 64.
Middle-earth: Shadow of War
Middle-earth: Shadow of War ($50 on Humble) adds a strategic layer to the series’ sublime core gameplay loop, adapting the Nemesis system to let you create an army of personalized Orc commanders. It plays like a champ on PC, too, thanks to Monolith’s custom LithTech Firebird engine. We use the Ultra graphics preset but drop the Shadow and Texture Quality settings to High to avoid exceeding 8GB of VRAM usage in our testing scenario, because graphics cards that exceed 8GB of capacity are rare indeed. Radeon VII’s 16GB frame buffer would easily let you crank those settings back up if you wanted, though.
Once again, while the Radeon VII technically falls behind Nvidia’s similarly priced GPUs by a few frames per second, they offer virtually identical real-world experiences.
F1 2018
The latest in a long line of successful games, F1 2018 ($60 on Humble) is a benchmarking gem, supplying a wide array of both graphical and benchmarking options—making it a much more reliable option than the Forza series. It’s built on the fourth version of Codemasters’ buttery-smooth Ego game engine. We test two laps on the Australia course, with clear skies.
The Radeon VII lags behind the GTX 1080 Ti and RTX 2080 by a more noticeable 7.5 and 10.6 percent, respectively, at 4K resolution. Nevertheless, AMD’s card easily delivers buttery-smooth 4K gaming that surpasses the 60-fps gold standard.
Next page: Gaming benchmarks continue
Ashes of the Singularity: Escalation
Ashes of the Singularity ($40 on Humble) was one of the very first DX12 games, and it remains a flagbearer for the technology to this day thanks to the extreme scalability of Oxide Games’ next-gen Nitrous engine. With hundreds of units onscreen simultaneously and some serious graphics effects in play, the Crazy preset can make graphics cards sweat. Ashes runs in both DX11 and DX12, but we only test in DX12, as it delivers the best results for both Nvidia and AMD GPUs.
This is another game where the Radeon VII trails Nvidia’s GPUs by 6 or 7 percent at 4K resolution. That shouldn’t be very noticeable to the human eye, and AMD’s card again has no problems hovering around 60 fps, even with all the eye candy cranked.
Grand Theft Auto V
We’re going to wrap things up with a couple of older games that aren’t really visual barn-burners, but still top the Steam charts day in and day out. These are games that a lot of people play. First up: Grand Theft Auto V ($30 on Humble) with all options turned to Very High, all Advanced Graphics options except extended shadows enabled, and FXAA enabled. GTA V runs on the RAGE engine and has received substantial updates since its initial launch.
This game tends to vastly prefer Nvidia GPUs, and the Radeon VII trails the older GTX 1080 Ti by a decent amount. But interestingly, thanks to tweaks in the GeForce RTX 2080’s technological configuration, Radeon VII comes out ahead of it at 4K resolution. Nvidia’s modern option takes back the lead if you shift the resolution down to 1440p or 1080p, though. Radeon VII is also 31 percent faster than Vega 64.
Rainbow Six Siege
Finally, let’s take a peek at Rainbow Six Siege ($40 on Humble), a game whose audience just keeps on growing, and one that still feels like the only truly next-gen shooter after all these years. Like Ghost Recon Wildlands, this game runs on Ubisoft’s AnvilNext 2.0 engine, but Rainbow Six Siege responds especially well to graphics cards that lean on async compute features.
Nvidia greatly enhanced the async compute capabilities of its graphics architecture in the new RTX 20-series lineup. As a result, the RTX 2080 opens up a huge lead over Radeon VII, even though AMD’s new card performs fairly evenly with the older GTX 1080 Ti. If you’re a Siege fan, you’ll want to opt for RTX over RVII.
Next page: Content creation benchmarks
AMD Radeon VII content creation benchmarks
AMD wants to tout the Radeon VII’s content creation chops, too. So for this review, my colleague Gordon Mah Ung ran additional tests focused on this use case.
Our Content Creation Testbed
For content creation, we used a machine a little better suited to actual content creation artists: AMD’s 32-core Threadripper 2990WX CPU in an MSI X399 MEG Creation motherboard. The build used Windows 10 RS5 and 32GB of DDR4/3200 in quad-channel configuration. The OS was installed on a HyperX SATA SSD with a Plextor M8Pe SSD for workloads that might be disk-bound. We used the latest available drivers for the Radeon VII and the GeForce RTX 2080 Founders Edition card. OpenCL was used for the Radeon VII, while the GeForce ran on CUDA.
Our first test uses Adobe Premiere Creative Cloud 2019 to export a 4K video using the H.264 YouTube 4K preset and the max render quality option. The first portion of the video is mostly a straight encode, while the latter half layers on GPU-taxing graphics and B-roll. The entire clip is also color corrected.
The results gave the Radeon VII a very small lead, but let’s call it a tie. For the most part, these results match performance data from AMD for 4K content, where the two cards are nearly even. We should note that AMD says using 8K resolution video widens the gap further.
Our next test used the Chaos Group’s V-ray benchmark to measure performance when rendering a ray-traced scene on the GPU. The Radeon again has a small lead of about five percent.
The elephant in the room, however, is those two words: ray traced. While the current V-Ray benchmark does not support Microsoft’s DirectX Raytracing (and, by extension, Nvidia’s RTX), it will. And once that happens, you can expect the performance win to shift in a big way to Nvidia.
One could argue, however, that straight up OpenCL performance matters more in the here and now. To measure OpenCL performance we used LuxMark 3.1 (available here) to gauge performance of both the Radeon VII and the GeForce RTX 2080.
The winner: Radeon VII in a big way. LuxMark (based on LuxRender) gave the Radeon VII anywhere from a 10 percent to a 38 percent advantage over the GeForce RTX 2080.
Radeon VII Content Creation Conclusion
For the most part, we’d say the Radeon VII pretty much equals or exceeds the RTX 2080 in several content creation tasks. But the answer is never that simple. Like games, content creation engines tend to be fairly specialized. Rather than simply saying one is the winner, you should focus on which is the winner for what you do. —Gordon Mah Ung
Next page: Power, thermals, noise, and synthetic benchmarks
AMD Radeon VII power draw, thermals, and noise
We also tested Radeon VII using 3DMark’s highly respected Fire Strike synthetic benchmark. Fire Strike runs at 1080p, Fire Strike Extreme runs at 1440p, and Fire Strike Ultra runs at 4K resolution. All render the same scene, but with more intense graphical effects as you move up the scale, so that Extreme and Ultra flavors stress GPUs even more. We record the graphics score to eliminate variance from the CPU.
Yep, everything falls about where you’d expect after observing the gaming benchmarks, which is always the case with Fire Strike. It’s a good “sanity check” tool.
We test power draw by looping the F1 2018 benchmark for about 20 minutes after we’ve benchmarked everything else, and noting the highest reading on our Watts Up Pro meter. The initial part of the race, where all competing cars are onscreen simultaneously, tends to be the most demanding portion.
Moving to the 7nm process has done wonders for Vega’s power efficiency. It’s still not quite on a par with Nvidia’s results, but it’s close enough that efficiency can’t be considered an AMD drawback anymore. It’s a colossal improvement over the hot, hungry Vega 64.
We typically test thermals by leaving HWInfo’s sensor monitoring tool open during the F1 2018 5-lap power draw test, noting the highest maximum temperature at the end. But third-party monitoring software like HWInfo and SpeedFan haven’t been adjusted to handle the way AMD tweaked Radeon VII’s temperature monitoring, and display the newer (and much higher) Junction Temperature rather than the traditional GPU temperature reading. Because all other graphics cards list the GPU temperature, we need to test that to properly compare performance. As such, we deviated from using HWInfo on the Radeon VII and measured the GPU temperature using AMD’s Wattman tool instead.
Assuming Wattman’s readings are accurate—and they’ve always tracked with HWInfo and SpeedFan in the past—then Radeon VII once again crushes its hot-blooded predecessor, Vega 64. Topping out at 78 degrees Celsius (172.4 degrees Fahrenheit) under load, AMD’s fan-laden 7nm GPU runs at a perfectly acceptable temperature. Heck, it’s chillier than the PNY GTX 1080 Ti’s customized cooler. No complaints here.
You can definitely hear the Radeon VII working when it’s under full load, but not enough to be distracting. Once again, it’s a huge improvement over the Vega 64’s banshee-like screaming.
Next page: Should you buy the AMD Radeon VII?
Should you buy the AMD Radeon VII?
If you’re hunting for a high-performing graphics card capable of playing games with few visual compromises at 4K resolution, or ultra-fast 1440p, then you should definitely consider the Radeon VII—especially if the sky-high $1,200 price tag for Nvidia’s GeForce RTX 2080 Ti scares you off. Don’t bother upgrading to this card if you already have a GTX 1080 Ti, though.
The GeForce RTX 2080 and Radeon VII each cost $700 and deliver very similar real-world performance overall, though the frame rate differences are extreme in some games. Radeon VII pounds the RTX 2080 Founders Edition in Strange Brigade, and the RTX 2080 pounds AMD’s card in Rainbow Six Siege and Ashes of the Singularity. Performance is a wash in most games, but the RTX 2080’s lead expands if you drop all the way down to an ultra-fast 1080p monitor. Nvidia’s GPU holds a small advantage in power efficiency and thermals as well, but the differences between the two cards are once again negligible.
So what about the standout features of each?
Nvidia’s recent FreeSync adoption eliminated a compelling reason to opt for Radeon cards over GeForce. Still, AMD loaded Radeon VII with some eye-catching extras. Radeon VII holds a small-to-large performance advantage over the RTX 2080 in the content creation benchmarks we tested—as expected, given how strong Radeon architectures have typically performed in compute workloads. One thing to consider though: Nvidia’s CUDA is much more popular for compute workloads than the OpenCL tools AMD relies on, and if you need to perform ray tracing, the RTX 2080’s dedicated RT cores could give that card a boost in ray tracing tasks.
The massive 16GB of HBM2 blazing along at 1TBps is another huge win for Radeon VII, doubling up the RTX 2080 in both capacity and overall bandwidth. Such a potent memory configuration provides the Radeon VII with plenty of future-proofing in case 4K textures keep growing in size (as they likely will), and could give AMD’s cutting-edge GPU a leg up if you’re planning to edit videos at ultra-high resolutions, like 4K or 8K.
Nvidia opted to push gaming into the future with its RTX graphics cards. Rather than loading them down with extra memory, Nvidia equipped the GeForce RTX 2080 and its brethren with dedicated RT and tensor core hardware that unlock real-time ray tracing and AI-enhanced gaming capabilities that the Radeon VII simply can’t match. Then again, developers haven’t rushed to roll out RTX technology. While more than 20 games have pledged to support real-time ray tracing or Nvidia’s Deep Learning Super Sampling, you can count the number of games that actually do right now on one hand.
If you create high-resolution videos when you’re not gaming, you’ll probably want to opt for the Radeon VII over the GeForce RTX 2080. If you’re simply a gamer looking for a killer 4K or 1440p gaming experience, your choice boils down to which graphics card offers the better future-proofing option: the Radeon VII’s 16GB of ultra-fast memory, or the GeForce RTX 2080’s nascent ray-tracing and AI hardware? Pick your poison, but don’t sweat it too much, because you can’t go wrong with either of these cards. The Radeon VII is a winner, even if it isn’t an outright GeForce killer.
That said, it is a bummer that two long years after the GTX 1080 Ti’s release, the modern successors from Nvidia and AMD each deliver comparable performance at the exact same price. Each comes loaded with cutting-edge hardware to justify the cost, but fingers crossed graphics card pricing returns to sanity sooner rather than later.