The wait is finally over.
When Nvidia launched its all-new Pascal architecture last year with the GTX 1080 and GTX 1070, gamers the world over waited for AMD to respond, and now it finally has with the Radeon RX Vega. We’ve known this was coming for a long time, as AMD has been hyping it for what seems like forever, but now the talking is over: we finally have everything we need to render a verdict on Fiji’s successor. Let’s dive in.
AMD is coming to market with two GPUs based on its 14nm Vega 10 silicon: the Vega 64 and the Vega 56, priced at $499 and $399 respectively. AMD is also introducing a new GPU-plus-bonuses bundle called Radeon Packs, which you can read about here. Essentially, if you spend $100 more on the GPU you get discounts on FreeSync monitors, Ryzen CPUs, and games. It’s a fun idea, and if you’re in the market for an AMD GPU, a FreeSync monitor is a natural companion. The packs are a bit convoluted in how they work, so we’ll spare you the fine print here; follow the link above for the details.
The new Radeon RX Vega GPUs are the flagship gaming models of the company’s all-new Vega architecture, the successor to Polaris, which debuted with the RX 480 and other midrange GPUs. AMD has scaled everything up, going from a 232mm² die on the RX 480 to a 486mm² die on these two GPUs. For comparison, the GP104 die used in the GTX 1080/1070 is 314mm², so Vega is quite a bit larger; it’s even bigger than the GTX 1080 Ti’s 471mm² die. Both of these GPUs compete directly with the GTX 1080/1070, as neither has the brawn to go up against the GTX 1080 Ti. That card will seemingly remain king until AMD’s Navi arrives at some point in the future, but Nvidia will have Volta ready by then too, so around and around we go.
Both Radeon RX Vega GPUs share the same basic specs, though the Vega 64 has higher core and memory clocks and more stream processors. It also consumes 85W more power. The two cards I received for testing looked identical in every way: typical Radeon reference design boards with illuminated Radeon logos and dual eight-pin power connectors.
Given the GPUs’ high power requirements, at least relative to their Nvidia counterparts, AMD has implemented a few technologies to reduce power consumption. The reference cards I received feature a switch on the card itself that lets you flip to an alternate VBIOS, shaving 15 to 30W off power consumption depending on which power profile you select. Yes, there are three power profiles available in the Radeon software: Power Saving, Balanced, and Turbo. There’s also Radeon Chill, which dynamically lowers the framerate when on-screen action slows down, cutting power draw and keeping temperatures in check.
The big news with these GPUs is that they employ what AMD calls a High Bandwidth Cache, composed of 8GB of second-gen High Bandwidth Memory, aka HBM2. Instead of hanging off the usual 256-bit or 384-bit bus, this memory sits alongside the GPU on a silicon interposer and enjoys an incredible 2048-bit wide path. That allows the memory to run at much lower clocks than the GDDR5/GDDR5X used by Nvidia while achieving similar results. If you’re concerned that 8GB isn’t enough, and you should be if you want to run games at 4K resolution, AMD lets you allocate system memory that is pooled with the onboard HBM2 via a slider in the software. An AMD rep I spoke with called it a “forward looking feature,” noting it currently won’t offer much benefit. By default it seemed to automatically claim half my test system’s memory, and there was no way to fine-tune it to use just one or two gigabytes, so it still feels kind of beta.
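To see why a very wide bus makes up for lower clocks, note that peak memory bandwidth is simply bus width times per-pin data rate. Here’s a quick back-of-the-envelope sketch in Python (the clock and data-rate figures are the commonly cited reference specs, used here purely for illustration):

```python
# Theoretical peak memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Vega 64: 2048-bit HBM2 at a 945MHz memory clock
# (double data rate -> roughly 1.89 Gbps per pin)
vega64 = bandwidth_gbs(2048, 1.89)

# GTX 1080: 256-bit GDDR5X at 10 Gbps per pin
gtx1080 = bandwidth_gbs(256, 10.0)

print(f"Vega 64:  {vega64:.1f} GB/s")   # ~483.8 GB/s
print(f"GTX 1080: {gtx1080:.1f} GB/s")  # 320.0 GB/s
```

Despite a memory clock a fraction of GDDR5X’s, the eight-times-wider bus gives Vega 64 roughly 50 percent more raw bandwidth on paper.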
AMD is also introducing a new display refresh technology called Enhanced Sync, which seems designed for folks without a FreeSync display, though AMD notes it is compatible with refresh-sync technologies. According to AMD, if your game’s framerate exceeds the display’s refresh rate (typically 60Hz), the framerate is left uncapped instead of being locked to 60fps, and the last completed frame is shown on each refresh interval, theoretically reducing input lag. If your game dips below the refresh rate, it disables V-sync and allows tearing, again to reduce input lag. You can enable it globally for all games or per-game via the Radeon software.
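The behavior described above boils down to a simple decision at each refresh interval. Here’s a hypothetical sketch of that policy in Python; the function and its names are my own illustration of AMD’s description, not actual driver code:

```python
def enhanced_sync_policy(framerate: float, refresh_rate: float = 60.0) -> str:
    """Illustrative sketch of the Enhanced Sync behavior described above.

    Above the refresh rate: the game renders uncapped, and only the most
    recently completed frame is presented each refresh interval (no
    tearing, no framerate cap, lower input lag than classic V-sync).
    Below the refresh rate: V-sync is effectively disabled, frames are
    presented immediately, and tearing is tolerated to keep lag down.
    """
    if framerate > refresh_rate:
        return "uncapped rendering, latest completed frame shown per interval"
    return "v-sync off, tearing allowed"

print(enhanced_sync_policy(144.0, 60.0))  # fast game on a 60Hz panel
print(enhanced_sync_policy(45.0, 60.0))   # demanding game dipping below 60fps
```

In other words, Enhanced Sync trades V-sync’s guaranteed tear-free output for lower latency on both sides of the refresh rate.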
The Vega 64 and 56 reference cards I received are physically identical: 10.5″ long and occupying two PCI Express slots. Both cards have three DisplayPort outputs and one HDMI port. They also feature a strip of LEDs above the PCIe connectors, labeled GPU Tach, that displays the current GPU load, going from a single light when the card is idle to all of them lit when the GPU is at full throttle. It looks kind of cool, but I sort of wish they’d dance a little like the LEDs on RAM.