Original Link: http://www.anandtech.com/show/7400/the-radeon-r9-280x-review-feat-asus-xfx

A little less than 2 years ago AMD launched their latest generation of video card products, the Radeon HD 7000 series. Based on the brand-new TSMC 28nm process, these have been some of the most successful AMD products in years, seeing AMD go head-to-head with NVIDIA for the performance crown for a time. Perhaps as a testament to that success, they’ve been racking up one of the longest retail shelf lives of any AMD product, with the first 7000 series products quickly approaching the 2 year mark.

Over time however all things change, and in the world of video cards change can never come a moment too soon. Previously announced at AMD’s 2014 GPU Product Showcase and launching this week is AMD’s successor to the Radeon HD 7000 series, the Radeon 200 series. AMD doesn’t have a new process node to work with here, and as such we’re not going to see anything mirroring the likes of the 7000 series or 5000 series launches, but by building upon their hardy Graphics Core Next architecture, the company does have some interesting ideas about how to refresh their product lines for the next generation.

Today we’ll be looking at the Radeon 200 series in detail – sans the yet-to-be-launched Radeon R9 290X – including both the feature sets and technologies that are arriving alongside the new 200 series cards, and several of the cards themselves. AMD is saving their best for last, but they have a warm-up act that should at least capture the attention of video card enthusiasts everywhere.

AMD’s 2014 GPU Lineup: The Radeon R9 and Radeon R7 Series

Starting with AMD’s 2014 GPU lineup the company is changing the naming of their products. The Radeon HD [product number] naming scheme that has served the company since the launch of the HD 2900XT in 2007 will be going away. Replacing it will be a new naming scheme, in the format of Radeon [product category] [product number].

The new naming scheme means that names like Radeon HD 7970 GHz Edition and Radeon HD 7770 get replaced with names like Radeon R9 280X and Radeon R7 260X. Changes in numbering aside, the use of product categories is new for AMD in the GPU space. At launch there will be the R9 and R7 categories, the former signifying AMD’s enthusiast level products, while the latter signifying AMD’s mainstream level products.

This new naming scheme brings AMD’s GPU naming in-line with how they already name their CPUs – A10, A8, etc – and presumably we’ll see lower R numbers on future integrated GPUs. Meanwhile in the world of retail desktops, this new naming scheme also ends up being very reminiscent of NVIDIA’s existing GTX and GT designations for their video cards.

As far as product numbers are concerned, with AMD having already reached 8000 in the HD series they’re essentially starting from the bottom, moving from 4 digit numbers to 3 digit numbers. Unfortunately the insufferable suffixes are back as of this generation, much to the chagrin of text parsers everywhere, after AMD went for more than half of a decade without making heavy use of them. The “X” suffix will be used to indicate the higher performance version of a product line, similar to how AMD has used 70 and 50 in previous version numbers. So far we’ve only seen products with and without an X, e.g. 260X or 250, but given AMD’s penchant for pushing out 3 and sometimes 4 different cards in a single line, X may not be the end of the suffixes.
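Tongue-in-cheek chagrin aside, the new scheme is actually quite regular. As a minimal sketch (the regex and function name below are our own, not anything AMD publishes), the new names decompose cleanly into category, number, and optional suffix:

```python
import re

# Hypothetical parser for AMD's new naming scheme:
# Radeon [product category] [product number][optional X suffix]
NAME_RE = re.compile(r"^Radeon (R\d) (\d{3})(X?)$")

def parse_radeon_name(name):
    """Split a 200 series style product name into its three parts."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a 200 series style name: {name}")
    category, number, suffix = m.groups()
    return {"category": category, "number": int(number), "suffix": suffix or None}

print(parse_radeon_name("Radeon R9 280X"))
# {'category': 'R9', 'number': 280, 'suffix': 'X'}
```

If AMD does eventually go beyond X, only the suffix group would need extending.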

So what products will comprise the initial R9 and R7 200 series? As part of their public product showcase last month AMD formally announced all of their initial products by name while withholding the specifications until the products themselves launch. At the top will be the mysterious R9 290X, which will not be launching today, and below it we have the R9 280X, R9 270X, R7 260X, R7 250, and R7 240. The latter 5 cards are all launching this week and AMD has released their complete specifications, which we’ve laid out below.

AMD Radeon 200 Series Specification Comparison
 | AMD Radeon R9 290X | AMD Radeon R9 280X | AMD Radeon R9 270X | AMD Radeon R7 260X
Stream Processors | (A Lot) | 2048 | 1280 | 896
Texture Units | (How Many?) | 128 | 80 | 56
ROPs | (We Don't Know) | 32 | 32 | 16
Core Clock | (For Sure) | 850MHz | 1000MHz | 1000MHz?
Boost Clock | (But) | 1000MHz | 1050MHz | 1100MHz
Memory Clock | (It's a big number) | 6GHz GDDR5 | 5.6GHz GDDR5 | 6.5GHz GDDR5
Memory Bus Width | (Or so I hear) | 384-bit | 256-bit | 128-bit
VRAM | (Yes, please) | 3GB | 2GB | 2GB
FP64 | (Hopefully) | 1/4 | 1/16 | 1/16
TrueAudio | Y | N | N | Y
Transistor Count | (Many) | 4.31B | 2.8B | 2.08B
Typical Board Power | (Good question) | 250W | 180W | 115W
Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm
Architecture | GCN 1.x? | GCN 1.0 | GCN 1.0 | GCN 1.1*
GPU | (The Big Kahuna) | Tahiti | Pitcairn | Bonaire
Launch Date | Soon | 10/11/13 | 10/11/13 | 10/11/13
Launch Price | (More Than 280X) | $299 | $199 | $139

R9 290X aside, what you’ll notice about the 200 series is that AMD is issuing new SKUs for their existing GPUs. For today’s product launch there will not be any new GPUs, just new configurations of AMD’s existing Southern Islands and Sea Islands GPUs.

AMD has been very explicit in not calling these rebadges, and technically they are correct, but all of these products should be very familiar to our regular readers. Even coming up on 2 years after its launch, Tahiti will continue being a magical place for AMD, forming the basis of the R9 280X in what’s essentially a lower clocked 7970 GHz Edition. Meanwhile the Pitcairn based 7870 gains PowerTune Boost capabilities and becomes the R9 270X, and the Bonaire based 7790 also gains boost capabilities while becoming the R7 260X. Finally Oland, the littlest member of the Southern Islands family, at last gets a retail desktop release with the R7 250 and R7 240. Oland was introduced several months ago, but until now has been OEM desktop and laptop exclusive.

Speaking of reused GPUs, with this generation of products AMD is pretty much doing everything they can to drop out of the codename bingo game. Compared to past years AMD is being extra careful not to use codenames in any public comments; they will not refer to their new high-end GPU, or even their existing GPUs, by the codenames we’ve come to know them by, instead referring to them by configuration and/or the products they’re in. AMD has been bitten by the use of codenames in the past – the enthusiast community at large spent the better part of a year salivating over rumored “Sea Islands” GPUs that never came – and this also appears to be an attempt by the company to obfuscate the relationship between products and the GPUs they contain, so that they can instead focus on performance.

To that end this means we’re still going to be looking at the same GCN feature set schism. The R9 280X and R9 270X are of course based on the original GCN architecture, while the Bonaire powered R7 260X is based on AMD’s revised GCN architecture. Since AMD has still not officially assigned names to these architectures, and because “Sea Islands” has been badly mangled by now, we’re going to continue referring to these architectures as GCN 1.0 and GCN 1.1 respectively, at least until such a time as they get a proper name out of AMD.

AMD Radeon Product Evolution
Predecessor | GPU | Successor
Radeon HD 7970 GHz Edition | Tahiti | Radeon R9 280X
Radeon HD 7870 | Pitcairn | Radeon R9 270X
Radeon HD 7790 | Bonaire | Radeon R7 260X
Radeon HD 7770 | Cape Verde | Retired
(OEM Only) | Oland | Radeon R7 250/240

With that said, while AMD is doing their best to drop codenames, they are technically still alive and kicking. Our R9 270X reference card is labeled Curacao, for example, despite the fact that it’s based on the venerable Pitcairn GPU. So AMD still has codenames internally, apparently including new names for existing GPUs.

Moving on, while the GPUs behind today’s cards are unchanged, the cards themselves are not. All of the cards receive new firmware with new functionality that’s not present in the 7000 series. Chief among these is the ability to drive three TMDS-type displays (DVI/HDMI) off of one card, which we’ll get to in detail in a little bit. The R7 260X in particular also gains audio capabilities via AMD’s TrueAudio technology, which is finally being activated after shipping deactivated on the equivalent 7790.

Of course a big part of today’s launch isn’t just about branding or firmware features; it’s also about pricing. AMD is essentially using this launch to formalize their ever-continuing price cuts, and to reframe all of their products in their new positions as lower priced, lower tier parts. For this same reason we’re also going to see an evolution in card designs from AMD’s partners, not only as iteration on previous designs but as cost optimizations to help meet their new price targets.

On the manufacturing side of matters, with any new GPU manufacturing node still well on the horizon this is a rather straightforward and logical move for AMD. With today’s launch AMD is shifting their stack down while introducing products based on their new, larger high-end GPU (est. 425mm2) to make up for the lack of progress on the manufacturing side. GCN is already a very solid architecture, and while any architecture is going to have attributes that can be tweaked for performance, there’s only so much that can be done without a smaller process node.

Ultimately while AMD doesn’t have new GPUs or access to a new manufacturing node today, their cuts over the years have kept video card prices on their regular downward slope while the hardware itself has become marginally faster. The 7970, introduced just shy of 2 years ago at $550, is now the faster $299 R9 280X, the 7870 at $350 is now the faster R9 270X, etc. Without a new manufacturing node AMD can’t move the power/performance curves – in fact power consumption is going to be up slightly to compensate for the higher clockspeeds – but price cuts have pushed what was the high-end farther down the price/performance curve. The final piece of the picture though will be the R9 290X, which is a story for another time.

AMD Radeon Price Evolution
Launch Price | Video Card | Current Price (Closest Card)
$549 | Radeon HD 7970 | $299
$349 | Radeon HD 7870 | $199
$149 | Radeon HD 7790 | $139

The fact that AMD is mostly replacing old cards with new cards based on the same GPUs as those old cards does put us as reviewers and enthusiasts in an odd spot though. Despite the fact that GPUs like Tahiti and Pitcairn are coming up on 2 years old there’s no sign of retirement for the GPUs themselves, only the first generation of products based on them. The typical shelf life for a GPU is 1-2 years, the extreme cases being highly successful products like AMD’s Juniper GPU (5770) or NVIDIA’s G92 GPU (9800).

However in the case of the 200 series and its reused GPUs, AMD is explicitly calling this their 2014 GPU lineup, with no indication that these products are anything less than permanent. With any new manufacturing process still well into the future, we have unusually reliable visibility into AMD’s future GPUs: we’re clearly going into a 3rd year of Tahiti and Pitcairn, and it may end up being most of another year before they’re finally done. If this proves to be the case then it would certainly rewrite the book on GPU shelf lives.

Meanwhile with the 200 series hitting the scene, the Radeon 7000 series is being prepared for retirement. AMD and their partners are giving the 7000 series a price cut and a push to clear out that inventory and to focus on their 2014 parts. As such the various 7000 series cards are expected to come down in price to match or beat their 200 series equivalents, though this will only last so long as the inventory of old cards does.

Moving on to retail matters, buyers of the 200 series will want to pay attention to the fact that the 200 series is not currently part of Never Settle Forever. AMD’s video game bundle program is not being extended to the 200 series at this time, so on top of the price cuts, the 7000 series cards will also have an active game bundle while the 200 series does not. AMD for their part isn’t saying anything concrete about the future, but they are strongly hinting that this is temporary, and that the 200 series will be added later, after the 7000 series is cleared out, in order to give buyers an extra incentive to pick up a 7000 series card first. Given AMD’s situation it makes a lot of sense, but it means we’re in a weird position where, for the first time in over a year, it’s NVIDIA that has the better bundle, with their current Batman: Arkham Origins bundle.

AMD Never Settle Forever: Radeon Rewards Tiers (Oct. 2013)
Card | Tier | Number of Free Games | Cur. Number of Games on Tier
200 Series | N/A | 0 | 0
7900 Series | Gold | 3 | 11
7800 Series | Silver | 2 | 9
7790/7770 | Bronze | 1 | 7

This week’s launch will be a hard launch; with the cards based on existing GPUs, there’s little chance of an inventory problem or shortage to contend with. That said, AMD is lifting the NDA on these cards earlier than they’re scheduled to go on sale. The new 200 series cards won’t officially become available for sale until Friday the 11th, so there will be a few days’ gap between the announcement and retail sales.

Finally on a housekeeping note, we’ll be splitting up our coverage of the 200 series launch over two articles. As AMD is launching a number of cards this week – we have 4 in hand just for today’s NDAs – we’ll be covering the 200 series in general and the R9 280X in one article, while focusing on the lower end R9 270X and R7 260X in a second article. So if you’re looking for R9 270X or R9 260X performance, please be sure to check in later.

Mantle: A Low-Level Graphics API For GCN

Before we dive into the hardware itself we want to spend a bit of time talking about AMD’s new software and technology initiatives for the upcoming year. Most of what we’ll discuss in the next few pages isn’t 200 series in particular, but these are items that will be of importance to AMD’s ecosystem over the next year and beyond.

We’ll start of course with Mantle, AMD’s low level graphics API for GCN. Mantle was first announced at AMD’s 2014 GPU Product Showcase, as part of AMD’s greater plan to leverage their next generation console relationship. Mantle is fundamentally a low level API designed to interact extremely closely with AMD’s GCN architecture GPUs, and in doing so will let them achieve greater performance than either Direct3D or OpenGL in some situations by bypassing the abstractions and overhead that can slow down the rendering process.

Unlike some of AMD’s other technology announcements the company presented everything regarding Mantle in their public session, so there isn’t any new previously-NDA’d material to discuss here. AMD won’t be discussing Mantle in any more detail until their Developers Summit next month. So everything we know about Mantle we’ve previously covered in our Understanding Mantle article, and as such this is going to be a summary of that article.

What is Mantle? Mantle is a new low-level graphics API specifically geared for AMD’s Graphics Core Next architecture. Whereas standard APIs such as OpenGL and Direct3D operate at a high level to provide the necessary abstraction that makes these APIs operate across a wide variety of devices, Mantle is the very opposite. Mantle goes as low as is reasonably possible, with minimal levels of abstraction between the code and the hardware. Mantle is for all practical purposes an API for doing bare metal programming to GCN. The concept itself is simple, and although low-level APIs have been done before, it has been some time since we’ve seen anything like this in the PC space.

Mantle exists because high level APIs have drawbacks in exchange for their ability to support a wide variety of GPUs. Abstractions in these APIs hide what the hardware is capable of, and are what allow widely disparate hardware to support the same standards. This is how, for example, both AMD’s VLIW5 and GCN architectures can be Direct3D 11 compliant, despite the vast differences in their architectures, data structures, and data flows.

At the same time however the code that holds these abstractions together comes with its own performance penalty; there is always a tradeoff. The principal tradeoffs here include memory management – it’s potentially faster if you know exactly where everything is and can load exactly the data you want ahead of time – and CPU overhead from issuing commands to the GPU, due to all of the work that must be done via abstraction to prepare those calls for the target GPU and JIT compiling code as necessary. More commonly known as draw calls, these are the individual commands sent to the GPU to get objects rendered. A single frame can be composed of many draw calls, upwards of a thousand or more, and every one of those draw calls takes time to set up and submit.

Although the issue is receiving renewed focus with the announcement of Mantle, we have known for some time now that groups of developers on both the hardware and software side of game development have been dissatisfied with draw call performance. Microsoft and the rest of the Direct3D partners addressed this issue once with Direct3D 10, which significantly cut down on some forms of overhead.

But the issue was never entirely mitigated, and to this day the number of draw calls high-end GPUs can process is far greater than the number of draw calls high-end CPUs can submit in most instances. The interim solution has been to attempt to use as few draw calls as possible – GPU utilization takes a hit if the draw calls are too small – but there comes a point where a few large draw calls aren’t enough, and where the CPU penalty from generating more draw calls becomes notably expensive.
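The batching tradeoff above can be illustrated with a toy cost model. The overhead numbers below are purely illustrative assumptions, not measurements of any real API, but they show why fewer, larger draw calls cost the CPU so much less for the same amount of geometry:

```python
# Toy model of CPU-side draw call submission (illustrative numbers only):
# each call pays a fixed validation/translation overhead on top of the
# per-object work, so batching objects into fewer calls cuts total CPU time
# even though the same objects get drawn.

PER_CALL_OVERHEAD_US = 50.0  # assumed fixed cost per draw call
PER_OBJECT_COST_US = 2.0     # assumed cost of recording one object's data

def cpu_submit_time_us(num_objects, objects_per_call):
    calls = -(-num_objects // objects_per_call)  # ceiling division
    return calls * PER_CALL_OVERHEAD_US + num_objects * PER_OBJECT_COST_US

naive = cpu_submit_time_us(1000, 1)      # one draw call per object
batched = cpu_submit_time_us(1000, 100)  # 100 objects per call

print(naive, batched)  # 52000.0 2500.0
```

Under this model the batched path is over 20x cheaper on the CPU, which is exactly the pressure that drives developers toward fewer, larger draw calls.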

As a result we have Mantle: a low-level API that cuts out the abstraction and in the process makes draw calls cheap (among other features).

However while the performance case for Mantle is significant on its own, it’s far from the only purpose Mantle serves in AMD’s plans. Low-level APIs are not easy to work with, and given the effort needed to develop engines against such APIs it’s unlikely one would ever take off on its own in this manner. So for all of the performance benefits of using Mantle, we must also talk about how AMD is going to leverage their console connection with Mantle both to help make porting easier for multiplatform developers, and at the same time use those games and developers to get the API off of the ground.

In the world of game console software both high level and low level APIs are commonly used. High level APIs are still easier to use due to abstraction hiding the ugly bits of the hardware from programmers, but when you’re working with a fixed platform with a long shelf life, low level APIs not only become practical, they become essential to extracting the maximum performance out of a piece of hardware. As good as a memory manager or a state manager is, if you know your code inside and out then there are numerous shortcuts and optimizations that are opened up by going low level, and these are matters that hardcore console developers chase in full. However because these optimizations are tied to the hardware underlying the console itself, when it comes time to port a game to the PC these optimizations are lost, as the game needs to be able to operate entirely within high-level APIs such as Direct3D and OpenGL. Or at least they used to, until Mantle.

Mantle in this context is a way to allow multiplatform developers to take their already optimized GCN rendering code and bring it over to the PC. They’ll still have to write Direct3D/OpenGL code for NVIDIA/Intel/ImgTech GPUs, but for AMD GPUs they can go lower and faster, and best of all they already have most of the code necessary to do this. Code coming from the consoles to the PC over Mantle should be very portable.

How portable? The answer surprised even us. Based on our conversations with AMD and what they’re willing to say (and not say), we are of the belief that Mantle isn’t just a low level API, but rather Mantle is the low level API. As in, it’s heavily derived (if not copied directly) from the Xbox One’s low level graphics API. All of the pieces are there; AMD will tell you from the start that Mantle is designed to leverage the optimization work done for games on the next generation consoles, and furthermore Mantle can even use the Direct3D High Level Shader Language (HLSL), the same shader language Xbox One shaders will be coded against in the first place.

Now let’s be very clear here: AMD will not discuss the matter let alone confirm it, so this is speculation on our part. But it’s speculation that we believe is well grounded. Based on what we know thus far, we believe Mantle is the fundamentals of the Xbox One’s low level API brought to the PC.

By being based on the Xbox One’s low level API, Mantle isn’t just a new low level API for AMD GCN cards, whose success is defined by whether AMD can get developers to create games specifically for it, but Mantle becomes the bridge for porting over Xbox One games to the PC. Nothing like this has ever been done before, so quite how it will play out as a porting API is still up in the air, but it’s the kind of unexpected development that could have significant ramifications for multiplatform games in the future.

Of course an API is only as useful as the software that uses it, and consequently AMD has been working on this matter before they even announced Mantle. As AMD tells it, Mantle doesn’t just exist because AMD wants to leverage their console connection, but it exists because developers want to leverage it too, and indeed developers have been coming to AMD for years asking for such a low level API for this very reason. To that end a big part of Mantle’s creation is rooted in satisfying these requests, rather than just being something AMD created on its own and is trying to drum up interest for after the fact.

With at least one developer already knocking on their door, AMD’s immediate strategy is to get Mantle off the ground with a showcase game, all the while focusing less on individual game developers and more on middleware developers to implement Mantle support. In a roundabout way AMD is expecting middleware to become the new level of abstraction for most game developers in this upcoming generation, due to the prevalence of middleware engines. As game developers make ever increasing use of middleware over limited-reuse in-house game engines, downstream developers in particular will be spending their time programming against the middleware and not the APIs it sits on top of, making it easy to work in Mantle support.

Consequently, AMD for their part believes that if they can get Mantle support into common middleware like DICE’s Frostbite engine, then the downstream games using those products will be in a good position to offer Mantle support with little to no effort on the part of the individual game developer. Put in the humongous effort once at the middleware level, and AMD won’t have to repeat it with individual developers.

As the first part of that plan, the aforementioned showcase game and engine for Mantle will be DICE’s Battlefield 4, which will receive Mantle support in an update in December. Electronic Arts is planning on making extensive use of the Frostbite engine within their company for this generation, and with DICE’s developers being among the premier technical development houses of this generation, BF4 and Frostbite are exactly what AMD needs to showcase Mantle and get their foot in the door. DICE for their part has proven to be quite enthusiastic about the concept, which helps to validate AMD’s earlier claim about developers having been asking for this in the first place, and also sets up a model for future developers to work from. DICE is just one developer, but with any luck for AMD they are the first of many developers.

Moving on, there is a downside to all of this that we need to point out, however, and that is the risk of fragmentation and incompatibility. Unlike consoles, PCs are not fixed platforms, and this is especially the case in the world of PC graphics. If we include both discrete and integrated graphics then we are looking at three very different players: AMD, Intel, and NVIDIA. All three have their own graphics architectures, and while they are bound together at the high level by Direct3D feature requirements and some common sense design choices, at the low level they’re all using very different architectures. The abstraction provided by APIs like Direct3D and OpenGL is what allows these hardware vendors to work together in the PC space, but if those abstractions are removed in the name of performance then that compatibility and broad support is lost in the process.

Low-level APIs, while very performance effective, are the antithesis of the broad compatibility offered by high level APIs. On their own low-level APIs are not a problem, but there is always the risk that development of the low-level code for a game takes precedence over the high-level code, which in turn would impact the users of non-GCN GPUs. We’ve seen this once before in the days of 3dfx’s Glide, so it’s not an entirely unfounded fear.

At the risk of walking a very fine line here, like so many aspects of Mantle these are not questions we have the answer to today. And despite the reservations this creates over Mantle, this doesn’t mean we believe Mantle should not exist. But these are real concerns, and they are concerns that developers will need to be careful to address if they intend to use Mantle. Mantle’s potential and benefits are clear, but every stakeholder in PC game production needs to be sure that if Mantle takes off, it doesn’t lead to a repeat of the harmful aspects of Glide.

Wrapping things up, when AMD first told us about their plans for Mantle, it was something we took in equal parts shock, confusion, and awe. The fact that AMD would seek to exploit their console connection was widely expected, however the fact that they would do so with such an aggressive move was not. If our suspicions are right and AMD is bringing over the Xbox One low level API, then this means AMD isn’t just merely exploiting the similarities to Microsoft’s forthcoming console, but they are exploiting the very heart of their console connection. To bring over a console’s low level graphics API in this manner is quite simply unprecedented.

However at this point we’ve just scratched the surface of Mantle, and AMD’s brief introduction means that questions are plenty and answers are few. The potential for improved performance is clear, as are the potential benefits to multiplatform developers. What’s not clear however is everything else: is Mantle really derived from the Xbox One as it appears? If developers choose to embrace Mantle how will they do so, and what will the real performance implications be? How will Mantle be handled as the PC and the console slowly diverge, and PC GPUs gain new features?

The answers to those questions and more will almost certainly come in November, at the 2013 AMD Developer Summit. In the interim however, AMD has given us plenty to think about.

AMD Display Technologies: 3x DVI/HDMI Out, Tiled Display Support, & More

Although AMD doesn’t have new GPUs to show off today, that doesn’t mean their various hardware groups have been sitting idly by. Even with their existing hardware AMD can make at least some small changes via firmware and drivers, and this is something AMD’s Display Technology group, led by AMD Fellow David Glen, has been working on for the 200 series.

There won’t be any HDMI 2.0 support here (sorry guys, that needs new hardware) but they’ve been working on making improvements to Eyefinity surround setups. As is well known about the 7000 series, it was limited to 2 independent TMDS interface (DVI/HDMI) displays at once. Unlike the packet based DisplayPort interface, which operates at a single clockspeed and can vary the number of packets sent to adjust the resulting bandwidth, TMDS style interfaces adjust the clockspeed of the interface itself to match the needs of the display. As a result while you can drive a large number of DisplayPort interface monitors off of a single shared clock generator, you need a dedicated clock generator for each and every TMDS interface monitor. AMD only put 2 clock generators for TMDS interfaces on their silicon, hence they could only drive 2 such monitors at once.
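The reason TMDS links need dedicated clock generators is that the pixel clock is dictated by the display's full timing, blanking included: pixel clock = horizontal total × vertical total × refresh rate. As a small sketch using the standard CEA-861 timing for 1920x1080 @ 60Hz (2200 x 1125 total pixels):

```python
# A TMDS link's pixel clock is set by the display's complete timing,
# including blanking: pixel_clock = h_total * v_total * refresh_rate.
# Different monitors/modes therefore need different clock frequencies,
# which is why each TMDS display normally needs its own clock generator.

def pixel_clock_mhz(h_total, v_total, refresh_hz):
    return h_total * v_total * refresh_hz / 1e6

# Standard CEA-861 timing for 1920x1080 @ 60Hz: 2200 x 1125 total pixels
print(pixel_clock_mhz(2200, 1125, 60))  # 148.5 (MHz)
```

Two monitors running different modes would need, say, 148.5MHz and 154MHz simultaneously, which a single shared clock generator cannot provide.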

Radeon 7000 Series DVI/HDMI Output Options: 3, Choose 2

With the 200 series this isn’t changing – it’s the same silicon after all – but AMD has implemented some new tricks to partially mitigate the issue. Thanks to some firmware and board level changes, with the 200 series AMD is now able to attach multiple TMDS transmitters/interfaces to the same clock generator, allowing one clock generator to be used to drive multiple displays. As a result it’s now possible to drive up to 3 TMDS interface displays off of a single 200 series card, albeit with restrictions.

The catch here is that these can’t be independent displays; this change is primarily intended to enable Eyefinity with cheap, DVI/HDMI-only monitors. To utilize clock sharing and drive 3 such monitors off of a single card, all 3 monitors must be timing-identical, which functionally speaking almost always requires the monitors to be completely identical. Furthermore the sharing of the clock generator can only be engaged or disengaged at boot, so the 3rd display cannot be hot-plugged and must be present at boot time. Consequently this is by no means as unrestricted and easy as having native support for 3 TMDS interface displays, but for Eyefinity it will get the job done.
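The "timing-identical" restriction can be sketched as a simple equality check. The fields below are our own simplified stand-in for a display's detailed timing descriptor, not AMD's actual validation logic:

```python
# Sketch of the timing-identical restriction: a single clock generator can
# only drive several TMDS displays if every display's timing matches exactly.
from dataclasses import dataclass

@dataclass(frozen=True)
class DisplayTiming:
    h_active: int
    v_active: int
    h_total: int
    v_total: int
    pixel_clock_khz: int

def can_share_clock(timings):
    """All displays may share one clock generator only if timings match."""
    return len(set(timings)) == 1

a = DisplayTiming(1920, 1080, 2200, 1125, 148500)
b = DisplayTiming(1920, 1080, 2200, 1125, 148500)
c = DisplayTiming(1920, 1200, 2080, 1235, 154000)

print(can_share_clock([a, b, b]))  # True: three identical monitors
print(can_share_clock([a, b, c]))  # False: a mixed set of monitors
```

In practice the safest way to satisfy the check is three units of the exact same monitor model, which is the typical Eyefinity setup anyhow.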

Radeon 200 Series DVI/HDMI Output Options: All 3 Together, As Long As They're Identical

Of course this restriction only applies to driving 3 TMDS interface monitors off of a single card natively. Using DisplayPort, either with a native monitor or through an active DP-to-DVI/HDMI adapter, still allows the same fully independent functionality as before.

Moving on to something a bit more applicable to all Radeon users, as our regular readers are aware AMD is a significant participant in the VESA standards body, the group responsible for the DisplayPort standard. As part of the general trend in consumer electronics, the VESA group has been gearing up for 4K “UltraHD” displays, including rolling out updates for their various standards to better manage the emergence of those displays.

AMD, via the VESA, is rolling out support for VESA Display ID 1.3 in their newest drivers, for availability on the Radeon HD 7000 series and above. Display ID 1.3’s significant addition is that it formalizes support for so-called tiled displays, which implement very high resolutions such as 4K in the form of multiple lower resolution tiles that identify and behave as separate monitors. Tiled displays are atypical for PC displays, which are historically based on a single tile/stream, and for the immediate purposes of the PC industry are something of a half-way house for 4K @ 60Hz on the PC, as timing controllers for monitors that can do 4K @ 60Hz natively simply do not exist yet. This is why monitors such as the recently released Asus PQ321 utilize tiles.

Ultimately tiled 4K displays are a transitional technology, as they’ll be replaced with native (single tile) 4K displays next year when suitable timing controllers hit the market. In the interim, Display ID 1.3 is the formal solution to that problem, along with allowing the VESA to lay the groundwork for future, even larger tiled displays.

To this end, Display ID 1.3 implements support for tiled monitors by adding a new data block to the descriptor, the Tiled Display Topology Data Block. The TDTDB is used by displays and other sink devices to tell source devices about the existence of the tiles, the format/resolutions they use, and the relative positioning of the tiles. Coupled with DisplayPort 1.2, which can carry multiple display streams over a single connector via MST technology, it’s possible to hook up a tiled 4K display via a single DisplayPort connection, with Display ID providing the necessary data for the video card to make it seamlessly work.
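Conceptually, the topology data lets the source stitch the tiles into one logical surface. The sketch below uses our own simplified field names, not the actual Display ID 1.3 block layout, to show how per-tile geometry yields the assembled resolution:

```python
# Sketch of what a tiled display topology conveys: each tile reports its
# size and its grid position, letting the source compute the full logical
# surface. Field names are our own simplification, not the spec's layout.
def assembled_resolution(tiles):
    """Compute the full logical resolution from per-tile geometry."""
    width = max((t["col"] + 1) * t["w"] for t in tiles)   # rightmost edge
    height = max((t["row"] + 1) * t["h"] for t in tiles)  # bottom edge
    return width, height

# A first generation tiled 4K monitor built from two 1920x2160 tiles side
# by side, each presented over its own DisplayPort 1.2 MST stream:
tiles = [
    {"col": 0, "row": 0, "w": 1920, "h": 2160},
    {"col": 1, "row": 0, "w": 1920, "h": 2160},
]
print(assembled_resolution(tiles))  # (3840, 2160)
```

Without this data the OS sees two unrelated 1920x2160 monitors; with it, the video card can present a single seamless 3840x2160 desktop.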

Looking towards the future, AMD has also explicitly mentioned plans for supporting native 4K @ 60Hz monitors once the necessary timing controllers become available. Curiously, only the R9 290 series is mentioned as supporting this mode (note that it’s based on new silicon), but as we’re a year out we’ll see how that goes when the time comes.

Finally, as another improvement coming to the 200 series, AMD’s Discrete Digital Multipoint Audio (DDMA) support is getting an upgrade. First introduced alongside the 7000 series, DDMA allows for audio-capable HDMI/DisplayPort monitors to coexist, and for each to present themselves as an independent sound sink. The idea behind this technology is to enable uses where having discrete speakers dedicated to each monitor would come in handy, such as video conferencing.

However, utilizing DDMA as it originally shipped required software that supported sending audio to multiple independent devices at once. Some software supported this and some did not. So as a driver level tweak AMD is implementing an alternative mode where the driver presents a 6 channel setup as a single sink, and then splits up those channels among the actual monitors. The use cases are a bit more limited here – AMD proposes using it for TrueAudio, even though no one is going to be positioning a monitor behind themselves – but it’s a simple hack that nonetheless allows using the speakers from additional monitors in cases where the application itself doesn’t natively support it.
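The driver-side remapping described above can be sketched very simply: the application sees one 6 channel device, and the driver fans stereo pairs out to each physical monitor. This is an illustrative sketch only; AMD’s actual driver logic isn’t public, and the function and monitor names here are our own inventions.

```python
def split_channels(frame, monitors):
    """Split one interleaved 6-channel sample frame across monitors,
    two channels (left/right) per display, in enumeration order.
    Hypothetical sketch of the driver-side remapping."""
    assert len(frame) == 2 * len(monitors), "6 channels -> 3 stereo monitors"
    return {mon: (frame[2 * i], frame[2 * i + 1])
            for i, mon in enumerate(monitors)}

# One interleaved 5.1-style frame fanned out to three monitors.
frame = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
print(split_channels(frame, ["left_monitor", "center_monitor", "right_monitor"]))
```

The appeal of doing it this way is that any application that can already output 5.1 audio gets multi-monitor speakers for free, with no awareness of DDMA required.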

TrueAudio Technology: GPUs Get Advanced Audio Processing

The final major technical announcement coming out of AMD’s 2014 GPU product showcase was TrueAudio. For as much as Mantle caught us off guard on the software side, TrueAudio completely caught us off guard on the hardware side, as AMD implemented this very much under our noses.

Like Mantle, TrueAudio was something AMD covered in significant detail in their public session, and consequently we’ve already covered it in some depth in a previous article. However AMD has until now held back some of the finer details about the technology, which we can now share with you.

In a nutshell, TrueAudio is a return to the concept of hardware accelerated audio processing, with AMD leveraging their position to put the necessary hardware on the GPU. Hardware accelerated audio processing in the PC space essentially died with Windows Vista, which moved most of the Windows audio stack into software. Previously the stack was significantly implemented through drivers and as such various elements could be offloaded onto the sound card itself, which in the case of 3D audio meant having the audio card process and transform DirectSound 3D calls as it saw fit. However with Vista hardware processing and hardware access to those APIs was stripped, and combined with a general “good enough” mindset of software audio + Realtek audio codecs, the matter was essentially given up on.

Now, even with the loss of traditional hardware acceleration under Vista, it’s still possible to do advanced 3D audio and other effects in software by having the game engine itself do the work. However this is generally not done, as game developers are hesitant to allocate valuable CPU time to audio and other effects that are difficult to demonstrate and sell. Further complicating this, of course, is the current generation of consoles, which dedicate a relatively small portion of what are already pretty limited resources to audio processing. As a result the baseline for audio is at times an 8-year-old console, or at best a conservative fraction of one CPU core.

AMD for their part is looking at reversing this trend by integrating audio DSPs into their hardware. If developers have task-specific hardware, as AMD postulates, then they will be willing to take advantage of it for improved audio processing and effects, comfortable in the knowledge that they aren’t having to give up other resources for the improved audio.

As for why AMD is doing this, it comes down to several factors. One of the biggest is as to be expected: product differentiation. AMD is always looking to differentiate themselves from Intel and NVIDIA on more than just price and performance, and this is one such way to do it. At the same time as AMD’s GPU division is significantly focused on gaming, there is a contingent within it that has wanted to do something like this because it’s something they haven’t worked on before; they have CPUs and GPUs, but nothing for audio processing. Finally, while AMD isn’t explicitly mentioning the next generation consoles in any of this, the fact of the matter is that with the Xbox One getting audio DSPs (albeit different ones than TrueAudio) now is going to be the best time for AMD to push the idea of doing it on PCs too, before everyone gets set in their ways for another generation of hardware.

Diving into the hardware aspects of TrueAudio, as one of the more unusual aspects of the 200 series this is a feature that’s unfortunately not going to be present on all 200 series cards. Only Bonaire and newer GPUs – presumably anything that’s GCN 1.1 – feature TrueAudio. That means the functionality is limited to the 260X and 290X; the 280X, 270X, and the rest will never have this feature, as the audio hardware is simply not present on those older GPUs.

Now the astute among you will realize that Bonaire isn’t a new GPU either, and this is where AMD has caught us off guard. Bonaire has had the necessary TrueAudio hardware since the very beginning, some 7 months ago. On the 7790 and similar 7000/8000 series cards it simply was not enabled. Only with 260X (and any other 200 series Bonaire parts) will it be shipping enabled.  AMD has been hiding it right under our noses the entire time, which does make the confusion over what is and isn’t Sea Islands all the greater.

In any case, with this week’s release of TrueAudio enabled hardware AMD is also releasing the full architectural details of their TrueAudio technology. In this case AMD is taking an off-the-shelf solution, Tensilica’s HiFi EP DSPs, with AMD providing the glue that binds them together and integrating them onto the die of their GPUs.

Tensilica’s audio DSPs are task-specific programmable hardware, somewhere between fixed function and fully programmable in design, allowing for customized effects and processing to be done while still keeping the size and power costs low. The underlying hardware is programmable in C, while AMD for their part will be providing a TrueAudio API to access the hardware with. We don’t have a ton of details on the architecture of the DSP, but Tensilica’s product sheets imply that we’re looking at a VLIW architecture of some kind.

Moving on to memory, each DSP possesses 32KB of I-Cache and D-Cache, along with its own 8KB of scratch RAM. Additional memory is available from a 384KB shared cache for all of the DSPs, and finally shared VRAM access, allowing up to 64MB of VRAM to be allocated to the audio DSPs.
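A quick bit of arithmetic puts the memory hierarchy above in perspective: each DSP has a modest pool of fast local memory, backed by the shared cache and a comparatively huge slice of VRAM. This is just a restatement of the figures AMD has disclosed, not any additional detail.

```python
# Memory available to each TrueAudio DSP, per AMD's disclosed figures.
KB = 1024
per_dsp_local = 32 * KB + 32 * KB + 8 * KB   # I-cache + D-cache + scratch RAM
shared_cache = 384 * KB                      # shared among all of the DSPs
max_vram = 64 * KB * KB                      # up to 64MB of VRAM allocatable

print(per_dsp_local // KB)                   # 72 (KB of local memory per DSP)
print(max_vram // shared_cache)              # VRAM pool dwarfs the shared cache
```

That 64MB of VRAM is what makes large sample banks and long convolution reverb tails practical without round-tripping through system memory.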

AMD is not telling us exactly how many audio DSPs are actually on each card, but we do know it’s between 1 and 10. Based on the size of the shared cache and memory sizes we suspect it’s a power-of-two number such as 8 or 4, with the former being the most likely. Furthermore we know that both the 290X and the 260X have the same number of DSPs, so despite the graphical performance gap as far as we know the two should be at parity for audio performance.

Going up a level we have AMD’s I/O glue, which includes the shared memory, VRAM access, additional registers, and of course all of the routing and DMA functionality necessary to make this work. AMD is taking a particularly keen interest in this aspect, as they know they need to get audio samples on and off of their DSPs quickly without tying up CPU resources or incurring too great a latency penalty. Their streaming DMA engine, complete with the ability to do scatter/gather memory accesses, will play a big part in this.

Going further up the audio stack, as we’ve mentioned in the past TrueAudio is an audio processing solution, not an audio presentation solution. With their DSPs AMD can process audio but they need to pass it back to the sound card for presentation. From a technical standpoint this is a bit tricky due to latency concerns – and is why the streaming DMA engine is so important – but video cards would make for a lousy environment for analog audio components anyhow. Furthermore separating processing and presentation means it’s a drop-in solution that will work with existing audio setups, be it a dedicated sound card, an integrated audio codec, USB audio, or even audio over DisplayPort/HDMI to a monitor or receiver.

Not unlike their Mantle efforts, AMD is taking a middleware-centric attitude with TrueAudio. Rather than only chasing down individual developers, AMD is first and foremost going after middleware developers in an attempt to get TrueAudio support worked into their various audio middleware packages. Success here means that every developer that uses these audio middleware packages (and that’s most of them) will at least have basic access to TrueAudio.  For their part AMD has already lined up Wwise developer Audiokinetic for TrueAudio support, and 3rd party developer GenAudio is producing plugins for Wwise, FMOD, and more.

AMD is naturally also lining up the necessary showcase titles for their new technology. Eidos will be including TrueAudio support in their upcoming Thief game, and newcomer Xaviant is pledging support for their in-development magical loot game, Lichdom.

With that said, the trick here for AMD, even more so than with Mantle, will be getting additional developers to put TrueAudio to good use. The use of plugins and other drop-in audio solutions certainly makes the task easier for developers, but TrueAudio is only as useful and effective as the audio tasks it’s given. The technology itself is proven, so that much isn’t in doubt. What is up in the air is whether developers and consumers, who have settled into this “good enough” environment, are interested in and willing to make the additional effort to get better audio in games, similar to how the progression of graphics is simply seen as a given.

The immediate use cases for TrueAudio will be relatively straightforward. Having dedicated hardware with guaranteed performance means that much better 3D audio spatialization algorithms can be implemented, to the benefit of headphone, 2.1 speaker, and 5.1 speaker users alike. Most games implement little more than simple 2D panning and occlusion, with only the most basic of spatialization for 2.1/headphone users (if the game has it at all), so there’s clear room for improvement in bringing (back) full 3D audio to all of those groups. Similarly, the matters of reverb and other advanced audio techniques have only just begun to be touched.

Wrapping things up, based on existing experiences with 3D audio and testing out AMD’s tech demos we’re rather bullish on the technology itself and the benefits thereof. I had a chance to briefly try Xaviant’s Lichdom audio demo, which is already TrueAudio enabled. As someone who’s already a headphones-only gamer, this ended up being more impressive than any game/demo I’ve tried in the past. Xaviant has positional audio down very well – at least as good as Creative’s CMSS3D tech – and elevation effects were clearly better than anything I’ve heard previously. They’re also making heavy use of reverb, to the point where it’s being overdone for effect, but what’s there works very well.

And to be clear, nothing here is really groundbreaking on a technical level; it’s merely a better implementation of existing ideas on positioning and reverb. But after a several-year span of PC audio failing to advance (if not regressing), this is a welcome change to once again see positional audio and advanced audio processing taken seriously. Compared to contemporary software driven game audio in particular, this should be a big step up if it’s done right.

On a final note, while TrueAudio is ready and enabled on the hardware at the moment, AMD’s hardware is unfortunately coming before any software is ready. So while we’ve had the chance to try it out in AMD’s custom demos, there aren’t any games or demos publicly available at this time, so we haven’t had the chance to test it any further. Hopefully we’ll have something soon, as it would be a big disappointment if nothing were ready in time for the 290X launch.

Launching This Week: Radeon R9 280X

The highest performing part of today’s group of launches will be AMD’s Radeon R9 280X. Based on the venerable Tahiti GPU, the R9 280X is the 6th SKU based on Tahiti and the 3rd SKU based on a fully enabled part.

AMD GPU Specification Comparison
  Asus Radeon R9 280X DCU II TOP XFX Radeon R9 280X DD (Ref. Clocked) AMD Radeon HD 7970 GHz Edition AMD Radeon HD 7970
Stream Processors 2048 2048 2048 2048
Texture Units 128 128 128 128
ROPs 32 32 32 32
Core Clock 970MHz 850MHz 1000MHz 925MHz
Boost Clock 1070MHz 1000MHz 1050MHz N/A
Memory Clock 6.4GHz GDDR5 6GHz GDDR5 6GHz GDDR5 5.5GHz GDDR5
Typical Board Power >250W? 250W 250W 250W
Width Double Slot Double Slot Double Slot Double Slot
Length 11.25" 11" N/A N/A
Warranty 3 Years Lifetime N/A N/A
Launch Date 10/11/13 10/11/13 06/22/12 01/09/12
Launch Price $309 $329? $499 $549

In a nutshell, the R9 280X is designed to sit somewhere in between the original 7970 and the 7970 GHz Edition. For memory it has the same 3GB of 6GHz GDDR5 as the 7970GE, while on the GPU side it has PowerTune Boost functionality like the 7970GE, but at lower clockspeeds. At its peak we’re looking at 1000MHz for the boost clock on R9 280X versus 1050MHz on the 7970GE. Stranger yet is the base clock, which is set at just 850MHz, 75MHz lower than the 7970’s sole GPU clock of 925MHz and 150MHz lower than the 7970GE’s base clock. AMD wasn’t able to give us a reason for this unusual change, but we believe it’s based on some kind of balance between voltages, yields, and intended power consumption.

With that in mind, even with the lower base clock, because this is a boost part it will have no problem outperforming the original 7970, as we’ll see in our performance section. With the higher memory clock and boost virtually always active, real world performance is going to be clearly and consistently above the 7970. At the same time however, performance will be below the 7970GE, and as the latter is slowly phased out it looks like AMD will let its fastest Tahiti configuration go into full retirement, leaving the R9 280X as the fastest Tahiti card on the market.

As an aside, starting with the R9 280X and applicable to all of AMD’s video cards, AMD is no longer advertising the base GPU clockspeed of their parts. The 7970GE for example, one of the only prior boost enabled parts, was advertised as “1GHz Engine Clock (up to 1.05GHz with boost)”. Whereas the 280X and other cards are simply advertised as “Up to 1GHz” or whatever the boost clock may be.

As of press time AMD hasn’t gotten back to us on why this is. There’s really little to say until we have a formal answer, but since these cards are rarely going to reach their highest boost clockspeed (the fact that we can’t see the real clockspeed only further muddles matters) we believe it’s important that both the base clock and boost clock are published side-by-side, the same way as AMD has done it in the past and NVIDIA does it in the present. In that respect at least some of AMD’s partners have been more straightforward, as we’ve seen product fliers that list both clocks.

Getting back to the matter of 280X, let’s put the theoretical performance of the card in perspective. As R9 280X is utilizing a fully enabled Tahiti GPU we’re looking at a full 2048 stream processors organized over 8 CU arrays, paired with 32 ROPs. Compared to the original 7970 this gives R9 280X between 92% and 108% of the 7970’s shader/ROP/texture throughput, and 109% of the memory bandwidth. Or compared to the 7970GE we’re looking at 85% to 95% of the shader/ROP/texture throughput and 100% of the memory bandwidth.

Since this is another Tahiti part, TDP hasn’t officially changed from the 7970GE. The official TDP is 250W and the use of boost should keep actual power consumption rather close to that point, though the use of lower clockspeeds and lower voltages means that in practice power consumption will be somewhat lower than the 7970GE’s. For idle power AMD isn’t giving out an official number, but it should be in the 10W-15W range.

Moving on, the MSRP on the R9 280X will be $300. This puts the card roughly in the middle of the gulf between NVIDIA’s GeForce GTX 760 and GTX 770, with no direct competition outside of a handful of heavily customized GTX 760 cards. Against AMD’s own lineup it will be going up against the outgoing 7970 cards; depending on the card the R9 280X can be anywhere from equal to clearly faster, but unlike the 7970s the R9 280X won’t have the Never Settle Forever game bundle attached.

Finally, because the R9 280X is based on the existing Tahiti GPU, this is going to be a purely virtual launch. AMD’s partners will be launching custom designs right out of the gate, and while we don’t have a product list we don’t expect any two cards to be identical. AMD has put together some reference boards utilizing a newly restyled cooler for testing and photo opportunities, but these reference boards will not be sampled or sold. Instead they’ve sent us a pair of retail boards which we’ll go over in the following sections: the XFX Radeon R9 280X Double Dissipation, and the Asus Radeon R9 280X DirectCU II TOP.

Please note that for all practical purposes we’ll be treating the XFX R9 280X DD as our reference 280X board, as it ships at the 280X reference clocks of 850MHz base, 1000MHz boost, 6000MHz VRAM. We expect other retail cards to be similar to the XFX card, although there’s still some outstanding confusion from XFX on whether their card will be a $299 card or not.

Fall 2013 GPU Pricing Comparison
  $650 GeForce GTX 780
  $400 GeForce GTX 770
Radeon R9 280X $300  
  $250 GeForce GTX 760
Radeon R9 270X $200  
  $180 GeForce GTX 660
  $150 GeForce GTX 650 Ti Boost
Radeon R7 260X $140  


XFX Radeon R9 280X Double Dissipation

The first of our R9 280X cards is our reference-like sample, XFX’s Radeon R9 280X Double Dissipation. The R9 280X DD is XFX’s sole take on the 280X, utilizing a new and apparently significantly revised version of XFX’s Double Dissipation cooler, and paired with a 280X operating at the 280X’s reference clocks of 850MHz core, 1000MHz boost, and 6GHz RAM. Since there isn’t an overclock here, XFX will primarily be riding on their cooler, build quality, and other value add features.

Diving right into the design of the card, the R9 280X DD is a fairly traditional open air cooler design, as is common for cards in this power and price range. This basic design is very effective in moving large amounts of heat for relatively little noise, making the usual tradeoff of moving some of the cooling workload onto the system’s chassis (and its larger, slower fans) rather than doing the work entirely on its own.

In XFX’s case this is a new design, having forgone their older Double Dissipation design that we first saw on their 7970BEDD back in 2012. What’s changed? Without going into minute details, practically everything. At first glance you’re unlikely to even recognize this as an XFX card, as the new design aesthetically looks almost nothing like their old one.

First and foremost XFX has gone for the oversized cooler approach, something that’s become increasingly common as of late, equipping the card with one of the larger coolers we’ve come across. At 100mm in diameter the two fans on XFX’s design are among the biggest we’ve ever seen, pushing the card to just over 11 inches long while causing the heatsink and shroud to stand about 0.75” taller than the board itself.

Drilling down, XFX is using a two segment heatsink, the combined length of which runs the complete length of the card. Providing heat conduction between the GPU and the heatsink is a set of 6 copper heatpipes mounted into a copper base plate. 4 of these heatpipes run towards the rear of the card and the other 2 to the front, perpendicular to XFX’s vertical fin heatsink. Meanwhile cooling for the various discrete components on the board, including the memory, is provided by a separate cut-out baseplate that covers most of the card. There isn’t any kind of connection between the baseplate and the heatsink proper, so it’s the baseplate and any airflow over it that provides cooling for the MOSFETs it covers.

Moving on to XFX’s board, it looks like XFX isn’t doing anything particularly exotic here. XFX is using their standard Duratec high-end components, which include solid caps and chokes (typical for all cards in this power category) along with their IP5X dust free fans. A quick component count has us counting 7 power phases, which would be the reference amount for a 280X, meaning we’re looking at 5 phases for the GPU and another 2 phases for the memory and I/O.

Meanwhile for I/O XFX implements the common Radeon display I/O configuration of 2x DL-DVI, 1x HDMI, and 2x Mini DisplayPort 1.2. External power delivery is provided by a set of 6pin + 8pin power connectors, as is to be expected for a 250W card. With that in mind XFX’s design should have at least some overclocking headroom, but XFX doesn’t provide any overclocking software, so you’ll need to stick with Catalyst Overdrive or 3rd party utilities such as MSI Afterburner.

Finally, as a Double Dissipation product the 280X DD is covered by XFX’s lifetime warranty policy, contingent on registering the card within 30 days of purchase. Interestingly XFX remains one of the few board partners that still offers any kind of lifetime warranty, making them fairly exceptional in that regard. As for pricing we’re listing the XFX card at $329 at the moment, though there is still some confusion over whether that’s the final price or not as our XFX rep seemed unsure of that. As is sometimes the case in this industry, we get the impression that they were waiting to see what other manufacturers were going to charge, in which case we suspect the actual launch price will be lower than that. We’ll update this article once we have final pricing information available.

Asus Radeon R9 280X DirectCU II TOP

Our other card sent over by AMD for today’s launch is a sample of what factory overclocked cards will look like. For this AMD sent over Asus’s Radeon R9 280X DirectCU II TOP, Asus’s traditional high-end custom cooled factory overclocked card.

Asus ships their TOP card at 970MHz for the base GPU clock, 1070MHz for the GPU boost clock, and 6400MHz for the memory, which compared to the R9 280X reference clocks is a very significant overclock of 120MHz (14%) on the core clock, 70MHz (7%) on the boost clock, and 400MHz (7%) on the memory. The narrowing of the gap between the core clock and the boost clock is particularly interesting, as it means the Asus card operates in a smaller range of clockspeeds than reference cards do (100MHz versus 150MHz). The core overclock in particular virtually guarantees that the card will be operating at higher clockspeeds than most reference clocked 280Xs even when they’re boosting, never mind when the Asus card is also boosting. The fact that PowerTune Boost on the 280X is equivalent in operation to how it was on the 7970GE – which is to say opaque – means that it’s difficult to predict exactly how this overclock will affect performance, so for that we’ll have to turn to our performance numbers later.
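For reference, the percentages above come straight from the clock deltas against the 280X reference clocks:

```python
def overclock_pct(new_clock, ref_clock):
    """Percentage increase of a factory clock over the reference clock."""
    return round((new_clock - ref_clock) / ref_clock * 100)

# Asus R9 280X DCUII TOP vs. R9 280X reference clocks (MHz)
print(overclock_pct(970, 850))    # base:   14  (+120MHz)
print(overclock_pct(1070, 1000))  # boost:  7   (+70MHz)
print(overclock_pct(6400, 6000))  # memory: 7   (+400MHz)
```

Note how the base clock carries the bulk of the overclock, which is exactly why the card’s clockspeed range narrows from 150MHz to 100MHz.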

Diving into the design of the 280X DCUII TOP, while Asus’s design is fundamentally yet another dual fan open air cooler, upon further examination it’s clear that for their design Asus has gone with something that can safely be described as exotic and unusual. This is almost immediately apparent in looking at their DirectCU II cooler, or more specifically the fans on it. While the right fan is a standard 95mm axial fan, the left fan is a 100mm fan that is easily the oddest fan we’ve seen in quite some time.

Asus calls it “CoolTech” and it’s essentially an effort to build a fan that’s both an axial fan and a blower (radial) fan at the same time, explaining the radial-like center and axial-like outer edge of the fan. Asus tells us that they’re shooting for a fan that can move air over a wider angle than a traditional axial fan, and while we’re hardly qualified to evaluate that claim, it is regardless certainly something we’ve never seen before.

Asus’s choice in fans aside, for their 280X card Asus has also gone with a fairly large single segment heatsink to provide heat dissipation for their GPU. The DirectCU II heatsink itself measures 10.5 inches and brings the total length of the card out to about 11.5 inches. Embedded in the heatsink are 5 heatpipes that run between the GPU core and various points on the heatsink, the largest heatpipe measuring 10mm in diameter. Meanwhile a smaller separate heatsink is mounted to the MOSFETs on the board to provide cooling for those, with the Hynix 6GHz GDDR5 RAM chips running bare. Asus tells us this design is 20% cooler and much quieter than the reference design for 280X, but since that design isn’t in retail it’s something of a moot point as Asus’s competition will be other custom designs.

Moving on, like most of Asus’s customized high-end cards, the company has outfitted the 280X DCUII TOP with their DIGI+ digital VRM management IC and Super Alloy Power discrete components. As is to be expected, Asus is promoting these component choices as improving overclocking stability while further improving the lifespan of the components themselves. Perhaps more importantly, Asus has gone with a 10 phase power implementation to give the card more overclocking headroom on the power side, outfitting it with 8 power phases for the GPU as opposed to the typical 5, plus the same 2 phase memory/I/O setup. As we’ll see in our look at performance and power consumption, Asus already seems to be running this card at over 250W, so even before end-user overclocking they’re already making use of their own overclocking headroom to provide the factory overclock and the power needed to operate it.

Speaking of overclocking, the 280X DCUII TOP comes with Asus’s GPU Tweak overclocking software for further end-user overclocking. This is the first time we’ve seen their GPU Tweak software in a video card review, and it’s clear right off the bat that they’ve been watching MSI closely and have implemented something very similar to MSI’s Afterburner software.

The end result is a very competent overclocking suite that offers all of the overclocking and monitoring functionality we’ve come to expect from a good overclocking utility, including a wide array of monitoring options and support for GPU voltage control. Asus’s taste in skins is unfortunate – a low contrast red on black – but otherwise the UI itself is similarly solid. To that end GPU Tweak won’t match Afterburner on some of its more fringe features such as recording and overlays, but as a pure overclocking utility it stands up rather nicely.

On a side note, Asus also throws in a live streaming utility called GPU Tweak Streaming, apparently aimed at the DOTA/League of Legends/Let’s Play crowd. Having no real experience with such utilities I can hardly comment on it, but at a superficial level it seems to do what it’s supposed to.

Moving on, let’s briefly talk about I/O options. In a slight deviation from what we normally see for a Radeon card, Asus has dropped the two Mini DisplayPorts for a single full-size DisplayPort. Given the seemingly random nature by which various board partners go about choosing which ports to use we can hardly speculate on why this is, but all things considered I’m not sure why Asus would want fewer connectivity options. This leaves Asus with 2x DL-DVI, 1x HDMI, and 1x DisplayPort 1.2 for connectivity, for a total of 4 ports.

Winding things down, I also wanted to quickly call attention to a couple of specific design decisions Asus made with their card. The first is a rather useful change Asus made with their PCIe power connectors. Asus has reversed the power connectors so that they’re facing the rear of the card rather than the front, and consequently the clips on the plugs don’t dig into the heatsink. This is the first time we’ve seen anyone reverse the connectors like this, and it’s a handy change that makes unplugging the card much easier. And as a side benefit, they’ve also put LEDs on the card that indicate whether there is a working PCIe power connection, just in case you’re the forgetful type who doesn’t always remember to plug in those connectors (like myself).

Reversed PCIe power sockets; LED power indicators

At the same time however the odd shape of the shroud over the card deserves a brief mention for an opposite reason. Getting the PCIe plugs in and out will be easy, but with the shroud sticking up almost an inch and a half towards the front of the card, screwing and unscrewing the card’s bracket requires nimble fingers or a good magnetized screwdriver, making it more difficult (though by no means impossible) than it really should be.

Finally, let’s quickly talk warranties and pricing. Asus is offering their standard 3 year warranty with this card, which, although not quite as long as XFX’s warranty, is at least typical for this industry. Meanwhile on pricing Asus has very much gone for the kill, pricing the card at just $10 over MSRP, or $309. As we’ll see, the factory overclock alone is good for a several percent improvement in performance over a stock 280X, never mind Asus’s cooling performance and value added features such as their software. Although this is hardly a representative sample of all 280X cards on the market, in this light the 280X DCUII TOP is looking especially good.

The Drivers, The Test & Our New Testbed

With the product introductions and specifications out of the way, let’s dive into the test.

The launch drivers for the 200 series sampled to the press are Catalyst 13.11 Beta 1, with a version number of 13.200.16, making them a newer build on the same branch as the current 13.10 Beta 2 drivers. As such there are no known functional differences between the current drivers for the 7000 series and the launch drivers for the 200 series. With that said we did encounter one specific bug in these drivers, which resulted in flickering lighting in Crysis 3 on high quality settings.

Note that this also means that these drivers also only contain Phase 1 of AMD’s Crossfire frame pacing fixes. This means frame pacing for Crossfire for single monitor displays is fully implemented, however frame pacing for multi monitor displays and 4K displays is not. Based on AMD’s most recent comments a fix is not expected until November, and while we don’t seriously see owners settling down to run Eyefinity or 4K displays off of 280X in CF – at least not until 290X arrives for evaluation – it’s unfortunate AMD wasn’t able to get this problem fixed in time for the 200 series launch.

Catalyst 13.11B1 Frame Pacing
  Single Display Eyefinity / 4K Tiled
D3D11 Y N
D3D10 Y N
D3D9 N N
OpenGL N N

Moving on, this article will mark the debut of our new testbed and benchmark suite. Both were due for a refresh so we’re doing so in conjunction with the launch of the 200 series.

For our testbed we have done a complete overhaul, the first one in 4 years. The trusty Thermaltake Spedo case that has been the skeleton of our testbed has been replaced with an NZXT Phantom 630. Similarly we’ve gone and replaced all of the internal components too; an IVB-E based 4960X operating at 4.2GHz for 40 lanes of validated PCIe 3.0 functionality, an ASRock Fatal1ty X79 Professional motherboard to operate our cards on, and 32GB of G.Skill’s lowest latency (CAS 9) DDR3-1866 RAM. Meanwhile storage is being backed by a Samsung 840 EVO 750GB, and power via a Corsair AX1200i PSU. Finally cooling is handled by a Corsair H110 closed loop cooler, and meanwhile the Phantom 630 leaves an open fan mount for us to tinker with closed loop GPU coolers (such as the Asus ARES II) in the future.

As for the new benchmark suite, we’ve gone through and appropriately updated our games list. New to the GPU 14 test suite are Company of Heroes 2, Total War: Rome 2, GRID 2, and Metro: Last Light (ed: Metro 2). With the holiday games season upon us, we expect to add at least one more game, along with swapping out Battlefield 3 for Battlefield 4 shortly after that is released.

Finally, though we won’t make use of its 4K capabilities in this review given the limited performance of R9 280X, Asus sent over one of their new PQ321 monitors for our testing needs. While still very much bleeding edge, we’ll be taking a look at 4K performance in the near future as appropriate cards arrive.

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630
Monitor: Asus PQ321 + Samsung 305T
Video Cards: XFX Radeon R9 280X Double Dissipation
Asus Radeon R9 280X DirectCU II TOP
AMD Radeon HD 7970 GHz Edition
AMD Radeon HD 7970
AMD Radeon HD 7950 Boost
NVIDIA GeForce GTX 780
NVIDIA GeForce GTX 770
NVIDIA GeForce GTX 760
Video Drivers: NVIDIA 331.40 Beta
AMD Catalyst 13.11 Beta 1
OS: Windows 8.1 Pro


Metro: Last Light

Kicking off our look at performance is 4A Games’ latest entry in their Metro series of subterranean shooters, Metro: Last Light. The original Metro 2033 was a graphically punishing game for its time, and Metro: Last Light is in its own right too. On the other hand it scales well with resolution and quality settings, so it’s still playable on lower end hardware.

Metro: Last Light - 2560x1440 - High Quality

Metro: Last Light - 1920x1080 - Very High Quality

Metro: Last Light - 1920x1080 - High Quality

The first benchmark in our revised benchmark suite finds our 280X cards doing well for themselves, and surprisingly not all that far off from the final averages. Setting the baseline here, as we expected the Tahiti based 280X performs in between the original 7970 and 7970 GHz Edition, thanks to the 280X’s use of PowerTune Boost but at lower clockspeeds than the 7970GE. Consequently this isn’t performance we haven’t seen before, but it’s very much worth keeping in mind that the 7970GE was a $400 card while the 280X is a $300 card, so approaching the 7970GE for $100 less is something of a significant price cut for the performance.

As for the immediate competitive comparison, we’ll be paying particular attention to 2560x1440, which should be the sweet spot resolution for this card. At 2560 we can see that the reference clocked 280X doesn’t just hang with the $400 GTX 770 but actually manages to edge it out by just over a frame per second. As a preface we’re going to see these two cards go back and forth throughout our benchmarks, but to be able to directly compete with NVIDIA’s fastest GK104 card for $100 less is a significant accomplishment for AMD.

Finally, let’s quickly talk about the Asus 280X versus the XFX 280X. Asus winning comes as no great shock due to their factory overclock, but now we finally get to see the magnitude of the performance gains from that overclock. At 2560 we’re looking at just shy of a 9% performance gain, which is in excess of both the boost clock overclock and the memory overclock. The specific performance gains will of course depend on the game in question, but this means that the performance gains in at least one instance are being impacted by the base clock overclock, the larger of Asus’s factory overclocks.

Company of Heroes 2

Our second benchmark in our benchmark suite is Relic Entertainment’s Company of Heroes 2, the developer’s World War II Eastern Front themed RTS. For Company of Heroes 2 Relic was kind enough to put together a very strenuous built-in benchmark that was captured from one of the most demanding, snow-bound maps in the game, giving us a great look at CoH2’s performance at its worst. Consequently if a card can do well here then it should have no trouble throughout the rest of the game.

Company of Heroes 2 - 2560x1440 - Maximum Quality + Med. AA

Company of Heroes 2 - 1920x1080 - Maximum Quality + Med. AA

Company of Heroes 2 - 1920x1080 - High Quality + Low AA

Like Metro this is the first time we’ve deployed this benchmark in a review. The results as it turns out are extremely good for AMD, with the reference clocked 280X surpassing the GTX 770 by over 16%. Given the price disparity between the cards simply tying the GTX 770 would be a good outcome, so surpassing it is even better. Of course this is basically a best case scenario, so not every game will see a lead like this.

On a side note, it’s mildly amusing to see that the 280X delivered 30fps on the dot. For an RTS game that’s a perfectly reasonable average framerate, so the 280X ends up being just fast enough to deliver the necessary performance for 2560 in this game.

Finally, on a lark we threw in the GTX 780 results, primarily to visualize the gap between the GTX 780 and its half-priced competition in preparation for the launch of the 290X. Never did we expect to see a 280X card top the GTX 780, but sure enough that’s what happens here, with the Asus factory overclocked 280X passing the GTX 780 by 0.1fps. This isn’t really a fair comparison due to the factory overclock, but it’s interesting nonetheless to see a Tahiti card keep up with GTX 780.

Company of Heroes 2 - Min. Frame Rate - 2560x1440 - Maximum Quality + Med. AA

Company of Heroes 2 - Min. Frame Rate - 1920x1080 - Maximum Quality + Med. AA

Company of Heroes 2 - Min. Frame Rate - 1920x1080 - High Quality + Low AA

CoH2 also gives us a reliable look at minimum framerates, which like the average framerate over the whole benchmark appears to be entirely GPU bound. The 30fps average for the 280X at 2560 may be playable, but players sensitive to dips in the framerate will not appreciate these minimums. To get the minimum framerate above 30fps we have to go all the way down to 1080p at high quality.

Company of Heroes 2 - Delta Percentages

Finally, while we didn’t have time to collect FCAT results for every card, we were able to collect limited FCAT results for the most important cards. With our delta percentages method we’re looking for sub-3% frame time deltas for single-GPU cards, which is actually something that everyone has trouble with in CoH2. What the minimum frametimes hint at is that this game has a periodic frame time spike to it that, although it won’t be a problem for RTS gameplay, will similarly set off players sensitive to changes in frame times. This appears to be the work of the game and benchmark itself, as all of our cards struggle here in a similar manner.
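
For readers who want to run this kind of analysis on their own frame time logs, here is a minimal sketch. The exact normalization behind our published delta percentages may differ; this illustrative version (an assumption) expresses the average absolute frame-to-frame delta as a percentage of the mean frame time.

```python
def delta_percentage(frame_times_ms):
    """Average absolute frame-to-frame delta as a percentage of the mean
    frame time. An illustrative approximation, not necessarily the exact
    normalization used for the charts."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    mean_ft = sum(frame_times_ms) / len(frame_times_ms)
    return 100.0 * (sum(deltas) / len(deltas)) / mean_ft

# A perfectly paced 60fps run scores 0%; a single 50ms spike in a
# 60-frame window is already enough to push the metric past 3%.
steady = [16.7] * 60
spiky = [16.7] * 59 + [50.0]
```

Under this formulation even one periodic spike of the sort CoH2 exhibits will blow past the sub-3% target, which is why every card struggles here.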

Bioshock: Infinite

Bioshock Infinite is Irrational Games’ latest entry in the Bioshock franchise. Though it’s based on Unreal Engine 3 – making it our obligatory UE3 game – Irrational has added a number of effects that make the game rather GPU-intensive on its highest settings. As an added bonus it includes a built-in benchmark composed of several scenes, a rarity for UE3 engine games, so we can easily get a good representation of what Bioshock’s performance is like.

Bioshock Infinite - 2560x1440 - Ultra Quality + DDoF

Bioshock Infinite - 1920x1080 - Ultra Quality + DDoF

Bioshock Infinite - 1920x1080 - Ultra Quality

Bioshock has always favored NVIDIA GPUs to some extent, and of course this will carry over to 280X. The 280X is still well clear of the cheaper GTX 760, but it’s not going to be catching the GTX 770 any time soon. Nor will the factory overclock of the Asus card close that gap, though it will at least get the performance of a 280X card above 50fps.

Bioshock Infinite - Delta Percentages

A quick look at our FCAT data turns up nothing remarkable. Bioshock has more variance than most other games, but even so every card is able to stay under 3%.

Battlefield 3

The major multiplayer action game in our benchmark suite is Battlefield 3, DICE’s 2011 multiplayer military shooter. Its ability to pose a significant challenge to GPUs has been dulled some by time and drivers, but it’s still a challenge if you want to hit the highest settings at the highest resolutions at the highest anti-aliasing levels. Furthermore while we can crack 60fps in single player mode, our rule of thumb here is that multiplayer framerates will dip to half our single player framerates, so hitting high framerates here may not be high enough.

Battlefield 3 - 2560x1440 - Ultra Quality + 4x MSAA

Battlefield 3 - 1920x1080 - Ultra Quality + 4x MSAA

Battlefield 3 - 1920x1080 - Ultra Quality + FXAA-High

Our Battlefield 3 benchmark is another game that traditionally favors NVIDIA, and that’s especially the case here. The 280X is generally well ahead of the GTX 760, but in this case the two are almost at parity. We’re essentially looking at GTX 760 performance for the 280X under BF3. If Battlefield 4 performs similarly, then AMD’s interest in Mantle and its performance improvements will be well placed.

Battlefield 3 - Delta Percentages

Looking once more at our FCAT results, the delta percentages are extremely unremarkable. For most games this is going to be little more than a checklist; neither party has significant problems with single-GPU configurations at this time.

Crysis 3

Still one of our most punishing benchmarks, Crysis 3 needs no introduction. With Crysis 3, Crytek has gone back to trying to kill computers, and the game still holds the “most punishing shooter” title in our benchmark suite. Only in a handful of setups can we even run Crysis 3 at its highest (Very High) settings, and that’s still without AA. Crysis 1 was an excellent template for the kind of performance required to drive games for the next few years, and Crysis 3 looks to be much the same for 2013.

Crysis 3 - 2560x1440 - High Quality + FXAA

Crysis 3 - 1920x1080 - High Quality + FXAA

Crysis 3 - 1920x1080 - Medium Quality + FXAA

Crysis 3 is another game that somewhat favors NVIDIA, though not to the extent of other games. At 2560 we’re looking at performance that’s closer to the GTX 760 than it is the GTX 770, which is rather befitting of the 280X’s $300 status, putting it almost exactly where we’d expect it given the price.

Meanwhile checking in again on our factory overclocked Asus 280X, we have another case where the performance improvement is outpacing the boost and memory clock overclocks, this time coming in at 9%. It’s scenarios like these that make Asus’s $10 premium such a bargain for the performance.

Crysis 3 - Delta Percentages

Once again, FCAT tells us that our delta percentages are well within tolerance.

Crysis: Warhead

Up next is our legacy title for 2013/2014, Crysis: Warhead. The stand-alone expansion to 2007’s Crysis, at over 5 years old Crysis: Warhead can still beat most systems down. Crysis was intended to be future-looking as far as performance and visual quality goes, and it has clearly achieved that. We’ve only finally reached the point where single-GPU cards have come out that can hit 60fps at 1920 with 4xAA, never mind 2560 and beyond.

Crysis: Warhead - 2560x1440 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - 1920x1080 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - 1920x1080 - E Shaders/G Quality

As a fairly old single player game we don’t put too much stock into Crysis’ performance, but we do like to track it for historical purposes and to see how well newer cards handle a somewhat older game. To that end it’s always interesting to note just how well AMD’s cards do here; Crysis loves memory bandwidth and 280X has plenty to spare.

Crysis: Warhead - Min. Frame Rate - 2560x1440 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - Min. Frame Rate - 1920x1080 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - Min. Frame Rate - 1920x1080 - E Shaders/G Quality


Total War: Rome 2

The second strategy game in our benchmark suite, Total War: Rome 2 is the latest game in the Total War franchise. Total War games have traditionally been a mix of CPU and GPU bottlenecks, so it takes a good system on both ends of the equation to do well here. In this case the game comes with a built-in benchmark that plays out over a forested area with a large number of units, definitely stressing the GPU in particular.

For this game in particular we’ve also gone and turned down the shadows to medium. Rome’s shadows are extremely CPU intensive (as opposed to GPU intensive), so this keeps us from becoming CPU bottlenecked nearly as easily.

Total War: Rome 2 - 2560x1440 - Extreme Quality + Med. Shadows

Total War: Rome 2 - 1920x1080 - Extreme Quality + Med. Shadows

Total War: Rome 2 - 1920x1080 - Very High Quality + Med. Shadows

With Rome 2 AMD and NVIDIA once again flip places, with 280X besting even the GTX 770 by a few percent. All of these enthusiast/high-end cards are just fast enough to keep Rome playable in this situation, with average framerates hovering just a bit over 30fps.

Total War: Rome 2 - Delta Percentages

RTS games can be a mixed bag for frametimes as we’ve seen in the past, but Rome presents no such problem. Everyone stays below 3% here.

Hitman: Absolution

The second-to-last game in our lineup is Hitman: Absolution. The latest game in Square Enix’s stealth-action series, Hitman: Absolution is a DirectX 11 based title that, though a bit heavy on the CPU, can give most GPUs a run for their money. Furthermore it has a built-in benchmark, which gives it a level of standardization that fewer and fewer games possess.

Hitman: Absolution - 2560x1440 - Ultra

Hitman: Absolution - 1920x1080 - Ultra

Hitman: Absolution - 1920x1080 - Medium + Tess + 16xAF

Hitman is another title AMD’s GPUs do rather well in, leading to the 280X surpassing the GTX 770 by just shy of 9%. It seems silly to be comparing a $300 video card to what’s currently a $400 video card – and in the process not a battle AMD is explicitly setting out to fight – but it just goes to show just how competitive these two cards really are.

Meanwhile if you throw in a factory overclocked card like the Asus, then we can just crack 60fps at 2560. Though on a percentage basis the performance lead over the stock clocked 280X is trending close to the average at 7%.

Hitman: Absolution - Min. Frame Rate - 2560x1440 - Ultra

Hitman: Absolution - Min. Frame Rate - 1920x1080 - Ultra

Hitman: Absolution - Min. Frame Rate - 1920x1080 - Medium + Tess + 16xAF

Moving on to minimum framerates quickly, the picture does not significantly change. Hitman bottoms out in the high 40s for the stock 280X, a bit more than 10fps below the average.

Hitman: Absolution - Delta Percentages

The FCAT delta percentages remain unremarkable.


GRID 2

The final game in our benchmark suite is also our racing entry, Codemasters’ GRID 2. Codemasters continues to set the bar for graphical fidelity in racing games, and with GRID 2 they’ve gone back to racing on the pavement, bringing to life cities and highways alike. Based on their in-house EGO engine, GRID 2 includes a DirectCompute based advanced lighting system in its highest quality settings, which incurs a significant performance penalty but does a good job of emulating more realistic lighting within the game world.

GRID 2 - 2560x1440 - Maximum Quality + 4x MSAA

GRID 2 - 1920x1080 - Maximum Quality + 4x MSAA

GRID 2 - 1920x1080 - High Quality + 4x MSAA

With the game set at its highest quality settings we find that the 7970 and up – including the 280X – are just fast enough to deliver 60fps even at 2560. On a competitive basis the 280X once again surpasses the GTX 770, although not by the margins we saw with DiRT: Showdown in our old benchmarking suite.

GRID 2 - Delta Percentages

Our last round of delta percentages are the least exciting yet, with frametime deltas staying under 1%.


Synthetics

As always we’ll also take a quick look at synthetic performance, though as 280X is just another Tahiti card, there shouldn't be any surprises here. These tests are mostly for comparing cards from within a manufacturer, as opposed to directly comparing AMD and NVIDIA cards. We’ll start with a quick look at tessellation performance with TessMark.

Synthetic: TessMark, Image Set 4, 64x Tessellation

If nothing else, TessMark quickly confirms that our 280X is boosting to near its boost clock here, judging from the performance advantage over the 925MHz 7970.

Moving on, we have our 3DMark Vantage fillrate tests, with the texture fillrate test measuring texel throughput via the texture mapping units, and the pixel fillrate test doing the same for the ROPs.

Synthetic: 3DMark Vantage Texel Fill

Synthetic: 3DMark Vantage Pixel Fill

3DMark Vantage’s texel and pixel fillrate tests quickly serve as proxy tests for GPU and memory clockspeeds in this case, both of which put the 280X very close to the 7970GE in performance.
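
As a sanity check on these proxy tests, theoretical fillrates fall straight out of the unit counts and clockspeed. The unit counts below (128 texture units and 32 ROPs for Tahiti) are our reading of the relevant specs, shown at the 280X’s 1GHz boost clock:

```python
def fillrate(units, clock_ghz):
    # Theoretical throughput: one texel (or pixel) per unit per clock,
    # yielding GTexels/s or GPixels/s respectively.
    return units * clock_ghz

# R9 280X (Tahiti) at its 1GHz boost clock: 128 TMUs, 32 ROPs (assumed spec)
texel_rate = fillrate(128, 1.0)  # GTexels/s
pixel_rate = fillrate(32, 1.0)   # GPixels/s
```

Measured results will naturally come in below these ceilings, but the relative standing between the 280X and the 925MHz 7970 should track the clockspeed difference.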


Compute

Jumping into compute, as with our synthetic benchmarks we aren’t expecting too much new here. Outside of DirectCompute GK104 is generally a poor compute GPU, which makes everything very easy for the Tahiti based 280X. At the same time compute is still a secondary function for these products, so while important, the price cuts that go with the 280X are not quite as meaningful here.

As always we'll start with our DirectCompute game example, Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. While DirectCompute is used in many games, this is one of the only games with a benchmark that can isolate the use of DirectCompute and its resulting performance.

Compute: Civilization V

With Civilization V we’re finding that virtually every high-end GPU is running into the same bottleneck. We’ve reached the point where even GPU texture decompression is CPU-bound.

Our next benchmark is LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as ray tracing maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.

Compute: LuxMark 2.0

AMD simply rules the roost when it comes to LuxMark, so the only thing close to 280X here are other Tahiti parts.

Our 3rd compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 12 Video Render

Again AMD’s strong compute performance shines through, with 280X easily topping the chart.

Our 4th benchmark set comes from CLBenchmark 1.1. CLBenchmark contains a number of subtests; we’re focusing on the most practical of them, the computer vision test and the fluid simulation test. The former is a useful proxy for computer imaging tasks where systems are required to parse images and identify features (e.g. humans), while fluid simulations are common in professional graphics work and games alike.

Compute: CLBenchmark 1.1 Fluid Simulation

Compute: CLBenchmark 1.1 Computer Vision

Despite the significant differences in these two workloads, in both cases 280X comes out easily on top.

Moving on, our 5th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, as Folding @ Home has moved exclusively to OpenCL this year with FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Implicit, Single Precision

Compute: Folding @ Home: Explicit, Double Precision

Depending on the mode and the precision, we can see wildly different results. The 280X does well in FP32 explicit, for example, but in implicit mode it’s caught between the GTX 770 and GTX 760. Move to double precision, however, and AMD’s native ¼ rate FP64 execution gives them a significant advantage.
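
To put that ¼ rate advantage in rough numbers, a back-of-the-envelope sketch of theoretical peak throughput follows. The GTX 770 clock and the 1/24 FP64 rate for GK104 are our assumptions for illustration:

```python
def peak_gflops(shaders, clock_ghz, fp64_rate=1.0):
    # 2 FLOPs per shader per clock (fused multiply-add), scaled by the
    # FP64 execution rate where applicable (1.0 for FP32).
    return shaders * 2 * clock_ghz * fp64_rate

# R9 280X (Tahiti): 2048 SPs at its 1GHz boost clock, 1/4 rate FP64
tahiti_fp32 = peak_gflops(2048, 1.0)         # 4096 GFLOPS
tahiti_fp64 = peak_gflops(2048, 1.0, 1 / 4)  # 1024 GFLOPS
# GTX 770 (GK104): 1536 cores, 1/24 rate FP64 (clock assumed ~1.1GHz)
gk104_fp64 = peak_gflops(1536, 1.1, 1 / 24)  # ~141 GFLOPS
```

Under these assumptions the 280X holds a roughly 7x theoretical FP64 advantage, which is why the double precision FAHBench results aren’t even close.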

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, as described in this previous article, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

Although not by any means a blowout, yet again the 280X vies for the top here. When it comes to compute, the Tahiti based 280X is generally unopposed by anything in its price range.

Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

With the Tahiti based 7970GE, we saw AMD push some very high voltages when boosting in order to hit their 1050MHz clockspeed targets. With 280X on the other hand they can back off at least a bit, which should help real world power consumption some.

Radeon HD 7970/200 Series Voltages
Asus 280X Boost Voltage       1.2v
XFX 280X Boost Voltage        1.2v
Ref 7970GE Base Voltage       1.162v
Ref 7970GE Boost Voltage      1.218v
Ref 7970 Base Voltage         1.175v

On both our stock and factory overclocked 280X cards we see a boost voltage of 1.2v, which as expected is a bit lower than the 1.218v the 7970GE drew under the same conditions.

We also have a quick look at clockspeeds while gaming, although there’s little to report here. Without the ability to see the intermediate clockspeeds on 280X we can only tell whether it’s boosting or not. In every game on both 280X cards, these cards are always in a boost state.

Radeon R9 280X Average Clockspeeds (Reported)
  Asus 280X XFX 280X
Boost Clock 1070MHz 1000MHz
Metro: LL
Battlefield 3
Crysis 3
Crysis: Warhead
TW: Rome 2

Idle Power Consumption

One of the advantages of our new testbed is that IVB-E and the testbed as a whole draw a lot less power, both at idle and under load. This makes it easier to isolate video card power consumption from the rest of the system, giving us more meaningful results.
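
The basic arithmetic behind that isolation is simple, though imperfect. Here is a hedged sketch: the efficiency figure and the assumption that the rest of the system’s draw stays constant between idle and load are both approximations.

```python
def card_power_delta(idle_wall_w, load_wall_w, psu_efficiency=0.92):
    # Estimate the additional DC power drawn under load from two AC wall
    # readings. The 0.92 efficiency figure is an assumption (roughly what
    # a high-end PSU manages at these loads), and CPU/VRM draw is treated
    # as constant between the two readings, which it isn't quite.
    return (load_wall_w - idle_wall_w) * psu_efficiency

# e.g. hypothetical readings of 110W at idle and 375W under a gaming load
# put roughly 244W of additional DC draw on the card and the rest of the
# loaded system.
delta = card_power_delta(110, 375)
```

The lower the testbed’s own contribution, the smaller the error in attributing that delta to the video card, which is precisely why the new lower-power testbed helps.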

In this case though there are no surprises to be found with idle power consumption given just how similar all of these cards are while idling.

Load Power Consumption - Metro:LL

Up next is our new gaming power load test, for which we’re using Metro: Last Light. This was initially calibrated against a GTX 780, where we found that Metro is highly repeatable, runs long enough (when looped) to fully exercise a video card, and puts a load on video cards that, as a percentage of allowable TDP, is about average among games.

To that end Metro paints an interesting picture of power consumption for the 280X. Despite a 250W TDP identical to the 7970GE’s, real power consumption is down versus that card, and at least at the wall is identical to the 230W GTX 770 (not that NVIDIA and AMD measure TDP in the same way). What this tells us is that alongside their similar average performance, the GTX 770 and 280X also draw similar amounts of power under gaming workloads.

Meanwhile Asus’s 280X draws more power, closer to a 7970GE, but this is not unexpected for a factory overclock.

Load Power Consumption - FurMark

FurMark on the other hand, being the TDP buster that it is, paints a different picture of the situation. The 280X can generate and sustain a much higher power workload than the comparable GTX 770, and still more yet than the original 7970. FurMark isn’t a game and that’s why we primarily use it as a diagnostic tool as opposed to a real world test, but it does lend credence to the fact that when pushed to its limits the 280X is still a high TDP part.

At the same time, because FurMark is such a consistent TDP test, the outcome of this test leads us to believe that the Asus 280X isn’t just overclocked, but that Asus has also increased their TDP/PowerTune limits to avoid bottlenecking there. The power consumption here is consistent with the Asus card having its PowerTune limit turned up, which implies that it’s closer to a 300W card under maximum load. The gaming performance is very good as we’ve seen, but there is a cost.

Idle GPU Temperature

Like most open air coolers, our 280X cards do well enough here. 31C-32C is where most cards will idle.

Load GPU Temperature - Metro:LL

Of all of the Tahiti cards in this article, it’s our XFX 280X that delivers the best temperatures under load. 63C is downright chilly for a 250W card, indicating the card has plenty of thermal headroom. The Asus card by comparison doesn’t fare quite as well, but we don’t even bat an eye until we hit 80C.

It’s worth noting that both cards also do well against the GTX 700 series here, though this is entirely down to the use of open air coolers. As good as these coolers are you won’t be stuffing either card in a cramped case with limited ventilation; for that you need a blower.

Load GPU Temperature - FurMark

As to be expected FurMark drives up our temperatures further. The XFX 280X is no longer our coolest card overall – that goes to the Tahiti based 7970GE – but of the two 280X cards it’s still the cooler one. The Asus meanwhile reaches 76C, which is still a reasonable temperature but it does mean the card doesn’t have a ton of thermal headroom left on its default fan curve. Though if our suspicions are right about the Asus card operating at a higher TDP, then this would at least explain in part the higher temperatures.

Idle Noise Levels

With this being the first article on our new testbed we re-ran the XFX result thrice to make sure we weren’t making any errors, but indeed these results are accurate. Whereas every other card dropped off at around 38dB the XFX 280X bested them with 36.8dB. Even among open air coolers this is a very impressive card at idle. In comparison the Asus is merely average in its near-silence.

Load Noise Levels - Metro:LL

Once we start looking at load noise levels however, the picture changes completely. As impressive as the XFX card was at idle, it doesn’t begin to compare to the Asus card under load. We have a card that’s channeling nearly 250W of heat out and away on a sustained basis, and yet for all of that work it generates just 41.5dB(A) of noise on our testbed. This is simply absurd in the most delightful fashion. Most of the cards in our data collection idle at just 2dB lower than this, never mind noise under load. As a result this is incredibly close to being functionally silent; in the case of our testbed the Asus card isn’t even the principal noise source when it’s under load.

Load Noise Levels - FurMark

Last, but not least, we have noise under FurMark. Although the Asus eventually has to ramp up and leave its low-40s comfort zone, at 46.8dB(A) it’s still the quietest card around by 2dB(A). The XFX 280X meanwhile is merely average, if not a tinge worse, for an open air cooler. 50.9dB(A) is plenty reasonable, it just pales in comparison to the Asus card.


Overclocking

With our look at the stock performance of our 280X cards complete, let’s take a brief look at overclocking.

When it comes to overclocking this is going to be a somewhat unfair competition between the two cards. The Asus card has by the very necessity of its existence already been binned. Furthermore, while the Asus card supports voltage adjustments the XFX card does not (MSI Afterburner says it does, but adjusting the value has no effect). As such we get to drive what’s already the better GPU harder and with more voltage than the other. Still, this will give us the chance to see where everything tops out.

Radeon R9 280X Overclocking
  XFX Radeon R9 280X DD Asus Radeon R9 280X DCU II TOP
Shipping Core Clock 850MHz 970MHz
Shipping Boost Clock 1000MHz 1070MHz
Shipping Memory Clock 6GHz 6.4GHz
Shipping Boost Voltage 1.2v 1.2v
Overclock Core Clock 880MHz 1010MHz
Overclock Boost Clock 1030MHz 1110MHz
Overclock Memory Clock 6.6GHz 6.8GHz
Overclock Max Boost Voltage 1.2v 1.263v

As it turns out, neither card overclocked by very much. The XFX card, lacking additional voltage, could only do 30MHz more, for a 4% base/3% boost overclock. Better luck was found on the memory with a 600MHz (10%) overclock there. The Asus card meanwhile was good for 40MHz more, for a 4% base/4% boost overclock, while its memory could do an additional 800MHz (13%). But at the same time this required dialing the voltage up to 1.263v – as high as we’re willing to go for this card. The power cost of doing that will be extreme.
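
The percentages above are straightforward to reproduce from the clockspeed table. A quick sketch using the XFX card’s numbers as the example:

```python
def oc_percent(stock_mhz, oc_mhz):
    # Overclock expressed as a percentage gain over the shipping clock.
    return 100.0 * (oc_mhz - stock_mhz) / stock_mhz

# XFX 280X: 850 -> 880MHz base, 1000 -> 1030MHz boost,
# 6GHz -> 6.6GHz (effective) memory
base_oc = oc_percent(850, 880)     # ~3.5%, the "4%" base overclock
boost_oc = oc_percent(1000, 1030)  # 3%
mem_oc = oc_percent(6000, 6600)    # 10%
```

With the cards GPU-bound rather than memory-bound, it’s the ~3% boost figure rather than the 10% memory figure that dictates the real-world gains.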

With our 280X cards primarily bottlenecked by GPU performance as opposed to memory performance, the performance gains from our overclocking adventure are limited. 3% on average for both cards is 3% for free, but it’s barely a useful overclock. We typically need 5% before overclocks start becoming interesting and significant enough to improve playability or make higher graphical settings practical.

To the credit of the Asus card and its cooler, despite the increased clockspeeds, voltage, and power consumption, it’s able to keep GPU temperatures and load noise to reasonable levels given the circumstances. Still, with the increase in power required to achieve this overclock (particularly in the worst case scenario of FurMark) it’s hard to argue that the additional overclocking was worth the performance gains. With such an extensive factory overclock this is a card that may be better off left at factory clocks.

The XFX card meanwhile suffers much less of a power ramp up due to the lack of voltage control, but we’re still looking at something of a wash on the power/performance front.

Final Words

Bringing this review to a close, as we mentioned back at the beginning of this article the initial launch of the Radeon 200 series was something of a warm-up act. AMD’s Big Kahuna, R9 290X, is not yet here and will be a story of its own. But in the meantime the company has laid out their plans to kick off 2014 and the rest of the products they intend to do it with.

Ultimately as far as performance is concerned there’s not a lot to say here. As a refresh of the existing Tahiti, Pitcairn, and Bonaire GPUs and the products based on them, the performance of the resulting cards compared to their immediate predecessors is only a few percent better on average. What AMD is doing is more than putting a new coat of paint on the 7000 series, but at the same time let’s be clear here: these products are still largely unchanged from the products we’ve seen almost 2 years ago.

If nothing else, today’s launch is a consolidation of products and a formalization of prices. The number of products based on the existing GPUs has been cut down significantly, as there’s now only 1 card per GPU as opposed to 2-3. Meanwhile AMD gets to formally set lower prices on existing products, and in the process redefine what’s high-end, what’s enthusiast, and what’s mainstream, as opposed to trying to flog cards like the 7970 as sub-$300 enthusiast parts. So in one sense it’s a price cut with a twist, ending with AMD officially bringing down prices to roughly where you’d expect them to be nearly two years in.

Of course the fact that AMD also needs to get rid of the 7000 series at the same time isn’t going to do them any favors. There’s no getting around the fact that similar 7000 series products are going to be equal to or cheaper than 200 series products, at least for the immediate launch. Once those supplies dry up, however, the 200 series will settle into a more typical product stratification, with AMD’s partners reacting to competitive pressure and adjusting prices and bundles accordingly. Similarly, once that’s done we’d fully expect to see the return of the Never Settle Forever program for these cards.

Along those lines, the fact of the matter is that even with a new name and formally lower prices this is going to be a tough market for AMD. NVIDIA isn’t taking this lightly, and while they haven’t seen the need to cut prices on parts like the GTX 770 and GTX 760 – due in part to the fact that AMD’s products are occupying the significant price gaps between them – in the hyper-competitive sub-$200 market NVIDIA has already lowered prices. The players haven’t changed and the game hasn’t changed; only the location has.

With that said, given AMD’s current pricing and performance they seem to have done a good job picking a favorable location. Previously AMD would go up against the $400 GTX 770 with their similarly priced 7970GE; now they're doing so with the slightly slower R9 280X, priced at $100 less. Consequently NVIDIA holds the slightest edge of about 5% on our new benchmark suite, but it’s close; much closer than a $300 card has any business being to a $400 card. On a pure price/performance basis then the R9 280X is not to be ignored, as it’s going to come within a few percent of the GTX 770 – depending on the game of course – for $100 less. Those are big savings that are hard to argue with, and they will help make the R9 280X a “win” for AMD.
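To put that price/performance argument in concrete terms, here’s a back-of-the-envelope calculation. The MSRPs and the roughly 5% average performance gap are the figures from this review; the per-dollar normalization itself is simply our own illustration, not an official metric.

```python
# Back-of-the-envelope price/performance comparison using the review's
# ballpark figures: GTX 770 at $400 as the performance baseline, and the
# R9 280X at $300 trailing it by roughly 5% on average.
cards = {
    "GTX 770": {"price": 400, "rel_perf": 1.00},  # baseline
    "R9 280X": {"price": 300, "rel_perf": 0.95},  # ~5% behind on average
}

for name, c in cards.items():
    # Normalize to "relative performance per $100" for easy comparison.
    value = c["rel_perf"] / c["price"] * 100
    print(f"{name}: {value:.3f} relative perf per $100")
```

Run as-is, this shows the R9 280X delivering roughly a quarter more performance per dollar than the GTX 770, which is the crux of AMD’s positioning here.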

The wrinkle in all of this math of course is going to be custom cards. As impressive as the 280X is at stock, as represented by XFX’s Radeon R9 280X Double Dissipation, for all of $10 over MSRP Asus has put together one whopper of a card with the Radeon R9 280X DirectCU II TOP. Owing to Asus’s over-the-top design it’s absurdly quiet under load while delivering performance 7% better than a stock clocked R9 280X, and in fact it is powerful enough to edge out a GTX 770. And did we mention it’s $309? The fact that Asus seems to have gone a bit high on power consumption means it’s going to take a bit more to drive this card than the typical R9 280X, but with Asus having controlled load temperatures and noise so well, we don’t have to make the usual noise/performance tradeoffs that overclocked cards traditionally come with. The XFX card in that respect, though completely solid on its own, is significantly overshadowed by Asus’s offering.

Hardware aside, what AMD is doing with the R9 280X and the 200 series as a whole is going to be worth keeping an eye on. AMD’s Mantle API initiative has a lot of promise behind it, but we’re going to have to see what the company has to say in November and what the launch of Mantle-enabled games brings before passing judgment. We’re purposely tempering our performance expectations here, keeping in mind that not every game and every scenario is bottlenecked by those things a low-level API can help with. However the claims are sound and they’ve picked up a very good showcase title in Battlefield 4, so we shall have to see what AMD delivers.

Along those same lines, TrueAudio for its part has captured our complete attention, again warranting enthusiasm with a wait-and-see approach. It’s a shame AMD had to introduce this technology in a half-supported manner, leaving cards like the 280X without it. But on the whole the technology and the larger initiative show a lot of promise, contingent of course on getting developers to use it. As a headphone gamer in particular I am extremely interested in seeing this take off so that we can finally resume making progress on 3D audio, as what AMD has presented is what I believe will be the first real shot at that in the better part of half a decade.
