When we first took a look at AMD’s Radeon R9 290 back in November, what we found was a card that was, on the whole, a mixed bag. On the one hand, the amount of performance offered for the price was unequaled: AMD decided they wanted to take on the $500 GeForce GTX 780 and win, and they did just that by delivering performance better than the GTX 780 for just $400. This made the 290 a very potent card for gamers still looking for value at the high end.

On the other hand, AMD had to make significant sacrifices elsewhere to get there. AMD’s reference cooler already struggled to keep the 290X cool without generating excessive noise, and for the 290 AMD needed to turn up the fan speed even further to ensure that the 290’s average performance stayed close to its maximum performance. The end result was that while the reference 290 was fast, it was also obnoxiously loud, especially in comparison to the high-end cards of the last few years. Ultimately this meant that buyers concerned about noise had to either accept a performance reduction or pay quite a bit more for a GTX 780.

With that said, we have known since that first day that the stories of the 290 and 290X weren’t yet complete. While the reference cards set the bar for performance and (for better or worse) drive the overall perception of the series, the modern board partner system means that in time we can look forward to partners releasing semi-custom and fully custom cards, which use custom coolers and custom boards respectively. Customization allows the board partners to differentiate from each other by designing cards around different capabilities – be it size, cooling, or overclocking – in the process creating a wide spectrum of cards for a wide spectrum of use cases. With respect to the 290 in particular, customization also offers partners a chance to go back and address the reference 290’s key weakness: its noisy cooler.

Ever since the 290 review there has been a great deal of chatter and plenty of questions about when we’d see the first customized cards show up, and the answer is that they are finally here. A handful of models are already on the shelves, with more to arrive over the next few weeks, and we’ll be taking a look at several of them in the weeks ahead. The first such card is Sapphire’s initial customized entry, the Sapphire Radeon R9 290 Tri-X OC.

AMD GPU Specification Comparison

|                     | AMD Radeon R9 290X | Sapphire Radeon R9 290 Tri-X | AMD Radeon R9 290 |
|---------------------|--------------------|------------------------------|-------------------|
| Stream Processors   | 2816               | 2560                         | 2560              |
| Texture Units       | 176                | 160                          | 160               |
| ROPs                | 64                 | 64                           | 64                |
| Core Clock          | 727MHz             | 699MHz                       | 662MHz            |
| Boost Clock         | 1000MHz            | 1000MHz                      | 947MHz            |
| Memory Clock        | 5GHz GDDR5         | 5.2GHz GDDR5                 | 5GHz GDDR5        |
| VRAM                | 4GB                | 4GB                          | 4GB               |
| Typical Board Power | ~300W              | ~300W                        | ~300W             |
| Width               | Double Slot        | Double Slot                  | Double Slot       |
| Length              | 10.95"             | 12"                          | 10.95"            |
| Warranty            | N/A                | 2 Years                      | N/A               |
| MSRP                | $549               | $449                         | $399              |

Meet The Sapphire Radeon R9 290 Tri-X OC

The first customized 290 series card in our hands, Sapphire’s Radeon R9 290 Tri-X OC is a rather straightforward semi-custom card. Sapphire has taken AMD’s reference design and replaced the reference blower with their recently introduced Tri-X open air cooler, which as we’ll see significantly changes the cooling/performance equilibrium compared to the reference 290. At the same time, Sapphire has given the 290 Tri-X OC a mild factory overclock to boost its out-of-the-box performance and differentiate it from the reference 290 and competing customized 290s. The end result is a card that is intended to be both faster and quieter than the reference 290 we saw barely more than a month ago.

Diving into specifics, we’ll start with the 290 Tri-X OC’s specifications. Sapphire typically offers numerous tiers of factory overclocked cards – OC, Vapor-X, and Toxic – and the 290 Tri-X OC represents the lowest tier, coming with a mild but meaningful factory overclock. On the GPU side Sapphire has essentially erased the 290’s clockspeed disadvantage versus the 290X, bumping the maximum boost clock back up from 947MHz to 1000MHz, an improvement of 53MHz (6%). Meanwhile the memory clockspeed has also received a slight bump, going from 5GHz to 5.2GHz, a smaller 200MHz (4%) increase. As for real world clockspeeds, the reference 290 was typically cooled just well enough to sustain its maximum boost clock, so the clockspeed difference between Sapphire’s card and the reference card should match the difference in maximum boost clocks, while the real world performance difference will be somewhat smaller, as few games scale perfectly with clockspeed.
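
For those checking the math, the clockspeed gains work out as follows (a quick illustrative sketch; the clocks are taken from the specification table above):

```python
# Sketch: Sapphire's factory overclock margins over the reference 290.
ref_boost_mhz, oc_boost_mhz = 947, 1000   # maximum boost clocks (MHz)
ref_mem_mhz, oc_mem_mhz = 5000, 5200      # effective GDDR5 data rates (MHz)

core_gain = (oc_boost_mhz - ref_boost_mhz) / ref_boost_mhz * 100
mem_gain = (oc_mem_mhz - ref_mem_mhz) / ref_mem_mhz * 100

print(f"Core:   +{oc_boost_mhz - ref_boost_mhz}MHz ({core_gain:.1f}%)")  # +53MHz (5.6%)
print(f"Memory: +{oc_mem_mhz - ref_mem_mhz}MHz ({mem_gain:.1f}%)")       # +200MHz (4.0%)
```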

More significant however is Sapphire’s replacement of the reference 290’s blower with their in-house Tri-X cooler. First introduced back in October on their R9 280X Toxic, the Tri-X is Sapphire’s latest design for a high performance open air cooler. As the years have gone by and designs have been continually tuned, the board partners have increasingly settled on a handful of basic cooler designs, with these large multi-fan open air coolers being among the most common. Sapphire in that respect is no different, with the Tri-X cooler implementing these principles to create a very sound, very effective design.

For the 290 Tri-X OC Sapphire is using a slightly different variation of their Tri-X cooler than what we saw on the 280X Toxic. The variably sized 80mm/90mm fans are gone in favor of a trio of equally sized 85mm fans, maintaining the triple fan design of the Tri-X while ever so slightly changing the airflow and aesthetics. The shroud has also been changed to accommodate the new fan sizes, though functionally it’s identical to the previous one. The three fan setup ultimately ensures that Sapphire has plenty of airflow to work with even at low fan speeds and that virtually every inch of the heatsink is covered by the airflow coming off of those fans.

Speaking of heatsinks, the heatsink assembly on the Tri-X has also received some minor modifications in order to accommodate the 290 series. Most significantly, Sapphire has put together a new baseplate to match the component locations on the reference 290 PCB and to cover the VRM circuitry. Otherwise the primary heatsink itself remains unchanged from what we saw with the 280X Toxic. Here Sapphire uses a two segment vertical fin design with 5 copper heatpipes to move heat between the GPU and the heatsink. Two pipes go to the first segment, located over the GPU, while the other three go to the segment at the tail end of the card, with the largest of these heatpipes measuring 10mm in diameter.

As for the board itself, as previously mentioned this is a semi-custom card, so Sapphire is mating their custom cooler to AMD’s reference board – complete with the AMD logo. This isn’t an overclocking-centric card, so AMD’s board is a reasonable choice here, especially if it means getting cards out quickly. Using AMD’s board also means that the power requirements and I/O options are identical to the reference 290: we’re looking at a peak power consumption of about 300W, fed by a 6-pin + 8-pin PCIe power socket set at the top of the card, while I/O is 2x DL-DVI, 1x DisplayPort, and 1x HDMI. The BIOS selection switch is present as well; since it didn’t do anything special for the 290 in the first place, Sapphire has repurposed it to allow selecting between UEFI and BIOS type VBIOSes.
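
As a sanity check on that figure, ~300W is exactly the ceiling this connector configuration allows while staying within PCIe specifications (a back-of-the-envelope sketch, not a measurement of this card):

```python
# Sketch: in-spec power budget for a card with 6-pin + 8-pin PCIe power.
SLOT_W = 75        # PCIe x16 slot delivers up to 75W
SIX_PIN_W = 75     # 6-pin PCIe connector: up to 75W
EIGHT_PIN_W = 150  # 8-pin PCIe connector: up to 150W

print(f"In-spec budget: {SLOT_W + SIX_PIN_W + EIGHT_PIN_W}W")  # 300W
```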

On a related note, because Sapphire is using AMD’s reference board, Sapphire’s cooler is larger than the board itself. The board is 10.5” long, while the Tri-X cooler brings the total length of the card to a clean 12”. Despite the size difference, Sapphire has done an extremely good job with build quality here, more than resolving the issues we saw with our 280X Toxic sample. Sapphire has mounted the board to the Tri-X cooler at every screw point available on the board, including 3 points at the rear of the card, securely attaching the board to the cooler. There is no flex or bending of any kind in the board or the cooler, so the card is as solid as solid can be.

Finally, let’s quickly talk about warranties, pricing, and availability. As with all of their cards, Sapphire is offering a 2 year warranty on the 290 Tri-X OC, which is middling for a video card warranty these days. Meanwhile Sapphire is setting the MSRP for the card at $449, $50 above the official MSRP for the reference 290. That premium is admittedly more than we’re used to seeing for a low tier semi-custom card, but Sapphire is coming into this with a very strong hand, as we’ll see in our power/temp/noise testing. Ultimately the pricing Sapphire will be able to maintain will depend on the quality and pricing of other board partners’ cards. Compared to the reference 290 Sapphire is in a good position, but until we’ve seen other customized cards it’s hard to say just how they’re going to compare.

The other factor of course will be what retail prices are like when this card arrives at etailers, which is currently scheduled for the end of this week. Cryptocoin mania has continued to rage on over the past month, which has resulted in highly distorted video card prices. This should (theoretically) abate soon, but for the time being there’s a good chance the 290 Tri-X OC is going to premiere at closer to $550 than $450. Given the exceptional nature of what’s going on right now we’re going to make our comparisons using MSRP pricing on the basis that all of this is temporary, but it’s something that bears mentioning.

Comments

  • ShieTar - Tuesday, December 24, 2013

    "Curiously, the [idle] power consumption of the 290 Tri-X OC is notably lower than the reference 290."

    Well, it runs about 10°C cooler, and silicon does have a negative temperature coefficient of electrical resistance. That 10°C should lead to a resistance increase of a few percent, and thus to a lower current of a few percent. Here's a nice article about the same phenomenon observed going from a stock 480 to a Zotac AMP! 480:

    http://www.techpowerup.com/reviews/Zotac/GeForce_G...

    The author over there was also initially very surprised. Apparently kids these days just don't pay attention in physics class anymore ...
  • EarthwormJim - Tuesday, December 24, 2013

    It's mainly the leakage current, which decreases as temperature decreases, that can lead to the reduction in power consumption.
  • Ryan Smith - Tuesday, December 24, 2013

    I had considered leakage, but that doesn't explain such a (relatively) massive difference. Hawaii is not a leaky chip; meanwhile, if we take the difference at the wall to be entirely due to the GPU (after accounting for PSU efficiency), it's hard to buy that 10C of leakage alone is increasing idle power consumption by one-third. (There's a worked sketch of the models being discussed here at the end of the comments.)
  • The Von Matrices - Wednesday, December 25, 2013

    In your 290 review you said that the release drivers had a power leak. Could this have been fixed and account for the difference?
  • Samus - Wednesday, December 25, 2013

    Quality VRMs and circuitry optimizations will have an impact on power consumption, too. Lots of factors here...
  • madwolfa - Wednesday, December 25, 2013

    This card is based on the reference design.
  • RazberyBandit - Friday, December 27, 2013

    And based does not mean an exact copy -- it means similar. Some components (caps, chokes, resistors, etc.) could be upgraded and still fill the bill for the base design. Some components could even be downgraded, yet the card would still fit the definition of "based on AMD reference design."
  • Khenglish - Wednesday, December 25, 2013

    Yes, power draw does decrease with temperature, but not because resistance drops. Resistance dropping has zero effect on power draw. Why? Because processors are all about pushing current to charge and discharge wire and gate capacitance. Lower resistance just means that happens faster.

    The real reason power draw drops is due to lower leakage. Leakage current is completely unnecessary and is just wasted power.

    Also, an added tidbit: the reason performance increases as temperature decreases is mainly the wire resistance dropping, not an improvement in the transistor itself. Lower temperature decreases the number of carriers in a semiconductor but improves carrier mobility. There is a small net benefit to how much current the transistor can pass due to temperature's effect on silicon, but the main improvement comes from the resistance of the copper interconnects dropping as temperature drops.
  • Totally - Wednesday, December 25, 2013

    Resistance increases with temperature -> Power draw increases P=(I^2)*R.
  • ShieTar - Thursday, December 26, 2013

    The current isn't what's stabilized; generally the voltage is, so: P = U^2/R.

    " Because processors are all about pushing current to charge and discharge wire and gate capacitance. Lower resistance just means that happens faster."

    Basically correct; nevertheless, capacitor charging happens asymptotically, and any IC optimised for speed will not wait for a "full" charge. The design baseline is probably the lowest charge level required for operation at the highest qualified temperature. Since decreasing temperature increases charging speed, as you pointed out, you will get to a higher charging ratio, and thus use more power.

    On top of that, the GPU is not exclusively transistors. There are power electronics, interconnects, caches, and who knows what else (not me). Now when the transistors pull a little more charge due to the higher temperature, and the interconnects which deliver the current have a higher resistance, you get additional transmission losses. And that's on top of higher leakage rates.

    Of course the equation gets even more fun if you start considering the time constants of the interconnects themselves, which have gotten quite relevant since we got to 32nm structures, hence the high-K materials. Though I honestly have no clue how this contribution is linked to temperature.

    But hey, here's hoping that Ryan will go and investigate the power drop with his equipment and provide us with a full explanation. As I personally don't own a GPU which gets hot at idle (I can't force the fan below 30% in software and won't stop it by hand), I cannot test idle power behavior on my own. But I can and did repeat the Furmark test described in the link above, and I also see a power saving of about 0.5W per °C with my GTX 660. And that's based on internal power monitoring, so the mainboard/PCIe slot and the PSU should add a bit more to that:

    https://www.dropbox.com/s/javq0dg75u40357/Screensh...
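
Pulling the physics of this exchange together, here is a compact worked sketch of the models being invoked; every coefficient and number below is an illustrative assumption, not a measurement of this card:

```python
# Illustrative numbers only -- nothing here is measured from the 290 Tri-X.

# 1) ShieTar's resistance estimate, first-order model: R(T) = R0*(1 + alpha*dT).
#    Assuming a negative coefficient of a few 1e-3 per deg C, running 10C cooler
#    raises resistance by a few percent:
alpha, dT = -4e-3, -10.0
print(f"Resistance change: {alpha * dT * 100:+.1f}%")  # +4.0%

# 2) The P = I^2*R vs. P = U^2/R disagreement: the two formulas describe
#    different regimes. Take a 2% resistance increase:
R0, R1 = 1.00, 1.02   # ohms
I, U = 10.0, 10.0     # fixed current (A) vs. fixed voltage (V)
print(f"Fixed current: {I**2 * R0:.0f}W -> {I**2 * R1:.0f}W (rises)")   # 100W -> 102W
print(f"Fixed voltage: {U**2 / R0:.0f}W -> {U**2 / R1:.1f}W (falls)")   # 100W -> 98.0W

# 3) The leakage argument fits the standard first-order CMOS power model:
#    P ~ a*C*V^2*f (dynamic switching) + V*I_leak(T) (static leakage),
#    where only I_leak rises steeply (roughly exponentially) with temperature.
```

Which resistance regime applies depends on what is held constant; since GPU power delivery regulates voltage rather than current, the U^2/R form is the closer description for the chip's supply rails.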
