AMD's Radeon HD 6990: The New Single Card King
by Ryan Smith on March 8, 2011 12:01 AM EST
Compute Performance
Moving on from our look at gaming performance, we have our customary look at compute performance. Given the compute-focused architectural changes AMD made between the 5000 series and the 6000 series, these benchmarks help define the 6990 relative to the 5970. At the same time, however, neither benchmark here benefits much from the 6990’s dual-GPU design.
Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes.
As of Catalyst 11.4, AMD’s performance in our Civilization V DirectCompute benchmark finally scales with CrossFire, at least marginally. This leads to the 6990 leaping ahead of the 6970; however, the Cayman architecture/compiler still looks to be sub-optimal for this test, as the 5970 holds a 10% lead even with its core clock disadvantage. This also lets NVIDIA and their Fermi architecture establish a solid lead over the 6990, even without the benefit of SLI scaling.
Our second GPU compute benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. While it’s still in beta, SmallLuxGPU recently hit a milestone by implementing a complete ray tracing engine in OpenCL, allowing them to fully offload the process to the GPU. It’s this ray tracing engine we’re testing.
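The idea of fully offloading work to the GPU via OpenCL is straightforward at the API level. As a minimal sketch (our own illustration, not SmallLuxGPU’s actual code), the C host program below picks the first GPU it finds, queues a trivial kernel, and reads back the result; a real ray tracer would simply trace one ray per work-item inside the kernel:

/* Minimal OpenCL host sketch: find a GPU and run a trivial kernel on it.
 * Illustrative only -- not SmallLuxGPU code. */
#include <stdio.h>
#include <CL/cl.h>

/* Toy kernel; a real ray tracer would trace one ray per work-item. */
static const char *src =
    "__kernel void scale(__global float *buf) {"
    "    size_t i = get_global_id(0);"
    "    buf[i] *= 2.0f;"
    "}";

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float data[256];
    for (int i = 0; i < 256; i++) data[i] = (float)i;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);

    clSetKernelArg(k, 0, sizeof(buf), &buf);
    size_t global = 256;
    /* All of the work happens on the GPU; the CPU just queues and waits. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[3] = %f\n", data[3]); /* expect 6.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}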
There’s no CrossFire scaling to speak of in SmallLuxGPU, so this test is all about the performance of GPU1, and specifically its shader/compute performance. At default clocks this leaves the 6990 slightly trailing the 6970, while overclocked it reaches parity. Unfortunately for AMD, this is a test where NVIDIA’s focus on compute performance has really paid off; coupled with the lack of CF scaling, even a $240 GTX 560 Ti can edge out the $700 6990.
Ultimately the take-away from this is that for most desktop GPU computing workloads, the benefit of multiple GPU cores is still unrealized. As a result the 6990 shines as a gaming card, but is out of its element as a GPU computing card unless you have an embarrassingly parallel task to feed it.
130 Comments
smookyolo - Tuesday, March 8, 2011 - link
My 470 still beats this at compute tasks. Hehehe. And damn, this card is noisy.
RussianSensation - Tuesday, March 8, 2011 - link
Not even close, unless you are talking about outdated distributed computing projects like Folding@Home code. Try any of the modern DC projects like Collatz Conjecture, MilkyWay@home, etc. and a single HD4850 will smoke a GTX580. This is because consumer Fermi cards are limited to 1/8th of their single-precision rate when doing double-precision work. In other words, an HD6990, which has 5,100 Gflops of single-precision performance, will have 1,275 Gflops of double-precision performance (since AMD allows 1/4th of its SP rate). In comparison, the GTX470 has 1,089 Gflops of SP performance, which only translates into 136 Gflops in DP. Therefore, a single HD6990 is 9.4x faster in modern computational GPGPU tasks.
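As a quick sanity check, the commenter's arithmetic works out; here it is as a short C sketch, where the peak SP figures and DP rate fractions are the commenter's theoretical numbers, not measured throughput:

/* Back-of-the-envelope check of the figures above (the commenter's
 * theoretical peaks, not measured results): peak DP = peak SP * DP fraction. */
#include <stdio.h>

int main(void) {
    double hd6990_sp = 5100.0; /* GFLOPS, single precision */
    double gtx470_sp = 1089.0;

    double hd6990_dp = hd6990_sp * (1.0 / 4.0); /* Cayman: DP at 1/4 of SP */
    double gtx470_dp = gtx470_sp * (1.0 / 8.0); /* GeForce Fermi: 1/8 of SP */

    printf("HD 6990 DP: %.0f GFLOPS\n", hd6990_dp);  /* 1275 */
    printf("GTX 470 DP: %.0f GFLOPS\n", gtx470_dp);  /* ~136 */
    printf("ratio: %.1fx\n", hd6990_dp / gtx470_dp); /* ~9.4x */
    return 0;
}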
palladium - Tuesday, March 8, 2011 - link
Those are just theoretical performance numbers. Not all programs *even newer ones* can effectively extract ILP from AMD's VLIW4 architecture. Those that can will no doubt run faster; others that can't would be slower. As far as I'm aware, lots of programs still prefer nV's scalar arch, but that might change with time.
MrSpadge - Tuesday, March 8, 2011 - link
Well.. if you can only use 1 of 4 VLIW units in DP then you don't need any ILP. Just keep the threads in flight and it's almost like nVidia's scalar architecture, just with everything else being different ;)
MrS
IanCutress - Tuesday, March 8, 2011 - link
It all depends on the driver and compiler implementation, and the guy/gal coding it. If you code the same but the compilers are generations apart, then the compiler with the higher generation wins out. If you've had more experience with CUDA-based OpenCL, then your NVIDIA OpenCL implementation will outperform your ATI Stream implementation. Pick your card for its purpose. My homebrew stuff works great on NVIDIA, but I only code for NVIDIA - same thing for big league compute directions.
stx53550 - Tuesday, March 15, 2011 - link
off yourself, idiot
m.amitava - Tuesday, March 8, 2011 - link
".....Cayman’s better power management, leading to a TDP of 37W"- is it honestly THAT good? :P
m.amitava - Tuesday, March 8, 2011 - link
oops...re-read...that was idle TDP!!
MamiyaOtaru - Tuesday, March 8, 2011 - link
my old 7900gt used 48W at load. D:
Don't like the direction this is going. In GPUs it's hard to see any performance advances that don't come with equivalent increases in power usage, unlike what Core 2 was compared to the Pentium 4.
Shadowmaster625 - Tuesday, March 8, 2011 - link
Are you kidding? I have a 7900GTX I don't even use, because it fried my only spare large power supply. A 5670 is twice as fast and consumes next to nothing.