Compute

Update 3/30/2010: After hearing word following the launch that NVIDIA had artificially capped the GTX 400 series' double precision (FP64) performance, we asked NVIDIA for confirmation. NVIDIA has confirmed it: the GTX 400 series' FP64 performance is capped at 1/8th (12.5%) of its FP32 performance, as opposed to the 1/2 (50%) rate the hardware is natively capable of. This is a market segmentation choice - Tesla, of course, will not be handicapped in this manner. All of our compute benchmarks are FP32 based, so they remain unaffected by this cap.
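For those inclined to check for themselves, a cap like this shows up readily in a throughput micro-benchmark: time the same chain of multiply-adds in FP32 and again in FP64 and compare. Below is a minimal CUDA sketch of the idea (the kernel, grid size, and iteration count are our own choices, not an official test); an uncapped Fermi part should show FP64 taking roughly 2x as long as FP32, while a capped GeForce should come in closer to 8x.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread runs a long chain of dependent fused multiply-adds. Timing the
    // same kernel instantiated for float and for double approximates the card's
    // FP32:FP64 throughput ratio.
    template <typename T>
    __global__ void fmaLoop(T* out, int iters)
    {
        T a = (T)threadIdx.x, b = (T)1.000001, c = (T)0.000001;
        for (int i = 0; i < iters; ++i)
            a = a * b + c;                               // compiles to an FMA on Fermi
        out[blockIdx.x * blockDim.x + threadIdx.x] = a;  // keep the result live
    }

    template <typename T>
    static float timeKernel(int iters)
    {
        const int blocks = 256, threads = 256;
        T* out;
        cudaMalloc(&out, blocks * threads * sizeof(T));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        fmaLoop<T><<<blocks, threads>>>(out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(out);
        return ms;
    }

    int main()
    {
        const int iters = 1 << 20;
        float ms32 = timeKernel<float>(iters);
        float ms64 = timeKernel<double>(iters);
        printf("FP32: %.1f ms  FP64: %.1f ms  FP64/FP32: %.2fx\n",
               ms32, ms64, ms64 / ms32);
        return 0;
    }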

Continuing our look at compute performance, we're moving on to more generalized compute tasks. GPGPU has long been heralded as the next big thing for GPUs: in the right hands and on the right task, a GPU can be much faster than a CPU. Fermi in turn is a serious bet on GPGPU/HPC use of the GPU, with a number of architectural tweaks having gone into Fermi to get the most out of it as a compute platform. The GTX 480 may be targeted as a gaming product, but it has the capability to be a GPGPU powerhouse when given the right task.

The downside to GPGPU use, however, is that a great deal of GPGPU applications are specialized number-crunching programs for business use. The consumer side of GPGPU continues to be underrepresented, both due to a lack of obvious, high-profile tasks that would be well-suited to GPGPU use, and due to marketplace fragmentation caused by competing APIs. OpenCL and DirectCompute will slowly solve the API issue, but there is still the matter of getting consumer-oriented GPGPU applications out in the first place.

With the introduction of OpenCL last year, we were hoping that by the time Fermi launched we would see some suitable consumer applications with which to evaluate the compute capabilities of both AMD's and NVIDIA's cards. That has yet to come to pass, so at this point we're basically left with synthetic benchmarks for cross-GPU comparisons. With that in mind we've run a couple of different tests, but the results should be taken with a grain of salt, as they don't represent any single truth about compute performance on NVIDIA's or AMD's cards.

Out of our two OpenCL benchmarks, we'll start with an OpenCL implementation of an N-Queens solver from PCChen of Beyond3D. This benchmark uses OpenCL to find the number of solutions for the N-Queens problem on a board of a given size, with the result measured in seconds. For this test we use a 17x17 board and measure the time it takes to generate all of the solutions.
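For reference, fast N-Queens solvers are generally built around a bitmask backtracking routine, and a GPU implementation typically splits the search by handing each thread a pre-placed partial board to finish. The sketch below is our own CUDA illustration of that structure, not PCChen's actual OpenCL kernel:

    // Count the ways to finish an N-Queens board from a given partial state.
    // cols, d1 and d2 are bitmasks of the columns and diagonals already under
    // attack; 'all' has the low n bits set (one bit per column).
    __device__ unsigned long long countCompletions(unsigned cols, unsigned d1,
                                                   unsigned d2, unsigned all)
    {
        unsigned sCols[32], sD1[32], sD2[32], sAvail[32];  // explicit stack
        unsigned long long count = 0;
        int sp = 0;
        sCols[0] = cols; sD1[0] = d1; sD2[0] = d2;
        sAvail[0] = all & ~(cols | d1 | d2);
        while (sp >= 0) {
            unsigned avail = sAvail[sp];
            if (avail == 0) { --sp; continue; }      // no squares left: backtrack
            unsigned bit = avail & (~avail + 1);     // lowest available column
            sAvail[sp] = avail ^ bit;                // leave the siblings for later
            unsigned ncols = sCols[sp] | bit;
            if (ncols == all) { ++count; continue; } // every row placed: a solution
            ++sp;                                    // descend one row
            sCols[sp] = ncols;
            sD1[sp] = (sD1[sp - 1] | bit) << 1;
            sD2[sp] = (sD2[sp - 1] | bit) >> 1;
            sAvail[sp] = all & ~(ncols | sD1[sp] | sD2[sp]);
        }
        return count;
    }

    // One thread per pre-placed partial board.
    struct PartialState { unsigned cols, d1, d2; };

    __global__ void nqueens(const PartialState* states, int numStates,
                            unsigned long long* results, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numStates) return;
        unsigned all = (1u << n) - 1;
        results[i] = countCompletions(states[i].cols, states[i].d1,
                                      states[i].d2, all);
    }

The host side would enumerate the first row or two to build the list of partial states, launch the kernel, and sum the per-thread counts to get the total number of solutions.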

NVIDIA GPUs hold a distinct advantage in this benchmark, with the GTX cards not only beating their AMD counterparts, but the GTX 285 also beating the Radeon 5870. Due to the significant underlying differences between AMD's and NVIDIA's shaders, even with a common API like OpenCL the nature of the algorithm still plays a big part in the performance of the resulting code, and that may be what we're seeing here. In any case, the GTX 480 is the fastest GPU by far, completing the task in less than half the time of the GTX 285 and coming in nearly 5 times faster than the Radeon 5870.

Our second OpenCL benchmark is a post-processing benchmark from the GPU Caps Viewer utility. Here a torus is drawn using OpenGL, and then an OpenCL kernel is used to apply post-processing to the image. We measure the framerate of this process.
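Post-processing of this sort is a natural fit for the GPU: one thread per pixel, with each thread reading a value, applying a filter, and writing the result back. As a minimal sketch of such a pass, here is a simple sepia color transform written in CUDA for consistency with our other examples (the actual demo uses OpenCL kernels with OpenGL interop, and its filters are more elaborate):

    // One thread per pixel: read a packed RGBA value, apply a sepia-tone
    // transform, and write it back. 'pixels' is a width*height array.
    __global__ void sepiaPass(uchar4* pixels, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        uchar4 p = pixels[y * width + x];
        float r = p.x, g = p.y, b = p.z;

        // Standard sepia weights, clamped to the displayable range.
        float nr = fminf(0.393f * r + 0.769f * g + 0.189f * b, 255.0f);
        float ng = fminf(0.349f * r + 0.686f * g + 0.168f * b, 255.0f);
        float nb = fminf(0.272f * r + 0.534f * g + 0.131f * b, 255.0f);

        pixels[y * width + x] = make_uchar4((unsigned char)nr, (unsigned char)ng,
                                            (unsigned char)nb, p.w);
    }

    // Typical launch: one 16x16 thread block per image tile.
    // dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);
    // sepiaPass<<<grid, block>>>(d_pixels, width, height);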

Once again the NVIDIA cards do exceptionally well here. The GTX 480 is the clear winner, while even the GTX 285 beats out both Radeon cards. This could once again come down to the nature of the algorithm, or it could be that the GeForce cards really are that much better at OpenCL processing. These results will be worth keeping in mind as real OpenCL applications eventually start arriving.

Moving on from cross-GPU benchmarks, we turn our attention to CUDA benchmarks. Better established than OpenCL, CUDA has several real GPGPU applications, the limitation being that we can't bring the Radeons into the fold. So we can see how much faster the GTX 480 is than the GTX 285, but not how either compares to AMD's cards.

We'll start with Badaboom, Elemental Technologies' GPU-accelerated video encoder for CUDA. Here we encode a two-minute 1080i clip and measure the framerate of the encoding process.

The performance difference with Badaboom is rather straightforward: we have twice the shaders running at similar clockspeeds, and as a result we get twice the performance. The GTX 480 encodes our test clip in a little over half the time it took the GTX 285.
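The back-of-the-envelope math supports this: the GTX 480 has 480 CUDA cores at a 1401MHz shader clock versus the GTX 285's 240 cores at 1476MHz, which works out to roughly 1.9x the raw shader throughput (480 x 1401 / (240 x 1476) ≈ 1.9), almost exactly the doubling we see in the encode rate.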

Up next is a special benchmark version of Folding@Home that adds Fermi compatibility. Folding@Home is a Stanford research project that simulates protein folding in order to better understand how misfolded proteins lead to disease. It has been a poster child of GPGPU use, having been made available on GPUs as early as 2006 as a Close-To-Metal application for AMD's X1K series of GPUs. Here we measure the time it takes to fully process a sample work unit so that we can project how many nodes (units of work) a GPU could complete per day when running Folding@Home.
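The projection itself is simple arithmetic: nodes per day ≈ 86,400 seconds ÷ the measured seconds per work unit.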

Folding@Home is the first benchmark we've seen that really showcases the compute potential of Fermi. Unlike everything else, where the GTX 480 runs roughly twice as fast as the GTX 285, when it comes to folding the GTX 480 is a few times faster, getting roughly 3.5x as much work done per day as a GTX 285. And while this is admittedly more of a business/science application than a home user application (even if it's home users running it), it gives us a glance at what Fermi is capable of when it comes to compute.

Last, but not least for our look at compute, we have another tech demo from NVIDIA. This one is called Design Garage, and it's a ray tracing tech demo that we first saw at CES. Ray tracing has come into popularity of late thanks in large part to Intel, which has been pushing the concept both as part of its CPU showcases and as part of its Larrabee project.

Design Garage, in turn, is a GPU-powered demo that uses ray tracing to draw and illuminate a variety of cars. If you've never seen ray tracing before, it looks quite good, but it's also quite resource intensive: even with a GTX 480, the high quality rendering mode only manages a couple of frames per second.
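To illustrate where all that time goes: even the most stripped-down ray tracer casts one ray per pixel and intersects it against the scene, and a renderer like Design Garage then spawns additional rays at every hit for shadows and reflections. Below is a bare-bones CUDA sketch of just the primary-ray stage against a single sphere (our own toy example, nothing like the demo's actual renderer):

    // Cast one primary ray per pixel from a pinhole camera at the origin and
    // shade a single diffuse sphere. Real renderers trace many more rays per
    // pixel (shadows, reflections, antialiasing), which is why even a GTX 480
    // manages only a few frames per second in the high quality mode.
    __global__ void primaryRays(float* image, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        // Normalized ray direction through this pixel, image plane at z = -1.
        float u = (2.0f * x / width - 1.0f) * ((float)width / height);
        float v = 1.0f - 2.0f * y / height;
        float len = sqrtf(u * u + v * v + 1.0f);
        float dx = u / len, dy = v / len, dz = -1.0f / len;

        // Sphere at (0, 0, -3) with radius 1: solve |t*d - c|^2 = r^2 for t.
        float cx = 0.0f, cy = 0.0f, cz = -3.0f;
        float ox = -cx, oy = -cy, oz = -cz;          // camera origin minus center
        float b = ox * dx + oy * dy + oz * dz;
        float c = ox * ox + oy * oy + oz * oz - 1.0f;
        float disc = b * b - c;

        float shade = 0.0f;                          // miss: black background
        if (disc >= 0.0f) {
            float t = -b - sqrtf(disc);              // nearest intersection
            if (t > 0.0f) {
                // Radius 1, so hit point minus center is already a unit normal;
                // dot it with a fixed light direction for Lambertian shading.
                float nx = t * dx - cx, ny = t * dy - cy, nz = t * dz - cz;
                shade = fmaxf(nx * 0.577f + ny * 0.577f + nz * 0.577f, 0.0f);
            }
        }
        image[y * width + x] = shade;                // grayscale intensity
    }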

On a competitive note, it’s interesting to see NVIDIA try to go after ray tracing since that has been Intel’s thing. Certainly they don’t want to let Intel run around unchecked in case ray tracing and Larrabee do take off, but at the same time it’s rasterization and not ray tracing that is Intel’s weak spot. At this point in time it wouldn’t necessarily be a good thing for NVIDIA if ray tracing suddenly took off.

Much like Folding@Home, this is one of the best compute showcases for Fermi: the GTX 480 is eight times faster than the GTX 285 at this task. A lot of this comes down to Fermi's redesigned cache, as ray tracing has a high rate of cache hits, which helps avoid going out to the GPU's main memory any more than necessary. Programs that benefit from Fermi's optimizations to cache, concurrency, and fast task switching apparently stand to gain the most in the move from GT200 to Fermi.

Comments

  • WiNandLeGeNd - Saturday, March 27, 2010

    I think this was a great review, as mentioned previously, very objective. I think though that I may get a 480, because when I buy a card I keep it for 3 to 4 years before I get a new one, aka every other gen. And seeing that tessellation is really the centerpiece of DX11 and how much more tessellation power is in the 480s, I think it could very much pay off in the future. If not, then I spent an extra $85 for a tad extra performance, as I just pre-ordered one for $485 and the 5870s are still at $400.

    My only concern is heat and power, but most of the cards have a lifetime warranty. Hopefully my OCZ GamerXtreme 850W can handle it at max loads. The two 12V rails for the two 6-pin PCIe connectors are 20A each; I saw 479W max consumption, but that was FurMark, and at 12V that works out to about 40 amps, so it would be extremely close if there is ever a game that utilizes that much power. Although, if I recall, ATI specifically stated a while back not to use FurMark, as it pushes loads that are not possible to see in an actual game; I think they had an issue with the 4000 series burning out power regulators, correct me if I'm wrong.
  • Alastayr - Saturday, March 27, 2010

    I'm with Sunburn on this one. Your reasoning doesn't make much sense. You must not have followed the GPU market for the last few years, because

    first) "every other gen" would mean a 2 year cycle
    second) Nothing's really gonna pay off in the future, as the future will bring faster cards for a fraction of the price. You'd only enjoy those questionable benefits until Q4, when AMD releases Northern Islands and nVidia pops out GF100b or whatever they'll call it.
    third) Tessellation won't improve that fast. If anything, developers will focus on the lowest common denominator, which would be Cypress. Fermi's extra horsepower will most likely stay unused.
    fourth) Just look at your power bill. The 25W difference with a "typical" Idle scheme (8h/day; 350d/y) comes to 70kWh, which where I live translates to around $20 per year. That's Idle *only*. You're spending way more than just $85 extra on that card.
    fifth) The noise will kill you. This isn't a card that just speeds up for no reason. You can't just magically turn down the fan from 60% to 25% and still enjoy temps of <90°C like on some GTX 260 boards. Turn up your current fan to 100% for a single day. Try living through that. That's probably what you're buying.

    In the end everyone has to decide this for himself. But for someone to propose keeping a GTX 480 in his PC for a whopping 3-4 years... I don't know man. I'd rather lose a finger or two. ;)

    tl;dr I know, I know. But really, people. Those cards aren't hugely competitive, they're priced too high, and nV's drivers suck as much as ATi's (allegedly) do nowadays. Which is to say, neither do.

    I could honestly kick myself right now. I had a great deal on a 5850 in Nov. and I waited for nV to make their move. Now the same card will cost me $50 more, and I've only wasted time waiting for the competitive GTX 470 that never was. Argh.
  • Sunburn74 - Saturday, March 27, 2010

    That's kind of bad logic imo. I'm not a fanboy on either side, but it's clear to me that Nvidia targeted the performance of their cards to fit exactly between the 5970, the 5870, and the 5850. It's much harder to release a card not knowing what the other guy truly has than to release one knowing exactly what sort of performance levels you have to hit.

    Two, realistically, think of the noise. I mean, if you've ever heard a GTX 260 at 100 percent fan speed, that's the sort of fan noise you're going to be experiencing on a regular basis. It's not a mild difference.

    And three, realistically, for the premium you're paying for the extra performance (which is not useful right now, as there are no games to take advantage of it) as well as for the noise, heat and power, you could simply buy the cheaper 5870, save that extra $85-150, and sell off the 5870 when the time is right.

    I just don't see why anyone would buy this card unless they were specifically taking advantage of some of the compute functions. As a consumer card it is a failure. Power and heat be damned; the noise, the noise! Take your current card up to 100 percent fan speed, listen to it for a few minutes, and that's about what you should expect from these GPUs.
  • andyo - Saturday, March 27, 2010

    I too am getting the warning message with Firefox 3.6.2. Posting this on IE. Here's the message:

    http://photos.smugmug.com/photos/820690277_fuLv6-O...
  • JarredWalton - Saturday, March 27, 2010

    We're working on it. Of course, the "Internet Police" have now flagged our site as malicious because of one bad ad that one of the advertisers put up, and it will probably take a week or more to get them to rescind the "Malware Site" status. Ugh....
  • jeffrey - Saturday, March 27, 2010

    Give the advertiser that put up the bad ad hell!
  • LedHed - Saturday, March 27, 2010

    The people who are going to buy the GTX 480/470 are enthusiasts who most likely bought the GTX 295 or had 200 series SLI. So not including the 295 in every bench is kind of odd. We need to see how the top end of the last gen does against the new gen's top end.
  • Ryan Smith - Saturday, March 27, 2010

    What chart is the 295 not in? It should be in every game test.
  • kc77 - Saturday, March 27, 2010

    Well, the 295 beats the 470 in most benches, so there's no real need to include it in every one. Personally I think the 480 is the better deal, although I'm not buying either card until a respin/refresh; those temps and power requirements are just ridiculous.
  • bigboxes - Saturday, March 27, 2010

    I know you "upgraded" your test PSU to the Antec 1200W PSU, but did you go back and try any of these tests/setups with your previous 850W PSU to see if it could handle the power requirements? It seems that only your 480 SLI setup drew 851W total system power in the FurMark load test. Other than that scenario, it looks like your old PSU should handle the power requirements just fine. Any comments?
