A More Efficient Architecture

GPUs, like CPUs, work on streams of instructions called threads. While high-end CPUs work on as many as 8 complicated threads at a time, GPUs handle many more threads in parallel.

The table below shows just how many threads each generation of NVIDIA GPU can have in flight at the same time:

                         Fermi    GT200    G80
  Max Threads in Flight  24576    30720    12288

Fermi can't actually support as many threads in flight as GT200. NVIDIA found that on GT200 the majority of compute workloads were bound by shared memory capacity, not thread count. Thus the thread count went down, and shared memory size went up, in Fermi.

NVIDIA groups 32 threads into a unit called a warp (taken from the weaving term warp, referring to a group of parallel threads). In GT200 and G80, half of a warp was issued to an SM every clock cycle. In other words, it took two clocks to issue a full 32 threads to a single SM.
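
To make the warp grouping concrete, here is a minimal CUDA sketch (my own illustration, not from NVIDIA) that works out which warp and which lane within that warp a thread belongs to, assuming the 32-thread warp size described above; the kernel name and launch configuration are arbitrary:

    #include <cstdio>

    // Each block's threads are carved into 32-thread warps in order.
    // A thread's warp and lane indices follow directly from its position
    // within the block.
    __global__ void whoAmI()
    {
        int warpId = threadIdx.x / warpSize;  // which warp within the block (warpSize is 32 here)
        int laneId = threadIdx.x % warpSize;  // which of the 32 slots within that warp
        if (laneId == 0)
            printf("block %d: warp %d begins at thread %d\n",
                   blockIdx.x, warpId, threadIdx.x);
    }

    int main()
    {
        // Two blocks of 128 threads = four warps per block.
        // (Device-side printf requires a Fermi-class GPU or newer.)
        whoAmI<<<2, 128>>>();
        cudaDeviceSynchronize();
        return 0;
    }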

In previous architectures, the SM dispatch logic was closely coupled to the execution hardware. If you sent threads to the SFUs, the entire SM couldn't issue new instructions until those were done executing. So if the only execution units in use were the SFUs, the vast majority of the GT200/G80 SM sat idle. That's terrible for efficiency.

Fermi fixes this. There are two independent dispatch units at the front end of each SM in Fermi. These units are completely decoupled from the rest of the SM. Each dispatch unit can select and issue half of a warp every clock cycle. The threads can be from different warps in order to optimize the chance of finding independent operations.

There's a full crossbar between the dispatch units and the execution hardware in the SM. Each unit can dispatch threads to any group of units within the SM (with some limitations).

The inflexibility in NVIDIA's threading architecture is that every thread in a warp must be executing the same instruction at the same time. If they are, then you get full utilization of your resources. If they aren't, then some units go idle.
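
To show what that divergence looks like in practice, here is a small CUDA sketch of my own (not anything from the article): even and odd lanes of each warp take different branches, so the hardware runs the two paths back to back with half the lanes masked off each time, roughly halving utilization for this stretch of code:

    #include <cuda_runtime.h>

    // Even and odd lanes within a warp diverge. Because a warp issues one
    // instruction for all 32 threads, both branches execute in turn, each
    // with the non-participating lanes idle.
    __global__ void divergent(float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
        {
            if ((threadIdx.x & 1) == 0)
                out[i] = out[i] * 2.0f;   // even lanes
            else
                out[i] = out[i] + 1.0f;   // odd lanes
        }
    }

    int main()
    {
        const int n = 256;
        float *d_out;
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemset(d_out, 0, n * sizeof(float));
        divergent<<<(n + 127) / 128, 128>>>(d_out, n);
        cudaDeviceSynchronize();
        cudaFree(d_out);
        return 0;
    }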

A single SM can execute:

  Fermi            FP32   FP64   INT   SFU   LD/ST
  Ops per clock    32     16     32    4     16

If you're executing FP64 instructions, the entire SM can only run at 16 ops per clock; you can't dual-issue FP64 and SFU operations.
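
To put rough numbers on the table above, here is a back-of-the-envelope sketch; the 1.5 GHz shader clock and 16-SM count are my own assumptions for illustration, not announced specifications:

    #include <cstdio>

    // Peak-throughput arithmetic from the per-SM issue rates in the table,
    // treating each op as a fused multiply-add (two FLOPs), as peak-FLOPS
    // figures usually do.
    int main()
    {
        const double hotClockGHz = 1.5; // assumed shader clock
        const int    smCount     = 16;  // assumed number of SMs on the full chip
        const int    fp32PerClk  = 32;  // FP32 ops per SM per clock (table above)
        const int    fp64PerClk  = 16;  // FP64 ops per SM per clock (half rate)

        double fp32Gflops = fp32PerClk * 2.0 * smCount * hotClockGHz;
        double fp64Gflops = fp64PerClk * 2.0 * smCount * hotClockGHz;
        printf("Assumed peak: %.0f GFLOPS FP32, %.0f GFLOPS FP64\n",
               fp32Gflops, fp64Gflops);
        return 0;
    }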

The good news is that the SFU doesn't tie up the entire SM anymore. One dispatch unit can send 16 threads to the array of cores, while the other can send 16 threads to the SFU. After two clocks, the dispatchers are free to send another pair of half-warps out again. As I mentioned before, in GT200/G80 the entire SM was tied up for a full 8 cycles after an SFU issue.
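
As a purely illustrative example of the kind of work that lands on the SFUs, the fast-math intrinsics in the sketch below (__sinf, __expf) execute on the SFUs, while the ordinary multiply-add runs on the FP32 cores, giving the two dispatch units independent streams of work to issue side by side:

    #include <cuda_runtime.h>

    // __sinf and __expf are hardware fast-math intrinsics handled by the SFUs;
    // the multiply-add at the end runs on the regular FP32 cores. On Fermi the
    // dispatchers can issue the two kinds of work independently, so SFU work no
    // longer stalls the whole SM the way it did on GT200/G80.
    __global__ void mixedWork(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
        {
            float x = in[i];
            float s = __sinf(x);        // SFU
            float e = __expf(-x * x);   // SFU
            out[i] = s * e + 0.5f * x;  // FP32 cores
        }
    }

    int main()
    {
        const int n = 1 << 20;
        float *d_in, *d_out;
        cudaMalloc(&d_in,  n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemset(d_in, 0, n * sizeof(float));
        mixedWork<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
        cudaDeviceSynchronize();
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }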

The flexibility is nice; or rather, the inflexibility of GT200/G80 was horrible for efficiency, and Fermi fixes that.

Comments

  • yacoub - Thursday, October 1, 2009 - link

    uh-oh, boys, he's foaming at the mouth. time to put him down.
  • SiliconDoc - Thursday, October 1, 2009 - link

    Ah, another coward defeated. No surprise.
  • yacoub - Wednesday, September 30, 2009 - link

    "The motivation behind AMD's "sweet spot" strategy wasn't just die size, it was price."

    LOL, no it wasn't. Not when everyone, even Anandtech staff, anticipated the pricing for the two Cypress chips to be closer to $199 and $259, not the $299 and $399 they MSRP'd at.

    This return to high GPU prices is disheartening, particularly in this economy. We had better prices for cutting edge GPUs two years ago at the peak of the economic bubble. Today in the midst of the burst, they're coming out with high-priced chips again. But that's okay, they'll have to come down when they don't get enough sales.
  • SiliconDoc - Thursday, October 1, 2009 - link

    It was fun for half a year as the red fans were strung along with the pricing fantasy here.
    Now of course, well the bitter disappointment, not as fast as expected and much more costly. "low yields" - you know, that problem that makes ati "smaller dies" price like "big green monsters" (that have good yields on the GT300).
    --
    But, no "nothing is wrong, this is great!" Anyone not agreeing is "a problem". A paid agent, too, of that evil money bloated you know who.
  • the zorro - Thursday, October 1, 2009 - link

    silicon duck, please take a valium i'm worried about you.
  • SiliconDoc - Thursday, October 1, 2009 - link

    Another lie, no worry, you're no physician, but I am SiliconDoc, so grab your gallon red water bottle reserve for your overheating ati card and bend over and self-administer your enema, as usual.
  • araczynski - Wednesday, September 30, 2009 - link

    sounds like ati will win the bang for the buck war this time as well. at least it makes the choice easier for me.
  • marc1000 - Wednesday, September 30, 2009 - link

    Some time ago I heard that the next gen of consoles would run DX11 (Playstation2 and Xbox were DX7, PS3 and X360 DX9. So PS4 and X720 could perfectly be DX11). If this is the case, we are about to see new consoles with really awesome graphics - and then the GPU race would need to start over to more and more performance.

    Do you guys have any news on those new consoles development? It could complete the figure in the new GPU articles this year.
  • Penti - Friday, October 2, 2009 - link

    I think you mean DX9 class hardware, PS3 which has zero DX9 support and XBOX360 has DX9c class support but a console specific version. PS3 was using OpenGL ES 1.0 with shaders and other features from 2.0 as it was released pre OpenGL ES 2.0 spec. The game engines don't need the DX API. It doesn't matter to game content developers anyway.

    Xbox was actually DirectX 8.1 equivalent. As said next gen consoles are years away. Larrabee and fermi will have been long out by then.
  • haukionkannel - Thursday, October 1, 2009 - link

    Rumours say that next generation consoles will be released around 2013-2014...
    But who can say...
