A More Efficient Architecture

GPUs, like CPUs, work on streams of instructions called threads. While a high-end CPU can work on as many as 8 complex threads at a time, a GPU handles many more threads in parallel.

The table below shows just how many threads each generation of NVIDIA GPU can have in flight at the same time:

                        G80      GT200    Fermi
Max Threads in Flight   12288    30720    24576

Fermi actually can't keep as many threads in flight as GT200. NVIDIA found that in GT200, the majority of compute workloads were bound by shared memory size, not thread count. Thus in Fermi the thread count went down and the shared memory size went up.

NVIDIA groups 32 threads into a unit called a warp (a term borrowed from weaving, where the warp is the set of parallel threads strung on a loom). In GT200 and G80, half of a warp was issued to an SM every clock cycle. In other words, it takes two clocks to issue a full 32 threads to a single SM.
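A quick back-of-the-envelope check of those figures (thread counts from the table above; the script itself is purely illustrative):

```python
# Warp arithmetic from the text: 32 threads per warp, and GT200/G80
# issue half a warp (16 threads) to an SM per clock.
WARP_SIZE = 32
HALF_WARP = WARP_SIZE // 2

threads_in_flight = {"G80": 12288, "GT200": 30720, "Fermi": 24576}

# How many warps each chip can track at once
warps = {chip: n // WARP_SIZE for chip, n in threads_in_flight.items()}
print(warps)  # {'G80': 384, 'GT200': 960, 'Fermi': 768}

# Issuing half a warp per clock means a full warp takes two clocks
clocks_per_warp = WARP_SIZE // HALF_WARP
print(clocks_per_warp)  # 2
```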

In previous architectures, the SM dispatch logic was closely coupled to the execution hardware. If you sent threads to the SFU, the entire SM couldn't issue new instructions until those instructions were done executing. If the only execution units in use were in your SFUs, the vast majority of your SM in GT200/G80 went unused. That's terrible for efficiency.

Fermi fixes this. There are two independent dispatch units at the front end of each SM in Fermi. These units are completely decoupled from the rest of the SM. Each dispatch unit can select and issue half of a warp every clock cycle. The threads can come from different warps, which maximizes the chance of finding independent operations.
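A toy model of the dual dispatch units might look like this (the round-robin warp selection and all of the names here are assumptions for illustration, not NVIDIA's actual scheduling policy):

```python
# Hypothetical sketch of Fermi's dual dispatch: each cycle, two
# independent dispatch units each pick half a warp (16 threads),
# possibly from different warps.
from collections import deque

def dispatch(ready_warps, cycles):
    """ready_warps: deque of warp ids with a pending instruction."""
    issued = []
    for cycle in range(cycles):
        for _unit in range(2):               # two independent dispatch units
            if ready_warps:
                w = ready_warps.popleft()    # pick the next ready warp
                issued.append((cycle, w))
                ready_warps.append(w)        # warp stays ready for later halves
    return issued

log = dispatch(deque(["w0", "w1", "w2"]), cycles=2)
print(log)  # [(0, 'w0'), (0, 'w1'), (1, 'w2'), (1, 'w0')]
```

Note that in cycle 0 the two units issue from two different warps, which is exactly the independence the text describes.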

There's a full crossbar between the dispatch units and the execution hardware in the SM. Each unit can dispatch threads to any group of units within the SM (with some limitations).

The inflexible part of NVIDIA's threading architecture is that every thread in a warp must execute the same instruction at the same time. If they do, you get full utilization of your execution resources. If they don't, some units go idle.

A single SM can execute:

Fermi           FP32   FP64   INT   SFU   LD/ST
Ops per clock    32     16     32     4     16

If you're executing FP64 instructions, the entire SM runs at only 16 ops per clock. You can't dual issue FP64 and SFU operations.
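Plugging the table into a rough per-SM throughput estimate (the 1.5 GHz shader clock here is an assumed figure for illustration, not one from this article):

```python
# Back-of-the-envelope per-SM throughput from the ops/clock table above.
ops_per_clock = {"FP32": 32, "FP64": 16, "INT": 32, "SFU": 4, "LD/ST": 16}
clock_hz = 1.5e9  # assumed shader clock, illustrative only

for unit, ops in ops_per_clock.items():
    print(f"{unit}: {ops * clock_hz / 1e9:.0f} Gops/s per SM")
```

At that assumed clock, FP64 peaks at half the FP32 rate, mirroring the 16-vs-32 ops/clock split in the table.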

The good news is that the SFU doesn't tie up the entire SM anymore. One dispatch unit can send 16 threads to the array of cores, while another can send 16 threads to the SFU. After two clocks, the dispatchers are free to send another pair of half-warps out again. As I mentioned before, in GT200/G80 the entire SM was tied up for a full 8 cycles after an SFU issue.

The flexibility is nice; or rather, the inflexibility of GT200/G80 was horrible for efficiency, and Fermi fixes that.
