Derek's Conjecture Regarding SP Pipelining and TMT

Temporal Multithreading

We know that the execution of instructions in each SP is pipelined. We know that SP throughput is one instruction per clock cycle and that, rather than stalling the pipeline, the Tesla architecture usually doesn't need to wait on computational results or memory operations because it is highly likely another thread will be ready to execute. Context switches happen every four clocks from the perspective of the SPs within an SM, and in those four clocks each of the 32 threads in the currently active and executing warp will be serviced.

We don't know how the executing threads are organized, except that SMs process warps in two groups of 16 threads. That this point was made is of interest, but there are too many possibilities for us to come up with any real guess as to how instructions for groups of 16 threads are issued to 8 physical SPs. Offhand, we asked whether the SPs support hardware-level SMT (simultaneous multithreading) like Hyper-Threading, and the answer we got was no, but with a curious twist: "... they are pipelined processors and support many threads in progress in the pipeline." This was the light-switch moment that made many of the potential advantages of this architecture click for us.

If throughput is really one instruction per clock per SP, and each SP handles 4 threads from a warp over 4 clock cycles, then the pipeline is actually working on an instruction from a different thread at every stage.
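To make the arithmetic concrete, here is a minimal sketch of how a 32-thread warp could be walked across 8 SPs over four clocks. The round-robin thread-to-SP assignment is our assumption for illustration; NVIDIA hasn't documented the exact interleaving.

```cuda
// Sketch: one way a 32-thread warp could map onto 8 SPs over 4 clocks.
// The round-robin assignment below is an assumption for illustration,
// not a documented NVIDIA mechanism. Plain host code; builds with nvcc
// or any C++ compiler.
#include <cstdio>

int main() {
    const int WARP_SIZE       = 32;
    const int SPS_PER_SM      = 8;
    const int CLOCKS_PER_WARP = WARP_SIZE / SPS_PER_SM;  // = 4

    for (int clk = 0; clk < CLOCKS_PER_WARP; ++clk) {
        printf("clock %d:", clk);
        for (int sp = 0; sp < SPS_PER_SM; ++sp) {
            // Each SP issues one instruction for one thread per clock,
            // so over 4 clocks every thread in the warp is serviced once.
            int thread_id = clk * SPS_PER_SM + sp;
            printf("  SP%d->t%02d", sp, thread_id);
        }
        printf("\n");
    }
    return 0;
}
```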

This is actually a multithreading technique known as fine-grained TMT, or temporal multithreading, and it differs from SMT in that it doesn't expose virtual parallel processors to the software but instead processes multiple threads in different time slices. TMT isn't some hot new technology you missed when it hit the scene: TMT is what computers have made use of for decades to process the many threads running on a single CPU concurrently without starving them. On a desktop CPU we are very familiar with coarse-grained multithreading, where a single thread is serviced for a while before a context switch happens and another thread starts running. This context switch will normally occur after a certain number of cycles, or if a higher-priority thread needs the processor, or if the thread needs to wait on IO or memory for something.

The really interesting bit comes in the differences between fine- and coarse-grained TMT. In coarse-grained implementations (what we are all used to), all the pipeline stages of a processor are servicing an instruction stream from a single context, whereas in fine-grained implementations we can have multiple context switches happening within the pipeline, down to a context switch at every stage. Making such fine-grained implementations happen can be tough, but NVIDIA has used a couple of tricks to make it easier to manage.

In G80 and GT200, because context is stored per warp, even though the SPs are working on an instruction for a different thread in every pipeline stage, they are not working on a different context at every pipeline stage. Each SP processes four threads in a row from the same warp and thus from the same context. Because it is incredibly likely at 1.5GHz that the SPs have more than 4 pipeline stages, we will still see more than one context switch within the pipeline itself, but it still isn't down to a different context for every stage.
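Here is a toy simulation of that difference. The 8-stage depth and the four-clock switch interval are assumptions made purely for illustration (the real depth is discussed below); the point is only how many distinct contexts can coexist inside the pipeline under each scheme.

```cuda
// Toy simulation: coarse- vs fine-grained temporal multithreading.
// Each pipeline stage holds the context ID of the instruction in that
// stage (-1 = empty). The 8-stage depth and switch intervals are
// illustrative assumptions, not measured hardware values.
#include <cstdio>

const int DEPTH = 8;  // assumed pipeline depth, for illustration only

// Advance the pipeline by one clock, issuing an instruction from 'ctx'.
void step(int pipe[], int ctx) {
    for (int s = DEPTH - 1; s > 0; --s) pipe[s] = pipe[s - 1];
    pipe[0] = ctx;
}

void run(const char *label, int clocks_per_context) {
    int pipe[DEPTH];
    for (int s = 0; s < DEPTH; ++s) pipe[s] = -1;  // start empty
    printf("%s\n", label);
    for (int clk = 0; clk < 12; ++clk) {
        int ctx = clk / clocks_per_context;  // which context issues this clock
        step(pipe, ctx);
        printf("  clk %2d  stages:", clk);
        for (int s = 0; s < DEPTH; ++s) printf(" %2d", pipe[s]);
        printf("\n");
    }
}

int main() {
    // Coarse-grained: one context owns the pipeline for a long stretch.
    run("coarse-grained (switch every 100 clocks):", 100);
    // Fine-grained with per-warp context: a new context can enter every
    // 4 clocks, so several contexts coexist inside the pipeline at once.
    run("fine-grained (switch every 4 clocks):", 4);
    return 0;
}
```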

So what's the big deal? Latency insensitivity and a maximal avoidance of pipeline bubbles and stalls.

In a modern CPU architecture, we see many instructions from the same thread running one after the other. If everything is running as smoothly as possible, we have as many instructions retiring per clock cycle as we are capable of issuing per clock cycle, but this isn't guaranteed. Data dependencies, memory operations, cache misses and the like cause instructions to wait in the pipeline, which means clock cycles go by without as much work as possible being done. There are many techniques to reduce this sort of delay. Data forwarding between pipeline stages is necessary to accommodate cases where one instruction is dependent on the result of the previous one. This works by forwarding the result from one stage of the pipeline back to a previous stage so that instructions needing that data won't have to wait for it. Even Hyper-Threading is a technique to help increase pipeline utilization: it makes one pipeline look like two different processors in order to fill it with more independent instructions.
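As a deliberately trivial example of the kind of read-after-write dependency that forwarding exists to hide:

```cuda
// Two back-to-back dependent operations: the second cannot execute
// until the first has produced 'p'. In an in-order pipeline this
// read-after-write hazard either stalls the pipe or is hidden by
// forwarding the result from a late stage back to the dependent
// instruction's input.
#include <cstdio>

float dependent_pair(float a, float b, float c) {
    float p = a * b;   // instruction 1: produces p
    return p + c;      // instruction 2: consumes p on the very next issue
}

int main() {
    printf("%f\n", dependent_pair(2.0f, 3.0f, 1.0f));  // prints 7.0
    return 0;
}
```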

Fine-grained TMT eliminates the need for data forwarding because there are zero dependent instructions coming down the pipeline: warps are context switched out after issuing one instruction for each independent thread, and if NVIDIA's scheduler does its job right, warps won't be rescheduled until their data is available. Techniques like Hyper-Threading are unnecessary because the pipeline is already full of instructions from independent threads at every stage.
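A sketch of that scheduling idea as we read it (our interpretation, not a documented NVIDIA design): a warp that issues a long-latency operation simply isn't eligible again until its result is ready, so a producer is never chased down the pipe by its consumer. The 8-warp pool and 8-clock latency below are made-up numbers for illustration.

```cuda
// Sketch of latency hiding by warp selection (our interpretation, not a
// documented NVIDIA scheduler). A warp that issues a long-latency
// operation is not considered again until its result is ready, so no
// dependent instruction ever follows its producer down the pipeline.
#include <cstdio>

const int NUM_WARPS = 8;         // illustrative pool size
int ready_at[NUM_WARPS] = {0};   // clock at which each warp may issue again

int pick_warp(int clk) {
    for (int w = 0; w < NUM_WARPS; ++w)
        if (ready_at[w] <= clk) return w;   // any ready warp will do
    return -1;                              // nothing ready: a genuine bubble
}

int main() {
    for (int clk = 0; clk < 16; ++clk) {
        int w = pick_warp(clk);
        if (w < 0) { printf("clk %2d: stall\n", clk); continue; }
        // Pretend every instruction has 8 clocks of latency; the warp
        // waits that long before becoming eligible again, while other
        // warps fill the intervening issue slots.
        ready_at[w] = clk + 8;
        printf("clk %2d: issue from warp %d\n", clk, w);
    }
    return 0;
}
```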

Managing a pool of warps drawn from a mix of different shader programs and different types of shaders (vertex, geometry and pixel) means that the chance that every warp being serviced by an SM is waiting on the same data is minimized, but having multiple warps from the same shader program is also a good idea to make sure that once data arrives it enables the processing of more than one warp. Of course, since SMs within one TPC share texture address and filter units as well as the texture cache, it is also a good idea to load up similar warps across the SMs on a TPC so that texture lookups by one thread might also be useful to many others. The balance here would be interesting to know, but we'll probably have to wait for Intel to enter the graphics market before we start getting confirmation on the really cool architectural aspects of all this.

How Deep is an SP?

As for pipeline depth, NVIDIA isn't helping us out with this one either, but let's walk through a little reasoning and see what we can come up with. At the insane and stupid extreme, we know NVIDIA wouldn't build a machine with a pipeline longer than it has threads in flight to fill. We'll assume G80 and GT200 are equally pipelined, as they are clocked very similarly, and we'll use what we know about G80 to draw a baseline. With G80 having 24 warps in flight per SM and each warp taking up 4 pipeline stages per SP, SPs can't possibly have more than 96 stages. Sure, that's crazy anyway, but if we expect that any warp executing in the pipeline won't be rescheduled until completion, then we would expect a higher proportion of warps to be waiting than executing.

If we go on this assumption, we've got fewer than 48 stages, and I'd think it fair to guess that they'd want at least two thirds of their in-flight threads not in the pipeline, so that brings us down to a potential 32-stage pipeline. On the minimal end, there are at least 4 stages, because with any fewer, high-priority warps wouldn't get context switched at every opportunity: the instructions from the first threads scheduled would be completed and ready to go. Having 8 stages would give maximum flexibility, as warps could be scheduled every other opportunity if they were otherwise ready. This would also keep at most three contexts active at different points in the pipeline, and while this type of fine-grained TMT does offer advantages, it is not free to implement a pipeline with access to a high number of contexts. And it is possible to design a single-precision FP unit that can do a MAD in 8 cycles at 1.5GHz, though pointing at Itanium as the example is usually seen as extravagant.
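The bounds in the last two paragraphs boil down to a few lines of arithmetic. A quick sketch (the inputs are the G80 figures quoted above; the derived limits are our speculation, not vendor numbers):

```cuda
// Back-of-the-envelope pipeline-depth bounds from the reasoning above.
// Inputs are the G80 figures quoted in the article; the derived limits
// are speculation, not vendor-confirmed numbers.
#include <cstdio>

int main() {
    const int warps_in_flight = 24;  // per SM on G80
    const int stages_per_warp = 4;   // 32 threads / 8 SPs = 4 clocks of issue

    int absolute_max  = warps_in_flight * stages_per_warp;  // 96: every warp in the pipe at once
    int half_waiting  = absolute_max / 2;                    // 48: more warps waiting than executing
    int third_in_pipe = absolute_max / 3;                    // 32: at least 2/3 of warps waiting
    int minimum       = stages_per_warp;                     // 4: below this, switching every
                                                              //    4 clocks stops making sense
    printf("upper bounds: %d / %d / %d stages; lower bound: %d stages\n",
           absolute_max, half_waiting, third_in_pipe, minimum);
    return 0;
}
```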

It would be tough to put a finer point on it without some indication from NVIDIA, but at least 8 and at most 32 stages is as good as we can get looking at their architecture. But knowing that power and performance per watt are key concerns for NVIDIA, we can be fairly certain in eliminating anything higher than, say, 16 pipeline stages. Everyone remembers the space heater that was the Pentium 4 in general (and Prescott in particular), and it just isn't power efficient to go too deep.

By now we are at a fairly reasonable minimum of 8 stages, and taking both architecture and power into consideration, 16 seems like the max we could believe. Of course, that's still a range from one extreme to the other. Anand's original guess was 12-15, but Derek was able to sell him on 8 stages as the sweet spot because of the simplicity of the cores (there are no decode or scheduling stages in the SPs). So was all that guessing about pipeline stages useful? Not really. But it sure was fun!

Now let's blow your mind and suggest that all this, combined with the other details of NVIDIA's architecture, points to all SP operations having the same latency. This way the entire thing would just work like a clock: one in, one out, very little overhead, and as simple as possible. All the overhead is managed outside the SP, and the compute core can just focus on what it does best (as long as the rest of the chip does its job and keeps it fed).

UPDATE: We got lots of responses on this page, and many CUDA developers, graphics software designers and hardware enthusiasts emailed us links to resources on these topics. We discovered some very useful info: instruction latency is actually about 22 cycles in G80, so Anand and I were both way off. This and a couple of other things we learned are available in our quick update on the GT200 pipeline, published a couple of days after this article first went live.
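For the curious, a figure like that can be approximated from software with the kind of microbenchmark CUDA developers pointed us to: run a long chain of dependent multiply-adds in a single thread (so no other warp can hide the latency), time it with the on-chip cycle counter, and divide by the chain length. The sketch below is our own minimal version of the idea; the exact result will vary by GPU, driver and compiler.

```cuda
// Minimal dependent-MAD latency microbenchmark (a sketch of the
// technique, not a reproduction of anyone's published code). One thread
// runs a chain of dependent MADs; total cycles / chain length
// approximates per-instruction latency (around the ~22 cycles quoted
// above on G80-class parts).
#include <cstdio>

__global__ void mad_latency(float *out, long long *cycles) {
    const int N = 256;
    float x = out[0];              // runtime value keeps the chain from being folded away
    long long start = clock();     // per-SM cycle counter
    #pragma unroll
    for (int i = 0; i < N; ++i)
        x = x * 0.999f + 0.001f;   // each MAD depends on the previous result
    long long stop = clock();
    out[0] = x;                    // keep the result live
    *cycles = stop - start;
}

int main() {
    float *d_out; long long *d_cycles;
    cudaMalloc(&d_out, sizeof(float));
    cudaMalloc(&d_cycles, sizeof(long long));
    cudaMemset(d_out, 0, sizeof(float));

    mad_latency<<<1, 1>>>(d_out, d_cycles);  // a single thread: nothing hides the latency
    cudaDeviceSynchronize();

    long long cycles = 0;
    cudaMemcpy(&cycles, d_cycles, sizeof(cycles), cudaMemcpyDeviceToHost);
    printf("~%.1f cycles per dependent MAD\n", (double)cycles / 256.0);

    cudaFree(d_out);
    cudaFree(d_cycles);
    return 0;
}
```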

Comments

  • gigahertz20 - Monday, June 16, 2008

    I think these ridiculous prices and lackluster performance are just a way for them to sell more SLI motherboards. Who would buy a $650 GTX 280 when you can buy two 8800GT's with a SLI mobo and get better performance? Especially now that the 8800GT's are approaching around $150.
  • crimson117 - Monday, June 16, 2008

    It's only worth riding the bleeding edge when you can afford to stay there with every release. Otherwise, 12 months down the line, you have no budget left for an upgrade, while everyone else is buying new $200 cards that beat your old $600 card.

    So yeah you can buy an 8800GT or two right now, and you and me should probably do just that! But Richie Rich will be buying 2x GTX 280's, and by the time we could afford even one of those, he'll already have ordered a pair of whatever $600 cards are coming out next.
  • 7Enigma - Tuesday, June 17, 2008

    Nope, the majority of these cards go to Alienware/Falcon/etc. top of the line, overpriced pre-built systems. These are for the people that blow $5k on a system every couple years, don't upgrade, might not even seriously game, they just want the best TODAY.

    They are the ones that blindly check the bottom box in every configuration for the "fastest" computer money can buy.
  • gigahertz20 - Monday, June 16, 2008

    Very few people are richie rich and stay at the bleeding edge. People that are very wealthy tend not to be computer geeks and purchase their computers from Dell and whatnot. I'd say at least 96% of gamers out there are value oriented; these $650 cards will not sell much at all. If anything, you'll see people claim to have bought one or two of these in forums and other places, but they're just lying.
  • perzy - Monday, June 16, 2008

    Well I for one am waiting for Larrabee. Maybe (probably) it isn't all that it's cracked up to be, but I want to see.

    And what about some real power saving, Nvidia?
  • can - Monday, June 16, 2008

    I wonder if you can just flash the BIOS of the 260 to get it to operate as if it were a 280...
  • 7Enigma - Tuesday, June 17, 2008

    You haven't been able to do this for a long time....they learned their lessons the hard way. :)
  • Nighteye2 - Monday, June 16, 2008

    Is it just me, or does this focus on compute power mean Nvidia is starting to get serious about using the GPU for physics, as well as graphics? It's also in-line with the Ageia acquisition.
  • will889 - Monday, June 16, 2008

    At the point where NV has actually managed to position SLI mobos and GPUs such that you actually need that much power to get decent FPS (above 30 average) from games, gaming on the PC will be entirely dead to all but the most esoteric. It would be different if there were any games worth playing or as many games as the console brethren have. I thought GPUs/cases/power supplies were supposed to become more efficient? E.g. smaller but faster, sort of how the TV industry made TVs bigger yet smaller in footprint with way more features - not towering cases with 1200W PSUs and 2x GTX 280 GPUs? All this in the face of drastically raised gas prices?

    Wanna impress me? How about a single GPU with the PCB size of a 7600GT/GS that's 15-25% faster than a 9800GTX and can fit into a SFF case, needing a small power supply AND able to run passively at moderate temps. THAT would be impressive. No, Sergeant Tom and his TONKA_TRUCK crew just have to show how beefy their toys can be and yank your wallet chains for it. Hell, everyone needs a Boeing 747 in their case, right? 'Cause that's progress for those 1-2 gaming titles per year that give you 3-4 hours of enjoyable PC gaming.....

    /off box
  • ChronoReverse - Tuesday, June 17, 2008

    The 4850 might actually hit that target...
