NVIDIA's Scalable Link Interface: The New SLI

by Derek Wilson on 6/28/2004 2:00 PM EST

  • quanta - Monday, July 12, 2004 - link

    >> I would choose "another old card" because from what NV is saying... performance improvements can go up to 90%, and at least a 20-30% increase with a twin card, and if history repeats itself, the next incarnation of cards would be about as fast... like at least a 20-30% speed increase. It would be a smarter choice paying $200 for an upgrade instead of another $500 for an upgrade.

    Even if what NVIDIA says is true, it will only be, at best, true for applications supporting current feature sets. If/when tessellation techniques get 'standardized', or NVIDIA decides to add new features (e.g. 3Dc normal map compression, wider pipes, ray tracing, 64-bit per channel rendering, smarter shader ALUs/instruction scheduler), the old cards are going to be penalized greatly, sometimes even in existing (GeForce 6 generation) applications. Such performance penalties are not going to be compensated for easily by adding another GeForce 6800 Ultra Extreme video card, if at all.
  • CZroe - Saturday, July 3, 2004 - link

    <<The last of the SLI Voodoo2s had dual GPUs on a single board for one PCI slot. I can't see why the same couldn't be done for a dual 6800 GPU board on a single x16 PCIe slot, which is nowhere near saturation with current GPUs.>>

    No, the Quantum3D Obsidian X-24 Voodoo2 SLI on a single card was actually two boards. They were stacked on top of each other with a special connector. This caused overheating problems, but it didn't matter much. It used a PCI bridge chip to connect two PCI cards to the same PCI slot; it was not a dual-GPU design. There were four giant 3dfx chips on the thing, so I think all Voodoo2s were two-chip boards. There was a crazy Voodoo1 SLI board created at one time, but it was probably using an undocumented method for adding a second chip like the Voodoo2s had. Also, it was not the last Voodoo2 card by far... even 3dfx themselves started making the Voodoo2 1000 boards!
  • gopinathbill - Thursday, July 1, 2004 - link

    Yes, SLI has come back and everyone knows what it is and how good it is. The question now is about NVIDIA chasing the king-of-the-hill title. Every time NVIDIA has come up with a new card, its rival ATI has spanked back with its latest card, which is now the current king. What NVIDIA has come up with now is this SLI thing, a combination of two PCIe cards. Here everybody is wooed by the frame rates this dual-card setup may achieve. My feelings are:

    1) This does not prove that NVIDIA has a new, more powerful card in its pocket
    2) What if ATI does the same double pack? (This may beat the NVIDIA double pack)
    3) This technology is not new; there is a chance a guy with two previous NVIDIA cards can achieve the same frame rates as the current fastest single NVIDIA card (in theory)
    4) This may bring down the GRAPHICS WAR. Anyone who wants more power can just keep on adding another card:
    say now we have a double pack, next a triple, and so on..... want more power? Just fit another card, just like expanding your RAM modules
    5) Now say I have a double pack of GeForce 5800s, and next year I want more power to play DOOM 5 ;-). How do I upgrade: buy another GeForce 5800 and make a triple pack, or what? What if a new GeForce 7000 card has come out and I want to use it? Then I have to buy two of them, eh? We'll keep this aside for a while

    Yes, for any gaming freak the graphics card is his heart/soul. No war, no pun. This can be looked at in a different way also:

    1) Want more fps? Add a new card (again, a lot of questions on this)
    2) Prices will come down (nothing is as sweet as this)


    The same can be seen here

    http://www.hardwareanalysis.com/content/forum/1728...
  • DigitalDivine - Wednesday, June 30, 2004 - link

    >> you will be the one making the hard decision on whether to get another old card (if the specific manufacturer still makes it) or get one (or two) new cards to replace the old one. Considering video card models get obsoleted rather quickly, neither solution is very attractive even for performance enthusiasts.

    I would choose "another old card" because from what NV is saying... performance improvements can go up to 90%, and at least a 20-30% increase with a twin card, and if history repeats itself, the next incarnation of cards would be about as fast... like at least a 20-30% speed increase. It would be a smarter choice paying $200 for an upgrade instead of another $500 for an upgrade.

    I have a gut feeling that NVIDIA will provide an affordable way for consumers to get dual PCI Express x16 motherboards.

    I expect to pay $150 to $200 for a dual PCI Express motherboard, about the same price as the most expensive P4 mobos, and if it could be had for less, that would be perfect. And there is no way in hell I would buy a premade Alienware system.

    The dual PCI Express video card setup sounds great, along with the possibility of dual-core Athlons / P4s. ;)
  • quanta - Wednesday, June 30, 2004 - link

    Actually, #33, NVIDIA's SLI and Alienware's Video Array are not really about cost savings. Both solutions not only require all cards to use identical (model and clock rate) processors, but the cards also have to come from the same manufacturer. Furthermore, upgrades are not going to be flexible. If NVIDIA decides to make a new GeForce model, you will be the one making the hard decision on whether to get another old card (if the specific manufacturer still makes it) or get one (or two) new cards to replace the old one. Considering video card models get obsoleted rather quickly, neither solution is very attractive even for performance enthusiasts.

    It is possible that there may be some way to run asymmetric configurations, but it is highly unlikely. After all, SLI and VA's goal is not distributed computing.
  • Phiro - Wednesday, June 30, 2004 - link

    Every pro and con of dual-core GPUs vs. daisy-chained hardware has already been debated by the market over the last umpteen years in the form of single-core CPUs/SMP vs. multi-core CPUs.

    You can say "oh, but the upfront cost of a multi-core GPU is so high!" and "oh, it's so wasteful!!" - but every friend of mine that has a dual-CPU motherboard either a) only has one CPU in it or b) has two slow-ass crap CPUs.

    You pay a huge premium on the motherboard, and it never works out. You get out of date too quickly; six to 12 months after you put your mega-bucks top-of-the-line dual-CPU rig together, your neighbor can spend half as much on a single CPU and smoke your machine. That's how it *always* happens.

    Give me a user-manageable socket on the video card at the very least, and support multi-core GPUs as well. Heck, call multi-core GPUs "SLI" for all I care.

  • Pumpkinierre - Wednesday, June 30, 2004 - link

    #33, the beauty of a dual-core 6800 card would be that it would work on cheaper ordinary mobos with a single x16 PCIe slot. If it is possible (and I don't see why not), an enterprising OEM is sure to make one unless nVidia puts its foot down (and I don't see why they should; two GPUs means double the profit).
    True, a single card now and a second card later for extra power makes sense, but you've got to have a two-slot x16 workstation board, and those are rare and expensive. However, you are right that the single-card option would also be expensive. Unlike the Voodoo2s and ATI Rage Fury MAXXs, where half-frames are interlaced, the nVidia SLI solution just adds grunt to the processing power of the GPU card. So it is feasible to have more than two GPUs, e.g. 3 or 4, but the heat would be a problem. The single card may also make the driver-based load balancing simpler by having dedicated on-board intelligence handling this function. That way the software and system would still see the single-card SLI as a single GPU.
  • DigitalDivine - Wednesday, June 30, 2004 - link

    Dual-core cards don't have the flexibility of actually having two physical cards.

    Having the option of upgrading later and practically doubling your performance for cheap is an incentive: pay $300 now and maybe $200 or $150 later for the other card, instead of $600 for just one card.

    Also, dual-core cards get obsolete quickly. Look at the Voodoo 5, for instance. Its dual-core design made it very, very expensive; you pay the equivalent of two cards when you have one physical card. It takes away the flexibility of separating the cards in the future and using them for other purposes, and upgradability is abysmal.

    Also, having a dual-core card splits your resources in half.
  • Anemone - Tuesday, June 29, 2004 - link

    Why not start up some dual-core cards? I'm sure it would be far cheaper and quite effective to just mount two 6800 GPUs on a card and let 'er rip :)

    just a thought...
  • artifex - Tuesday, June 29, 2004 - link

    Um... doesn't NVIDIA now have a single-board design that incorporates this type of thing, just announced by Apple as the "NVIDIA GeForce 6800 Ultra DDL," to drive their 30-inch LCD panel that requires (!) two DVI inputs? (The card has 4 DVI connectors!)

    Or am I reading this wrong?
  • Falloutboy525 - Tuesday, June 29, 2004 - link

    Actually, I wouldn't be surprised if one of the board manufacturers puts two cores on one card. But man, just thinking about the physical size of the card gives me nightmares.
  • Pumpkinierre - Tuesday, June 29, 2004 - link

    The last of the SLI Voodoo2s had dual GPUs on a single board for one PCI slot. I can't see why the same couldn't be done for a dual 6800 GPU board on a single x16 PCIe slot, which is nowhere near saturation with current GPUs. Load balancing would be accomplished on board. In fact, they could do it on AGP 8x as well. They could extend this to multiple GPUs (also possible on a 3x x16 PCIe slot mobo, plus a 3-slot bridge) if it ever came out. Just think of the cooling with a Prescott CPU thrown in! Put a Vapochill to room temperature!

    Backward daisy chaining of components is a great idea, but I doubt whether the greed of manufacturers will let it happen. The concept should not be limited to GPUs but extend to mobos/CPUs as well. A high-speed link bus (HyperTransport perhaps, but not I2B) should allow systems to act as a multiple-processor system, albeit with a little added latency. With parallel processing and multithreading around the corner, it would be useful to those who detest the enormous waste in the IT industry.
  • quanta - Tuesday, June 29, 2004 - link

    Actually, NFactor, the GeForce 6800's dedicated video codec is a step behind ATI's VideoShader. It adds transistor count for things that can already be done by the 3D core. As far as power consumption goes, we only have NVIDIA's word for the lower power requirement, but considering ATI also uses VideoShader for mobile parts, I suspect NVIDIA's claim only applies to NVIDIA's own products rather than ATI's.

    As far as multiprocessing goes, ATI had better catch up. After all, not every gamer can afford Evans & Sutherland simFUSION cards.
  • Phiro - Tuesday, June 29, 2004 - link

    Yes, but there's the economy of scale. Nvidia has a "single" production line churning out NV4x chips, and they package them according to their price point - no major modifications required.

    The 6800U & X800XT don't really qualify as "halo" products - they are high-end versions of the *same* product the majority of users buy.
  • klah - Tuesday, June 29, 2004 - link

    "And the whole "alienware sells 30k systems a year so there is a market for this" - 30k video cards a year is less than a drop in the bucket for the R&D spent on putting this together."

    The same could be said for the 6800U and x800XT. 99.9% of cards purchased will be sub-$200, so why bother with $500+ units? It's called a halo product. They are not built to make money. They are built for bragging rights and to generate a positive brand image. The 'buzz' this product creates for Nvidia is more substantial than spending the money on magazine ads and lan party sponsorships.

    ---------------

    "Excuse me, but I noticed that one 6800 Ultra takes two slots worth of airspace (due to the gigantic fan). So that means the Ultras would actually occupy the first and third PCIe slots"

    No. All PCIe slots are not the same. This setup requires two x16 slots. Dual x16 motherboards do not have any other slots between these. These two slots have about double the space between them compared to the rest of the x8, x4 and x2 slots.

    Nvidia is launching their nForce4 chipset later this year, which will support dual PCIe x16. That is probably when this product will become available at retail.
  • Phiro - Tuesday, June 29, 2004 - link

    Ugh, what a dumb, dumb waste of technology. Give me dual video cards (for dual DirectX/OpenGL displays), but not this SLI BS. This is far better served with multiple GPUs on the card, not multiple cards.

    If Nvidia is really so concerned with people being able to pay for the ultimate in performance, or with allowing people to "upgrade" without throwing everything away, Nvidia should go with a user-manageable socket on their cards and support multi-core GPUs.

    And as for the whole "Alienware sells 30k systems a year so there is a market for this" argument - 30k video cards a year is less than a drop in the bucket next to the R&D spent on putting this together.

    If this idiotic SLI reinvention delayed the release of the nv4x by a single day (and prolonged our nv3x agony), or increased the cost of the nv4x cards by a single dollar, Nvidia is once again crowned king of the dumbshits in my book.

    Good choice buying 3dfx, Nvidia. It took a few years, but Nvidia proved the old adage "You are what you eat." Nvidia's cards are hotter, larger, more complicated, and more proprietary every day.
  • ScuZZee - Tuesday, June 29, 2004 - link

    Excuse me, but I noticed that one 6800 Ultra takes up two slots' worth of airspace (due to the gigantic fan). So that means the Ultras would actually occupy the first and third PCIe slots (the second and fourth slots would be made useless since they would be blocked by the coolers).

    So does that mean the mobo has to space out the two PCIe slots to accommodate the two Ultras?
  • SpeekinSfear - Tuesday, June 29, 2004 - link

    barbary

    Just FYI, if you're gonna buy two, the GT model, which costs $399 instead of the Ultra's $499, can do it too. They're smaller, run less hot, and draw less power, and did I mention they cost $100 less? I think the only real difference is that the GT's clock speed is 50MHz lower.
  • barbary - Tuesday, June 29, 2004 - link

    So now I am stuck on what to buy.

    I have a Dell 670 workstation and I was going to buy an ATI X800. But now should I buy a 6800 Ultra??

    The question is, do I buy two now so I know I have a pair??

    If I do and this technology doesn't come along for months, I have wasted my money.

    If I don't buy two, I may never get a matching pair and will have wasted my money.
  • Swaid - Tuesday, June 29, 2004 - link

    It's not like you *have* to purchase two video cards for anything to work; that's only for the big-spending enthusiast nuts and the CG/CAD guys. It's already part of the GPU, so it's like an added bonus. The hard part in the beginning will be getting a motherboard that supports two PCIe x16 slots.
  • SpeekinSfear - Tuesday, June 29, 2004 - link

    I also prefer the NVIDIA 6800s over the ATI X800s (especially the GT model), but requiring two video cards to get the best performance is an inconsiderate progression. They're even encouraging devs to design stuff specially for this. It almost makes it seem like they can't make better video cards anymore, or like they don't care enough to try hard. Almost like they wanna slow down the video card performance pace, get everyone to buy two cards, and make money from quantity over quality. NVIDIA better ease up if they know what's good for them. They're already pushing us hard enough to get PCIe x16 mobos. If they get their heads too high up in the clouds, they may start to lose business because no one will be willing to pay for their stuff. Or maybe I'm just reading too much into this. :)
  • Jeff7181 - Tuesday, June 29, 2004 - link

    I thought it was a really big deal when they started combining VGA cards and 3D accelerator cards into an "all-in-one" package. Now to get peak performance you're going to have two cards again... sounds like a step back to me... not to mention a HUGE waste of hardware. If they want the power of two NV4x GPUs, make a GeForce 68,000 Super Duper Ultra Extreme that's a dual-GPU configuration.
  • NFactor - Tuesday, June 29, 2004 - link

    NVIDIA's new series of chips, in my opinion, is more impressive than ATI's. ATI may be faster, but NVIDIA is adding new technology like an on-chip video encoder/decoder and this SLI technology. I look forward to seeing it in action.
  • SpeekinSfear - Tuesday, June 29, 2004 - link

    DerekWilson

    I get what you're sayin'. I just think it's crazy! I try to stay somewhat up to speed, but this is just too much.
  • DerekWilson - Tuesday, June 29, 2004 - link

    SpeekinSfear --

    If you've got the money to grab a workstation board and 2x 6800 Ultras, I think you can spring for a couple hundred dollar workstation power supply. :-)
  • SpeekinSfear - Tuesday, June 29, 2004 - link

    I'm sorry, but I thought lots of people were having a hard enough time powering one 6800 Ultra. Either this is absurd or I don't know something. What kind of PSU are you gonna need to pull this off?
  • TrogdorJW - Monday, June 28, 2004 - link

    The CPU is already doing a lot of work on the triangles. Doing a quick analysis that determines where to send a triangle shouldn't be too hard. The only difficulty is the overlapping triangles that need to be sent to both cards, and even that isn't very difficult. The load balancing is going to be of much greater benefit than the added computation, I think. Otherwise, you risk instances where 75% of the complexity is in the bottom or top half of the screen, so the actual performance boost of a solution like Alienware's would only be 33% instead of 100%.
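
    (For reference, a minimal Python sketch of the per-triangle test described above; the function, vertex format, and split_y value are hypothetical illustrations of a split-frame scheme, not NVIDIA's actual driver or load-balancing logic.)

        def classify_triangle(vertex_ys, split_y):
            """Decide which card(s) a triangle goes to in a split-frame scheme,
            given its screen-space vertex y coordinates (y = 0 at the top of
            the frame) and the split line chosen by the load balancer."""
            y_min, y_max = min(vertex_ys), max(vertex_ys)
            if y_max <= split_y:
                return "top"     # entirely in the top half: top card only
            if y_min >= split_y:
                return "bottom"  # entirely in the bottom half: bottom card only
            return "both"        # straddles the split: send to both cards

        # Example: with the split at y = 384 on a 1536-line frame, a triangle
        # spanning y = 300..500 overlaps the split and is sent to both cards.
        print(classify_triangle((300, 450, 500), split_y=384))  # -> both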

    At one point, the article mentioned the bandwidth necessary to transfer half of a 2048x1536 frame from one card to the other. At 32-bit color, it would be 6,291,456 bytes, or 6 MB. If you were shooting for 100 FPS rates, then the bandwidth would need to be 600 MB/s - more than X2 PCIe but less than X4 PCIe if it were run at the same clockspeed as PCIe.

    If the connection is something like 16 bits wide (looking at the images, that seems like a good candidate - there are 13 pins on each side, I think, so 26 pins with 10 being for grounding or other data seems like a good estimate), then the connection would need to run at 300 MHz to manage transferring 600 MB/s. It might simply run at the core clockspeed, then, so it would handle 650 MB/s on the 6800, 700 MB/s on the GT, and 750+ MB/s on the Ultra and Ultra Extreme. Of course, how many of us even have monitors that can run at 2048x1536 resolution? At 1600x1200, you would need to be running at roughly 177 FPS or higher to max out a 650 MB/s connection.

    With that in mind, I imagine benchmarks with older games like Quake 3 (games that run at higher frame rates due to lower complexity) aren't going to benefit nearly as much. I believe we're seeing well over 200 FPS at 1600x1200 with 4xAA in Q3 with high-end systems, and somehow I doubt that the SLI connection is going to be able to push enough information to enable rates of 400+ FPS. (1600x1200x32 at 400 FPS would need 1400 MB/s of bandwidth between the cards just for the frames, not to mention any other communications.) Not that it really matters, though, except for bragging rights. :) More complex, GPU-bound games like Far Cry (and presumably Doom 3 and Half-life 2) will probably be happy to reach even 100 FPS.
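
    (For reference, a quick back-of-the-envelope Python script for the bandwidth figures in this comment; the 16-bit link width and the run-at-core-clock idea are the commenter's own guesses, not published specs.)

        BYTES_PER_PIXEL = 4   # 32-bit color
        MB = 2 ** 20          # binary megabytes, as used above

        def half_frame_mb(width, height):
            """Size of half of one frame, in MB."""
            return width * (height // 2) * BYTES_PER_PIXEL / MB

        def link_mb_per_s(width, height, fps):
            """Bandwidth needed to ship half of every frame to the other card."""
            return half_frame_mb(width, height) * fps

        print(half_frame_mb(2048, 1536))        # ~6 MB per half frame
        print(link_mb_per_s(2048, 1536, 100))   # ~600 MB/s at 100 FPS
        print(link_mb_per_s(1600, 1200, 177))   # ~648 MB/s, about what a 650 MB/s link allows
        print(link_mb_per_s(1600, 1200, 400))   # ~1465 MB/s for a 400 FPS Quake 3 run

        # A 16-bit-wide link moves 2 bytes per clock, so ~600 MB/s needs roughly
        # a 300 MHz clock; at the 6800's 325 MHz core clock it would move ~650 MB/s.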
  • glennpratt - Monday, June 28, 2004 - link

    Uhh, there's still the same number of triangles. If this is to be transparent to the games, then the cards themselves will likely split up the information.

    You come to some pretty serious conclusions based on exactly zero fact or logic.
  • hifisoftware - Monday, June 28, 2004 - link

    How much CPU load does it add? As I understand it, every triangle is analyzed as to where it will end up (top or bottom), and then the triangle is sent to the appropriate video card. This will add a huge load on the CPU. Is this thing going to be faster at all?
  • ZobarStyl - Monday, June 28, 2004 - link

    I completely agree with the final thought that if someone can purchase a dual-PCIe board and a single SLI-enabled card with the thought of grabbing an identical card later on, then this will definitely work out well. Plus, once a system gets old and is relegated to other purposes (secondary rigs), you could still separate the two and have two perfectly good GPUs. I seriously hope this is what nV has in mind.
  • Wonga - Monday, June 28, 2004 - link

    Hey hadders, I was thinking the same thing. Surely if these cards need such fancy cooling, they need a little bit of room to actually get some air to those coolers??? And to think I used to get worried putting a PCI card next to a Voodoo Banshee...
  • DigitalDivine - Monday, June 28, 2004 - link

    Does anyone have any info on whether NVIDIA will be doing this for low-end cards as well?
  • klah - Monday, June 28, 2004 - link

    "But it is hard for us to see very many people justifying spending $1000 on two NV45 based cards even for 2x the performance of one very fast GPU"

    Probably the same number of people who spend $3k-$7k on systems from Alienware, FalconNW, etc.

    Alienware alone sells ~30,000 units/yr.

    http://money.cnn.com/2004/03/18/commentary/game_ov...

  • hadders - Monday, June 28, 2004 - link

    Whoops, duh. Admittedly all that hot air is being exhausted via the cooling vent at the back, but still, my original thought was about overall ambient temperature. I guess there would be no reason why they couldn't put that second PCIe slot further down the board.
  • hadders - Monday, June 28, 2004 - link

    Hmmm, to be honest I hope they intend to widen the gap between the video cards. I wouldn't think the airflow would be particularly good on the "second" card if it's pushed up hard against the other. And where is all that hot air being blown?
  • DerekWilson - Monday, June 28, 2004 - link

    The thing about NVIDIA SLI is that the technology is part of the die ... It's on the 6800UE, 6800U, 6800GT, and 6800 non-Ultra ... It is possible that they disable the technology on lower-clocked versions, just like one of the quad pipes is disabled on the 12-pipe 6800 ...

    The bottom line is that it wouldn't be any easier or harder for NVIDIA to implement this technology for lesser GPUs based on the NV40 core. It's a question of whether they will. It seems at this point that they aren't planning on it, but demand can always influence a company's decisions.

    At the same time, I wouldn't recommend holding your breath :-)
  • ET - Monday, June 28, 2004 - link

    Even with current games you can easily make yourself GPU-limited by running 8x AA at high resolutions (or even less, but wouldn't you want the highest AA and resolution if you could get them?). Future games will be much more demanding.

    What I'm really interested in is whether this will be available only at the high end, or at the mid-range, too. Buying two mid-range cards for a better-than-single-high-end result could be a nice option.
  • Operandi - Monday, June 28, 2004 - link

    Cool idea, but aren't these high-end cards CPU-limited by themselves, let alone paired together?
  • DerekWilson - Monday, June 28, 2004 - link

    I really would like some bandwidth info, and I would have mentioned it if they had offered.

    That second "bare bones" PCB you are talking about is kind of what I meant when I was speaking of a dedicated slave card. Currently NVIDIA has given us no indication that this is the direction they are heading in.
  • KillaKilla - Monday, June 28, 2004 - link

    Did they give any info as to the bandwidth between cards?

    Or perhaps even as to the viability of dual-core cards? (Say, having a standard card and adding a separate PCB with just the bare minimum: GPU, RAM, and an interface?) Figuring that this would cut a bit of the cost off of manufacturing an entirely separate card.
