At this year’s Consumer Electronics Show, NVIDIA had several things going on. In a public press conference they announced 3D Vision Surround and Tegra 2, while on the show floor they had products aplenty, including a GF100 setup showcasing 3D Vision Surround.

But if you’re here, then what you’re most interested in is what wasn’t talked about in public, and that was GF100. With the Fermi-based GF100 GPU finally in full production, NVIDIA was ready to talk to the press about the rest of GF100, and at the tail end of CES we got our first look at GF100’s gaming abilities, along with a hands-on look at some as-yet-unnamed GF100 products in action. The message NVIDIA was trying to send: GF100 is going to be here soon, and it’s going to be fast.


Fermi/GF100 as announced in September of 2009

Before we get too far ahead of ourselves though, let’s talk about what we know and what we don’t know.

During CES, NVIDIA held deep-dive sessions for the hardware press. At these deep dives, NVIDIA focused on three things: discussing GF100’s architecture as it’s relevant to a gaming/consumer GPU, discussing their developer relations program (including the infamous Batman: Arkham Asylum anti-aliasing situation), and finally demonstrating GF100 in action on some games and some productivity applications.

Many of you have likely already seen the demos, as videos of what we saw have been on YouTube for a few days now. What you haven’t seen, and what we’ll be focusing on today, is what we’ve learned about GF100 as a gaming GPU. We now know everything about what makes GF100 tick, and we’re going to share it all with you.

With that said, while NVIDIA is showing off GF100, they aren’t showing off the final products. As such we can talk about the GPU, but we don’t know anything about the final cards. All of that will be announced at a later time – and no, we don’t know when that will be, either. In short, here’s what we still don’t know and will not be able to cover today:

  1. Die size
  2. What cards will be made from the GF100
  3. Clock speeds
  4. Power usage (we only know that it’s higher than GT200’s)
  5. Pricing
  6. Performance

At this point the final products and pricing are going to depend heavily on what the final GF100 chips are like. The clockspeeds NVIDIA can get away with will determine power usage and performance, and by extension, pricing. Make no mistake though: NVIDIA is clearly aiming to be faster than AMD’s Radeon HD 5870, so form your expectations accordingly.
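
As a first-order illustration of why clockspeed so strongly determines power (this is a generic rule of thumb, not anything NVIDIA has disclosed): dynamic power scales roughly with frequency times voltage squared, and higher clocks usually require higher voltage, so even a modest clock bump can carry an outsized power cost. A minimal sketch with made-up numbers:

```python
# First-order dynamic power model: P ~ C * V^2 * f
# (a generic rule of thumb; the figures below are illustrative, not GF100 data)

def dynamic_power(base_power_w, base_clock_mhz, base_volt,
                  new_clock_mhz, new_volt):
    """Scale a baseline dynamic-power figure to a new clock/voltage point."""
    return base_power_w * (new_clock_mhz / base_clock_mhz) * (new_volt / base_volt) ** 2

# Hypothetical part: a 10% clock bump that also needs 5% more voltage.
baseline = dynamic_power(180, 600, 1.00, 600, 1.00)  # 180 W starting point
bumped = dynamic_power(180, 600, 1.00, 660, 1.05)    # ~218 W, a 21% increase
print(f"{baseline:.0f} W -> {bumped:.0f} W")
```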

For performance in particular, we have seen one benchmark: Far Cry 2, running the Ranch Small demo, with NVIDIA running it on both their unnamed GF100 card and a GTX 285. The GF100 card was faster (84fps vs. 50fps), but as Ranch Small is a semi-randomized benchmark (certain objects appear in some runs and not others) and we’ve seen Far Cry 2 be CPU-limited in other situations, we don’t put much faith in this specific benchmark. When it comes to performance, we’re content to wait until we can test GF100 cards ourselves.
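
For what it’s worth, the usual way to tame a semi-randomized benchmark is to average several runs and look at the spread; if the run-to-run variation is a meaningful fraction of the gap between cards, a single run tells you little. A minimal sketch of that bookkeeping (the per-run numbers are invented around the two figures NVIDIA showed, purely for illustration):

```python
import statistics

# Hypothetical per-run averages from a semi-randomized benchmark run 4 times.
# With randomized object placement, individual runs can differ by several fps.
gf100_runs = [84.0, 81.5, 86.2, 83.1]
gtx285_runs = [50.0, 48.7, 51.9, 49.4]

for name, runs in (("GF100", gf100_runs), ("GTX 285", gtx285_runs)):
    mean = statistics.mean(runs)
    spread = statistics.stdev(runs)
    print(f"{name}: {mean:.1f} fps +/- {spread:.1f}")
```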

With that out of the way, let’s get started on GF100.

Comments

  • Ryan Smith - Wednesday, January 20, 2010 - link

    quote:

    In your conclusion you mentioned that the only thing which would matter would be price/performance. However, from the article I wasn't really able to make out a couple of things. When NVIDIA says they can make something look better than the competition, how would you quantify that?
    From my perspective, unless they can deliver better-than-5870 performance at a reasonable price, their image quality improvements aren't going to be enough to seal the deal. If they can meet those two factors, however, then yes, image quality needs to be factored in to some degree.

    At this point I'm not sure where that would be, and part of that is diminishing returns: tessellation will produce better models, but each additional polygon adds less and less visible detail. We're going to have to see what games do in order to see if the extra geometry that GF100 is supposed to be able to generate can really result in a noticeable difference.

    quote:

    I am a gamer & I love beautiful graphics. It's one of the reasons I still sometimes buy games for PCs instead of consoles. I have a 5870 & a 1080p 24" monitor. I would however consider buying this card if it made my games look better. After a certain number (60fps) I really only care about beautiful graphics. I don't want grass that looks like paper or jaggies showing on distant objects. Also, will game makers take advantage of this?
    Will game makers take advantage of it? That's the million-dollar question right now. NVIDIA is counting on them doing so, but it remains to be seen just how many devs are going to make meaningful use of tessellation (beyond just n-patching things for better curves), since DX11 game development is so young.

    quote:

    Unlike previous generations, game manufacturers are very deeply tied to the current console market. They have to make sure the game performs admirably on current-day consoles, which are at least 3-5 years behind their PC counterparts, so what incentive do they have to try and advance graphics on the PC when there aren't enough people buying them? Looking at current games, and frankly just playing them, other than an obvious improvement in framerate I cannot notice any visual improvements.
    Consoles certainly have a lot to do with it. One very real possibility is that the bulk of games continue to be at the DX9 level until the next generation of consoles hits with DX11-like GPUs. I'll answer the rest of this in your next question.

    quote:

    Coming back to my question on architecture: will this tech being built by NVIDIA help improve the visual quality of games without additional (or with less additional) work from the game manufacturing studios?
    The good news is that it takes very little work. Game assets are almost always designed at a much greater level of detail than what they ship at. The textbook example is Doom 3, where the models were designed on the order of 1 million polygons; they needed to be that detailed in order to compute proper bump maps and parallax maps. A displacement map is just one more derived map in that regard - for the most part you only need to export an appropriate displacement map from your original assets, and NV is counting on this. (A toy sketch of the process follows at the end of this comment.)

    The only downsides to NV's plan are: 1) not everything is done at this high a detail level (models are usually highly detailed, the world geometry not so much), and 2) higher quality displacement maps aren't "free". Since a game will have multiple displacement maps (you have to MIP-chain them just like you do any other kind of map), a dev is basically looking at needing to include at least one more level that's even bigger than the others. Conceivably, not everyone is going to have extra disc space to spend on such assets, although most games currently still have space to spare on a DVD-9, so I can't quantify how much of a problem that might be.
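
    To put some code on that: below is a toy, CPU-side sketch of what tessellation plus displacement mapping amounts to - subdivide each triangle, then push the new vertices along the surface normal by a height read from a displacement map. To be clear, this illustrates the concept only; it is not GF100's actual pipeline or any shipping API, and the "map" here is a stand-in function where a real game would sample a texture exported from the high-detail source model.

    ```python
    import math

    # Toy sketch of tessellation + displacement mapping, done on the CPU.

    def displacement(u, v):
        """Stand-in displacement map: returns a height for UV coordinates."""
        return 0.5 + 0.5 * math.sin(10 * u) * math.cos(10 * v)

    def midpoint(a, b):
        # Averages positions and UVs alike, since vertices are flat tuples.
        return tuple((x + y) / 2 for x, y in zip(a, b))

    def tessellate(tri, levels):
        """Recursively split one triangle into 4 using edge midpoints."""
        if levels == 0:
            return [tri]
        a, b, c = tri
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out = []
        for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
            out.extend(tessellate(t, levels - 1))
        return out

    # One flat input patch; each vertex is (x, y, z, u, v), normal is (0, 0, 1).
    patch = ((0.0, 0.0, 0.0, 0.0, 0.0),
             (1.0, 0.0, 0.0, 1.0, 0.0),
             (0.0, 1.0, 0.0, 0.0, 1.0))
    scale = 0.1  # how far displaced vertices are pushed along the normal

    tris = tessellate(patch, 3)  # 3 subdivision levels: 1 -> 64 triangles
    displaced = [tuple((x, y, z + scale * displacement(u, v), u, v)
                       for (x, y, z, u, v) in tri)
                 for tri in tris]
    print(len(displaced), "displaced triangles from 1 input patch")
    ```

    As for the disc-space math: a full MIP chain is about 4/3 the size of its top level, so at a hypothetical uncompressed one byte per texel, a 2048x2048 chain weighs roughly 5.6MB, while a single extra 4096x4096 level on top costs about 16.8MB by itself - roughly 3x the entire chain beneath it. That is the kind of cost described above.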
  • FITCamaro - Monday, January 18, 2010 - link

    It will be fast. But from the size of it, it's going to be expensive as hell.

    I question how much success NVIDIA will have with yet another fast but hot and expensive card, especially with the entire world in recession.
  • beginner99 - Monday, January 18, 2010 - link

    Sounds nice, but I doubt it's useful yet. DX11 will probably take at least 1-2 years to take off, at which point the geometry power could be useful. Meaning they could easily have waited a generation longer.
    Power consumption will probably be the deciding factor. The new Radeons do rather well in that area.
    But anyway, I'm going to wait. Unless it's complete crap, it will at least help push Radeon prices south, even if you don't buy one.
  • just4U - Monday, January 18, 2010 - link

    On AMD pricing: it seems pretty fair for the 57xx line. Cheaper overall than the 4850 and 4870 at their launches, with similar performance and added DX11 features.

    It would be nice to see the 5850 and 5870 priced about one third cheaper, but here in Canada the cards are always sold out or in very limited stock, so I guess there is some justification for the higher pricing.

    I still can't get a 275 cheap either. It's priced 30-40% higher than the 4870.

    The only card(s) I've purchased so far are 5750s, as I feel the last-gen products are still viable at their current pricing... and I buy a fair number of video cards (20-100 per year).
  • solgae1784 - Monday, January 18, 2010 - link

    Let's just hope this GF100 doesn't become another disaster like the GeForce FX.
  • setzer - Monday, January 18, 2010 - link

    While on paper these specs look great for the high-end market (>500€ cards), how much will the mainstream market lose - that is, the cards that sell in the 150-300€ bracket, which coincidentally are the cards most people tend to buy? NVIDIA tends to scale down the specifications, but by how much? And what is the point of the new IQ improvements if you can only use them on high-end cards because the mainstream cards can't handle them?
    The 5-series Radeons are similar: the new generation only has appeal if you go for the 58xx cards, which are overpriced. If you already have a 4850 you can hold out from buying a new card for at least one extra year. Take the 5670: it has DX11 support but not the horsepower to use it effectively, neutering the card from the start as far as DX11 goes.
    So even if NVIDIA goes with a March launch for GF100, I'm guessing it will not be until June or July that we see a GeForce 10600GT (or GX600GT - pun on the ATI 10000 series :P), which will just have the effect of Radeon prices staying where they are (high) instead of where they should be in terms of performance (roughly on par with the HD 4000 series).
  • Beno - Monday, January 18, 2010 - link

    Page 2 isn't working.
  • Zool - Monday, January 18, 2010 - link

    It will be interesting to see how much of the geometry performance holds true in the end, given all this hype. I wouldn't put my hand in the fire for NVIDIA's PR slides and in-house demos, like the PR graph showing a 600% tessellation performance increase over the ATI card. It will surely have some dark sides too, like everything else; nothing is free. Until real benchmarks arrive, you can't put too much trust in PR graphs these days.
  • haplo602 - Monday, January 18, 2010 - link

    This looks similar to what the Riva TNT used to be. NVIDIA was promising everything, including a cure for cancer. It turned out to be barely better than 3dfx at the time because of clock/power/heat problems.

    Seems Fermi will be a big bang in the workstation/HPC markets. Gaming, not so much.
  • DominionSeraph - Monday, January 18, 2010 - link

    Anyone with at least half a brain had a TNT. Tech noobs saw "Voodoo" and went with the gimped Banshee, and those with money to burn threw in dual Voodoo 2s.

    How does this compare at all to Fermi, whose performance will almost certainly not justify its price? The 5870's doesn't, not with the 5850 in town. Such is the nature of the bleeding edge.

    Do you just type things out at random?
