If we don't know these basic things then we don't know much.
1. Die size
2. What cards will be made from the GF100
3. Clock speeds
4. Power usage (we only know that it’s more than GT200)
5. Pricing
6. Performance
Seems a pretty comprehensive list of important info to me.
You guys that buy a brand new graphics card every single year are crazy. I'm still running an 8800 GTS 512MB with no issues in any games whatsoever. DX10 was a waste of money and everyone's time. I'm going to upgrade to the highest-end GF100, but that's coming from an 8800 GTS 512MB, so the upgrade is significant. But from a high-end ATI card to GF100?! What was the point in even getting a 200-series card? Games are only just catching up to the 9000 series now.
I'll wait till they (TSMC) start using a 28nm (down from the planned 40nm) fabrication process on Fermi... the drop in size, power consumption and price, and the rise in clock speed, will probably make it worth the wait.
It'll be a nice addition to the GTX 295 I currently have. (Yeah, going SLI and PhysX.)
Tessellation is quite a resource hog on the shaders. If you increase the polygon count tenfold (quite easy even with modest tessellation factors), the displacement-map shaders need to calculate tenfold more normals, which of course results in much more detailed displacement. The main advantage of tessellation is that it doesn't need space in video memory and the read (write?) bandwidth stays on-chip, but it effectively acts as if you had increased the polygon count in the game. Lighting, shadows and other geometry-based effects should behave as they would on high-polygon models too, I think (at least in the Unigine Heaven demo you get shadows after tessellation where before you didn't have a single one).
Only the last stage of the tessellator, the domain shader, produces actual vertices. The real question is how well the single(?) domain shader in the Radeons keeps up with the 16 PolyMorph Engines (each with its own tessellation engine) in GF100.
That's one(?) domain shader per 32 stream processors in GF100 (and sitting much closer to them) against one(?) for 320 5-wide units in the Radeon.
If you have too many shader programs that need the new vertex coordinates, the Radeon could end up being really bottlenecked.
Just my thoughts.
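As a rough sketch of the amplification described above (back-of-the-envelope numbers only; the N^2 triangle count assumes integer partitioning of a triangle patch, and the patch count is made up for illustration):

[code]
# Back-of-the-envelope sketch of how tessellation multiplies per-vertex work.
# Assumes integer partitioning, where a tessellation factor of N on a triangle
# patch yields roughly N^2 output triangles; real counts depend on the
# edge/inside factors and the partitioning mode.

def tessellated_triangles(patches, tess_factor):
    """Approximate triangle count after tessellation."""
    return patches * tess_factor ** 2

base_patches = 100_000  # control-cage triangles fed to the hull shader (hypothetical)
for factor in (1, 3, 7):
    tris = tessellated_triangles(base_patches, factor)
    # Every vertex the domain shader emits needs a displacement-map sample and
    # a recomputed normal, so that work grows with the output triangle count.
    print(f"factor {factor}: {tris:,} triangles "
          f"(~{tris // base_patches}x the displacement/normal work)")
[/code]

A factor of around 3 is already enough for the roughly tenfold increase mentioned above.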
Of course, ATI's tessellation engine and the tessellation engines in NVIDIA's design can be completely different fixed-function units. ATI's tessellation engine is surely more robust than any single one of the tessellation engines inside NVIDIA's 16 PolyMorph Engines, as it is designed to feed the entire shader array.
They have been sitting on the technology since before the release of SLi.
In fact, SLI didn't even have 2-monitor support until recently, when it should have had 4-monitor support all along.
nVidia clearly didn't want to expend the resources on making the software for it until it was forced, as it now is by AMD heavily advertising their version.
If you look at some of their professional offerings with 4-monitor output, it is clear that they have the technology; I am just glad they have acknowledged that it is a desirable feature.
I certainly hope the mainstream cards get 3-monitor output; it will be nice to drive 3 displays. Three projectors are an excellent application, not only for high-def movies filmed in wider-than-16:9 formats, but for games as well. With projectors you don't get the monitor bezel in the way.
Enthusiast multi-monitor gaming goes back to the Quake II days, glad to see that the mainstream has finally caught up (I am sure the geeks have been pushing for it from inside the companies.)
Maybe I'll live to see whether Nvidia can still beat AMD/ATI, whether in price/performance leadership or in outright performance regardless of price! :)
They should make a GT10000, in which the entire 300mm wafer is 1 die. 300B transistors. Unfortunately you have to mount the final thing to the outside of your case, and it runs off a 240V line.
The 50% larger die size will kill them. Even if the reports of lower yields are false, they will have to accept a much smaller profit margin on their cards than AMD to stay competitive. As it is, the 5870 can run nearly any game on a 30" monitor with everything turned up at a playable rate. The target audience for anything more than a 5870 is absurdly small. If Nvidia does not release a mainstream card, the only people that are going to buy this beast are the people that have been looking for a reason not to buy an AMD card all along.
In the end I think Nvidia will lose even more market share this generation. Across the board AMD has the fastest card at every price point. That will not change, and with the dual-GPU card already out from ATI it will be a long time before Nvidia has the highest-performing card, because I doubt they will release a dual-GPU card at launch if they are having thermal issues with a single-GPU card.
BTW... I've only ever owned Nvidia cards but that will likely change at my next system build even after this "information."
Heh. Just that it was hyped up so much and we really didn't get much other than some architectural changes. I suppose that maybe this is really interesting to some, but I've seen a lot of hardware underperform early spec based guesses.
The Anandtech article was great. The information revealed by Nvidia was just okay.
I really hope Fermi doesn't turn into "Nvidia's 2900 XT": late, hot, and expensive. While I doubt it will be slow by any stretch of the imagination, I hope it isn't TOO hot and heavy to be feasible. I like AMD, but Nvidia failing is not good for anybody. Higher prices (as we've seen) and slower advancement in technology hurt EVERYONE.
Remember that they talked all about how wondrous NV30 was going to be too. This is marketing, folks. They can have the most amazing eye-popping theoretical paper specs in the universe, but if it can't be turned into something affordable and highly competitive, it simply doesn't matter.
Put another way, they haven't been delaying it because it's so awesome the world isn't ready for it. Look deeper. :D
I wonder how it will scale, since the bulk of the market is for more mainstream cards (the article mentioned lesser derivatives having fewer PolyMorph Engines).
I'm still curious why Nvidia is pushing geometry this hard. At 850 MHz, Cypress should be able to put out 850 million polygons/s at one triangle per clock. That's a maximum of about 14 million per frame at 60 fps, which is already quite unrealistic; it's more than 7 triangles per pixel at 1920x1050. Generating that much geometry per pixel is quite a waste and also bottlenecks performance. You just won't see the difference.
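The arithmetic behind those figures (a quick sketch using the numbers quoted above; one triangle per clock is the theoretical peak setup rate, not a measured result):

[code]
# Peak geometry throughput vs. pixels on screen, using the figures above.
clock_hz       = 850e6        # Cypress engine clock
tris_per_clock = 1            # theoretical peak setup rate
fps            = 60
pixels         = 1920 * 1050

tris_per_second = clock_hz * tris_per_clock     # 850 million/s
tris_per_frame  = tris_per_second / fps         # ~14.2 million per frame
tris_per_pixel  = tris_per_frame / pixels       # ~7 triangles per pixel

print(f"{tris_per_frame / 1e6:.1f}M triangles per frame, "
      f"{tris_per_pixel:.1f} per pixel at 1920x1050")
[/code]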
That's why AMD/ATI is also pushing adaptive tessellation, which can reduce the tessellation level with a compute-shader LOD pass to keep a reasonable number of triangles per pixel.
I can push the tessellation factor to 14 in the DX9 ATI tessellation SDK demo and reach 100 fps, or put it at 3 and reach 700+ fps, with almost zero visible difference.
I also want to note that tessellation alone is not enough; you always use displacement mapping with it. Not to mention the whole rendered scene becomes more shader-demanding (shadows, lighting), so too much tessellation (as in Unigine Heaven, where it is applied to almost everything and without it even the stairs are flat) can really cause a big shader hit.
If you compare the graphics quality before and after tessellation in Unigine Heaven, I would rather ask what the hell is eating that much performance without tessellation, since everything looks as flat as a 10-year-old engine.
The increased geometry setup rate alone should bring little to no performance advantage for GF100; the main fps push comes from the much more efficient shaders with the new cache architecture, and of course from more than double the shader count.
There are still plenty of questions.
Like how tessellation affects MSAA with the increased geometry per pixel. Also, the flat stairs in Unigine (very plastic and realistic after tessellation and displacement mapping): would collision detection treat them as the post-tessellation geometry, or as the completely flat pre-tessellation mesh sitting somewhere else in 3D space? The same goes for some PhysX effects. Unigine Heaven is more a showcase of tessellation and what can be done with it than a real game engine.
I also want to note that with the stream of FPS/third-person shooters/RTS/racing games that all look the same, upgrading the graphics card sometimes doesn't make much sense these days.
Can anyone make a game that will use PC hardware and won't end up being about running around and shooting at each other in first or third person? Dragon Age was a fairly weak, overhyped RPG.
Agreed. That is one of the main reasons I've lost interest in PC gaming. Ironically though, my favorite console games on the PS3 have been the two Uncharted games...
Seeing how the GF100 chip has no display components at all on-chip (RAMDAC, TMDS, DisplayPort, PureVideo), they will probably be using an NVIO chip like the GT200. Would it not be possible to just use multiple NVIO chips to scale with the number of display outputs?
If it's possible, NVIDIA is not doing it. I asked them about the limit on display outputs, and their response (which is what brought about the comments in the article) was that the GF100 design was already too far along by the time they greenlit Surround to add more display outputs.
I don't have more details than that, but the implication is that they need to bake support for more displays into the GPU itself.
In your conclusion you mentioned that the only thing that would matter would be price/performance. However, from the article I wasn't really able to make out a couple of things. When NVIDIA says they can make something look better than the competition, how would you quantify that?
I am a gamer and I love beautiful graphics. It's one of the reasons I still sometimes buy games for the PC instead of consoles. I have a 5870 and a 1080p 24" monitor. I would, however, consider buying this card if it made my games look better. Past a certain number (60 fps) I really only care about beautiful graphics: I want no grass that looks like paper and no jaggies on distant objects. Also, will game makers take advantage of this? Unlike previous generations, game developers are now very deeply tied to the current console market. They have to make sure the game performs admirably on current-day consoles, which are at least 3-5 years behind their PC counterparts, so what incentive do they have to try to advance graphics on the PC when there aren't enough people buying them? Looking at current games and frankly just playing them, other than an obvious improvement in framerate I cannot notice any visual improvements.
Coming back to my question on architecture: will this tech being built by Nvidia help improve the visual quality of games with little or no additional work from the game studios?
quote: In your conclusion you mentioned that the only thing that would matter would be price/performance. However, from the article I wasn't really able to make out a couple of things. When NVIDIA says they can make something look better than the competition, how would you quantify that?
From my perspective, unless they can deliver better than 5870 performance at a reasonable price, then their image quality improvements aren't going to be enough to seal the deal. If they can meet those two factors however, then yes, image quality needs to be factored in to some degree.
At this point I'm not sure where that would be, and part of that is diminishing returns. Tessellation will return better models, but adding polygons will result in diminishing returns. We're going to have to see what games do in order to see if the extra geometry that GF100 is supposed to be able to generate can really result in a noticeable difference.
quote: I am a gamer and I love beautiful graphics. It's one of the reasons I still sometimes buy games for the PC instead of consoles. I have a 5870 and a 1080p 24" monitor. I would, however, consider buying this card if it made my games look better. Past a certain number (60 fps) I really only care about beautiful graphics: I want no grass that looks like paper and no jaggies on distant objects. Also, will game makers take advantage of this?
Will game makers take advantage of it? That's the million-dollar question right now. NVIDIA is counting on them doing so, but it remains to be seen just how many devs are going to make meaningful use of tessellation (beyond just n-patching things for better curves), since DX11 game development is so young.
quote: Unlike previous generations, game developers are now very deeply tied to the current console market. They have to make sure the game performs admirably on current-day consoles, which are at least 3-5 years behind their PC counterparts, so what incentive do they have to try to advance graphics on the PC when there aren't enough people buying them? Looking at current games and frankly just playing them, other than an obvious improvement in framerate I cannot notice any visual improvements.
Consoles certainly have a lot to do with it. One very real possibility is that the bulk of games continue to be at the DX9 level until the next generation of consoles hits with DX11-like GPUs. I'll answer the rest of this in your next question.
quote: Coming back to my question on architecture: will this tech being built by Nvidia help improve the visual quality of games with little or no additional work from the game studios?
The good news is that it takes very little work. Game assets are almost always designed at a much greater level of detail than what they ship at. The textbook example is Doom 3, where the models were designed on the order of 1 million polygons; they needed to be that detailed in order to compute proper bump maps and parallax maps. The displacement map used for tessellation is just one more derived map in that regard - for the most part you only need to export an appropriate displacement map from your original assets, and NV is counting on this.
The only downsides to NV's plan are that: 1) Not everything is done at this high a detail level (models are usually highly detailed, the world geometry not so much), and 2) Higher-quality displacement maps aren't "free". Since a game will have multiple displacement maps (you have to MIP-chain them just like any other kind of map), a dev is basically looking at needing to include at least one more level that's even bigger than the others. Conceivably, not everyone is going to have extra disc space to spend on such assets, although most games currently still have space to spare on a DVD-9, so I can't quantify how much of a problem that might be.
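For a rough sense of that disc-space cost (the texture sizes and the 2-byte-per-texel format here are purely illustrative assumptions, and compression is ignored):

[code]
# Rough disc-space cost of shipping one extra, higher-resolution displacement
# level on top of an existing MIP-chained map. Sizes and the 2-byte-per-texel
# format are illustrative assumptions only.

def mip_chain_bytes(width, height, bytes_per_texel=2):
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * bytes_per_texel
        width //= 2
        height //= 2
    return total

base  = mip_chain_bytes(2048, 2048)   # existing 2048^2 map with its full MIP chain
extra = 4096 * 4096 * 2               # one additional 4096^2 top level
print(f"2048^2 chain: {base / 2**20:.1f} MiB, "
      f"extra 4096^2 level: {extra / 2**20:.1f} MiB "
      f"({extra / base:.1f}x the size of the whole existing chain)")
[/code]

In this example the single extra top level costs about three times as much space as the whole existing chain, which is the point being made about disc space above.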
Sounds nice, but I doubt it's useful yet. DX11 will probably take at least 1-2 years to take off, and only then could the geometry power become useful. Meaning they could easily have waited a generation longer.
Power consumption will probably be the deciding factor. The new Radeons do rather well in that area.
But anyway, I'm gonna wait. Unless it is complete crap, it will at least help push Radeon prices south, even if you don't buy one.
On AMD pricing: it seems pretty fair for the 57xx line. Cheaper overall than the 4850 and 4870 at their launches, with similar performance and added DX11 features.
It would be nice to see the 5850 and 5870 priced about one third cheaper, but here in Canada the cards are always sold out or in very limited stock, so I guess there is some justification for the higher pricing.
I still can't get a 275 cheap either. It's priced 30-40% higher than the 4870.
The only card(s) I've purchased so far are the 5750s, as I feel the last-gen products are still viable at their current pricing... and I buy a fair number of video cards (20-100 per year).
While on paper these specs look great for the high-end market (>500€ cards), how much will the mainstream market lose, meaning the cards that sell in the 150-300€ bracket, which coincidentally are the cards most people tend to buy? Nvidia tends to scale down the specifications, but how far down will it go, and what is the point of the new IQ improvements if you can only use them on high-end cards because the mainstream cards can't handle them?
The 5-series Radeons are similar: the new generation only has appeal if you go for the 58xx cards, which are overpriced. If you already have a 4850 you can hold off on buying a new card for at least one more year. Take the 5670: it has DX11 support but not the horsepower to use it effectively, neutering the card from the start as far as DX11 goes.
So even if Nvidia goes with a March launch of GF100, I'm guessing it will not be until June or July that we see a GeForce 10600 GT (or GX600 GT, a pun on an ATI 10000 series :P), which will just have the effect of Radeon prices staying where they are (high) and not where they should be in terms of performance (roughly on par with the HD 4000 series).
It will be interesting to see how much of the geometry performance survives all this hype in the end. I wouldn't put my hand into the fire for Nvidia's PR slides and in-house demos, like the PR graph with a 600% tessellation performance increase over the ATI card. It will surely have some dark sides too, like everything does; nothing is free. Until real benchmarks arrive, you can't put too much trust in PR graphs these days.
This looks similar to what the Riva TNT used to be. Nvidia was promising everything including a cure for cancer. It turned out to be barely better than 3dfx at the time because of clock/power/heat problems.
It seems Fermi will be a big bang in the workstation/HPC markets. Gaming, not so much.
Anyone with at least half a brain had a TNT. Tech noobs saw "Voodoo" and went with the gimped Banshee, and those with money to burn threw in dual Voodoo 2's.
How does this compare at all to Fermi, whose performance will almost certainly not justify its price? The 5870's doesn't, not with the 5850 in town. Such is the nature of the bleeding edge.
Hey, the Banshee was fine! I had one because at that time the 3dfx API was better than DirectX. But suddenly everything became DX-compatible, and that was one thing 3dfx GPUs could not do... then I replaced that Banshee with a Radeon 9200, later a Radeon X300 (or something), then a Radeon 3850, and now a Radeon 5770. I'm always in for the mainstream, not the top of the line, and Nvidia has not paid enough attention to the mainstream since the GeForce FX series...
The question is when they will come out with mid-range variants. The GF100 at launch seems to be the 448-SP variant, and the 512-SP card will come only after the A4 revision, or who knows.
http://www.semiconductor.net/article/438968-Nvidia... The interesting part of the article is the graph showing the exponential increase in leakage power at 40nm and below (which of course hurts more if you have a big chip and different clock domains to maintain).
They will have even more problems now that the DX11 cards will all be GF100-architecture parts, so no rebrand options for the mid-range and below.
For the consumer, GF100 will be great if they can actually buy one somewhere in the future, but Nvidia will bleed more on it than on the GT200.
Maybe I'm missing something, but it seems like PC gaming has lost most of its value in the last few years. I know that you can run games at higher resolutions and probably faster framerates than you can on consoles, but it will end up costing more than all 3 consoles combined to do so. It just seems to have gotten too expensive for the marginal performance advantage.
That being said, I bet that one of these would really crank through Collatz or GPUGRID.
I certainly share that sentiment. The last major graphical showcase we had was Crysis in 2007. There have been nice looking PC exclusive titles (Crysis Warhead, Arma 2, the Stalker franchise) since then, but no significant new IP with new rendering engines to take advantage of new technology.
If software publishers want our money, they are going to have to do better. Without significant GPGPU applications for the mainstream consumer, GPU manufacturers will eventually suffer as well.
No, I think you're totally correct, from a certain point of view.
I had the thought that DX9 support is probably more than enough for console games, and why would developers pump money into DX11 support for a product that generates most of its profits on consoles?
Obviously there is some money to be made in the PC game sphere, but is it really enough to drive game developers to sink money into extra quality just for us?
At least NV has made a product that can be marketed now, and into the future, for design/enterprise solutions. That should help them extract more of the value out of their R&D if there are very few DX11 games during Fermi's lifespan.
If Fermi works well, NVidia is in a great place for the development of their next GPU - they'll only need to update some things here and there, based mostly on where the card's performance lags (improve this, improve that, reduce this, reduce that). They are also in a very good position for making lower-end cards based on Fermi (cut everything in two or four; no need to redesign the previously fixed-function blocks).
As for AMD, their current design is already in the works and probably too far along for big changes, so their real Fermi-killer won't come for a year or so (that is, if Fermi proves to be as great a success as NVidia wants it to be).
that ^^^^^^^
Besides, with Steam/D2D/Impulse there is new breath in PC gaming: constant sales on great games, automatic updates, active support, forums full of people, all integrated with a virtual community (profiles, chats, etc.), plus a place to release demos, trailers, and so on. I was worried about PC gaming 2-3 years ago, but I'm absolutely confident that it's coming back better than ever.
Are the screenshots from Left 4 Dead 2 missing at the end of page 5?
[quote]
As a consequence of this change, TMAA’s tendency to have fake geometry on billboards pop in and out of existence is also solved. Here we have a set of screenshots from Left 4 Dead 2 showcasing this in action. The GF100 with TMAA generates softer edges on the vertical bars in this picture, which is what stops the popping from the GT200.
[/quote]
I have a feeling that nVidia is taking the long road here...
The past 6 months have been painful for nVidia; however, I think they are looking way ahead. At its core, the 5000 series from AMD is really just a supersized 4000 series. Not a bad thing, but nothing new either (DX11 is nice, but that'll take a while, and multiple monitors are still rare).
Games have all looked the same for years now. CPU and GPU power have gone WAY up in the past 5 years, but too much is still developed for DX9 (the X360/PS3 are partly to blame, as is Vista's poor adoption), and I suspect that even the 5000 series is really still designed around DX9 and games meant for it, with a few "enhancements".
This new chip seems designed for DX11 and much higher detailed graphics. Polygon counts can go up with this, the number of new details can really shine, but only once games are designed from scratch for it. From that point, the 6 month wait isn't a big deal, it'll be another few years before games are really designed from scratch for DX11 ONLY. Otherwise you have DX9 games with a few "enhancements" that don't add to gameplay.
It seems like we are really skipping DX10 here, partly due to Vista's poor adoption, partly due to XP not being able to use DX10. With Windows 7 being a success and DX11 backported to Vista, I think in the next 2-3 years you'll finally see most games come out that really require Vista/7 because they will require DX10/11.
Of course, my 260GTX still runs everything I throw at it, so until games get more complex or something else changes, I see no reason to upgrade. I thought about a 5870 as an upgrade, but why? Everything already runs fast enough, what does it get me other than some headroom? If I was still on a 8800GT, it would make sense, but I'd rather wait for nVidia to launch so the prices come down.
Well, then there's the fact that ATI designed their 2000 series (and 3000 and 4000 series) to comply with the full DirectX 10 specification. NVIDIA didn't have chips capable of the full spec and talked Microsoft into cutting DX10 down to only a few additions; tessellation was notably left out. ATI was hung out to dry on performance, with features wasted on the die. They finally got DX10.1 later on, but the damage was done.
Sure, people complained about Vista, mostly gamers because games ran slower, but I wonder how those games would have looked if DX10 had run at the full spec (which was only marginally below today's DX11)?
I made this post in another forum, but I think it's relevant here:
---
Yes, I'm beginning to see this [games becoming less GPU-limited and more CPU-limited] with more mainstream games (to repeat, Crysis is NOT a mainstream game). FLOP-wise, a high-end video card (i.e. a 5970 at 5 TFLOPS) is something like 100 TIMES the performance of a high-end CPU (an i7 at 50 GFLOPS).
In comparison, back in 2004 we had GPUs like the 6800 Ultra (54 GFLOPS) and P4s (6 GFLOPS) (historical data here: http://forum.beyond3d.com/showthread.php?t=51677). That's 9X the performance. We've gone from 9X to 100X in a matter of 5 years. No wonder few modern games actually push modern GPUs, requiring people who want to "get the most" out of their high-powered GPUs to go for multiple screens, insane AA/AF, insane detail settings, complex shaders, etc.
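The ratios quoted above are just the peak-FLOP figures divided out (a trivial sketch using those same nominal numbers):

[code]
# GPU-to-CPU peak FLOP ratios from the figures quoted above.
ratio_2004 = 54 / 6         # 6800 Ultra vs. Pentium 4   -> 9x
ratio_2009 = 5000 / 50      # Radeon HD 5970 vs. Core i7 -> 100x
print(f"2004: {ratio_2004:.0f}x, 2009: {ratio_2009:.0f}x")
[/code]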
I know this is a horrible comparison, but still, it gives you an idea of the imbalance in performance. It reminds me of the whole hard-drive capacity vs. transfer rate argument: today's 2TB monsters are actually not much faster than the few-GB drives from the turn of the millennium (and even less so latency-wise).
Personally, I think the days of mainstream discrete GPUs being the bottleneck ended when Nvidia's 8 series launched (the 8800 GTX is perhaps the longest-lived video card ever made), and in general when the industry adopted programmable compute units (aka DirectX 10).
Actually, the Radeon 9700/9800 Pro had a pretty long life too. The 9700 Pro I bought in 2002/2003 lasted me all the way to early 2007, which was when I bought an 8800 GTS 640MB. Four years is pretty good. It could have lasted longer, but I was itching for a new platform and needed to get a PCI Express card (the Radeon was AGP).
Sorry you lost all credibility when you tried to spin this bullsh*#t "Today's 2 TB monsters are actually not much faster than the few GB drives at the turn of the millennium"
Go try and run your new rig off one of those old drives, come back and post your results in 2 hours when your system finally boots.
Disclosure: I'm still on a 8800 GTS 512, and I am in no pressure to upgrade right now. While a 58xx would be nice to have, on a single monitor I really have no need to upgrade. I may look into going i7 though.
If something works well for you then there is no real reason (or need) to upgrade.
I still run an 8800 Ultra, and it still runs many games well on a 22-inch monitor. The GT200 was really only a 50% boost over the 8 series on average. For comparison, I bought a second-hand Ultra for $60 and transplanted both of them into an i7-based system, and this really produced a significant boost over a GTX 285 in the games I liked: about 25% more performance, roughly equivalent to an HD 5850, albeit not always as smooth.
It would be good to upgrade to a single GPU that is more than double the performance of this kind of setup. But an HD 5800 series card is not in that league, and it remains to be seen if the GF100 is.
I agree this chip does seem designed around new or upcoming features. Many architectural shortcomings of the GT200 seem to have been addressed, with the work aimed at getting usable performance (like tessellation) out of the new API features.
Anyway, to be pragmatic about things, Nvidia's history leaves much to be desired; performance promised and performance delivered vary a lot. HardOCP mentioned the 5800 Ultra launch as a con; there is also the G80 launch on the flip side.
A GPU's theoretical performance and the expectations hanging around it are nothing to make choices by; wait for the real proof. Anyone recall the launch of the 'monstrous' 2900 XT? A toothless beast, that one.
For the benefit of myself and everyone else who doesn't follow gaming politics closely, what is "the infamous Batman: Arkham Asylum anti-aliasing situation"?
Nvidia helped get AA working in Batman, and that AA also works on ATI cards. But if the game detects anything besides an Nvidia card, it disables AA. The reason some people are angry is that when ATI helps out with games, it doesn't limit who can use the feature; at least that's what they (AMD) claim.
And Nvidia shouldn't have, since Nvidia didn't develop the game.
On the other hand, you can be quite certain that the devs did run the game on ATI hardware, but locked out the "preferred" AA path because of the money Nvidia invested in the game.
And that can be plainly seen by the fact that when the game is "hacked" to trick it into seeing an Nvidia card installed, even though an ATI card is being used, AA works flawlessly, and the ATI cards end up faster than the current Nvidia cards. The game is exposed for what it is: purposely crippled to favor one brand of video card over another.
But the nvidiots seem not to mind this at all. Yet this is akin to Intel writing their compiler to make AMD CPUs run slower or worse on programs compiled with the Intel compiler.
Read about the debacle Intel is now suffering from, and note that the outrage is fairly universal. Now, you'd think Nvidia would suffer the same nearly universal outrage for intentionally crippling a game's functionality to favor one brand of card over another, yet nvidiots make apologies and say "ATI cards weren't tested." I'd like to see that as a fact instead of conjecture.
So, one company cripples the function of another company's product and the world is up in arms, screaming "Monopolistic tactics!" and "Fine them to hell and back!"; another company does essentially the same thing and it gets a pass.
If nV continues like this, it will turn around on them. It took MANY years for the market watchdogs to finally say, "Intel, quit your sh*t!" and actually do something about it. Don't expect immediate retaliation in a multibillion-dollar worldwide industry.
"yet nvidiots make apologies and say "Ati cards weren't tested." I'd like to see that as a fact instead of conjecture. "
here you go
http://www.legitreviews.com/news/6570/
"On the other hand, you can be quite certain that the devs did run the game on ATI hardware, but locked out the 'preferred' AA path because of the money Nvidia invested in the game."
Proof? That looks like conjecture to me. Nvidia says otherwise.
AMD doesn't deny it either.
http://www.bit-tech.net/bits/interviews/2010/01/06... - they just don't like it.
And please refrain from calling people names such as "nvidiot"; it doesn't help portray your image as unbiased.
Oh for gosh sakes, this is the 'launch' and we can't even have a paper launch where at least reviewers get hardware? This is just more detail on the same stuff that was 'announced' when the 5800s came out. Poor show, NV, poor show.
This is as close to a paper launch as I've seen in a while, except that there is not even an unattainable card. Gawd, they are gonna drag this out a lonnnnngg time. Better start saving up for that 1500W PSU!
Looks like Nvidia G80'd the graphics market again by completely redesigning major parts of their rendering pipeline. Clearly not just a doubling of GT200, some of the changes are really geared toward the next-gen of DX11 and PhysX driven games.
One thing I didn't see mentioned anywhere was HD audio capability similar to AMD's 5-series offerings. They didn't mention it, which makes me think it's not going to be addressed.
For Nvidia to "G80" the market again, they would need parts far faster than anything AMD has to offer, and to maintain that lead for several months. The story is in fact reversed: AMD has had the significantly faster cards for months now. GF100 still isn't here, and the fact that Nvidia isn't singing its performance praises up and down the street is a sign that it's acceptable at best (acceptable meaning faster than a 5870, a chip that's significantly smaller and cheaper to make).
Nah, they just have to win the generation, which they will when Fermi launches. And by "generation" I mean the 12-16 month cycles dictated by process node and microarchitecture. It was similar with G80: R580 had the crown for a few months until G80 obliterated it. Even more recently with the 4870 X2 and GTX 295: AMD was first to market by a good 4 months, but Nvidia still won the generation with the GTX 295.
The 295 ran extremely hot, was much, MUCH more expensive to manufacture, and its performance advantage in games was negligible for the most part. No game is so demanding that the 4870 X2 can't run it well.
The GeForce 285 is at least twice as expensive as a Radeon 4890, its closest competitor, so how you can say Nvidia "won" this round is beyond me.
But I suppose with fanboy glasses on you can see whatever you want to see. ;)
Funny, the 295 ran no hotter (and often cooler) and had a lower TDP than the 4870 X2 in virtually every review that tested temperatures, and it was faster as well. Also, the GTX 285 didn't compete with the 4890; the 275 did, in both price and performance.
It's obvious Nvidia won the round, as these points are historical facts based on mounds of evidence. I suppose with fanboy glasses on you can see whatever you want to see. ;)
Hey kid, sometimes less is more. You don't need to post that much just to say "Nvidia wins, and will win again." This round AMD has won, with 2 million cards drying up the graphics market. You can't change this; neither could Nvidia.
Just come out and buy a Fermi, which is 15-20% faster than an HD 5870, for $500-$600. You only have to wait 3 months, and can save some bucks until then. I have an HD 5850 here, and I'm waiting for a Tegra 2 based smartphone, not Fermi.
Both Tegra 2 and Fermi are extraordinary products - if what NVidia says about them is true. Unfortunately, it doesn't seem like either of them is a perfect fit for the gaming desktop.
You don't win a generation with a very-high-end card - you win a generation with a mainstream card (as this is where most of the profits are). Also, low-end cards are very high-volume, but the profit from each unit is very small.
You might win the bragging rights with the $600, top-of-the-line, two-in-one cards, but they don't really have a market share.
But that's not how Nvidia's business model works for the very reasons you stated. They know their low-end cards are very high-volume and low margin/profit and will sell regardless.
They also know people buying in these price brackets don't know or care about features like DX11, and as the 5670 review showed, such features are mostly a waste on low-end parts to begin with (a 9800 GT beats it pretty much across the board).
The GPU market is broken up into three parts: high-end, performance and mainstream. GF100 will cover the high end and the top tier of the performance segment, with GT200 filling in the rest to compete with the lower-end 5850. Eventually the technology introduced in GF100 will trickle down to lower-end parts in the mainstream segment, but until then, Nvidia will deliver the cutting-edge tech to those who are most interested in it and willing to pay a premium for it: high-end and performance-minded individuals.
Absolutely. Really, the GT200/RV770 generation of DX10 cards was inarguably 'won' (i.e. most profitable) for AMD/ATI by cards like the HD 4850. But the overall performance crown (i.e. highest in-generation performance) was won off the back of the GTX 295 for nvidia.
But I agree with chizow that nvidia has ultimately been "winning" (the performance crown) each generation since the G80.
Not sure how you can claim AMD "inarguably" won DX10 with 4850 using profits as a metric. How many times did AMD turn a profit since RV770 launched? Zero. They've posted 12 straight quarters of losses last time I checked. Nvidia otoh has turned a profit in many of those quarters and most recently Q3 09 despite not having the fastest GPU on the market.
Also, the fundamental problem people don't seem to understand with regard to AMD's and Nvidia's die sizes and product distribution is that their dies cover completely different market segments. Again, this simply serves as a referendum on the differences in their business models. You may also notice these differences are pretty similar to what AMD sees from Intel on the CPU side of things....
Nvidia's GT200 die goes into all the high-end and mainstream parts like the GTX 295, 285, 275 and 260 that sell for much higher prices. AMD's RV770 die went into the 4870, 4850, and 4830; the latter two parts were competing with Nvidia's much cheaper and smaller G92 and G96 parts. You can clearly see that a comparison between die/wafer sizes isn't a valid one.
AMD has learned from this btw, and this time around it looks like they're using different die for their top tier parts (Cypress) and their lower tier parts (Redwood, Cedar) so that they don't have to sell their high-end die at mainstream prices.
[quote]Not sure how you can claim AMD "inarguably" won DX10 with 4850 using profits as a metric. How many times did AMD turn a profit since RV770 launched? Zero. They've posted 12 straight quarters of losses last time I checked. Nvidia otoh has turned a profit in many of those quarters and most recently Q3 09 despite not having the fastest GPU on the market. [/quote]
AMD also makes CPUs... they lost market share due to Intel's high-end dominance... they lost money on the ATI acquisition... If it weren't for the success of the HD 4000 series, AMD would've been in deep shit. Just think before you post.
It's hard to make a profit while servicing a $5 billion debt - but if you want to take it this way (total profits), why wouldn't we look at total revenue?
AMD/ATI (figures in thousands of USD):
PERIOD ENDING      26-Sep-09   27-Jun-09   28-Mar-09   27-Dec-08
Total Revenue      1,396,000   1,184,000   1,177,000   1,227,000
Cost of Revenue      811,000     743,000     666,000   1,112,000
Gross Profit         585,000     441,000     511,000     115,000
NVIDIA (figures in thousands of USD):
PERIOD ENDING      25-Oct-09   26-Jul-09   26-Apr-09   25-Jan-09
Total Revenue        903,206     776,520     664,231     481,140
Cost of Revenue      511,423     619,797     474,535     339,474
Gross Profit         391,783     156,723     189,696     141,666
Not looking so good for the "winner of the generation", though. As for die size and product distribution, all I'm looking at is the retail video card offerings, and every price bracket I choose has both NVidia and AMD in it.
You missed my point. I wasn't talking about AMD as a whole, I was talking about ATI as a division within AMD. If a company bleeds that much and still survives, some part of the company must be making money, and that is the ATI division. ATI is making money. Your macro numbers mean zip.
The model ATI is using is putting out competitive cards from a company, AMD, that is bleeding badly. Which generation of chip is easier to sell: the new and improved one with more features, useful or not, or the last-generation one?
ATI is what has been keeping AMD afloat with its profits. ATI has decided to take smaller, incremental development steps that lower production costs.
Nvidia takes a long time to create a monolithic monster that requires massive amounts of capital to develop. They will not recoup this investment off gamers alone, because most don't have that much cash to put one of those cards in their machines. It is needed for marketing, so they can push lower-level cards by implying superiority, real or not; they are a heavy marketing company. This chip is aimed at their GPU server market, and that is where they hope to make their money, hoping it can do both jobs really well.
ATI, on the other hand, by making smaller steps at a faster cadence of product development, has focused on the performance/mainstream market. With lower development costs they can turn out new cards that pay back their development costs quicker, allowing them to put that capital back into new products. Look at the 4890 and 4870: they share a similar architecture, but the 4890 is a more refined chip. It was a product that allowed ATI to keep Nvidia reacting to ATI's products.
Nvidia's marketing requires them to have the fastest card on the market. ATI isn't trying to keep the absolute performance crown but to hold onto the price/performance crown. Every time they put out a slightly faster card it forces Nvidia to respond, and Nvidia gets lower profits from having to drop card prices. I don't think this chip will be able to repeat the 8800's run, because AMD/ATI is now on stronger financial footing than they have been for the past couple of years, and Nvidia being late to market is helping ATI line their pockets with cash. The 5000 series is just marginally better, but it is better than Nvidia's current offerings.
Will Nvidia release just a single high-end card, or several tiers of cards to compete across the board? I don't think one card will really help the bottom line over the longer term.
I clearly defined what I consider a generation; historically, the rest of the metrics measured over time (market share, mind share, profits, value-add features, game support) tend to follow suit.
For someone like you who doesn't care about who's winning a generation, it should be simple enough: buy whatever best suits your price:performance requirements when you're ready to buy.
For those who want to make an informed decision once every 12-16 month generation, to avoid those niggling uncertainties and any potential buyer's remorse, it certainly makes sense to consider both IHVs' offerings before deciding.
How can you "win" if your product isn't intended for a meaningful number of customers? I'm sure ATI could pull out the biggest, most expensive, hottest and fastest card in the world as well, but there's a reason why they don't.
Really, the performance crown isn't anything special. The title goes from hand to hand all the time.
"I'm sure ATI could pull out the biggest, most expensive, hottest and fastest card in the world" - they have; it's called the Radeon HD 5970.
Really, here in Australia the ATI DX11 hardware represents nothing close to value. The "biggest, most expensive, hottest and fastest card in the world", a.k.a. the HD 5970, weighs in at a ridiculous AUD 1150. In the meantime the HD 5850 jumped from AUD 350 to AUD 450 on average here.
The "smaller, more affordable, better value" line I was used to associating with ATI went out the window the minute their hardware didn't have to compete with nVidia DX11 hardware.
Really, I'm not buying any new hardware until there are some viable alternatives at the top and some competition to burst ATI's pricing bubble. That's why it'd be good to see GF100 make a "G80"-style impression.
It's mentioned in the article, but Nvidia being late to market is why prices on ATI's cards are high. Based on transistor count and so on, there's plenty of room for ATI to drop prices once they have some competition.
And that's where the article is dead wrong. For the most part, the ridiculous prices were dictated by low supply vs. high demand. Now we have finally arrived at decent supply vs. high demand, and prices are dropping. The next stage may be good supply vs. normal demand. That, and not a second earlier, is when AMD themselves could willingly start price gouging due to the lack of competition.
However, the situation will stay like this long after "Thermi" launches, for the simple reason that there is no reason to believe Thermi won't have yield issues for quite some time after they have been sorted out for AMD. It's the size of chipzilla that will give it a rough time for the first couple of months, regardless of its capabilities.
I'm sure ATI would've if they could've, instead of settling for 2nd place for most of the past 3 years, but GF100 isn't just about the performance crown; its design changes are clearly setting the table for future variants aimed at a broader audience (think G92).
So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance. And this is why NVIDIA has 16 PolyMorph Engines and 4 Raster Engines, because they need a lot of hardware to generate and process that much geometry.
Are you saying that ATI's viability and R&D funding are not supported by the majority of sales, which traditionally fall into the lower-priced brackets that, by the way, call for smaller and cheaper GPUs?
Why do people not understand that with a six-month lead in the DX11 arena, AMD/ATI will be able to come out with a refresh card that could easily exceed whatever Fermi ends up being? Remember, AMD has been dealing with the TSMC issues for longer, and by the time Fermi comes out, the production problems SHOULD be done. Now, how long do you think it will take to work the kinks out of Fermi? How about product availability (something AMD has been dealing with for the past few months)? Just because a product is released does NOT mean you will be able to find it for sale.
The refresh from AMD could also mean that in addition to a faster part, it will also be cheaper. So while the 5870 is selling for $400 today, it may be down to $300 by the time Fermi is finally available for sale, with the refresh part (same performance as Fermi) available for $400. Hmmm, same performance for $100 less, and with no games available to take advantage of any improved image quality of Fermi, you see a better deal in the AMD part. We also don't know what the performance of the AMD refresh will be, so a lot of this needs a wait-and-see approach.
We have also seen that Fermi is CLEARLY not even ready enough for some leaked performance figures, which implies it may be six MORE months before the card is really ready. Showing a demo isn't the same as letting reviewers tinker with the part themselves. Really, if it will be available for purchase in March, shouldn't it be ready NOW, since it takes weeks to go from ready to shipping (packaging and such)?
AMD is winning this round, and they will be in the position where developers will have been using their cards for development, since NVIDIA clearly can't supply theirs. AMD will also be able to make SURE that their cards are the dominant DX11 cards as a result.
"Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better"
No it won't.
If a game ships with the "high resolution" displacement maps, NVidia could make use of them (and AMD might not, because of the geometry power involved). If the game doesn't ship with "high resolution" displacement maps to use for tessellation, then NVidia will only have a lot of geometry power going to waste, and the same graphical quality as AMD.
Remember that in the big game graphics engines there are multiple "video paths" for multiple GPUs - DirectX 8, DirectX 9, DirectX 10 - and NVidia and AMD both have optimised execution paths.
WELL, let's just make it simple. I am an avid gamer... I WANT and NEED power and performance. I care only about how well my games play, how good they look, and the impression they leave with me when I am done.
I own a PS3 and am thrilled they went with Nvidia (smart move).
I own a PC with a 9800 GT OC card... getting ready to upgrade to the new GF100 when it releases. The last thing on my mind is market share; cost is not an issue.
Hard-core gaming requires Nvidia. Entry-level baby boomers use ATI.
Nvidia is just playing with their food... it's a vulgar display of power: better architecture, better programming, better gaming.
[quote]So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance.[/quote]
Might I add to that: nVidia's design is essentially "modular"; they can scale geometry performance up and down essentially by adding or removing units. This, however, will force programmers to target the lowest common denominator, whilst AMD's implementation of the technology is the same across the board, so essentially you get identical geometry capability regardless of the chip.
The real distinction here is that Nvidia's revamp of the fixed-function geometry units into a programmable, scalable, and parallel PolyMorph Engine means their implementation won't be limited to accelerating tessellation in games; their improvements will benefit every game ever made that benefits from increased geometry performance. I know people around here hate to declare "winners" and "losers" when AMD isn't winning, but I think it's pretty obvious Nvidia's design and implementation is the better one.
Fully programmable vs. fixed-function: as long as the fully programmable option is at least as fast, it is always going to be the better solution. Just look at the evolution of the GPU from mostly fixed-function hardware to what it is today with GF100... a fully programmable, highly parallel compute powerhouse.
If Fermi was a winner Nvidia would have had samples out to be benchmarked by Anand and others a long time ago.
Fermi is designed for GPGPU with gaming secondary. Goody for them; they can probably do a lot of great things and make good money in that sector. But I don't know about gaming. Based on the info that has gotten out, and the fact that real hardware hasn't appeared yet, I am guessing that Fermi will only be slightly faster than the 5870 and Nvidia doesn't want to show its hand and let AMD respond. Remember, AMD is finishing up its next generation right now, so Fermi will likely compete against Northern Islands on AMD's 32nm process in the fall.
Firstly, did you not read this article? The GF100 delay was due in large part to the new architecture they developed, an architectural shift ATI will eventually have to make if they wish to remain competitive. In other words, much as the G80 enabled GPU computing features/unified shaders for the first time on the PC, Nvidia invested huge resources in R&D and as a result had a next-generation, revolutionary GPU before ATI.
Secondly, Nvidia never meant to place gaming second to GPU computing, as much as you ATI fanboys would like to troll about this subject. What they're trying to do is bring GPU computing up to the level GPU gaming is already at (in terms of accessibility, reliability, and performance). The research they're doing in this field could revolutionize many fields outside of gaming, including medicine, astronomy, and yes, film production (something I happen to deal with a LOT), while revolutionizing gaming performance and feature sets as well.
Thirdly, I would be AMAZED if AMD can come out with their new architecture (their first since the HD 2900) by the 3rd quarter of this year, and on a 32nm process. I just can't see them pushing GPU technology forward in the same way Nvidia has, given their new business model (smaller GPUs, less focus on GPU computing), while meeting that tight deadline.
The bottom line, that's what. I'm sure Nvidia liked winning the generation; I'm sure they would have loved it even more if they hadn't lost market share and potential profits in the fight...
winning the generation is a non-prize if the mainstream buyer can only wish they had one. Make this kind of performance affordable and then you'll impress me.
Yes and the bottom line showed Nvidia turning a profit despite not having the fastest part on the market.
Again, my point about G80'ing the market was more a reference to them revolutionizing GPU design again rather than simply doubling transistors and functional units or increasing clockspeeds based on past designs.
The other poster brought up performance at any given point in time; I was simply pointing out that being first or second to market doesn't really matter as long as you win the generation, which Nvidia has done every generation since G80 and will do again once GF100 launches.
Yikes, if it draws more than the original GTX 280 I would expect some loud cards. When I saw those Far Cry 2 benchmarks I was disappointed that I didn't wait, but now that it is using more power than a GTX 280 I think I may have made the right choice. While right now I want as much performance as possible, eventually my 5850 will go into a secondary PC (which is why I picked the 5850) with a lesser power supply. I don't want to have to buy a bigger power supply just because a friend might come over and play once a week.
x86 64 - Sunday, January 31, 2010 - link
If we don't know these basic things then we don't know much.1. Die size
2. What cards will be made from the GF100
3. Clock speeds
4. Power usage (we only know that it’s more than GT200)
5. Pricing
6. Performance
Seems a pretty comprehensive list of important info to me.
nyran125 - Saturday, January 30, 2010 - link
You guys that buy a brand new graphics card every single year are crazy . im still running an 8800 GTS 512mb with no issues in any games whatso ever DX10, was a waste of money and everyones time. Im going to upgrade to the highest end of the GF100;s but thats from a 8800 GTS512mb so the upgrade is significant. Bit form a heigh end ati card to GF 100 ?!?!?!? what was the friggin point in even getting a 200 series card.!?!?!!?1/. Games are only just catching up to the 9000 series now.Olen Ahkcre - Friday, January 22, 2010 - link
I'll wait till they (TSMC) start using 28nm (from planned 40nm) fabrication process on Fermi... drop in size, power consumption and price and rise is clock speed will probably make it worth the wait.It'll be a nice addition to the GTX 295 I currently have. (Yeah, going SLI and PhysX).
Zingam - Wednesday, January 20, 2010 - link
Big deal... Until the next generation of Consoles - no games would take any advantage of these new techs. So? Why bother?zblackrider - Wednesday, January 20, 2010 - link
Why am I flooded with memories of the 20th Anniversary Macintosh?Zool - Wednesday, January 20, 2010 - link
Tessellation is quite a resource hog on shaders. If you increase polygons tenfold (quite easy even with basic tessellation factors), the displacement map shaders need to calculate tenfold more normals, which of course results in much more detailed displacement. The main advantage of tessellation is that it doesn't need space in video memory and the read (write?) bandwidth stays on chip, but it actually acts as if you had increased the polygon count in the game. Lighting, shadows and other geometry-based effects should act as they would on high-polygon models too, I think (at least in Unigine Heaven you have shadows after tessellation where before you didn't have a single shadow).
Only the last stage of the tessellator, the domain shader, produces actual vertices. The real question is how well the single(?) domain shader in the Radeons keeps up with the 16 PolyMorph Engines (each with its own tessellation engine) in GT300.
That's one(?) domain shader per 32 stream processors in GT300 (and much closer to them) against one(?) for 320 5D units in the Radeon.
If you have too many shader programs that need the new vertex coordinates, the Radeon could end up being really bottlenecked.
Just my thoughts.
Zool - Wednesday, January 20, 2010 - link
Of course ATI's tessellation engine and NVIDIA's tessellation engines can be completely different fixed units. ATI's tessellation engine is surely more robust than a single tessellation engine in NVIDIA's 16 PolyMorph Engines, as it's designed to serve the entire shader array.
nubie - Tuesday, January 19, 2010 - link
They have been sitting on the technology since before the release of SLI.
In fact SLI didn't even have 2-monitor support until recently, when it should have had 4-monitor support all along.
nVidia clearly didn't want to expend the resources on making the software for it until it was forced, as it now is by AMD heavily advertising their version.
If you look at some of their professional offerings with 4-monitor output, it is clear that they have the technology. I am just glad they have acknowledged that it is a desirable feature.
I certainly hope the mainstream cards get 3-monitor output, it will be nice to drive 3 displays. 3 Projectors is an excellent application, not only for high-def movies filmed in wider than 16:9 formats, but games as well. With projectors you don't get the monitor bezel in the way.
Enthusiast multi-monitor gaming goes back to the Quake II days, glad to see that the mainstream has finally caught up (I am sure the geeks have been pushing for it from inside the companies.)
wwwcd - Tuesday, January 19, 2010 - link
Maybe I'll live to see whether Nvidia still beats AMD/ATI, be it as the price/performance leader or even in outright performance regardless of price! :)
AnnonymousCoward - Tuesday, January 19, 2010 - link
They should make a GT10000, in which the entire 300mm wafer is 1 die. 300B transistors. Unfortunately you have to mount the final thing to the outside of your case, and it runs off a 240V line.
Stas - Tuesday, January 19, 2010 - link
all that hype just sounds awful for nVidia. I hope they don't leave us for good. I like AMD but I like competition more :)
SmCaudata - Monday, January 18, 2010 - link
The 50% larger die size will kill them. Even if the reports of lower yields are false, they will have to accept a much smaller profit margin on their cards than AMD to stay competitive. As it is, the 5870 can run nearly any game on a 30" monitor with everything turned up at a playable rate. The target audience for anything more than a 5870 is absurdly small. If Nvidia does not release a mainstream card, the only people who are going to buy this beast are the people who have been looking for a reason not to buy an AMD card all along.
In the end I think Nvidia will lose even more market share this generation. Across the board AMD has the fastest card at every price point. That will not change, and with the dual-GPU card already out from ATI it will be a long time before Nvidia has the highest-performing card, because I doubt they will release a dual-GPU card at launch if they are having thermal issues with a single-GPU card.
BTW... I've only ever owned Nvidia cards but that will likely change at my next system build even after this "information."
Yojimbo - Monday, January 18, 2010 - link
What do you mean by "information"?
SmCaudata - Monday, January 18, 2010 - link
Heh. Just that it was hyped up so much and we really didn't get much other than some architectural changes. I suppose that maybe this is really interesting to some, but I've seen a lot of hardware underperform early spec-based guesses.
The Anandtech article was great. The information revealed by Nvidia was just okay.
qwertymac93 - Monday, January 18, 2010 - link
I really hope Fermi doesn't turn into "Nvidia's 2900 XT": late, hot, and expensive. While I doubt it will be slow by any stretch of the imagination, I hope it isn't TOO hot and heavy to be feasible. I like AMD, but Nvidia failing is not good for anybody. Higher prices (as we've seen) and slower advancements in technology hurt EVERYONE.
alvin3486 - Monday, January 18, 2010 - link
Nvidia GF100 pulls 280W and is unmanufacturable, details it won't talk about publicly.
swaaye - Monday, January 18, 2010 - link
Remember that they talked all about how wondrous NV30 was going to be too. This is marketing folks. They can have the most amazing eye popping theoretical paper specs in the universe, but if it can't be turned into something affordable and highly competitive, it simply doesn't matter.
Put another way, they haven't been delaying it because it's so awesome the world isn't ready for it. Look deeper. :D
blowfish - Monday, January 18, 2010 - link
This was a great read, but it made my head hurt!
I wonder how it will scale, since the bulk of the market is for more mainstream cards. (The article mentioned lesser derivatives having fewer PolyMorph Engines.)
Can't wait to see reviews of actual hardware.
Zool - Monday, January 18, 2010 - link
I am still curious why Nvidia is pushing geometry this hard. At 850 MHz, Cypress should be able to set up 850 million polygons/s at one triangle per clock. That's a maximum of about 14 million per frame at 60 fps, which is quite unrealistic. That's more than 7 triangles per pixel at 1920x1050. Generating that much geometry per pixel is quite a waste and also bottlenecks performance. You just won't see the difference.
That's why AMD/ATI is also pushing adaptive tessellation, which can reduce the tessellation level with a compute shader LOD pass to keep a reasonable number of triangles per pixel.
I can push the tessellation factor to 14 in the DX9 ATI tessellation SDK demo and reach 100 fps, or put it on 3 and reach 700+ fps with almost zero visible difference.
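A quick Python sketch of that arithmetic, using the figures assumed above (850 MHz core clock, one triangle per clock, 1920x1050 at 60 fps); these are back-of-the-envelope assumptions, not measured numbers:

    # Peak-geometry arithmetic from the comment above (assumed figures, not measurements)
    clock_hz = 850e6            # assumed Cypress core clock
    tris_per_clock = 1          # assumed setup rate of one triangle per clock
    fps = 60
    pixels = 1920 * 1050

    tris_per_second = clock_hz * tris_per_clock      # ~850 million triangles/s
    tris_per_frame = tris_per_second / fps           # ~14 million triangles/frame
    tris_per_pixel = tris_per_frame / pixels         # ~7 triangles per pixel

    print(f"{tris_per_second / 1e6:.0f} Mtri/s, "
          f"{tris_per_frame / 1e6:.1f} Mtri/frame, "
          f"{tris_per_pixel:.1f} tri/pixel at 1920x1050 @ 60 fps")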
Zool - Tuesday, January 19, 2010 - link
Also want to note that tessellation alone is not enough; you always use displacement mapping too. Not to mention the whole rendered scene becomes more shader-demanding (shadows, lighting), so too much tessellation (like in Unigine Heaven, where it's applied to almost everything, since without tessellation even the stairs are flat) can really cause a big shader hit.
If you compare the graphics quality before and after tessellation in Unigine Heaven, I would rather ask what the hell is taking away that much performance without tessellation, as everything looks as flat as a 10-year-old engine.
The increased geometry setup should bring little to no performance advantage for GF100 by itself; the main fps push comes from the much more efficient shaders with the new cache architecture, and of course more than double the shader count.
Zool - Tuesday, January 19, 2010 - link
There are still plenty of questions. Like how tessellation affects MSAA with increased geometry per pixel. Also, the flat stairs in Unigine (very plastic and realistic after tessellation and displacement mapping): would they work with collision detection as they appear after tessellation, or as before, completely flat and somewhere else in 3D space? The same goes for some PhysX effects. Unigine Heaven is more a showcase of tessellation and what can be done with it than a real game engine.
marraco - Monday, January 18, 2010 - link
Far Cry 2's Ranch Small, and the whole integrated benchmark, constantly reads the hard disk, so it is dependent on HD speed.
That's not unfair, since FC2 streams textures from the hard disk all the time, making the game freeze constantly, even on better computers.
I wish to see that benchmark run with and without an SSD.
Zool - Monday, January 18, 2010 - link
I also want to note that with the stream of FPS/third-person shooters/RTS/racing games that all look the same, upgrading the graphics card sometimes doesn't make much sense these days.
Can anyone make a game that will use PC hardware and won't end in running and shooting at each other from a first- or third-person view? Dragon Age was a pretty weak, overhyped RPG.
Suntan - Monday, January 18, 2010 - link
Agreed. That is one of the main reasons I've lost interest in PC gaming. Ironically though, my favorite console games on the PS3 have been the two Uncharted games...
-Suntan
mark0409mr01 - Monday, January 18, 2010 - link
Does anybody know if Fermi, GF100 or whatever it's going to be called has support for bitstreaming of HD audio codecs?
Also, do we know anything else about the video capabilities of the new card? There doesn't really seem to have been much mentioned about this.
Thanks
Slaimus - Monday, January 18, 2010 - link
Seeing how the GF100 chip has no display components at all on-chip (RAMDAC, TMDS, DisplayPort, PureVideo), they will probably be using an NVIO chip like the GT200. Would it not be possible to just use multiple NVIO chips to scale with the number of display outputs?
Ryan Smith - Wednesday, January 20, 2010 - link
If it's possible, NVIDIA is not doing it. I asked them about the limit on display outputs, and their response (which is what brought about the comments in the article) was that GF100 cards were already too late in the design process after they greenlit Surround to add more display outputs.
I don't have more details than that, but the implication is that they need to bake support for more displays into the GPU itself.
Headfoot - Monday, January 18, 2010 - link
Best comment for the entire page, I am wondering the same thing.
Suntan - Monday, January 18, 2010 - link
Looking at the image of the chip on the first page, it looks like a miniature of a vast city complex. Man, when are they going to remake "TRON"...
...although, at the speeds that chips are running nowadays, the whole movie would be over in a quarter of a second...
-Suntan
arnavvdesai - Monday, January 18, 2010 - link
In your conclusion you mentioned that the only thing which would matter would be price/performance. However, from the article I wasn't really able to make out a couple of things. When NVIDIA says they can make something look better than the competition, how would you quantify that?
I am a gamer & I love beautiful graphics. It's one of the reasons I still sometimes buy games for PCs instead of consoles. I have a 5870 & a 1080p 24" monitor. I would however consider buying this card if it made my games look better. After a certain number (60fps) I really only care about beautiful graphics. I want no grass to look like paper or jaggies to show on distant objects. Also, will game makers take advantage of this? Unlike previous generations, game developers are very deeply tied to the current console market. They have to make sure the game performs admirably on current-day consoles, which are at least 3-5 years behind their PC counterparts, so what incentive do they have to try and advance graphics on the PC when there aren't enough people buying them? Looking at current games and frankly just playing them, other than an obvious improvement in framerate, I cannot notice any visual improvements.
Coming back to my question on architecture: will this tech being built by Nvidia help improve the visual quality of games without additional, or with less additional, work from the game development studios?
Ryan Smith - Wednesday, January 20, 2010 - link
At this point I'm not sure where that would be, and part of that is diminishing returns. Tessellation will return better models, but adding polygons will result in diminishing returns. We're going to have to see what games do in order to see if the extra geometry that GF100 is supposed to be able to generate can really result in a noticeable difference.
Will game makers take advantage of it? That's the million-dollar question right now. NVIDIA is counting on them doing so, but it remains to be seen just how many devs are going to make meaningful use of tessellation (beyond just n-patching things for better curves), since DX11 game development is so young.
Consoles certainly have a lot to do with it. One very real possibility is that the bulk of games continue to be at the DX9 level until the next generation of consoles hits with DX11-like GPUs. I'll answer the rest of this in your next question.
The good news is that it takes very little work. Game assets are almost always designed at a much greater level of detail than what they ship at. The textbook example is Doom3, where the models were designed on the order of 1mil polygons; they needed to be designed that detailed in order to compute proper bump maps and parallax maps. Tessellation and the displacement map is just one more derived map in that regard - for the most part you only need to export an appropriate displacement map from your original assets, and NV is counting on this.
The only downsides to NV's plan are that: 1) Not everything is done at this high of a detail level (models are usually highly detailed, the world geometry not so much), and 2) Higher quality displacement maps aren't "free". Since a game will have multiple displacement maps (you have to MIP-chain them just like you do any other kind of map), a dev is basically looking at needing to include at least 1 more level that's even bigger than the others. Conceivably, not everyone is going to have extra disc space to spend on such assets, although most games currently still have space to spare on a DVD-9, so I can't quantify how much of a problem that might be.
From my perspective, unless they can deliver better than 5870 performance at a reasonable price, then their image quality improvements aren't going to be enough to seal the deal. If they can meet those two factors however, then yes, image quality needs to be factored in to some degree.
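To put rough numbers on the disc-space point, here is a small Python sketch; the 2048x2048 base resolution and the 16-bit single-channel height format are illustrative assumptions, not figures from NVIDIA or the article:

    # Illustrative sketch: adding one extra (higher-resolution) displacement map level
    # roughly quadruples the storage, because each mip level is 4x the one below it.
    BYTES_PER_TEXEL = 2          # assume 16-bit single-channel height data
    BASE = 2048                  # assumed shipped top-level resolution

    def chain_bytes(top):
        """Total bytes for a full mip chain topping out at a top x top level."""
        total, size = 0, top
        while size >= 1:
            total += size * size * BYTES_PER_TEXEL
            size //= 2
        return total

    shipped = chain_bytes(BASE)           # chain topping out at 2048x2048
    one_more = chain_bytes(BASE * 2)      # add a single more detailed level on top
    print(f"{shipped / 2**20:.1f} MiB -> {one_more / 2**20:.1f} MiB "
          f"(~{one_more / shipped:.1f}x)")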
FITCamaro - Monday, January 18, 2010 - link
It will be fast. But from the size of it, it's going to be expensive as hell.
I question how much success Nvidia will have with yet another fast but hot and expensive card. Especially with the entire world in recession.
beginner99 - Monday, January 18, 2010 - link
Sounds nice, but I doubt it's useful yet. DX11 will probably take at least 1-2 years to take off, and only then could the geometry power be useful. Meaning they could have easily waited a generation longer.
Power consumption will probably be the deciding factor. The new Radeons do rather well in that area.
But anyway, I'm gonna wait. Unless it is complete crap, it will at least help push Radeon prices south, even if you don't buy one.
just4U - Monday, January 18, 2010 - link
On AMD pricing: it seems pretty fair for the 57xx line. Cheaper overall than the 4850 and 4870 at their launches, with similar performance and added DX11 features.
It would be nice to see the 5850 and 5870 priced about one third cheaper, but here in Canada the cards are always sold out or of very limited stock, so I guess there is some justification for the higher pricing.
I still can't get a 275 cheap either. It's priced 30-40% higher than the 4870.
The only card(s) I've purchased so far are 5750s, as I feel the last-gen products are still viable at their current pricing... and I buy a fair amount of video cards (20-100 per year).
solgae1784 - Monday, January 18, 2010 - link
Let's just hope this GF100 doesn't become another disaster that was "Geforce FX".
setzer - Monday, January 18, 2010 - link
While on paper these specs look great for the high-end market (>500€ cards), how much will the mainstream market lose, as in the cards that sell around the 150-300€ bracket, which coincidentally are the cards most people tend to buy? Nvidia tends to scale down the specifications, but how much will they be scaled down? What is the interest of the new IQ improvements if you can only use them on high-end cards because the mainstream cards can't handle them?
The 5-series Radeons are similar: the new generation only has appeal if you go for the 58xx++ cards, which are overpriced. If you already have a 4850 you can hold off buying a new card for at least one extra year. Take the 5670: it has DX11 support but not the horsepower to use it effectively, neutering the card from the start as far as DX11 goes.
So even if Nvidia goes with a March launch of GF100, I'm guessing it will not be until June or July that we see the GeForce 10600GT (or GX600GT, pun on the ATI 10000 series :P), which will just have the effect of Radeon prices staying where they are (high) and not where they should be in terms of performance (slightly on par with the HD 4000 series).
Beno - Monday, January 18, 2010 - link
Page 2 isn't working.
Zool - Monday, January 18, 2010 - link
It will be interesting how much of the geometry performance turns out to be true in the end from all this hype. I wouldn't put my hand in the fire for Nvidia's PR slides and in-house demos, like the PR graph with a 600% tessellation performance increase over the ATI card. It will surely have some dark sides too, like everything; nothing is free. Until real benchmarks, you can't trust PR graphs too much these days.
haplo602 - Monday, January 18, 2010 - link
This looks similar to what the Riva TNT used to be. Nvidia was promising everything including a cure for cancer. It turned out to be barely better than 3dfx at the time because of clock/power/heat problems.
Seems Fermi will be a big bang in the workstation/HPC markets. Gaming, not so much.
DominionSeraph - Monday, January 18, 2010 - link
Anyone with at least half a brain had a TNT. Tech noobs saw "Voodoo" and went with the gimped Banshee, and those with money to burn threw in dual Voodoo 2's.
How does this at all compare to Fermi, whose performance will almost certainly not justify its price? The 5870's doesn't, not with the 5850 in town. Such is the nature of the bleeding edge.
Do you just type things out at random?
marc1000 - Tuesday, January 19, 2010 - link
Hey, the Banshee was fine! I had one because at that time the 3dfx API was better than DirectX. But suddenly everything became DX compatible, and that was one thing 3dfx GPUs could not do... then I replaced that Banshee with a Radeon 9200, later a Radeon X300 (or something), then a Radeon 3850, and now a Radeon 5770. I'm always in for the mainstream, not the top of the line, and Nvidia has not been paying enough attention to the mainstream since the GeForce FX series...
Zool - Monday, January 18, 2010 - link
The question is when they will come out with mid-range variants. The GF100 seems to be the 448SP variant, and the 512SP card will only come after the A4 revision, or who knows.
http://www.semiconductor.net/article/438968-Nvidia...
The interesting part of that article is the graph which shows the exponential increase in leakage power at 40nm and below (which of course hurts more if you have a big chip and different clocks to maintain).
They will have even more problems now that DX11 cards will only be the GT300 architecture, so no rebrand options for mid-range and lower.
For the consumer GF100 will be great if they can buy it somewhere in the future, but Nvidia will bleed more on it than on the GT200.
QChronoD - Monday, January 18, 2010 - link
Maybe I'm missing something, but it seems like PC gaming has lost most of its value in the last few years. I know that you can run games at higher resolutions and probably faster framerates than you can on consoles, but it will end up costing more than all 3 consoles combined to do so. It just seems to have gotten too expensive for the marginal performance advantage.
That being said, I bet that one of these would really crank through Collatz or GPUGRID.
GourdFreeMan - Monday, January 18, 2010 - link
I certainly share that sentiment. The last major graphical showcase we had was Crysis in 2007. There have been nice looking PC exclusive titles (Crysis Warhead, Arma 2, the Stalker franchise) since then, but no significant new IP with new rendering engines to take advantage of new technology.
If software publishers want our money, they are going to have to do better. Without significant GPGPU applications for the mainstream consumer, GPU manufacturers will eventually suffer as well.
dukeariochofchaos - Monday, January 18, 2010 - link
No, I think you're totally correct, from a certain point of view.
I had the thought that DX9 support is probably more than enough for console games, and why would developers pump money into DX11 support for a product that generates most of its profits on consoles?
Obviously, there is some money to be made in the PC game sphere, but is it really enough to drive game developers to sink money into extra quality just for us?
At least NV has made a product that can be marketed now, and into the future, for design/enterprise solutions. That should help them extract more of the value out of their R&D if there are very few DX11 games during the lifespan of Fermi.
Calin - Monday, January 18, 2010 - link
If Fermi works well, NVidia is in a great place for the development of their next GPU - they'll only need to update some things here and there, based mostly on where the card's performance lacks (improve this, improve that, reduce this, reduce that). Also, they are in a very good place for making lower-end cards based on Fermi (cut everything in two or four, no need to redesign the previously fixed-function blocks).
As for AMD... their current design is in the works and probably too advanced for big changes, so their real Fermi-killer won't come sooner than a year or so (that is, if Fermi proves to be as great a success as NVidia wants it to be).
toyota - Monday, January 18, 2010 - link
What I have saved on games this year has more than paid for the difference between the price of a console and my PC.
Stas - Tuesday, January 19, 2010 - link
That ^^^^^^^
Besides, with Steam/D2D/Impulse there is new breath in PC gaming: constant sales on great games, automatic updates, active support, forums full of people, all integrated with a virtual community (profiles, chats, etc.), plus a place to release demos, trailers, etc. I was worried about PC gaming 2-3 years ago, but I'm absolutely confident that it's coming back better than ever.
deeceefar2 - Monday, January 18, 2010 - link
Are the screenshots from Left 4 Dead 2 missing at the end of page 5?
[quote]
As a consequence of this change, TMAA’s tendency to have fake geometry on billboards pop in and out of existence is also solved. Here we have a set of screenshots from Left 4 Dead 2 showcasing this in action. The GF100 with TMAA generates softer edges on the vertical bars in this picture, which is what stops the popping from the GT200.
[/quote]
Ryan Smith - Monday, January 18, 2010 - link
Whoops. Fixed.
FlyTexas - Monday, January 18, 2010 - link
I have a feeling that nVidia is taking the long road here...
The past 6 months have been painful for nVidia, however I think they are looking way ahead. At its core, the 5000 series from AMD is really just a supersized 4000 series. Not a bad thing, but nothing new either (DX11 is nice, but that'll be awhile, and multiple monitors are still rare).
Games have all looked the same for years now. CPU and GPU power have gone WAY up in the past 5 years, but too much is still developed for DX9 (X360/PS3 partly to blame, as is Vista's poor adoption), and I suspect that even the 5000 series is really still designed around DX9 and games meant for it with a few "enhancements".
This new chip seems designed for DX11 and much higher detailed graphics. Polygon counts can go up with this, the number of new details can really shine, but only once games are designed from scratch for it. From that point, the 6 month wait isn't a big deal, it'll be another few years before games are really designed from scratch for DX11 ONLY. Otherwise you have DX9 games with a few "enhancements" that don't add to gameplay.
It seems like we are really skipping DX10 here, partly due to Vista's poor adoption, partly due to XP not being able to use DX10. With Windows 7 being a success and DX11 backported to Vista, I think in the next 2-3 years you'll finally see most games come out that really require Vista/7 because they will require DX10/11.
Of course, my 260GTX still runs everything I throw at it, so until games get more complex or something else changes, I see no reason to upgrade. I thought about a 5870 as an upgrade, but why? Everything already runs fast enough, what does it get me other than some headroom? If I was still on a 8800GT, it would make sense, but I'd rather wait for nVidia to launch so the prices come down.
PorscheRacer - Tuesday, January 19, 2010 - link
Well, then there's the fact ATI designed their 2000 series (and 3000 and 4000 series) to comply with the full DirectX 10 specification. NVIDIA didn't have the chips required for this spec, and talked Microsoft into castrating DX10 by only adding in a few things. Tessellation was notably left out. ATI was hung out to dry, with performance and features wasted on die. They finally got DX10.1 later on, but the damage was done.
Sure, people complained about Vista, mostly gamers as games ran slower, but I wonder how those games would have been if DX10 had run at the full spec (which was only marginally below the DX11 of today)?
Scali - Wednesday, January 27, 2010 - link
I think you need to read this, and reconsider your statement:
http://scalibq.spaces.live.com/blog/cns!663AD9A4F9CB0661!194.entry
jimhsu - Monday, January 18, 2010 - link
I made this post in another forum, but I think it's relevant here:
---
Yes, I'm beginning to see this [games becoming less GPU limited and more CPU limited] with more mainstream games (to repeat, Crysis is NOT a mainstream game). FLOP wise, a high end video card (i.e. 5970 at 5 TFLOP) is something like 100 TIMES the performance of a high end CPU (i7 at 50 GFLOPS).
In comparison, during the 2004 days, we had GPUs like the 6800 Ultra (54 GFLOP) and P4's (6 GFLOP) (historical data here: http://forum.beyond3d.com/showthread.php?t=51677). That's 9X the performance. We've gone from 9X to 100X the performance in a matter of 5 years. No wonder few modern games are actually pushing modern GPUs (requiring people who want to "get the most" out of their high powered GPUs to go for multiple screens, insane AA/AF, insane detail settings, complex shaders, etc)
I know this is a horrible comparison, but still - it gives you an idea of the imbalance in performance. This kind of reminds me of the whole hard drive capacity vs. transfer rate argument. Today's 2 TB monsters are actually not much faster than the few GB drives at the turn of the millennium (and even less so latency wise).
Personally, I think the days of GPU bound (for mainstream discrete GPU computing) closed when Nvidia's 8 series launched (the 8800GTX is perhaps the longest-lived video card ever made). And in general, when the industry adopted programmable compute units (aka DirectX 10).
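The ratios quoted above work out as follows; a trivial Python check using the same peak-FLOPS figures as quoted in the comment, which are themselves rough marketing-level numbers rather than measured throughput:

    # GPU-to-CPU peak FLOPS ratios, using the figures quoted in the comment above
    gpu_2004, cpu_2004 = 54e9, 6e9     # GeForce 6800 Ultra vs. Pentium 4 (as quoted)
    gpu_2009, cpu_2009 = 5e12, 50e9    # Radeon HD 5970 vs. Core i7 (as quoted)

    print(f"2004: ~{gpu_2004 / cpu_2004:.0f}x, 2009: ~{gpu_2009 / cpu_2009:.0f}x")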
AznBoi36 - Tuesday, January 19, 2010 - link
Actually the Radeon 9700/9800 Pro had a pretty long life too. The 9700 Pro I bought in 2002/2003 had lasted me all the way to early 2007, which was when I then bought a 8800GTS 640mb. 4 years is pretty good. It could have lasted longer, but then I was itching for a new platform and needed to get a PCI-Express card (the Radeon was AGP).
RJohnson - Monday, January 18, 2010 - link
Sorry you lost all credibility when you tried to spin this bullsh*#t "Today's 2 TB monsters are actually not much faster than the few GB drives at the turn of the millennium"
Go try and run your new rig off one of those old drives, come back and post your results in 2 hours when your system finally boots.
jimhsu - Monday, January 18, 2010 - link
A fun chart. Note the performance disparity.
http://i65.photobucket.com/albums/h204/killer-ra/V...
jimhsu - Monday, January 18, 2010 - link
Disclosure: I'm still on a 8800 GTS 512, and I am in no pressure to upgrade right now. While a 58xx would be nice to have, on a single monitor I really have no need to upgrade. I may look into going i7 though.
dentatus - Monday, January 18, 2010 - link
If something works well for you then there is no real reason (or need) to upgrade.
I still run an 8800 ultra, it still runs many games well on a 22 inch monitor. The GT200 was really only a 50% boost over the 8 series on average. For comparison, I bought a second hand ultra for $60, transplanted both of them into an i7 based system and this really produced a significant boost over a GTX285 in the games I liked; about 25% more performance- roughly equivalent to HD5850, albeit not always as smooth.
It would be good to upgrade to a single GPU that is more than double the performance of this kind of setup. But a HD5800 series card is not in that league, and it remains to be seen if the GF100 is.
dentatus - Monday, January 18, 2010 - link
I agree this chip does seem designed around new or upcoming features. Many architectural shortcomings of the GT200 chip seem to have been addressed and worked around to get usable performance (like tessellation) for new API features.
Anyway, to be pragmatic about things, Nvidia's history leaves much to be desired; performance promised and performance delivered are very variable. HardOCP mentioned the 5800 Ultra launch as a con; there is also the G80 launch on the flip side.
A GPU's theoretical performance and the expectations hanging around it are nothing to make choices by - wait for the real proof. Anyone recall the launch of the 'monstrous' 2900XT? A toothless beast, that one.
DanNeely - Monday, January 18, 2010 - link
For the benefit of myself and everyone else who doesn't follow gaming politics closely, what is "the infamous Batman: Arkham Asylum anti-aliasing situation"?
sc3252 - Monday, January 18, 2010 - link
Nvidia helped get AA working in Batman, and it also works on ATI cards. If the game detects anything besides an Nvidia card it disables AA. The reason some people are angry is that when ATI helps out with games it doesn't limit who can use the feature, at least that's what they (AMD) claim.
san1s - Monday, January 18, 2010 - link
The problem was that Nvidia did not do QA testing on ATI hardware.
Meghan54 - Monday, January 18, 2010 - link
And Nvidia shouldn't have, since Nvidia didn't develop the game.
On the other hand, you can be quite certain that the devs did run the game on ATI hardware, but locked out the "preferred" AA path because of the money Nvidia invested in the game.
And that can be plainly seen by the fact that when the game is "hacked" to trick it into seeing an Nvidia card installed despite an ATI card being used, AA works flawlessly... and the ATI cards end up faster than current Nvidia cards... the game is exposed for what it is: purposely crippled to favor one brand of video card over another.
But the nvidiots seem to not mind this at all. Yet this is akin to Intel writing their compiler to make AMD CPUs run slower or worse on programs compiled with the Intel compiler.
Read about the debacle Intel's now suffering over that, and how the outrage is fairly universal. Now, you'd think Nvidia would suffer the same nearly universal outrage for intentionally crippling a game's function to favor one brand of card over another, yet nvidiots make apologies and say "ATI cards weren't tested." I'd like to see that as a fact instead of conjecture.
So, one company cripples the function of another company's product and the world's up in arms, screaming "Monopolistic tactics!!!" and "Fine them to hell and back!"; another company does essentially the same thing and it gets a pass.
Talk about bias.
Stas - Tuesday, January 19, 2010 - link
If nV continues like this, it will turn around on them. It took MANY years for the market guards to finally say, "Intel, quit your sh*t!" and actually do something about it. Don't expect immediate retaliation in a multibillion dollar world-wide industry.
san1s - Monday, January 18, 2010 - link
"yet nvidiots make apologies and say "Ati cards weren't tested." I'd like to see that as a fact instead of conjecture. "here you go
http://www.legitreviews.com/news/6570/">http://www.legitreviews.com/news/6570/
"On the other hand, you can be quite certain that the devs. did run the game on Ati hardware but only lock out the "preferred" AA design because of nvidia's money nvidia invested in the game. "
proof? that looks like conjecture to me. Nvidia says otherwise.
Amd doesn't deny it either.
http://www.bit-tech.net/bits/interviews/2010/01/06...
they just don't like it
And please refrain from calling people names such as "nvidiot," it doesn't help portray your image as unbiased.
MadMan007 - Monday, January 18, 2010 - link
Oh for gosh sakes, this is the 'launch' and we can't even have a paper launch where at least reviewers get hardware? This is just more details for the same crap that was 'announced' when the 5800s came out. Poor show NV, poor show.
bigboxes - Monday, January 18, 2010 - link
This is as close to a paper launch as I've seen in a while, except that there is not even an unattainable card. Gawd, they are gonna drag this out a lonnnnngg time. Better start saving up for that 1500W psu!
Adul - Monday, January 18, 2010 - link
I suppose this is a vaporlaunch then.
chizow - Monday, January 18, 2010 - link
Looks like Nvidia G80'd the graphics market again by completely redesigning major parts of their rendering pipeline. Clearly not just a doubling of GT200; some of the changes are really geared toward the next gen of DX11 and PhysX driven games.
One thing I didn't see mentioned anywhere was HD sound capabilities similar to AMD's 5 series offerings. I'm guessing they didn't mention it, which makes me think it's not going to be addressed.
mm2587 - Monday, January 18, 2010 - link
for nvidia to "g80" the market again they would need parts far faster then anything amd had to offer and to maintain that lead for several months. The story is in fact reversed. AMD has the significantly faster cards and has had them for months now. gf100 still isn't here and the fact that nvidia isn't signing the praises of its performance up and down the streets is a sign that they're acceptable at best. (acceptable meaning faster then a 5870, a chip that's significantly smaller and cheaper to make)chizow - Monday, January 18, 2010 - link
Nah, they just have to win the generation, which they will when Fermi launches. And when I say "generation", I mean the 12-16 month cycles dictated by process node and microarchitecture. It was similar with G80: R580 had the crown for a few months until G80 obliterated it. Even more recently with the 4870X2 and GTX 295 - AMD was first to market by a good 4 months but Nvidia still won the generation with the GTX 295.
FaaR - Monday, January 18, 2010 - link
Win schmin.
The 295 ran extremely hot, was much MUCH more expensive to manufacture, and the performance advantage in games was negligible for the most part. No game is so demanding the 4870 X2 can't run it well.
The geforce 285 is at least twice as expensive as a radeon 4890, its closest competitor, so how you can say Nvidia "won" this round is beyond me.
But I suppose with fanboy glasses on you can see whatever you want to see. ;)
beck2448 - Monday, January 18, 2010 - link
It's amazing to watch ATI fanboys revise history.
The 295 smoked the competition and ran cooler and quieter. Fermi will inflict another beatdown soon enough.
chizow - Monday, January 18, 2010 - link
Funny, the 295 ran no hotter (and often cooler) with a lower TDP than the 4870X2 in virtually every review that tested temps, and was faster as well. Also, the GTX 285 didn't compete with the 4890; the 275 did, in both price and performance.
It's obvious Nvidia won the round, as these points are historical facts based on mounds of evidence. I suppose with fanboy glasses on you can see whatever you want to see. ;)
Paladin1211 - Monday, January 18, 2010 - link
Hey kid, sometimes less is more. You don't need to post that much just to say "nVidia wins, and will win again". This round AMD has won, with 2 million cards soaking up the graphics market. You can't change this; neither could nVidia.
Just come out and buy a Fermi, which is 15-20% faster than a HD 5870, for $500-$600. You only have to wait 3 months, and can save some bucks until then. I have a HD 5850 here and I'm waiting for a Tegra 2 based smartphone, not Fermi.
Calin - Tuesday, January 19, 2010 - link
Both Tegra 2 and Fermi are extraordinary products - if what NVidia says about them is true. Unfortunately, it doesn't seem like either of them is a perfect fit for the gaming desktop.
Calin - Monday, January 18, 2010 - link
You don't win a generation with a very-high-end card - you win a generation with a mainstream card (as this is where most of the profits are). Also, low-end cards are very high-volume, but the profit from each unit is very small.
You might win the bragging rights with the $600, top-of-the-line, two-in-one cards, but they don't really have a market share.
chizow - Monday, January 18, 2010 - link
But that's not how Nvidia's business model works, for the very reasons you stated. They know their low-end cards are very high-volume and low margin/profit and will sell regardless.
They also know people buying in these price brackets don't know about or don't care about features like DX11 and, as the 5670 review showed, such features are most likely a waste on such low-end parts to begin with (a 9800GT beats it pretty much across the board).
The GPU market is broken up into 3 parts, High-end, performance and mainstream. GF100 will cover High-end and the top tier in performance with GT200 filling in the rest to compete with the lower-end 5850. Eventually the technology introduced in GF100 will diffuse down to lower-end parts in that mainstream segment, but until then, Nvidia will deliver the cutting edge tech to those who are most interested in it and willing to pay the premium for it. High-end and performance minded individuals.
dentatus - Monday, January 18, 2010 - link
Absolutely. Really, the GT200/RV770 generation of DX10 cards was inarguably 'won' (i.e. most profitable) for AMD/ATI by cards like the HD4850. But the overall performance crown (i.e. highest in-generation performance) was won off the back of the GTX295 for nvidia.
But I agree with chizow that nvidia has ultimately been "winning" (the performance crown) each generation since the G80.
chizow - Monday, January 18, 2010 - link
Not sure how you can claim AMD "inarguably" won DX10 with 4850 using profits as a metric. How many times did AMD turn a profit since RV770 launched? Zero. They've posted 12 straight quarters of losses last time I checked. Nvidia otoh has turned a profit in many of those quarters and most recently Q3 09 despite not having the fastest GPU on the market.
Also, the fundamental problem people don't seem to understand with regard to AMD and Nvidia die size and product distribution is that they overlap completely different market segments. Again, this simply serves as a referendum in the differences in their business models. You may also notice these differences are pretty similar to what AMD sees from Intel on the CPU side of things....
Nvidia GT200 die go into all high-end and mainstream parts like GTX 295, 285, 275, 260 that sell for much higher prices. AMD RV770 die went into 4870, 4850, and 4830. The latter two parts were competing with Nvidia's much cheaper and smaller G92 and G96 parts. You can clearly see that the comparison between die/wafer sizes isn't a valid one.
AMD has learned from this btw, and this time around it looks like they're using different die for their top tier parts (Cypress) and their lower tier parts (Redwood, Cedar) so that they don't have to sell their high-end die at mainstream prices.
Stas - Tuesday, January 19, 2010 - link
[quote]Not sure how you can claim AMD "inarguably" won DX10 with 4850 using profits as a metric. How many times did AMD turn a profit since RV770 launched? Zero. They've posted 12 straight quarters of losses last time I checked. Nvidia otoh has turned a profit in many of those quarters and most recently Q3 09 despite not having the fastest GPU on the market.[/quote]
AMD also makes CPUs... they also lost market share due to Intel's high-end domination... they lost money on ATI... If it wasn't for the success of the HD4000 series, AMD would've been in deep shit. Just think before you post.
Calin - Tuesday, January 19, 2010 - link
Hard to make a profit paying the rates of a 5 billion credit - but if you want to take it this way (total profits), why wouldn't we take total income?
AMD/ATI:
PERIOD ENDING 26-Sep-09 27-Jun-09 28-Mar-09 27-Dec-08
Total Revenue 1,396,000 1,184,000 1,177,000 1,227,000
Cost of Revenue 811,000 743,000 666,000 1,112,000
Gross Profit 585,000 441,000 511,000 115,000
NVidia
PERIOD ENDING 25-Oct-09 26-Jul-09 26-Apr-09 25-Jan-09
Total Revenue 903,206 776,520 664,231 481,140
Cost of Revenue 511,423 619,797 474,535 339,474
Gross Profit 391,783 156,723 189,696 141,666
Not looking so good for the "winner of the generation", though. As for the die size and product distribution, all I'm looking at is the retail video card offer, and every price bracket I choose have both NVidia and AMD in it.
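For what it's worth, the gross margins implied by those quarterly figures, computed directly from the numbers exactly as quoted above (in thousands of USD), come out roughly as follows:

    # Gross margin = gross profit / total revenue, per quarter, from the figures above
    amd_ati = {"revenue": [1396000, 1184000, 1177000, 1227000],
               "gross_profit": [585000, 441000, 511000, 115000]}
    nvidia = {"revenue": [903206, 776520, 664231, 481140],
              "gross_profit": [391783, 156723, 189696, 141666]}

    for name, q in (("AMD/ATI", amd_ati), ("NVIDIA", nvidia)):
        margins = [gp / rev for gp, rev in zip(q["gross_profit"], q["revenue"])]
        print(name + ":", " ".join(f"{m:.0%}" for m in margins))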
knutjb - Wednesday, January 20, 2010 - link
You missed my point. I wasn't talking about AMD as a whole, I was talking about ATI as a division within AMD. If a company bleeds that much and still survives, some part of the company must be making some money, and that is the ATI division. ATI is making money. Your macro numbers mean zip.
The model ATI is using is putting out competitive cards from a company, AMD, that is bleeding badly. Which generation of card is easier to sell: the new and improved one with more features, useful or not, or the last-generation chip?
beck2448 - Tuesday, January 19, 2010 - link
Those numbers are ludicrous. AMD hasn't made a profit in years. ATI's revenue is about 30% of Nvidia's.
knutjb - Monday, January 18, 2010 - link
ATI is what has been floating AMD with its profits. ATI has decided to make smaller, incremental development steps that lower production costs.
Nvidia takes a long time to create a monolithic monster that requires massive amounts of capital to develop. They will not recoup this investment off gamers alone, because most don't have that much cash to put one of those cards in their machines. It is needed for marketing so they can push lower-level cards, implying superiority, real or not; they are a heavy marketing company. This chip is directed at their GPU server market, and that is where they hope to make their money, hoping it can do both really well.
ATI, on the other hand, by making smaller steps at a faster cycle of product development, has focused on the performance/mainstream market. With lower development costs they can turn out new cards that pay back their development costs quicker, allowing them to put that capital back into new products. Look at the 4890 and 4870. They both share similar architecture, but the 4890 is a more refined chip. It was a product that allowed ATI to keep Nvidia reacting to ATI's products.
Nvidia's marketing requires them to have the fastest card on the market. ATI isn't trying to keep the absolute performance crown but to hold onto the price/performance crown. Every time they put out a slightly faster card it forces Nvidia to respond, and Nvidia receives lower profits from having to drop card prices. I don't think this chip will be able to repeat the 8800 model, because AMD/ATI is now on stronger financial footing than they have been in the past couple of years, and Nvidia being late to market is helping ATI line their pockets with cash. The 5000 series is just marginally better, but it is better than Nvidia's current offerings.
Will Nvidia release just a single high end card or several tiers of cards to compete across the board? I don't think one card will really help the bottom line over the longer term.
StormyParis - Monday, January 18, 2010 - link
I'm not sure what "winning" means, nor, really what a generation is.you can win on highest performance, highest marketshare, highest profit, best engineering...
a generation may also be adirectX iteration, a chip release cycle (in which case, each manufacturer has its own), a fiscal year...
Anyhoo, I don't really care, as long as i'm regularly getting better, cheaper cards. I'll happily switch back to nVidia
chizow - Monday, January 18, 2010 - link
I clearly defined what I considered a generation; historically the rest of the metrics measured over time (market share, mind share, profits, value-add features, game support) tend to follow suit.
For someone like you who doesn't care about who's winning a generation it should be simple enough: buy whatever best suits your price:performance requirements when you're ready to buy.
For those who want to make an informed decision once every 12-16 months per generation, to avoid those niggling uncertainties and any potential buyer's remorse, they would certainly want to consider both IHVs' offerings before making that decision.
Ahmed0 - Monday, January 18, 2010 - link
How can you "win" if your product isnt intended for a meaningful number of customers. Im sure ATi could pull out the biggest, most expensive, hottest and fastest card in the world as well but theres a reason why they dont.Really, the performance crown isnt anything special. The title goes from hand to hand all the time.
dentatus - Monday, January 18, 2010 - link
" Im sure ATi could pull out the biggest, most expensive, hottest and fastest card in the world"- they have, its called the radeon HD5970.Really, in my Australia, the ATI DX11 hardware represents nothing close to value. The "biggest, most expensive, hottest and fastest card in the world" a.k.a HD5970 weighs in at a ridiculous AUD 1150. In the meantime the HD5850 jumped up from AUD 350 to AUD 450 on average here.
The "smaller, more affordable, better value" line I was used to associating with ATI went out the window the minute their hardware didn't have to compete with nVidia DX11 hardware.
Really, I'm not buying any new hardware until there's some viable alternatives at the top and some competition to burst ATI's pricing bubble. That's why it'd be good to see GF100 make a "G80" impression.
mcnabney - Monday, January 18, 2010 - link
You have no idea what a market economy is.
If demand outstrips supply, prices WILL go up. They have to.
nafhan - Monday, January 18, 2010 - link
It's mentioned in the article, but nvidia being late to market is why prices on ATI's cards are high. Based on transistor count, etc., there's plenty of room for ATI to drop prices once they have some competition.
Griswold - Wednesday, January 20, 2010 - link
And that's where the article is dead wrong. For the most part, the ridiculous prices were dictated by low supply vs. high demand. Now we have finally arrived at decent supply vs. high demand, and prices are dropping. The next stage may be good supply vs. normal demand. That, and not a second earlier, is when AMD themselves could willingly start price gouging due to no competition.
However, the situation will be like this long after Thermi launches, for the simple reason that there is no reason to believe Thermi won't have yield issues for quite some time after they have been sorted out for AMD - it's the size of chipzilla that will give it a rough time for the first couple of months, regardless of its capabilities.
chizow - Monday, January 18, 2010 - link
I'm sure ATI would've if they could've, instead of settling for 2nd place most of the past 3 years, but GF100 isn't just about the performance crown; it's clearly setting the table for future variants based on its design changes, for a broader target audience (think G92).
bupkus - Monday, January 18, 2010 - link
So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance. And this is why NVIDIA has 16 PolyMorph Engines and 4 Raster Engines, because they need a lot of hardware to generate and process that much geometry.
Are you saying that ATI's viability and funding resources for R&D are not supported by the majority of sales, which traditionally fall into the lower-priced hardware that btw requires smaller and cheaper GPUs?
Targon - Wednesday, January 20, 2010 - link
Why do people not understand that with a six month lead in the DX11 arena, AMD/ATI will be able to come out with a refresh card that could easily exceed what Fermi ends up being? Remember, AMD has been dealing with the TSMC issues for longer, and by the time Fermi comes out, the production problems SHOULD be done. Now, how long do you think it will take to work the kinks out of Fermi? How about product availability (something AMD has been dealing with for the past few months)? Just because a product is released does NOT mean you will be able to find it for sale.
The refresh from AMD could also mean that in addition to a faster part, it will also be cheaper. So while the 5870 is selling for $400 today, it may be down to $300 by the time Fermi is finally available for sale, with the refresh part (same performance as Fermi) available for $400. Hmmm, same performance for $100 less, and with no games available to take advantage of any improved image quality of Fermi, you see a better deal with the AMD part. We also don't know what the performance will be from the refresh from AMD, so a lot of this needs a wait-and-see approach.
We have also seen that Fermi is CLEARLY not even available for some leaked information on the performance, which implies that it may be six MORE months before the card is really ready. Showing a demo isn't the same as letting reviewers tinker with the part themselves. Really, if it will be available for purchase in March, then shouldn't it be ready NOW, since it will take weeks to go from ready to shipping(packaging and such)?
AMD is winning this round, and they will be in the position where developers will have been using their cards for development since NVIDIA clearly can't. AMD will also be able to make SURE that their cards are the dominant DX11 cards as a result.
chizow - Monday, January 18, 2010 - link
@bupkus, no, but I can see a monster strawman coming from a mile away.
Calin - Monday, January 18, 2010 - link
"Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better"No it won't.
If the game will ship with the "high resolution" displacement mappings, NVidia could make use of them (and AMD might not, because of the geometry power involved). If the game won't ship with the "high resolution" displacement maps to use for tesselation, then NVidia will only have a lot of geometry power going to waste, and the same graphical quality as AMD is having.
Remember that in big graphic game engines, there are multiple "video paths" for multiple GPU's - DirectX 8, DirectX 9, DirectX 10, and NVidia and AMD both have optimised execution paths.
SothemX - Tuesday, March 9, 2010 - link
WELL, let's just make it simple. I am an avid gamer... I WANT and NEED power and performance. I care only about how well my games play, how good they look, and the impression they leave with me when I am done.
I own a PS3 and am thrilled they went with Nvidia (smart move).
I own a PC that uses the 9800GT OC card... getting ready to upgrade to the new GF100 when it releases. The last thing on my mind is how the market share is; cost is not an issue.
Hard-core gaming requires Nvidia. Entry-level baby boomers use ATI.
Nvidia is just playing with their food... it's a vulgar display of power - better architecture, better programming, better gaming.
StevoLincolnite - Monday, January 18, 2010 - link
[quote]So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance.[/quote]
Might I add to that: nVidia's design is essentially "modular"; they can increase and decrease their geometry performance essentially by taking units out. This however will force programmers to program for the lowest common denominator, whilst AMD's iteration of the technology is the same across the board, so essentially you can have identical geometry regardless of the chip.
Yojimbo - Monday, January 18, 2010 - link
Just say the minimum, not the lowest common denominator. It may look fancy but it doesn't seem to fit.
chizow - Monday, January 18, 2010 - link
The real distinction here is that Nvidia's revamp of fixed-function geometry units into a programmable, scalable, and parallel PolyMorph engine means their implementation won't be limited to acceleration of tessellation in games. Their improvements will benefit every game ever made that benefits from increased geometry performance. I know people around here hate to claim "winners" and "losers" when AMD isn't winning, but I think it's pretty obvious Nvidia's design and implementation is the better one.
Fully programmable vs. fixed-function, as long as the fully programmable option is at least as fast, is always going to be the better solution. Just look at the evolution of the GPU from mostly fixed-function hardware to what it is today with GF100... a fully programmable, highly parallel compute powerhouse.
mcnabney - Monday, January 18, 2010 - link
If Fermi was a winner, Nvidia would have had samples out to be benchmarked by Anand and others a long time ago.
Fermi is designed for GPGPU with gaming secondary. Goody for them. They can probably do a lot of great things and make good money in that sector. But I don't know about gaming. Based upon the info that has gotten out, and the fact that reality hasn't appeared yet, I am guessing that Fermi will only be slightly faster than the 5870 and Nvidia doesn't want to show their hand and let AMD respond. Remember, AMD is finishing up the next generation right now - so Fermi will likely compete against Northern Islands on AMD's 32nm process in the fall.
dragonsqrrl - Monday, February 15, 2010 - link
Firstly, did you not read this article? The GF100 delay was due in large part to the new architecture they developed, an architectural shift ATI will eventually have to make if they wish to remain competitive. In other words, similarly to the G80 enabling GPU computing features/unified shaders for the first time on the PC, Nvidia invested huge resources in R&D and as a result had a next-generation, revolutionary GPU before ATI.
Secondly, Nvidia never meant to place gaming second to GPU computing, as much as you ATI fanboys would like to troll about this subject. What they're trying to do is bring GPU computing up to the level GPU gaming is already at (in terms of accessibility, reliability, and performance). The research they're doing in this field could revolutionize research in many fields outside of gaming, including medicine, astronomy, and yes, film production (something I happen to deal with a LOT), while revolutionizing gaming performance and feature sets as well.
Thirdly, I would be AMAZED if AMD can come out with their new architecture (their first since the HD2900) by the 3rd quarter of this year, and on the 32nm process. I just can't see them pushing GPU technology forward in the same way Nvidia has, given their new business model (smaller GPUs, less focus on GPU computing), while meeting that tight deadline.
chewietobbacca - Monday, January 18, 2010 - link
"Winning" the generation? What really matters?The bottom line, that's what. I'm sure Nvidia liked winning the generation - I'm sure they would have loved it even more if they didn't lose market share and potential profits from the fight...
realneil - Monday, January 25, 2010 - link
Winning the generation is a non-prize if the mainstream buyer can only wish they had one. Make this kind of performance affordable and then you'll impress me.
chizow - Monday, January 18, 2010 - link
Yes, and the bottom line showed Nvidia turning a profit despite not having the fastest part on the market.
Again, my point about G80'ing the market was more a reference to them revolutionizing GPU design again rather than simply doubling transistors and functional units or increasing clockspeeds based on past designs.
The other poster brought up performance at any given point in time, I was simply pointing out a fact being first or second to market doesn't really matter as long as you win the generation, which Nvidia has done for the last few generations since G80 and will again once GF100 launches.
sc3252 - Monday, January 18, 2010 - link
Yikes, if it is more than the original GTX 280 I would expect some loud cards. When I saw those benchmarks of Far Cry 2 I was disappointed that I didn't wait, but now that it is using more than a GTX 280 I think I may have made the right choice. While right now I want as much performance as possible, eventually my 5850 will go into a secondary PC (which is why I picked the 5850) with a lesser power supply. I don't want to have to buy a bigger power supply just because a friend might come over and play once a week.
deputc26 - Monday, January 18, 2010 - link
This may be the most dragged-out launch ever.
Vinb2k10 - Monday, January 18, 2010 - link
Try the Chevy Volt.
SlyNine - Monday, January 18, 2010 - link
I just hope it doesn't end up being a 5800GT. The 5870 is a great card, but it's not exactly the ATI 9700 Pro of its time.
SlyNine - Monday, January 18, 2010 - link
My bad, 5800 Ultra.
Ryan Smith - Monday, January 18, 2010 - link
Working on the images now guys.