As we mentioned last week, we’re currently down in San Jose, California covering NVIDIA’s annual GPU Technology Conference. If Intel has IDF and Apple has the Worldwide Developers Conference, then GTC is NVIDIA’s annual powwow to rally their developers and discuss their forthcoming plans. The comparison to WWDC is particularly apt, as GTC is a professional conference focused on development and business use of the compute capabilities of NVIDIA’s GPUs (e.g. the Tesla market).

NVIDIA has been pushing GPUs as computing devices for a few years now, as they see it as the next avenue of significant growth for the company. GTC is fairly young – the show emerged from NVISION and its first official year was just last year – but it’s clear that NVIDIA’s GPU compute efforts are gaining steam. The number of talks and the number of vendors at GTC are both up compared to last year, and according to NVIDIA’s numbers, so is the number of registered developers.

We’ll be here for the next two days meeting with NVIDIA and other companies and checking out the show floor. Much of this trip is to get a better grasp on just where things stand for NVIDIA’s still-fledgling GPU compute efforts, especially on the consumer front. Consumer GPU compute usage has been much flatter than we were hoping for at this time last year, when NVIDIA and AMD announced and released their next-generation GPUs alongside the ancillary launch of APIs such as DirectCompute and OpenCL, which are intended to let developers write applications against a common API rather than targeting CUDA or Brook+/Stream. If nothing else, we’re hoping to see where our own efforts in covering GPU computing need to lie – we want to add more compute tests to our GPU benchmarks, but is the market at the point yet where there’s going to be significant GPU compute usage in consumer applications? That’s what we’ll be finding out over the next two days.
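To make the common-API pitch concrete, here is a minimal OpenCL sketch of our own – not something NVIDIA showed at GTC – illustrating why these APIs matter: the kernel source is compiled by the driver at run time for whatever GPU the system has, NVIDIA or AMD alike, with no CUDA- or Brook+/Stream-specific code anywhere. Error handling is omitted for brevity.

```c
/* Minimal vendor-neutral GPU compute via OpenCL (illustrative sketch). */
#include <stdio.h>
#include <CL/cl.h>

static const char *kSource =
    "__kernel void scale(__global float *data, const float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= factor;\n"
    "}\n";

int main(void) {
    float buf[1024];
    for (int i = 0; i < 1024; i++) buf[i] = (float)i;

    /* Grab the first platform/GPU we find - no vendor is hardcoded */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* The driver compiles the kernel for this particular GPU at run time */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    cl_mem mem = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(buf), buf, NULL);
    float factor = 2.0f;
    clSetKernelArg(kern, 0, sizeof(cl_mem), &mem);
    clSetKernelArg(kern, 1, sizeof(float), &factor);

    size_t global = 1024; /* one work-item per element */
    clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, mem, CL_TRUE, 0, sizeof(buf), buf, 0, NULL, NULL);

    printf("buf[10] = %.1f\n", buf[10]); /* expect 20.0 */

    clReleaseMemObject(mem);
    clReleaseKernel(kern);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```

The same source runs on either vendor’s hardware; contrast that with CUDA, where the toolchain and driver stack are NVIDIA-only. That portability is exactly what’s supposed to get more consumer applications using the GPU.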

Jen-Hsun Huang Announces NVIDIA’s Next Two GPUs

While we’re only going to be on the show floor Wednesday and Thursday, GTC unofficially kicked off Monday, and the first official day of the show was Tuesday. Tuesday started off with a 2-hour keynote speech by NVIDIA’s CEO Jen-Hsun Huang which, in keeping with the theme of GTC, focused on the use of NVIDIA GPUs in business environments.

As it did at GTC 2009, NVIDIA is using the show as a chance to announce their next-generation GPUs. GTC 2009 saw the announcement of the Fermi family, with NVIDIA first releasing the details of the GPU’s compute capabilities there before shifting their focus to gaming at CES 2010. This year NVIDIA announced the next two GPU families the company is working on, albeit not in as much detail as we got about Fermi in 2009.


The progression of NVIDIA's GPUs from a Tesla/Compute Standpoint

The first GPU is called Kepler (as in Johannes Kepler the mathematician), and it will be released in the 2nd half of 2011. At this point the GPU is still a good year out, which is why NVIDIA is not talking about its details just yet. For now they’re merely talking about performance in an abstract manner: Kepler should offer 3-4 times the double precision floating point performance per watt of Fermi. With GF100 NVIDIA basically hit the wall for power consumption (this is part of the reason current Tesla parts run with 448 of 512 CUDA cores enabled), so we’re looking at NVIDIA having to earn their performance improvements without increasing power consumption. They’re also going to have to earn their keep in sales, as NVIDIA is already talking about Kepler costing $2 billion to develop, and it’s not out for another year.

The second GPU is Maxwell (named after James Clerk Maxwell, the physicist/mathematician), and it will be released some time in 2013. Compared to Fermi it should offer 10-12 times the DP FP performance per watt, which works out to roughly another 3x increase over Kepler.
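For those following along, here’s the quick back-of-the-envelope math behind that “roughly 3x” figure; the 3-4x and 10-12x ranges are NVIDIA’s, the division is ours:

```c
/* Back-of-the-envelope check of NVIDIA's stated DP-FLOPS-per-watt targets.
   Fermi is the 1x baseline; the Kepler and Maxwell ranges are NVIDIA's claims. */
#include <stdio.h>

int main(void) {
    double kepler[2]  = { 3.0, 4.0 };   /* 3-4x Fermi   */
    double maxwell[2] = { 10.0, 12.0 }; /* 10-12x Fermi */

    /* Implied Maxwell-over-Kepler step: 10/4 = 2.5x up to 12/3 = 4x,
       or about 3.1x at the midpoints - hence "roughly another 3x". */
    printf("Maxwell vs. Kepler: %.1fx to %.1fx (midpoint %.1fx)\n",
           maxwell[0] / kepler[1], maxwell[1] / kepler[0],
           (maxwell[0] + maxwell[1]) / (kepler[0] + kepler[1]));
    return 0;
}
```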


NVIDIA GPUs and manufacturing processes up to Fermi

NVIDIA still has to release the finer details of the GPUs, but we do know that Kepler and Maxwell are tied to the 28nm and 22nm processes respectively. So if nothing else, this gives us strong guidance on when they’ll be coming out, as production-quality 28nm fabrication suitable for GPUs is still a year out, and 22nm is probably a late 2013 release at this rate. What’s clear is that NVIDIA is not going to take a tick-tock approach as stringently as Intel does – Kepler and Maxwell will each launch on a new process – but this is only about the GPUs for NVIDIA’s compute efforts. It’s likely the company will still emulate tick-tock to some degree, producing old architectures on new processes first, similar to how NVIDIA’s first 40nm products were the GT21x GPUs. In that scenario we’re talking about low-end GPUs destined for life in consumer video cards, so the desktop/graphics side of NVIDIA isn’t bound to this schedule the way the Fermi/compute side is.

At this point the biggest question is going to be what the architecture looks like. NVIDIA has invested heavily in their current architecture ever since the G80 days, and even Fermi built upon that. It’s a safe bet that these next two GPUs are going to maintain the same (super)scalar design for the CUDA cores, but beyond that anything is possible. None of this says anything about what the GPUs’ performance is going to be like under single precision floating point or in gaming, either. If NVIDIA focuses almost exclusively on DP, we could see GPUs that are significantly faster there while not being much better at anything else. Conversely, they could build more of everything, in which case these GPUs would be 3-4 times faster at more than just DP.

Gaming, of course, is a whole other can of worms. NVIDIA certainly hasn’t forgotten about gaming, but GTC is not the place for it. Whatever the gaming capabilities of these GPUs are, we won’t know for quite a while. After all, NVIDIA still hasn’t launched GF108 for the low end.

Wrapping things up, don’t be surprised if Kepler details continue to trickle out over the next year. NVIDIA took some criticism for introducing Fermi months before it shipped, but it seems to have worked out well for the company anyhow. So a repeat performance wouldn’t be all that uncharacteristic for them.

And on that note, we’re out of here. We’ll have more tomorrow from the GTC show floor.

Comments

  • Goty - Wednesday, September 22, 2010 - link

    The only reason ATI has nothing competitive in the 460 price range is that they're making too much money selling 5850s and 5870s at larger margins.

    They could lower the price of the 5850 and 5870 to be more competitive, but why? Cypress and GF104 cost roughly the same to produce (based on die sizes), but Cypress sells for significantly more and sells well at those prices. AMD makes more money by not "competing" with the 460 and just letting it sweep up the leftovers of the high-end.
  • Dark_Archonis - Wednesday, September 22, 2010 - link

    Wait, so now people are talking about how ATI doesn't need to be the price/performance leader? I recall seeing in various forums people yelling from the rooftops about how the ATI 5xxx series is so great because of the price/performance aspect.

    So now ATI can act like Nvidia or Intel and keep prices high just because? That's fine, but ATI/AMD supporters should then not use price/performance as a main argument of why the 5xxx cards are so good.

    Getting back to the point, the GTX 460 *currently* offers the best performance at its price point. Period.

    ATI wanting high margins and keeping their prices high is irrelevant; the GTX 460 remains the best mid-range card you can get right now for the price.
  • Touche - Wednesday, September 22, 2010 - link

    You have to look at it from both perspectives. The GTX460 is a great card for customers, but it's terrible for Nvidia. They hardly make any money from it, if at all. The whole Fermi fiasco is terrible for everyone except ATI. Nvidia is in trouble, and we don't get cheaper cards.

    And yes, ATI can, does, and will act just like Nvidia or Intel or any other company out there when there is no competition. If they sell everything they produce anyway, why lower their margins? They wouldn't sell more because there's nothing more to sell; they'd only earn less.

    ATI could sell the 5870 for way less than the GTX460 and still make a nice profit. It's sad that the Fermi line isn't forcing them to do that.
  • TheJian - Thursday, September 23, 2010 - link

    Sorry, AMD can't sell things cheaper. You apparently don't understand that ATI doesn't exist any more. It seems you're unaware of the fact that AMD has been losing money for over 4 years (more?...I can't even remember when they last made money, and I've owned the stock and follow their cpus/gpus...LOL). NVDA has a few billion in the bank while AMD has a few billion in DEBT. I'm guessing AMD isn't spending 2 billion on their next GPU. They just can't afford it.

    The problem isn't AMD charging more. I'm an AMD lover...so I want them to succeed, and would rather have an Intel/AMD 50/50 split for obvious reasons. It's fanboys freaking out when nvidia charges for their product, but complete silence when AMD does the same. Note I recently got my Radeon 5830 (sorry, COOL/QUIET Fermi was a little late; Amazon beat them and finally shipped a card 7 months later...LOL).

    If you actually think a company with a few billion in the bank, no debt, making money (not much this year or last, but still a profit - especially if you remove the TSMC chips and the payouts for them crapping out), currently holding the top product, investing heavily in the future, diversifying into 2 brand new markets, and about to complete their shrink of Fermi across the lines (more RED for AMD for the next 3 quarters - until they do the same...and probably still not make money) is in trouble, you really need to read more about business. This business looks VERY strong for the next 18 months at least. Sandy Bridge will do nothing more than replace motherboard gpus. Performance sucks too bad to do anything else. AMD just gets weaker by the day (you can't keep paying higher and higher interest to put off your debt every few years; you have to PAY IT OFF).

    I really only see Intel as a longer-term threat. With ARM on top in phone cpus, and NVDA married to them, they don't need a cpu (for now?) to add a LOT to the bottom line even with little market share. Tegra2 is what Tegra1 should have been, but the competition had good products, so the first effort was just like Larrabee (DOA). Tegra3 however looks to make NVDA jump like google has in phones (from like 5% to 17% in about 9 months?). It should make most makers take notice (Tegra2 looks to get some, signing LG and looking like the Droids' gpu of choice) and finally make NVDA's bottom line rise significantly. Intel will need another rev of Atom (or two) or Sandy Bridge's next rev to change anything at NVDA. By then of course those chips will be facing Tegra3 and another rev of the desktop chips from nvda.

    We won't have another TSMC chip fiasco with other fabs competing for that business (AMD's fabs...OUCH); I think all will avoid bumpgate this time. If the process does go bad, at least they can launch with half their chips from another fab and minimize the damage. Also, I believe NVDA will eventually either sue TSMC for all the money related to bumpgate, as Jen-Hsun said they would in their first statement about it, or they've got a pretty sweet manufacturing deal for ages coming because of it (maybe they already got one). Whichever way NVDA goes with TSMC, either will boost the bottom line. They've already paid for the damage over the last 2-3 years (note the huge write-offs in their statements: 200mil here, 180mil there, etc).

    I haven't heard that TSMC paid anything yet, though you could easily blame badly made chips on the actual CHIP MAKER, I think. It's their silicon that failed, not NVDA's. Arguments can be made about designer fault, but it's really hard to ignore that TSMC made them all (and even caused problems for AMD). It seems that it all cost about 500-800mil. Even if NVDA gets half back, that's a lot of money. HP ponied up 100mil, adding to Nvidia's argument that notebook makers ignored heat recommendations (I don't pay that much unless I think I can lose in court). Something tells me it's being paid as a production price reduction over years.

    Again, they just started up the HPC stuff and it looks promising as another market for NVDA to step into, and it appears the foundation work is almost laid (the vendors, guests, etc. tell the story at GTC 2010).

    The chipset business's death has been written on the wall for a few years. Apple's profits should show what entering a new large market can do for a company. So I am hugely confident 2 new markets will win over the old dead one. High margins in HPC, and a billion+ unit phone market, are great places to enter with 0% share. You can only gain with a good product, and it looks like a GREAT gpu is becoming very important on phones. Just $10/chip at 10% share would add a billion or so to revenue for NVDA. Interesting. AMD can dream if they want (I hope, but...), but I hope NVDA gets back into the 20's and just buys AMD for the cpus/engineers. AMD vs. Intel = AMD death. AMD+NVDA vs. Intel = hmm...a great race for consumer dollars :) AMD will likely go to $2-3 again by next xmas and NVDA could probably buy them at $20 pretty easily. AMD would have a market cap of about 2bil or so, which is just NVDA's cash account.

    The moral of the story is AMD is weak and losing money. They don't dictate pricing these days and certainly can't charge whatever they want. NVDA can bleed AMD to death just like Intel has. AMD couldn't have recovered from a product snafu like the one nvidia just rode out easily. Intel did the same for 3 years as AMD slaughtered them with the Athlon64. AMD still couldn't capitalize on OWNING the crown for 3 YEARS! But hey, buy some AMD. I'll keep my NVDA. We'll compare notes next xmas and see who's right :)

    If you actually think AMD will be stronger than NVDA in the next 18 months, I think you're high on crack ;) I don't see 2 Bil in profits from them over the next 2 years to pay that debt off, not to mention leftover profits to pay employees, R&D, etc.

    With Larrabee cancelled (delayed etc...whatever), NVDA has smooth sailing for another 18 months. Atom is a weak cpu, Sandy a weak gpu. The only problem would be TSMC, solved by splitting supply with GlobalFoundries etc. GF has the money to put behind fabs, so NV/AMD shouldn't have process problems for a while. GF is investing Intel-style while TSMC looks like AMD. They already put in 10Bil! Chartered wasn't good enough (not enough money), but GF is. GF is making NV's job a bit easier while all of it just hurt AMD. GF will also make TSMC better. Chartered couldn't kill TSMC over a failed process, but GF will have a few 15in fabs that would make bumpgate a huge problem if TSMC does it again. I think their employees will be told to perform a bit better for the next few years :) Fear does that.
  • Touche - Wednesday, September 22, 2010 - link

    As Goty said, it's more expensive because ATI sells everything they make regardless. That's what a lack of competition brings us customers: no price drops.

    GTX460 is more expensive for Nvidia to make than 5870 is for ATI, yet ATI sells theirs for more. That is extreme failure from Nvidia's perspective.

    The 6xxx series, or a price reduction on the 5xxx series after the 6xxx launches, will fill that 5770-5850 gap for ATI.
  • TheJian - Thursday, September 23, 2010 - link

    Yet AMD hasn't made money for YEARS (was 2004 the last year they made money?). The stock has also been diluted from 300mil shares outstanding to 671mil outstanding. NVDA has gone from 400 to 561mil shares outstanding. AMD had to dump assets to stay afloat (while dominating graphics for a year...LOL). Also, NVDA is still in the middle of a $2Bil buyback. Not the sign of a weak company. If you're selling everything you make, you need to make MORE.

    The GTX 460 (and the other shrinks on the low end) will just take back whatever market share AMD gained (and still couldn't profit from). Thus more debt for AMD soon. I think it's extreme success that a company can withstand a complete production fiasco over 3 years and still keep the enemy from making money, keep market share from dwindling too much, still have a few billion in the bank, and be looking to capitalize in more than one way in the next year or two. Intel did the same for 3 years while AMD dominated cpus. Note Intel has taken back everything AMD gained over those 3 years. Here we go again.

    The GTX 460's direct competition is the 5830, which is $10-20 more than the GTX460 and SLOWER. That means that while AMD sells out, NVDA can sell more than enough to profit handily while AMD probably continues to lose money for the year (heck, I think next year too). I just hope NVDA doesn't pull an AMD and pay way too much for something, thus killing themselves. AMD should have paid 2Bil or less for ATI (they never made more than $60mil in a year! Why 4.5bil? Smoking crack...LOL).

    Will that gap only be filled after xmas is totally over? They already missed back to school. How much can they get out the door with an Oct 25th release date? At least they aimed at the GTX 460 first (that was smart). Don't forget this card is going to be 40nm also (400mm² range? vs 528mm² for the GTX 460). So there's not a lot to play with in margin for AMD vs. the last year. At 32nm this would be a win, but at 40nm NV will just cut the price to whatever keeps you from getting market share. I see no way for AMD to win this xmas. AMD pulled a product forward (supposedly) to get this done. That can't be good for the bottom line when you're already losing money. You're already talking price reductions for the 5xxx series, which means competition is back :) Also, NVDA looks like a pretty stinking healthy company, so competition isn't leaving any time soon.
  • shawkie - Wednesday, September 22, 2010 - link

    I actually have a GTX 460 and I hate it. It's louder and more power-hungry than the GTX 260 it replaced, and as a bonus it seems to be unstable (crashes with screen corruption) at stock clocks.
  • Dark_Archonis - Wednesday, September 22, 2010 - link

    It's unfortunate that you've had that experience with your 460, but most 460 owners are happy with their cards. Look at any reviews of the 460 you want, or any gaming/3D websites that have 460 owners, and you'll find the cards for the most part run cool, are not overly power-hungry, and are fairly quiet, all in the context of the performance the cards offer.
  • kallogan - Wednesday, September 22, 2010 - link

    I wouldn't say Fermi is an utter failure, even if it's almost true. IMO only the GTX 460 is a valuable gpu, and it's ITX-friendly. It fits nicely in a Sugo SG05 and it's really quiet. Though overclocking is very unstable compared to what I experienced with an HD 5770 or HD 5850. Don't know why exactly; a poor Nvidia memory controller, I guess. I've often had Nvidia cards that whined too, which is pretty annoying.
  • aegisofrime - Wednesday, September 22, 2010 - link

    My own GTX 460 runs faster and also cooler than the non-reference 4870 it replaced. Granted, they're different generations, but it's still a far cry from the jokes that the 480 and 470 are, not to mention the super huge EPIC FAIL that the GTX 465 is.
