As the two-year GPU cycle continues in earnest, we’ve reached the point where NVIDIA is gearing up for their annual desktop product line refresh. With the GeForce 600 series proper having launched over a year ago, all the way back in March of 2012, most GeForce 600 series products are at or approaching a year old, putting us roughly halfway through Kepler’s expected two-year lifecycle. With their business strongly rooted in annual upgrades, this means NVIDIA’s GPU lineup is due for a refresh.

How NVIDIA goes about their refreshes has differed throughout the years. Unlike the CPU industry (specifically Intel), the GPU industry doesn’t currently live on any kind of tick-tock progression method. New architectures are launched on new process nodes, which in turn ties everything to the launch of those new process nodes by TSMC. Last decade saw TSMC doing yearly half-node steps, allowing incremental fab-driven improvements every year. But with TSMC no longer doing half-node steps as of 40nm, fab-driven improvements now come only every two years.

In lieu of new process nodes and new architectures, NVIDIA has opted to refresh based on incremental improvements within their product lineups. With the Fermi generation, NVIDIA initially shipped most GeForce 400 Fermi GPUs with one or more disabled functional units. This helped to boost yields on a highly temperamental 40nm process, but it also left NVIDIA an obvious route of progression for the GeForce 500 series. With the GeForce 600 series on the other hand, 28nm is relatively well behaved and NVIDIA has launched fully-enabled products at almost every tier, leaving them without an obvious route of progression for the Kepler refresh.

So where does NVIDIA go from here? As it turns out, NVIDIA’s solution for their annual refresh is essentially the same: add more functional units. NVIDIA of course doesn’t have more functional units to turn on within their existing GPUs, so instead they’re doing the next best thing: acquiring more functional units by climbing up their own GPU ladder. With this in mind, we come to today’s launch, the GeForce GTX 780.

The GeForce GTX 780 is the follow-up to last year’s GeForce GTX 680, and is a prime example of refreshing a product line by bringing in a larger, more powerful GPU that was previously relegated to a higher tier product. Whereas GTX 680 was based on a fully-enabled GK104 GPU, GTX 780 is based on a cut-down GK110 GPU, NVIDIA’s monster GPU first launched into the prosumer space with GTX Titan earlier this year. Going this route doesn’t offer much in the way of surprises since GK110 is a known quantity, but as we’ll see it allows NVIDIA to improve performance while slowly bringing down GPU prices.

| | GTX Titan | GTX 780 | GTX 680 | GTX 580 |
|---|---|---|---|---|
| Stream Processors | 2688 | 2304 | 1536 | 512 |
| Texture Units | 224 | 192 | 128 | 64 |
| ROPs | 48 | 48 | 32 | 48 |
| Core Clock | 837MHz | 863MHz | 1006MHz | 772MHz |
| Shader Clock | N/A | N/A | N/A | 1544MHz |
| Boost Clock | 876MHz | 900MHz | 1058MHz | N/A |
| Memory Clock | 6GHz GDDR5 | 6GHz GDDR5 | 6GHz GDDR5 | 4GHz GDDR5 |
| Memory Bus Width | 384-bit | 384-bit | 256-bit | 384-bit |
| VRAM | 6GB | 3GB | 2GB | 1.5GB |
| FP64 | 1/3 FP32 | 1/24 FP32 | 1/24 FP32 | 1/8 FP32 |
| TDP | 250W | 250W | 195W | 244W |
| Transistor Count | 7.1B | 7.1B | 3.5B | 3B |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
| Launch Price | $999 | $649 | $499 | $499 |

As the first of the desktop GeForce 700 lineup, GeForce GTX 780 is in almost every sense of the word a reduced-price, reduced-performance version of GTX Titan. This means that on the architectural side we’re looking at the same GK110 GPU, this time with fewer functional units. Titan’s 14 SMXes have been cut down to 12, reducing the shader count from 2688 to 2304 and the texture unit count from 224 to 192.

At the same time, because NVIDIA has gone from disabling 1 SMX (Titan) to disabling 3 SMXes, GTX 780’s GPC count is going to be variable, thanks to the fact that GK110 packs 3 SMXes per GPC. GTX 780 cards will have either 5 GPCs or 4 GPCs, depending on whether the 3 disabled SMXes all reside in the same GPC or not. This is nearly identical to what happened with the GTX 650 Ti, and as with the GTX 650 Ti it’s largely an intellectual curiosity, since the difference in GPCs won’t notably impact performance. But it is something worth pointing out.

Moving on with our Titan comparison, much to our surprise NVIDIA has not touched the ROP/memory blocks at all (something they usually do), meaning GTX 780 comes with all 48 ROPs tied to a 384-bit memory bus just as Titan does. Clockspeeds aside, this means that GTX 780 maintains Titan’s ROP/memory throughput rather than taking a performance hit, which bodes well for ROP and memory-bound scenarios. Note however that while the memory bus is the same width, NVIDIA has dropped Titan’s massive 6GB of RAM for a more conservative 3GB, leaving GTX 780 with the same memory bandwidth but half the RAM.

Clockspeeds, meanwhile, have actually improved slightly, thanks to the fact that fewer SMXes need to be powered. Whereas GTX Titan had a base clockspeed of 837MHz, GTX 780 is 2 bins higher at 863MHz, with the boost clock having risen from 876MHz to 900MHz. Memory clocks are still at 6GHz, the same as Titan, giving GTX 780 the full 288GB/sec of memory bandwidth to work from.
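That 288GB/sec figure follows directly from the memory specs; as a quick sanity check, here's the arithmetic as a Python sketch (values taken from the spec table above; variable names are ours):

```python
# Memory bandwidth = effective data rate (transfers/sec) x bus width (bits) / 8 bits-per-byte
effective_clock_hz = 6e9   # 6GHz effective GDDR5 data rate (GTX 780 / Titan)
bus_width_bits = 384       # 384-bit memory bus

bandwidth_gbps = effective_clock_hz * bus_width_bits / 8 / 1e9
print(f"{bandwidth_gbps:.0f} GB/sec")  # 288 GB/sec
```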

Taken altogether, when it comes to theoretical performance GTX 780 should have 88% of Titan’s shading, texturing, and geometry performance, and 100% of Titan’s memory bandwidth. Meanwhile on the ROP side of matters, we actually have an interesting edge case where, thanks to GTX 780’s slightly higher clockspeeds, its theoretical ROP performance exceeds Titan’s by about 3%. In practice this doesn’t occur – the loss of the SMXes is far more significant – but in ROP-bound scenarios GTX 780 should be able to stay close to Titan.


For better or worse, power consumption is also going to be very close between GTX 780 and Titan. Titan had a 250W TDP and so does GTX 780, so there won’t be much of a decrease in power consumption despite the decrease in performance. This is atypical for NVIDIA, since lower tier products usually have lower TDPs, but ultimately it comes down to leakage, binning, and the other factors that dictate how GPU tiers need to be structured so that NVIDIA can harvest as many GPUs as possible. On the other hand, the fact that the TDP is still 250W (with the same +6% kicker) means that GTX 780 should have a bit more TDP headroom than Titan, since GTX 780 has fewer SMXes and RAM chips to power.

On a final note, from a feature/architecture standpoint there are a couple of differences between the GTX 780 and GTX Titan that buyers will want to be aware of. Even though Titan is being sold under the GeForce label, it was essentially NVIDIA’s first prosumer product, crossing over between gaming and compute. GTX 780 on the other hand is a pure gaming/consumer part like the rest of the GeForce lineup, meaning NVIDIA has stripped it of Titan’s marquee compute feature: uncapped double precision (FP64) performance. As a result GTX 780 can offer 90% of GTX Titan’s gaming performance, but it can only offer a fraction of GTX Titan’s FP64 compute performance, topping out at 1/24th FP32 performance rather than 1/3rd like Titan. Titan essentially remains NVIDIA’s entry-level compute product, leaving GTX 780 to be a high-end gaming product.

Meanwhile, compared to the GTX 680 which it will be supplanting, the GTX 780 should be a big step up in virtually every way. As NVIDIA likes to put it, GTX 780 is 50% more of everything than GTX 680: 50% more SMXes, 50% more ROPs, 50% more RAM, and 50% more memory bandwidth. In reality, due to the clockspeed differences the theoretical performance difference isn’t nearly as large – we’re looking at just a 29% increase in shading/texturing/ROP performance – but this still leaves GTX 780 much more powerful than its predecessor. The tradeoff, of course, is that with a 250W TDP versus GTX 680’s 195W TDP, GTX 780 also draws around 28% more power; without a process node improvement, performance improvements generally come about by moving along the power/performance curve.
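The gap between "50% more hardware" and a 29% theoretical gain is purely a clockspeed effect, which a quick check against the spec table's base clocks makes clear (a Python sketch; the dictionaries and variable names are ours):

```python
# GTX 780 vs. GTX 680: raw unit increase vs. delivered theoretical throughput
gtx680 = {"shaders": 1536, "base_mhz": 1006, "tdp_w": 195}
gtx780 = {"shaders": 2304, "base_mhz": 863,  "tdp_w": 250}

unit_increase    = gtx780["shaders"] / gtx680["shaders"] - 1  # unit count alone
shading_increase = (gtx780["shaders"] * gtx780["base_mhz"]) / \
                   (gtx680["shaders"] * gtx680["base_mhz"]) - 1  # units x clock
power_increase   = gtx780["tdp_w"] / gtx680["tdp_w"] - 1

print(f"units: +{unit_increase:.0%}, shading: +{shading_increase:.0%}, "
      f"power: +{power_increase:.0%}")  # +50%, +29%, +28%
```

The GTX 780's lower clockspeed (863MHz vs. 1006MHz base) eats into the 50% unit advantage, which is why the delivered theoretical gain lands at roughly 29%.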

Moving on to pricing and competitive positioning, it unfortunately won’t just be GTX 780’s performance that’s growing. As we’ve already seen clearly with the launch of GTX Titan, GK110 is in a class of its own as far as GPUs go; AMD simply doesn’t have a GPU big enough to compete on raw performance. Consequently NVIDIA is under no real pricing pressure and can price GTX 780 wherever they want. In this case GTX 780 isn’t just 50% more hardware than the GTX 680, but it’s about 50% more expensive too. NVIDIA will be pricing the GTX 780 at $650, $350 below the GTX Titan and GTX 690, and around $200 above the GTX 680 at current street prices. This has the benefit of bringing Titan-like performance down considerably, but as an x80 card it’s priced well above its predecessor, which launched back at the more traditional price point of $500. NVIDIA is no stranger to the $650 price point – they initially launched the GTX 280 there back in 2008 – but this is the first time in years they’ll be able to hold that position.

At $650, the GTX 780 is more of a gap filler than it is a competitor. Potential Titan buyers will want to pay close attention to the GTX 780 since it offers 90% of Titan’s gaming performance, but that’s about it for GTX 780’s competition. Above it, the GTX 690 and Radeon HD 7990 offer much better gaming performance for much higher prices (AFR issues aside), while the next-closest cards below GTX 780 are the GTX 680 and Radeon HD 7970 GHz Edition, which GTX 780 outperforms by 20% or more. As a cheaper Titan this is a solid price, but otherwise it’s still somewhat of a luxury card compared to the GTX 680 and its ilk.

Meanwhile, as far as availability goes, this will be a standard hard launch. And unlike GTX Titan and GTX 690, all of NVIDIA’s usual partners will be participating, so there will be cards from a number of companies available from day one, with semi-custom cards right around the corner.

Finally, looking at GTX 780 as an upgrade path, NVIDIA’s ultimate goal here isn’t to sell the card as an upgrade to existing GTX 680 owners, but rather, as with past products, the upgrade path is targeted at those buying video cards at 2+ year intervals. GTX 580 is 2.5 years old, while GTX 480 and GTX 280 are older still. A $650 card won’t move GTX 680 owners, but with GTX 780 in some cases doubling GTX 580’s performance, NVIDIA believes it may very well move Fermi owners, and they’re almost certainly right.

May 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon HD 7990 | $1000 | GeForce GTX Titan/GTX 690 |
| | $650 | GeForce GTX 780 |
| Radeon HD 7970 GHz Edition | $450 | GeForce GTX 680 |
| Radeon HD 7970 | $390 | |
| | $350 | GeForce GTX 670 |
| Radeon HD 7950 | $300 | |

 


  • Stuka87 - Thursday, May 23, 2013 - link

    The video card does handle the decoding and rendering for the video. Anand has done several tests over the years comparing their video quality. There are definite differences between AMD/nVidia/Intel.
  • JDG1980 - Thursday, May 23, 2013 - link

    Yes, the signal is digital, but the drivers often have a bunch of post-processing options which can be applied to the video: deinterlacing, noise reduction, edge enhancement, etc.
    Both AMD and NVIDIA have some advantages over the other in this area. Either is a decent choice for an HTPC. Of course, no one in their right mind would use a card as power-hungry and expensive as a GTX 780 for an HTPC.

    In the case of interlaced content, either the PC or the display device *has* to apply post-processing or else it will look like crap. The rest of the stuff is, IMO, best left turned off unless you are working with really subpar source material.
  • Dribble - Thursday, May 23, 2013 - link

    To both of you above, on DVD yes, not on bluray - there is no interlacing, noise or edges to reduce - bluray decodes to a perfect 1080p picture which you send straight to the TV.

    All the video card has to do is decode it, which is why a $20 bluray player with a $5 cable will give you exactly the same picture and sound quality as a $1000 bluray player with a $300 cable - as long as the TV can take the 1080p input and the hifi can handle the HD audio signal.
  • JDG1980 - Thursday, May 23, 2013 - link

    You can do any kind of post-processing you want on a signal, whether it comes from DVD, Blu-Ray, or anything else. A Blu-Ray is less likely to get subjective quality improvements from noise reduction, edge enhancement, etc., but you can still apply these processes in the video driver if you want to.

    The video quality of Blu-Ray is very good, but not "perfect". Like all modern video formats, it uses lossy encoding. A maximum bit rate of 40 Mbps makes artifacts far less common than with DVDs, but they can still happen in a fast-motion scene - especially if the encoders were trying to fit a lot of content on a single layer disc.

    Most Blu-Ray content is progressive scan at film rates (1080p23.976) but interlaced 1080i is a legal Blu-Ray resolution. I believe some variants of the "Planet Earth" box set use it. So Blu-Ray playback devices still need to know how to deinterlace (assuming they're not going to delegate that task to the display).
  • Dribble - Thursday, May 23, 2013 - link

    I admit it's possible to post process but you wouldn't - a real-time post process is highly unlikely to add anything good to the picture, and fancy bluray players don't post process, they just pass on the signal. As for 1080i, that's a very unusual case for bluray, but as it's just the standard HD TV resolution, again pass it to the TV - it'll de-interlace it just like it does all the 1080i coming from your cable/satellite box.
  • Galidou - Sunday, May 26, 2013 - link

    ''All the video card has to do is decode it, which is why a $20 bluray player with a $5 cable will give you exactly the same picture and sound quality as a $1000 bluray player with a $300 cable - as long as the TV can take the 1080p input and the hifi can handle the HD audio signal.''

    I'm an audiophile and a professional when it comes to hi-end home theater. I myself have built tons of HT systems around PCs and/or receivers, and I have to admit this is the funniest crap I've had to read. I'd just like to know how many blu-ray players you've personally compared up to, let's say, the OPPO BDP-105 (I've dealt with pricier units than this mere $1200 but still awesome Blu-ray player).

    While I can certainly say that image quality isn't affected by much, the audio on the other side sees DRASTIC improvements. Hardware not having an effect on sound would be like saying there's no difference between a $200 and a $5000 integrated amplifier/receiver - pure nonsense.

    ''the same picture and sound quality''

    The part speaking about sound quality should really be removed from your comment, as it really astounds me to think you can believe what you said is true.
  • eddman - Thursday, May 23, 2013 - link

    http://i.imgur.com/d7oOj7d.jpg
  • EzioAs - Thursday, May 23, 2013 - link

    If I were a Titan owner (and had actually purchased the card, not gotten it in some free giveaway or something), I would regret that purchase very, very badly. $650 is still a very high price for the normal GTX x80 cards, but it makes the Titan basically a product with incredibly bad pricing (not that we don't know that already). Still, I'm no Titan owner, so what do I know...

    On the other hand, when I look at the graphs, I think the HD7970 is an even better card than ever despite it being 1.5 years old. However, as Ryan pointed out, for previous GTX 500 users who plan on sticking with Nvidia and are considering high-end cards like this, it may not be a bad card at all, since there are situations (most of the time) where the performance improvement is about twice the GTX 580.
  • JeffFlanagan - Thursday, May 23, 2013 - link

    I think $350 is almost pocket-change to someone who will drop $1000 on a video card. $1K is way out of line with what high-quality consumer video cards have gone for in recent years, so you have to be someone who spends to say they spent, or someone mining one of the bitcoin alternatives, in which case getting the card months earlier is a big benefit.
  • mlambert890 - Thursday, May 23, 2013 - link

    I have 3 Titans and don't regret them at all. While I wouldn't say $350 is "pocket change" (or in this case $1050, since it's x3), it also is a price I'm willing to pay for twice the VRAM and more perf. With performance at this level, "close" honestly doesn't count if you are looking for the *highest* performance possible. Gaming in 3D surround, even 3x Titan actually *still* isn't fast enough, so no way I'd have been happy with 3x 780s for $1000 less.
