Final Thoughts

NVIDIA is primarily pitching the GeForce GTX 780 as the next step in their high-end x80 line of video cards, a role it fits into well. At the same time, however, I can’t help but keep going back to GTX Titan comparisons, since the GTX 780 is by every metric a cut-down GTX Titan. Whether this is a good thing or not is open to debate, but NVIDIA’s emergence into the prosumer market with GTX Titan, and the fact that there’s now a single-GPU video card above the traditionally top-tier x80 card, complicates things compared to past x80 card launches.

Anyhow, we’ll start with the obvious: the GeForce GTX 780 is a filler card whose most prominent role will be filling the gap between sub-$500 cards and this odd prosumer/luxury/ultra-enthusiast market that has taken root above $500. If there’s to be a $1000 single-GPU card in NVIDIA’s product stack, then it’s simply good business to have something between that and the sub-$500 market, and that something is the GTX 780.

For the small number of customers who can afford a card in this price segment, the GTX 780 is an extremely strong contender. In fact it’s really the only contender – at least as far as single-GPU cards go – as AMD won’t be directly competing with GK110. The end result is that with the GTX 780 delivering an average of 90% of Titan’s gaming performance for 65% of the price, this is by all rights the Titan Mini, the cheaper video card Titan customers have been asking for. From that perspective the GTX 780 is nothing short of an amazing deal for the level of performance offered, especially since it maintains the high build quality and impressive acoustics that helped to define Titan.
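
To put some rough numbers behind that value argument, here is a quick back-of-the-envelope comparison using only the figures above (the $1000 and $650 price points and the ~90% average performance figure); the performance-per-dollar framing is our own simplification for illustration, not a formal metric:

```python
# Back-of-the-envelope value comparison using the prices and the ~90%
# relative performance figure cited above. "Performance per dollar" is a
# simplification for illustration, not a formal benchmark metric.

cards = {
    "GTX Titan": {"price_usd": 1000, "relative_perf": 1.00},
    "GTX 780":   {"price_usd": 650,  "relative_perf": 0.90},
}

for name, card in cards.items():
    perf_per_dollar = card["relative_perf"] / card["price_usd"]
    print(f"{name}: {perf_per_dollar * 1000:.2f} relative performance per $1000")

# 0.90 / 0.65 ≈ 1.38, i.e. the GTX 780 offers roughly 38% more performance
# per dollar than Titan, which is the "Titan Mini" argument in numbers.
```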

On the other hand, as an x80 card the GTX 780 is pretty much a tossup. The full generational performance improvement is absolutely there, as the GTX 780 beats the last-generation GTX 580 by an average of 80%. NVIDIA knows their market well, and for most buyers in a 2-3 year upgrade cycle this is the level of performance necessary to spur on an upgrade.

The catch comes down to pricing. $650 for the GTX 780 makes all the sense in the world from NVIDIA’s perspective: GTX Titan sales have exceeded NVIDIA’s expectations, and between those sales and Tesla K20 sales the GK110 GPU is in high demand right now. At the same time the performance of the GTX 780 is high enough that AMD can’t directly compete with it, leaving NVIDIA without competition and free to set prices as they like, and that is exactly what they have done.

This doesn’t make the GTX 780 a bad card; on the contrary, it’s probably a better card than any x80 card before it, particularly when it comes to build quality. But it’s $650 for a product tier that for the last 5 years was a $500 tier, and no one likes a price increase, ourselves included. Ultimately some fraction of the traditional x80 market will make the jump to $650, and for the rest there will be the remainder of the GeForce 700 family, or holding out for the eventual GeForce 800 family.

Moving on, it’s interesting to note that with the launch of Titan and now the GTX 780, the high-end single-GPU market looks almost exactly like it did back in 2011. The prices have changed, but otherwise we’ve returned to unchallenged NVIDIA domination of the high end, with AMD fighting the good fight at lower price points. The 22% performance advantage that the GTX 780 enjoys over the Radeon HD 7970GHz Edition cements NVIDIA’s performance lead, while the price difference between the cards means that the 7970GE is still a very strong contender in its current $400 market and a clear budget-saving spoiler like the 6970 before it.

Finally, to bring things to a close we turn our gaze towards the future of the rest of the GeForce 700 family. The GTX 780 is the first member of the GeForce 700 family, but it clearly won’t be the last. A cut-down GK110 card in the form of the GTX 780 was the logical progression for NVIDIA, but what to use to replace the GTX 670 is a far murkier question, as NVIDIA has a number of good choices at their disposal. Mull that over for a bit, and hopefully we’ll be picking up the subject soon.

Comments

  • just4U - Thursday, May 23, 2013 - link

    I love the fact that they're using the cooler they used for the Titan. While I plan to wait (no need to upgrade right now) I'd like to see more of that. It's a feature I'd pay for from both Nvidia and AMD.
  • HalloweenJack - Thursday, May 23, 2013 - link

    No compute with the GTX 780 - the DP is similar to a GTX 480 and way, way down on a 7970. No folding on these then.
  • BiffaZ - Friday, May 24, 2013 - link

    Folding doesn't use DP currently, it's SP, same for most @home type compute apps, the main exclusion being Milkyway@Home, which needs DP a lot.
  • boe - Thursday, May 23, 2013 - link

    Bring on the DirectCU version and I'll order 2 today!
  • slickr - Thursday, May 23, 2013 - link

    At $650 it's way too expensive. Two years ago this card would have been $500 at launch, and within 4-5 months it would have been $400, with the slower cut-down version at $300 and mid-range cards at $200.

    I hope people aren't stupid enough to buy this overpriced card, which only brings about 5fps more than AMD's top-end single card.
  • chizow - Thursday, May 23, 2013 - link

    I think if it launched last year, its price would have been more justified, but Nvidia sat on it for a year while they propped up the mid-range GK104 as their flagship. Very disappointing.

    Measured on its own merits, GTX 780 is very impressive and probably worth the increase over previous flagship price points. For example, it's generally 80% faster than the GTX 580 and almost 100% faster than the GTX 480, its predecessors. In the past the increase might only be ~60-75%, improving somewhat with driver gains. It also adds some bling and improvements with the cooler.

    It's just too late imo for Nvidia to ask those kinds of prices, especially after lying to their fanbase about GK104 always being slotted as the Kepler flagship.
  • JPForums - Thursday, May 23, 2013 - link

    I love what you are doing with frame time deltas. Some sites don't quite seem to understand that you can maintain low maximum frame times while still introducing stutter (especially in the simulation time counter) by having large deltas between frames. In the worst case, your simulation time can slow down (or speed up) while your frame time moves back in the opposite direction, exaggerating the result.

    Admittedly I may be misunderstanding your method, as I'm much more accustomed to seeing algebraic equations describing the method, but assuming I get it, I'd like to suggest a further modification to your method to deal with performance swings that occur expectedly (transitions to/from cut-scenes, the arrival/departure of graphically intense elements, etc.). Rather than compare the average of the delta between frames against an average frame time across the entire run, you could compare instantaneous frame time against a sliding window average. The window could be large for games with consistent performance and smaller for games with mood swings.

    Using percentages when comparing against the average frame time for the entire run can result in situations where two graphics solutions with the exact same deltas would show the one with better performance having worse deltas. As an example, take any video card's frame time graph, subtract 5ms from each frame time, and compare the two resulting delta percentages. A sliding window accounts for natural performance deviations while still giving a baseline to compare frame time swings from. If you are dead set on percentages, you can take them from there, as the delta percentages from local frame time averages are more relevant than the delta percentage from the run's overall average.

    Given my love of number manipulation, though, I'd still prefer to see the absolute frame time difference from the sliding window average. It would make it much easier for me to see whether the difference to the windowed average is large (let's say >15ms) or small (say <4ms). Of course, while I'm being demanding, it would be nice to get an xls, csv, or some other format of file with the absolute frame times so I can run whatever graph I want to see myself. I won't hold my breath. Take some of my suggestions, all of them, or none of them. I'm just happy to see where things are going. [A rough sketch of the windowed-delta idea appears after the comment thread.]
  • Arnulf - Thursday, May 23, 2013 - link

    The correct metric for this comparison would be die size (area) and complexity of manufacturing rather than the number of transistors.

    RAM modules contain far more transistors (at least a couple of transistors per bit, with a common 4 GB stick = 32 Gb = 64+ billion transistors, and such modules selling for less than $30 on Newegg), yet they cost peanuts compared to this overpriced abomination that is the 780. [The arithmetic is checked in a sketch after the comment thread.]
  • marc1000 - Thursday, May 23, 2013 - link

    And the GTX 760??? What will it be? Will it be $200??

    Or maybe the 660 will be rebranded as the 750 and go to $150??
  • kilkennycat - Thursday, May 23, 2013 - link

    Fyi: EVGA offers "Superclocked" versions of the GTX 780 with either an EVGA-designed "ACX" dual-open-fan cooler or the NVIDIA-designed "Titan" blower. Both, at $659, are ~$10 more than the default-speed version. The overclocks are quite substantial: 941MHz base, 993MHz boost (vs. default 863/902) for the "Titan" blower version, and 967/1020 for the ACX-cooler version. The ACX cooler is likely to be noisier than the "Titan" blower, plus it will dump some exhaust heat back into the computer case. Both of these EVGA Superclocked types were available for a short time on Newegg this morning, now "Auto Notify" :-( :-(
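
Picking up on JPForums' suggestion above about judging frame times against a sliding-window average rather than the run-wide average: a minimal sketch of that idea follows. This is our own illustration, not the method actually used in this review, and the 21-frame window size is an arbitrary assumption.

```python
# Sketch of the windowed frame time delta idea suggested by JPForums above:
# compare each frame time against the average of a sliding window centered
# on it, rather than against the average of the entire run.
# The 21-frame window size is an arbitrary choice for illustration.

def windowed_deltas(frame_times_ms, window=21):
    """Absolute difference (ms) between each frame time and the average of
    the frames in a centered sliding window around it."""
    half = window // 2
    deltas = []
    for i, frame_time in enumerate(frame_times_ms):
        lo = max(0, i - half)
        hi = min(len(frame_times_ms), i + half + 1)
        local_avg = sum(frame_times_ms[lo:hi]) / (hi - lo)
        deltas.append(abs(frame_time - local_avg))
    return deltas

# Example: a run whose baseline steps from ~16.7ms to ~20ms partway through,
# plus a couple of one-frame spikes. The local average tracks the baseline
# shift, so the shift contributes little to the deltas, while the spikes
# still stand out.
run = [16.7] * 30 + [33.0, 16.7, 25.0] + [20.0] * 30
deltas = windowed_deltas(run)
print(f"max delta: {max(deltas):.1f} ms; "
      f"frames above 15 ms: {sum(d > 15 for d in deltas)}; "
      f"frames below 4 ms: {sum(d < 4 for d in deltas)}")
```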
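
As for the transistor count comparison a few comments up, the arithmetic roughly checks out under Arnulf's stated assumption of a couple of transistors per bit; note that a commodity DRAM cell is actually one transistor plus one capacitor per bit, so this is a generous per-bit count:

```python
# Sanity check of the "4 GB = 32 Gb = 64+ billion transistors" figure from
# Arnulf's comment, using the comment's own assumption of ~2 transistors per
# bit. (A commodity DRAM cell is 1 transistor + 1 capacitor, so this is a
# generous per-bit count.)

module_bytes = 4 * 1024**3            # 4 GB module
module_bits = module_bytes * 8        # ~34.4 billion bits
transistors_per_bit = 2               # the comment's assumption
total_transistors = module_bits * transistors_per_bit

print(f"{module_bits / 1e9:.1f} billion bits, "
      f"~{total_transistors / 1e9:.0f} billion transistors per module")
# Prints roughly 34.4 billion bits and ~69 billion transistors, consistent
# with the "64+ billion" figure in the comment.
```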
