Earlier this week NVIDIA announced their new top-end single-GPU consumer card, the GeForce GTX Titan. Built on NVIDIA’s GK110 GPU and named after the supercomputer that GK110 first powered, the GTX Titan is in many ways the apex of the Kepler family of GPUs first introduced nearly one year ago. With anywhere between 25% and 50% more resources than NVIDIA’s GeForce GTX 680, Titan is intended to be the ultimate single-GPU card for this generation.

Meanwhile, with the launch of Titan, NVIDIA has repositioned their traditional video card lineup, changing just who the ultimate video card is meant for. At $999, Titan is decidedly out of the price/performance race; Titan will be a luxury product, geared towards a mix of low-end compute customers and ultra-enthusiasts who can justify buying a luxury product to get their hands on a GK110 video card. So in many ways this is a different kind of launch than that of any high-performance consumer card that has come before it.

So where does that leave us? On Tuesday we could talk about Titan’s specifications, construction, architecture, and features, but the all-important performance data had to be withheld for another two days. With Thursday finally upon us, let’s finish our look at Titan with our collected performance data and analysis.

Titan: A Performance Summary

                      | GTX Titan      | GTX 690        | GTX 680        | GTX 580
Stream Processors     | 2688           | 2 x 1536       | 1536           | 512
Texture Units         | 224            | 2 x 128        | 128            | 64
ROPs                  | 48             | 2 x 32         | 32             | 48
Core Clock            | 837MHz         | 915MHz         | 1006MHz        | 772MHz
Shader Clock          | N/A            | N/A            | N/A            | 1544MHz
Boost Clock           | 876MHz         | 1019MHz        | 1058MHz        | N/A
Memory Clock          | 6.008GHz GDDR5 | 6.008GHz GDDR5 | 6.008GHz GDDR5 | 4.008GHz GDDR5
Memory Bus Width      | 384-bit        | 2 x 256-bit    | 256-bit        | 384-bit
VRAM                  | 6GB            | 2 x 2GB        | 2GB            | 1.5GB
FP64                  | 1/3 FP32       | 1/24 FP32      | 1/24 FP32      | 1/8 FP32
TDP                   | 250W           | 300W           | 195W           | 244W
Transistor Count      | 7.1B           | 2 x 3.5B       | 3.5B           | 3B
Manufacturing Process | TSMC 28nm      | TSMC 28nm      | TSMC 28nm      | TSMC 40nm
Launch Price          | $999           | $999           | $499           | $499

On paper, compared to GTX 680, Titan offers anywhere between a 25% and 50% increase in resources. At the low end, Titan has 25% more ROP throughput, the net result of a 50% increase in ROP count offset by lower clockspeeds relative to GTX 680. Shading and texturing performance meanwhile benefit even more from the expansion of the SMX count from 8 to 14. And finally, Titan has a full 50% more memory bandwidth than GTX 680.
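To put those percentages in concrete terms, here’s a quick back-of-the-envelope sketch of the scaling math using the unit counts and base clocks from the table above (an illustration only, not an official throughput formula):

```python
# Rough Titan vs. GTX 680 resource scaling, from the spec table above.
# Core clocks are base clocks in MHz; memory clock is the data rate in MHz.
titan  = {"rops": 48, "smx": 14, "clock": 837,  "bus": 384, "mem": 6008}
gtx680 = {"rops": 32, "smx": 8,  "clock": 1006, "bus": 256, "mem": 6008}

rop    = (titan["rops"] * titan["clock"]) / (gtx680["rops"] * gtx680["clock"])
shader = (titan["smx"]  * titan["clock"]) / (gtx680["smx"]  * gtx680["clock"])
bw     = (titan["bus"]  * titan["mem"])   / (gtx680["bus"]  * gtx680["mem"])

print(f"ROP throughput:    +{rop - 1:.0%}")     # ~ +25%
print(f"Shading/texturing: +{shader - 1:.0%}")  # ~ +46% at base clocks
print(f"Memory bandwidth:  +{bw - 1:.0%}")      # exactly +50%
```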

Setting aside the unique scenario of compute for a moment, this means that Titan will be between 25% and 50% faster than GTX 680 in GPU-limited situations, depending on the game/application and its mix of resource usage. For an industry and userbase still trying to come to terms with the loss of nearly annual half-node jumps, this kind of performance jump on the same node is quite remarkable. At the same time it also sets expectations for how future products may unfold; one way to compensate for the loss of a rapid manufacturing node cadence is to spread the gains from a new node over multiple years, and this is essentially what we’ve seen with the Kepler family: GK104 at the node’s introduction, and GK110 a year later.

In any case, while Titan can improve gaming performance by up to 50%, NVIDIA has decided to release it as a luxury product priced roughly 120% higher than the GTX 680. This means that Titan will not be positioned to push the prices of NVIDIA’s current cards down; in fact it’s priced right off the hyper-competitive price/performance curve that the GTX 680/670 and Radeon HD 7970GE/7970 currently occupy.

February 2013 GPU Pricing Comparison

AMD                         | Price | NVIDIA
                            | $1000 | GeForce GTX Titan / GeForce GTX 690
Radeon HD 7990 (Unofficial) | $900  |
Radeon HD 7970 GHz Edition  | $450  | GeForce GTX 680
Radeon HD 7970              | $390  |
                            | $350  | GeForce GTX 670
Radeon HD 7950              | $300  |

This setup isn’t unprecedented – the GTX 690 more or less set this precedent last May – but it means Titan is a very straightforward case of paying 120% more for 50% more performance; the last 10% always costs more. What this means is that the vast majority of gamers will simply be shut out from Titan at this price, but for those who can afford Titan’s $999 price tag, NVIDIA believes they have put together a powerful card and a convincing case to pay for luxury.
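To quantify that trade-off, here is a quick sketch, assuming the $450 GTX 680 street price from the pricing table above and Titan’s +47% average lead from the performance summary below:

```python
# Hypothetical perf-per-dollar comparison of Titan vs. GTX 680.
titan_price, gtx680_price = 999, 450   # February 2013 street prices
titan_perf,  gtx680_perf  = 1.47, 1.0  # performance normalized to GTX 680

premium = titan_price / gtx680_price - 1                             # ~122% more money
gain    = titan_perf / gtx680_perf - 1                               # 47% more performance
ppd     = (titan_perf / titan_price) / (gtx680_perf / gtx680_price)  # relative perf per dollar

print(f"{premium:.0%} more money buys {gain:.0%} more performance")
print(f"Titan offers {ppd:.2f}x the performance per dollar of a GTX 680")  # ~0.66x
```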

So what can potential Titan buyers look forward to on the performance front? As always we’ll do a complete breakdown of performance in the following pages, but we wanted to open up this article with a quick summary of performance. So with that said, let’s take a look at some numbers.

GeForce GTX Titan Performance Summary (2560x1440)

                    | vs. GTX 680 | vs. GTX 690 | vs. R7970GE | vs. R7990
Average             | +47%        | -15%        | +34%        | -19%
DiRT: Showdown      | +47%        | -5%         | +3%         | -38%
Total War: Shogun 2 | +50%        | -15%        | +62%        | +1%
Hitman: Absolution  | +34%        | -15%        | +18%        | -15%
Sleeping Dogs       | +49%        | -15%        | +17%        | -30%
Crysis              | +54%        | -13%        | +21%        | -25%
Far Cry 3           | +35%        | -23%        | +37%        | -15%
Battlefield 3       | +48%        | -18%        | +52%        | -11%
Civilization V      | +59%        | -9%         | +60%        | 0%

Looking first at NVIDIA’s product line, Titan is anywhere between 34% and 59% faster than the GTX 680. In fact, with the exceptions of Hitman: Absolution, a somewhat CPU-bound benchmark, and Far Cry 3, Titan’s performance relative to the GTX 680 is actually very consistent, falling in a narrow 47%-59% range. Titan and GTX 680 are of course based on the same fundamental Kepler architecture, so there haven’t been any fundamental architectural changes between the two; Titan is exactly what you’d expect out of a bigger Kepler GPU. At the same time this is made all the more interesting by the fact that Titan’s real-world performance advantage is so close to its peak theoretical performance advantage of roughly 50%, indicating that Titan doesn’t lose much (if anything) in efficiency when scaled up, and that the games we’re testing today favor memory bandwidth and shader/texturing performance over ROP throughput.

Moving on, while Titan offers a very consistent performance advantage over the architecturally similar GTX 680, it’s quite a different story when compared to AMD’s fastest single-GPU product, the Radeon HD 7970 GHz Edition. As we’ve seen time and time again this generation, the difference in performance between AMD and NVIDIA GPUs varies with the test and settings, and dramatically so. As a result Titan ranges anywhere from merely matching the 7970GE to being nearly a generation ahead of it.

At the low end of the scale we have DiRT: Showdown, where Titan’s lead is just 3%. At the other end is Total War: Shogun 2, where Titan is a good 62% faster than the 7970GE. The average gain over the 7970GE lands almost right in the middle at 34%, reflecting a mix of games where the two are close, games where they are far apart, and games everywhere in between. With recent driver advancements having helped the 7970GE pull ahead of the GTX 680, NVIDIA had to work harder to take back their lead, and to do so in a convincing manner.

Titan’s final competitors are the dual-GPU cards of this generation: the GK104-based GTX 690, and the officially unofficial Tahiti-based Radeon HD 7990 cards, which vary in specs but generally deliver just shy of the performance of a pair of 7970s. As we’ve seen in past generations, when it comes to raw performance one big GPU is no match for two smaller GPUs, and the same is true with Titan. On frames per second alone, Titan cannot compete with those cards. But as we’ll see, there are still some very good reasons for Titan’s existence, and areas where Titan excels that even two lesser GPUs cannot match.

None of this of course accounts for compute. Simply put, Titan stands alone in the compute world. As the first consumer GK110-based video card, there’s nothing quite like it. We’ll see why that is in our look at compute performance, but as far as the competitive landscape is concerned there’s not a lot to discuss here.
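A bit of arithmetic shows why. The sketch below assumes GK110’s published configuration of 64 dedicated FP64 units per SMX and counts an FMA as two FLOPs; note that in practice Titan’s clocks drop somewhat when its full-speed FP64 mode is enabled, so real numbers will be a bit lower:

```python
# Back-of-the-envelope peak throughput for Titan's GK110 at its 837MHz base clock.
smx, fp32_cores, mhz = 14, 2688, 837
fp64_units = smx * 64                     # assumed: 64 dedicated FP64 ALUs per SMX

fp32_tflops = fp32_cores * mhz * 2 / 1e6  # ~4.5 TFLOPS single precision
fp64_tflops = fp64_units * mhz * 2 / 1e6  # ~1.5 TFLOPS double precision

# The ratio works out to the 1/3 FP64 rate listed in the spec table,
# versus the 1/24 rate of GK104 cards like the GTX 680 and GTX 690.
print(f"FP32 ~{fp32_tflops:.1f} TFLOPS, FP64 ~{fp64_tflops:.1f} TFLOPS")
```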

Comments

  • JeBarr - Thursday, February 21, 2013 - link

    I would guess because as time goes by the reviewers here (and elsewhere) think they need to bench at settings used by the "majority", even when that majority doesn't frequent, or even know the existence of, Anandtech.com. Go figure.

    I don't like it any more than you do...but for different reasons.

    I for one was happy to have a review site still benching at 16:10...which is what the long-time hardware enthusiasts/gamers prefer, that is, when they can't find a good CRT monitor ;)

    Just think of this review as the new bench standard going forward. A new starting point, if you will.
  • Ryan Smith - Monday, February 25, 2013 - link

    Bench 2013 will be going live soon. The backend is done (it's what I used to store and generate the charts here), but the frontend is part of a larger project...

    As for why the settings change: when we refresh our suite we sometimes change our settings to match what the latest generation of cards can do. With Titan setting the high bar, for example, running 2560 at Ultra with 4xMSAA is actually practical.
  • TheJian - Thursday, February 21, 2013 - link

    NO Borderlands 2 (~6 million copies sold, rated 89! not counting the addons, also rated high)
    No Diablo 3 (I hate the DRM, but 10 million+ sold and of course rated high, though not by users)
    No Guild Wars 2 (MMO with 3 million copies sold, rated 90!). Even WOW: Mists of Pandaria has 3 million or so now, and 11 million are playing the game's total content. I don't play WOW but it's still got a TON of users.
    No Assassin's Creed 3 (brings the 680/7970 to the low 30's at 2560x1600)
    Crysis 3: Warhead needs to die, and this needs to replace it (at the very LEAST). As shown below NOBODY is playing Warhead. Wasted page space, and time spent benching it.

    Instead we get Crysis Warhead...ROFL. Well what can we expect, Ryan still loves AMD.
    http://www.gametracker.com/search/warhead/
    Notice all the empty servers? Go ahead, sort them by players: only 3 had over 10!..Most have ZERO players...LOL...Why even waste your time benchmarking this ignored game? Just to show NV weakness?
    Dirt Showdown - Raise your hand if you play this...Nope, you're all playing Dirt 3 (wisely, or F1 etc, anything that rates better than Showdown)
    User ratings on metacritic of 70/4.7 (out of TEN, not 5), best summarized by GameSpy (which rated it a 40/100) on the front page of the metacritic site: http://www.metacritic.com/game/pc/dirt-showdown
    "DiRT: Showdown delivers bargain-basement entertainment value for the high, high price of $50. With its neutered physics, limited driving venues, clunky multiplayer, and diminished off-road racing options, discerning arcade racing fans should just write this one off as an unanticipated pothole in Codemaster's trailblazing DiRT series. "
    If you're going to use a racing game, at least make it a good one, not just the one AMD wins in. Why not F1 2012 (scored 80 at metacritic, 6.8 from users)? AMD wins in Warhead, which is also why Crysis Warhead is chosen even though nobody plays it (it's from 2008!). Again, check the server list: who are you testing this for? What does it represent today? What other game is based on its engine? It represents nothing, correct? Nobody plays Showdown either.

    How about adding some games people actually PLAY? I thought the whole point of benchmarking was to show us how the games WE PLAY will run; is that not true at Anandtech?

    Also no discussion of frame delay a la TechReport:
    http://techreport.com/review/24381/nvidia-geforce-...
    No discussion of the frame latency issues that AMD is working on game by game. Their current beta I think just fixed the Skyrim/Borderlands/Guild Wars 2 issues, which were awful.
    http://techreport.com/review/24218/a-driver-update...
    This has been an ongoing problem Anandtech (Ryan?) seems to just ignore. AMD is only just getting around to fixing this stuff in Jan...LOL. You can read more about it in the rematch of the 660 Ti/7950 here:
    http://techreport.com/review/23981/radeon-hd-7950-...
    Of course you can start at the beginning, but this is where they recommend the 660 Ti and why (Dec 2012 article).
    "The FPS average suggests near-parity performance between the 7950 and the GTX 660 Ti, with a tiny edge to the GeForce. The 99th percentile frame time, though, captures the impact of the Radeon's frame latency issues and suggests the GTX 660 Ti is easily the superior performer."
    More:
    "Instead, we have a crystal clear recommendation of the GeForce GTX 660 Ti over the Radeon HD 7950 for this winter's crop of blockbuster games. Perhaps AMD will smooth out some of the rough patches in later driver releases, but the games we've tested are already on the market—and Nvidia undeniably delivers the better experience in them, overall. "
    Even Tomshardware reports on delays now (albeit with the wrong metric...LOL). Read the comments at TechReport for why they're using the wrong one.

    No wonder they left out the Xmas blockbusters and Diablo 3 (which will probably still sell 15 million over its life, even though I would never buy it). I can name other games that are hot and new also:
    Dishonored, Dead Space 3, Max Payne 3, all highly rated. Max Payne 3 barely hits the 50's on top cards at 2560x1600 (7970GHz; the 680 is even lower), an excellent test game, and those are NOT the minimums (which can bring you to the 20's/teens on lower cards). Witcher 2 (Witcher 3 is coming) with ubersampling ENABLED is a taxer also.

    Dragon Age 2 at 2560x1600 will bring the 7970/680 to the teens/20's at the minimums also, and barely hits the 40's on average (why techspot uses ONLY AVG I don't know, but it's better than maxes).
    http://www.techspot.com/review/603-best-graphics-c...

    START reporting MIN FPS for every game benched! There should be more discussion of the fact that in a lot of these games you hit teens for even $500 cards at 2560x1600 maxed out. Max fps means NOTHING. IF you hit 10-20fps a lot in a game your max means nothing. You won't want to play at that res, so what have you shown me? NOTHING. You should ALWAYS report MIN FPS, as that dictates our gameplay experience, and if it isn't always above 30 life usually sucks. Far Cry 3 hits below 30 on both the 680 and 7970 at 2560x1600.
    http://www.hardocp.com/article/2013/02/21/nvidia_g...
    And they don't have them on ULTRA; only Titan is, and none are at 4xMSAA. At least they're giving the max details/res you can expect to play at and what the min will be (better: you at least have USEFUL info after reading their benchmarks).

    From your article:
    "This is enough to get Titan to 74fps at 2560 with 4xMSAA, which is just fast enough to make BF3 playable at those settings with a single GPU."
    Why didn't you just report the minimums so we can see when ALL cards hit 30fps or less at every resolution tested? If the game doesn't give a way to do this, use FRAPS while running it (again, for ALL games). So it takes 74fps to get playable in BF3? It's easier to just give the minimums so people can see; otherwise are we supposed to extrapolate every one of your games without MINS listed? You did it for us in this sentence, but for ONE card, and even then it's just a comment, not a number we can work with. It's YOU extrapolating your own guess that it would be playable given 74fps. What kind of benchmarking is this? I won't even get into your other comments throughout the Titan articles; it's more important to me to key on what you totally ignore that is VERY important to anyone picking ANY gpu: SMOOTHNESS of gameplay (latency testing) and MIN FPS, so we know where we have no prayer of playing and what to expect to be playable on a given gpu. This is why Hardocp actually points to you guys as an example of why your benchmarks suck. It's linked in most of their articles...LOL. FIX IT.
    http://www.hardocp.com/article/2008/02/11/benchmar...
    They have that in nearly every gpu article, including the Titan article. It's a valid point. But if you're not going to use IN-GAME play, at least give min fps for the canned benchmarks etc. That link is in the test setup page of nearly every article on Hardocp; you'd think you'd fix this so they'd stop. Your benchmarks represent something that doesn't reflect gameplay in most cases. The max fps doesn't dictate fun factor. MIN does.
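    For reference, pulling minimum and percentile numbers out of a frametimes dump is trivial; a minimal sketch (assuming a hypothetical frametimes.txt with one frame time in milliseconds per line, the kind of log FRAPS produces):

    ```python
    # Minimal sketch: average, minimum, and 99th-percentile figures from a
    # hypothetical FRAPS-style log with one frame time (ms) per line.
    with open("frametimes.txt") as f:
        ms = sorted(float(line) for line in f if line.strip())

    avg_fps = 1000 * len(ms) / sum(ms)
    min_fps = 1000 / ms[-1]                # the slowest single frame
    p99_ms  = ms[int(0.99 * len(ms)) - 1]  # 99% of frames finished this fast or faster

    print(f"avg {avg_fps:.1f} fps, min {min_fps:.1f} fps, 99th percentile {p99_ms:.1f} ms")
    ```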

    One comment on Titan: I'd think about it at $800-850. Compute isn't important at home for me today, and won't be until more games use it like Civ 5 does (they're just scratching the surface here). At that point this card could become a monster compared to the 690, without the heat, noise etc. One day it may be worth $1000 to me, but for now it's not worth more than $800 (to me; no SFF needed, no compute needed). I don't like any dual-chip cards or running multiple cards (see microstutter, latency delays etc), so once it's cheaper this would be tops on my list, but I don't usually spend over $360 on a card anyway...LOL. Most of the first run will go to boutique shops (20K first run I think). Maybe they'll drop the price after that.

    LOL at anyone thinking the price sucks. Clearly you are NOT the target market. If your product sells out at a given price, you priced it right. That's good business, and actually you probably should have asked for more if it's gone in hours. You can still run an SLI pair of Titans in a SFF; what other card can do that? You always pay a premium for the TOP card. Intel's extreme chips are $1000 too...no surprise. The same thing on the pro side is $2500 and not much different. It's 20% slower than the 690, but the 690 can't go into a SFF for the most part, and certainly not as quietly or as controllably. It also blows away the 690 in compute, if someone is after that. Though they need APPS that test this, not some home-made Anandtech benchmark. How about testing something I can actually USE that is relevant (no, I don't count folding@home or bitcoin mining either; they don't make me money - a few coins?...LOL).
  • JeBarr - Thursday, February 21, 2013 - link

    I'm pretty sure Ryan has mentioned that the benches you want are forthcoming. Maybe they haven't figured it all out yet...I dunno...but like you, I've been waiting what seems like a year or more for Anandtech to catch up with reality in GPU benching.
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    Yes, well I've found Frame Rate Target to be an absolute GEM in this area:

    " START reporting MIN FPS for every game benched! There should be more discussion of the fact that in a lot of these games you hit teens for even $500 cards at 2560x1600 maxed out. Max fps means NOTHING. IF you hit 10-20fps a lot in a game your max means nothing. "

    If you crank up to max settings and then have frame-drop issues, FRAME RATE TARGET by nVidia, of course, is excellent for minimizing and even eliminating that issue.
    It really is a great and usable feature, and of course it is now for the most part completely ignored.

    It was ported back to at least the top 500-series cards (I don't remember exactly which ones right now), but that feature should have an entire article dedicated to it at every review site. It is AWESOME, and it directly impacts minimum frame rates, lifting nVidia to absolutely playable vs AMD.

    I really think the bias won't ever be overcome. We used to hear nothing but Eyefinity, yet now that nVidia cards are capable of driving 4 monitors out of the box, it has suddenly become very unpopular for reviewers to mention Eyefinity, Surround, and, in nVidia's case, Surround plus ONE MORE, without the need for any special adapters on many of nVidia's partners' card releases.

    So, it's really a sick situation.
  • Urbanos - Friday, February 22, 2013 - link

    He went through all the trouble of benchmarking compute to show what budget-conscious users can get out of Titan, but it doesn't actually prove that Titan is even worth the money without comparing it to at least one of its bigger competitors in the GPGPU market. Can you please consider adding that, or doing a new review based on compute only?
  • codedivine - Friday, February 22, 2013 - link

    I am certainly interested in looking at the Xeon Phi if I can find the time and if we can arrange the resources to do so.

    My performance expectation (based on Intel white papers) is about 1700-1800 GFLOPS for SGEMM and 800-900 GFLOPS for DGEMM on the Xeon Phi 5110P. However, there are also a few benchmarks where I expect the Phi to win, thanks to its large cache. Stay tuned.
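    As a rough sanity check, those targets line up with the 5110P's peak rates (a sketch assuming the commonly cited 60 cores at 1.053GHz with 512-bit FMA vector units, i.e. 16 DP / 32 SP FLOPs per core per clock):

    ```python
    # Hypothetical peak-rate check for the Xeon Phi 5110P.
    cores, ghz = 60, 1.053
    dp_peak = cores * ghz * 16  # ~1011 GFLOPS double precision
    sp_peak = cores * ghz * 32  # ~2022 GFLOPS single precision

    print(f"SGEMM 1700-1800 GFLOPS = {1700/sp_peak:.0%}-{1800/sp_peak:.0%} of SP peak")
    print(f"DGEMM 800-900 GFLOPS   = {800/dp_peak:.0%}-{900/dp_peak:.0%} of DP peak")
    ```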
  • Ryan Smith - Monday, February 25, 2013 - link

    This is really a consumer/prosumer level review, so the cards we're going to judge it against need to be comparable in price and intended audience. Not only can we not get some of those parts, but all of them cost many times more than Titan.

    If we were ever able to review K20, then they would be exactly the kinds of parts we'd try to include though.
  • kivig - Friday, February 22, 2013 - link

    There is a whole community of 3D people who would be interested.
    And when will it get added to the bench table?
  • etriky - Saturday, February 23, 2013 - link

    +1
    Since this card at this price point is pointless for gaming, I figured the article would be heavy on compute applications in order to give us a reason for its existence.

    But then, nothing. No SmallLuxGpu or Cycles. Not even any commercial packages like Octane, or any of the Adobe products. I know LuxGPU and Blender used to be in the test suite. What happened?
