I do not think Nvidia will have that long with this being the only mega GPU on the market. I really wish they allowed partner models of the Titan. I think a lot of people would go nuts over an MSI Lightning Titan or something like that.
Yes, it was a big mistake with the last Titan not to allow custom AIB cards. There's a good likelihood the 390X will blow the doors off this card with many custom models like the MSI Lightning, DCU2, etc.
Also, $1000 for this?! lol is the only sensible response. None of the double precision we saw in the original Titan to justify that price, but all of the price. Nvidia is trying to cash in here; the 390X will force them to do a card, probably with less VRAM, so people will actually buy this overpriced/overhyped card.
Titan and NVTTM are just as much about image, style and quality as performance. It's pretty obvious Nvidia is proud of the look and performance of this cooler, and isn't willing to strap on a hulking mass of Al/Cu to make it look like something that fell off the back of a Humvee.
They also want to make sure it fits in the SFF and Lanboxes that have become popular. In any case I'm quite happy they dropped the DP nonsense with this card and went all gaming, no cuts, max VRAM.
It is truly a card made for gamers, by gamers! 100% GeForce, 100% gaming, no BS compute.
What do you think they give up when they add DP? It's the same fabrication; it was for Titan vs. 780 Ti. Unless I'm mistaken, the only difference between cards is whether the process screwed up one or more of the SMs, in which case they get sold as gaming cards at varying, decreasing prices...
@ratzes, it's well documented, even in the article. DP/FP64 requires extra registers for the higher precision, which means more transistors allocated to that functionality. GM200 is only ~1Bn more transistors than GK110 on the same process node, yet they managed to cram in a ton more functional units. Now compare GK104 to GM204, 3.5Bn to 5.2Bn, and you can see it's pretty amazing they were even able to increase logic by 1.5x over GM204, which we know is also all gaming, no DP compute.
Correct, but then they should have priced it around $800, not $1k. The reason they could demand $1k for the original Titan was due to the FP64 compute functionality on board.
This is exactly what they did when they made the GTX 560 Ti, chopped out the compute features to maximize gaming power at a low cost. The reason that one was such a great card was due to price positioning, not just performance.
@Denithor, I disagree, the reason they could charge $1K for the original Titan was because there was still considerable doubt there would ever be a traditionally priced GeForce GTX card based on GK110, the compute aspect was just add-on BS to fluff up the price.
Since then of course, they released not 1, but 2 traditional GTX cards (780 and Ti) that were much better received by the gaming market in terms of both price and, in the case of the Ti, performance. Most notable was the fact that the original Titan's price on FS/FT and eBay markets quickly dropped below that of the 780 Ti. If the allure of the Titan was indeed its DP compute, it would have held its price, but the fact that Titan owners were dumping their cards for less than what it cost to buy a 780 Ti clearly showed the demand and price justification for a Titan for compute alone simply wasn't there. Also, it's important to note Titan's drivers were still GeForce, so even if it did have better DP performance, there were still a lot of CUDA-related driver limitations preventing it from reaching Quadro/Tesla levels of performance.
Simply put, Nvidia couldn't pull that trick again under the guise of compute this time around, and people like me who weren't willing to pay a penny for compute over gaming weren't willing to justify that price tag for features we had no use for. Titan X, on the other hand, is 100% dedicated to gamers: not a single transistor budgeted for something I don't care about, and no false pretenses to go with it.
The identity crisis this card has with itself is that for all the effort, it's still slower than two 980s in SLI, and when overclocked to try to catch up to them, it ends up using MORE POWER than two 980s in SLI.
So for the price (being identical), wouldn't you just pick up two 980s, which offer more performance, less power consumption and FP64 (even if you don't need it, it'll help the resale value in the future)?
The 980 has the same 1/32 DP performance as the Titan X. And the Titan never was a sensible card. No one sensible buys it over the x80 of that generation (which I assume will be the 1080 or whatever they call it, based on GM200 with less RAM, and maybe some disabled ROPs).
The Titan is a true flagship: it makes no sense economically, but it increases your penis size by miles.
I considered going this route but ultimately decided against it despite having used many SLI setups in the past. There's a number of things to like about the 980 but ultimately I felt I didn't want to be hamstrung by the 4GB in the future. There are already a number of games that push right up to that 4GB VRAM usage at 1440p and in the end I was more interested in bringing up min FPS than absolutely maxing out top-end FPS with 980 SLI.
Power, I would say, is about the same; the 980 is super efficient, but once overclocked, with 2 of them I am sure the 980 setup would use as much power as, if not more than, the single Titan X.
2. $1000/1300€ puts it at exactly double the price of the same performance level you get with other solutions: 970 SLI beats it at $750, the 295X2 does the same, 2x 290X also... In Europe, the card is even 30% more expensive than in the US and than other cards, so even fewer people will buy it there.
3. In summer, when AMD releases the 390X for $700 with even better performance, Nvidia will either have to drop the Titan X to the same price or suffer being smashed around in the market.
Keep in mind HBM is a serious performance kicker at the high resolutions, end-game gaming that the Titan X is intended for. No amount of RAM can counter RAM bandwidth, especially when you don't really need over 6-7GB for even the most demanding games out there.
Or they could just say fuck it, keep the Titan at its exact price, and release an x80 GM200 at a lower price with some features cut that will still compete with whatever AMD has to offer. This is the 3rd Titan, how can you not know this by now.
Well, yes. But without any of the compute performance of previous Titans, who would buy a $1000 Titan X, and why, when the exact same performance will be in some 980 Ti or the like? Those who need 12GB for rendering may as well buy Quadros with more VRAM... When you need 12, you need more anyway... For gaming, 12GB means jack sht.
Well, it's only the fastest card in the WORLD, look at it that way, the fastest card in the world, ONLY $1000. I know, I know, $1000 does not justify the performance, but it's the fastest card in the WORLD!!!
LOL, had to laugh @ farealstarfareal's comment that the 390X would likely blow the doors off the Titan X; the 390X is nowhere near the Titan X, it's closer to a 980. The almighty R9 Fury X reviews posted this morning and it's not even beating the 980 Ti.
If the most recent slides (allegedly leaked from AMD) hold true, the 390x will be at least as fast as the Titan X, though with only 8GB of RAM (but HBM!).
A straight 4096SP GCN 1.2/3 GPU would be a close match-up already, but any other improvements made along the way will potentially give the 390X a fairly healthy launch-day lead.
I think nVidia wanted to keep AMD in the dark as much as possible so that they could not position themselves to take more advantage of this, but AMD decided to hold out until May/June (even though they apparently already have some inventory on hand) rather than give nVidia a chance to revise the Titan X before launch.
nVidia blinked, it seems, after it became apparent AMD was just going to wait out the clock with their current inventory.
Unless AMD has achieved a considerable increase in perf/W, they are going to have a really hard time tuning those 4K shaders to a reasonable frequency without it being a 450W card.
Not that a 450W card is necessarily a deal breaker for everyone, but in practice, cooling a 450W card without ear-shattering levels of noise is very difficult compared to cooling a 250W card.
Let us wait and hope, since AMD really would need to get a break and make some money on this one...
Very true. We know that with HBM there should already be a fairly beefy power savings (~20-30W vs 290X it seems).
That doesn't buy them room for 1,280 more SPs, of course, but it should get them a healthy 256 of them. Then, GCN 1.3 vs 1.1 should have power advantages as well. GCN 1.2 vs 1.0 (R9 285 vs R9 280) with 1792 SPs showed a 60W improvement; if we assume GCN 1.1 to GCN 1.3 shows a similar trend, the 390X should be pulling only about 15W more than the 290X with the rumored specs, without any other improvements.
Of course, the same math says the 290X should be drawing 350W, but that's because it assumes all the power is in the SPs... Still, I do think it shows that AMD could possibly do it without drawing much, if any, more power and without making any unprecedented improvements.
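Just to put rough numbers on that back-of-envelope reasoning (all the wattages and the rumored 4096-SP count below are assumptions, and the model naively charges the whole board power to the shaders), a quick sketch:

```python
# Back-of-envelope: does a rumored 4096-SP "390X" fit near a 290X power budget?
# All figures are rough assumptions; the model charges all board power to the SPs.

sp_280,  tdp_280  = 1792, 250   # R9 280  (GCN 1.0), ~250W typical board power
sp_285,  tdp_285  = 1792, 190   # R9 285  (GCN 1.2), ~190W typical board power
sp_290x, tdp_290x = 2816, 290   # R9 290X (GCN 1.1), ~290W as measured in reviews
sp_390x           = 4096        # rumored Fiji shader count
hbm_savings       = 25          # assumed GDDR5 -> HBM saving, in watts

arch_scaling = tdp_285 / tdp_280          # per-SP gain over two GCN revisions (~0.76)
watts_per_sp = tdp_290x / sp_290x         # naive W/SP for Hawaii
est_390x = sp_390x * watts_per_sp * arch_scaling - hbm_savings

print(f"naive 390X estimate: {est_390x:.0f} W vs 290X at {tdp_290x} W")
# -> ~296W, i.e. the same ballpark as Hawaii, under these assumptions
```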
Yeah, but the question is, how well will the memory survive on top of a 300W GPU? Because the first part of a graphics card to die from high temperatures is the VRAM.
It will be to the side, on a 2.5D interposer, I believe.
GPU thermal energy will move through the path of least resistance (technically, to the area with the greatest deltaT, but regulated by the material thermal conductivity coefficient), which should be into the heatsink or water block. I'm not sure, but I'd think the chips could operate in the same temperature range as the GPU, but maybe not. It may be necessary to keep them thermally isolated. Which shouldn't be too difficult, maybe as simple as not using thermal pads at all for the memory and allowing them to passively dissipate heat (or through interposer mounted heatsinks).
It will be interesting to see what they have done to solve the potential issues, that's for sure.
Yes, I agree that AMD would be able to absolutely destroy NVIDIA on the performance front if they designed a 500W GPU and left the PCB and waterblock design to their AIB partners.
I would also absolutely love to see what kind of performance a 500W or even a 1kW graphics card would be able to muster; however, since a relatively constant 60fps presented with less than about 100ms of total system latency has been deemed sufficient for a "smooth and responsive" gaming experience, I simply can't imagine such a card ever seeing the light of day. And while I can understand everyone likes to pretend that they are saving the planet with their <150W GPUs, the argument that such a TDP would be very difficult to cool does not really hold much water IMHO.
If, for instance, the card was designed from the ground up to dissipate its heat load over multiple 200W~300W GPUs, connected via a very-high-speed, N-directional data interconnect bus, the card could easily and (most importantly) quietly be cooled with chilled-watercooling dissipating into a few "quad-fan" radiators. Practically, 4 GM200-size GPUs could be placed back-to-back on the PCB, with each one rendering a quarter of the current frame via shared, high-speed frame buffers (thereby eliminating SLI-induced microstutter and "frame-pacing" lag). Cooling would then be as simple as installing 4 standard gpu-watercooling loops with each loop's radiator only having to dissipate the TDP of a single GPU module.
There won't ever be a 980 Ti if you understand Nvidia's naming schemes. Ti's are for unlocked parts, there's nothing to further unlock on the 980 GM204.
@stun, you're in for a huge upgrade either way. Makes sense to wait though, but I am not sure if the 390X will change current pricing at all. Nvidia may also launch a cut-down GM200 in that timeframe to give you another option in that $500+ range.
I'm not convinced about this Titan X, and the last Titan turned out to be a bad investment at the $1,000 asking price. Last time, the Titan came out (at $1,000), then a matter of weeks later the 780 Ti came out with the same performance for $300 less. This time, we have the 390X soon, but no doubt Nvidia has a 980 Ti up their sleeve, so the value of these high-end $1,000 cards disappears quickly, making it a bad investment. I expect a $1,000 card to hold the performance crown for at least 6-12 months, not a few weeks, before getting outperformed by a card that costs $300 less.
@cactusdog "Titan came out (at $1,000) then a matter of weeks later , the 780TI came out with the same performance for $300 less." Actually the 780Ti, having a lot more CUDA cores, destroys the original Titan in gaming performance. The 780Ti equivalent was the "Titan Black", with the same amount of cores, but twice the VRAM, slightly higher default core clock, and fully unlocked compute.
Well, let's see. Even when it launches, will it be readily available and not price-inflated like the 290X? If the 290X had been readily available when it launched, I would've bought one.
Based on leaked slides referencing Battlefield 4 at 4K resolution, the 390X is 1.6x the 290X. In the context of this review's results, we could guess it comes up slightly short at 4K ultra and is 10 fps faster than the Titan X at 4K medium. Far Cry 4 came in at 1.55x the 290X.
These numbers don't tell the whole story on how AMD arrived with the figures, but it paints the picture of a GPU that goes toe-to-toe with the Titan X. The slides also talk about a water cooler edition. I'm suspecting the wattage will be in the same ball park as the 290X and likely higher.
With the Titan X's full-breadth compute muscle, I am not sure what the 980 Ti will look like. I suspect Nvidia is holding that back based on whatever AMD releases, so they can unload a smackdown trump card. The rumor is $700 for the 390X WCE with 8GB of HBM (high bandwidth memory, 4096-bit width) in Q2 (April-June). With the Titan X and 390X at the same price, given what I know at the moment, I would go with the Titan X.
Plenty of dolts bought the first Titan as a gaming card so I'm sure someone will buy this. At least there's a bigger performance difference between the Titan X and GTX 980 than there was between the Titan and GTX 780.
Except the GTX 780 came after the Titan launched. Rather it was the original Titan compared to the GTX 680 and here we see a similar gap between the Titan X and the GTX 980. It is also widely speculated that we'll see a cut down GM200 to fit between the GTX 980 and the Titan X so history looks like it will repeat itself.
@Railgun, I'd disagree, and I was very vocal against the original Titan for a number of reasons. Mainly because Nvidia used the 7970 launch as an opportunity to pass off their 2nd-fastest chip as flagship. Secondly, because they held back their flagship chip nearly a full year (GTX 680 launched Mar 2012, Titan Feb 2013) while claiming the whole time there was no bigger chip; they tried to justify the higher price point because it was a "compute" card; and lastly, because it was a cut-down chip and we knew it.
Titan X isn't being sold with any of those pretenses, and now that the new pricing/SKU structure has settled in (2nd-fastest chip = new $500 flagship), there isn't any of that sticker shock anymore. It's the full chip, there's no complaint about them holding anything back, and 12GB is a ridiculous amount of VRAM to stick on a card, which costs money. If EVGA wants to release an $800 Classified 980 and people see value in it, then certainly this Titan X has value as well.
At least for me, it is now a more appealing option than getting a 2nd 980 for SLI: slightly lower performance, lower heat, no SLI/scaling issues, and no framebuffer/VRAM concerns for the foreseeable future. I game at 2560x1440 on an ROG Swift btw, so that is right in this card's wheelhouse.
There was indeed a bigger chip due closer to the GK104/GTX 680's launch: the GK100. However it was cancelled due to bugs in the design. A fixed revision eventually became the GK110 which was ultimately released as the Titan/GTX 780.
After that there have been two more revisions. The GK110B is a quick respin from which all fully enabled dies stem (Titan Black/GTX 780 Ti). Then late last year nVidia surprised everyone with the GK210, which has a handful of minor architectural improvements (larger register files, etc.).
The moral of the story is that building large dies is hard and takes lots of time to get right.
We don't know what happened to GK100. It is certainly possible, as I've guessed aloud numerous times, that AMD's 7970 and its overall lackluster pricing/performance afforded Nvidia the opportunity to scrap GK100 and respin it as GK110 while trotting GK104 out as its flagship, because GK104 was close enough to AMD's best and GK100 may have had problems as you described. All of that led to considerable doubt whether or not we would see a big Kepler, a sentiment that was even dishonestly echoed by some Nvidia employees I got into it with on their forums.
Only in October 2012 did we see signs of Big Kepler, in the Titan supercomputer with the K20X, but still no sign of a GeForce card. There's no doubt a big die takes time, but Nvidia had always led with their big chip first, ever since G80, and this was the first time they deviated from that strategy while parading what was clearly their 2nd-best, mid-range-performance ASIC as flagship.
Titan X sheds all that nonsense and goes back to their gaming roots. It is their best effort, up front, no BS. 8Bn transistors, Inspired by Gamers and Made by Nvidia. So as someone who buys GeForce for gaming first and foremost, I'm going to reward them for those efforts so they keep rewarding me with future cards of this kind. :)
With regards to the price, 12GB of RAM isn't justification enough for it. Memory isn't THAT expensive in the grand scheme of things. What the Titan was originally isn't what the Titan X is now; they can't be seen as the same lineage. If you want to say memory is the key, the original Titan with its 6GB would still be more than relevant today. Crysis is 45% faster at 4K with the X than with the original. Is that the chip itself or the memory helping? I vote the former, given the 690 is 30% faster at 4K in the same game than the original Titan, with only 4GB of total memory. VRAM isn't going to really be relevant for a while except for those running stupidly large spans. It's a shame, as Ryan touches on VRAM usage in Middle Earth but doesn't actually indicate what's being used. There too, the 780 Ti beats the original Titan sans huge VRAM reserves. Granted, barely, but the point is that VRAM isn't the reason. This won't be relevant for a bit, I think.
You can't compare an aftermarket price to how an OEM prices their products. The top-tier card other than the Titan X is the 980, and it has been mentioned ad nauseam that the Titan X is NOT worth 80% more given its performance. If EVGA wants to OC a card out of their shop and charge 45% more than a stock-clocked card, then buyer beware if it's not a 45% gain in performance. I for one don't see the benefit of a card like that. The convenience isn't there given the tools and community support for OCing something one's self.
I too game on 25x14 and there've been zero issues regarding VRAM, or the lack thereof.
I didn't say VRAM was the only reason, I said it was one of the reasons. The bigger reason for me is that it is the FULL BOAT GM200, front and center. No waiting. No cut cores. No cut SMs for compute. No cut-down part because of TDP. It's 100% of the chip up front, 100% of it for gaming. I'm sold and onboard until Pascal. That really is the key factor: who wants to wait on unknown commodities and timelines when you know this will put you within +/-10% of the next-fastest part's performance, and you can get it today for maybe a 25-30% premium? I guess it really depends on how much you value your current and near-future gaming experience. I knew from the day I got my ROG Swift (with 2x670 SLI) I would need more to drive it. The 980 was a bit of a sidegrade in absolute performance and I still knew I needed more perf, and now I have it with the Titan X.
As for VRAM, 12GB is certainly overkill today, but I'd say 6GB isn't going to be enough soon enough. Games are already pushing 4GB (SoM, FC4, AC:U) and that's still with last-gen type textures. Once you start getting console ports with PC texture packs I could see 6 and 8GB being pushed quite easily, as that is the target framebuffer for consoles (2+6). So yes, while 12GB may be too much, 6GB probably isn't enough, especially once you start looking at 4K and Surround.
Again, if you don't think the price is worth it over a 980 that's fine and fair, but the reality of it is, if you want better single-GPU performance there is no alternative. A 2nd 980 for SLI is certainly an option, but for my purposes and my resolution, I would prefer to stick to a single-card solution if possible, which is why I went with a Titan X and will be selling my 980 instead of picking up a 2nd one as I originally intended.
Best part about Titan X is it gives another choice and a target level of performance for everyone else!
@Frenetic Pony, maybe now, but what about once DX12 drops and games are pushing over 6GB? We already see games saturating 4GB, and we still haven't seen next-gen engine games like UE4. Why compromise for a few hundred less? You haven't seen all the complaints from 780 Ti users about how 3GB isn't enough anymore? That shouldn't be a problem for this card, which is just one less thing to worry about.
Games don't push 4GB... Check the LTT ultrawide video, where he barely got Shadow of Mordor on ultra to go past 4GB on 3 ultrawide 1440p screens.
And as a game dev I can tell you, with proper optimizations, more than 4GB on a GPU is insane, unless you just load stuff in with a predictive algorithm to avoid PCIe bottlenecks.
And please do show me where a 780 Ti user isn't happy with his card's performance at 1080p-1600p. The card does, and will continue to, perform great at those resolutions, since games won't really advance, due to consoles limiting things again.
Also, DX12 won't make games magically use more VRAM. All it really does is make the CPU and GPU communicate better. It won't magically make games run or look better; both of those are up to the devs, and the "look better" part is certainly not about textures or polycounts. It's merely the number of draw calls per frame going up, meaning more UNIQUE objects (as opposed to simply more objects, which can be achieved through instancing easily in any modern engine, though Ubisoft hasn't learned that yet).
DX12 raises the bar for all games by enabling better visuals; you're going to get better top-end visuals across the board. Certainly you don't think UE4, when it debuts, will have the same requirements as DX11-based games on UE3?
Even if you have the same size textures as before (2K or 4K assets, as is common now), the fact that you are drawing more polygons, enabled by DX12's lower overhead and higher draw-call/polygon capability, means they all need to be textured, meaning a higher VRAM requirement unless you are using the same textures over and over again.
Also, since you are a game dev, you would know devs are going more and more towards bindless textures or megatextures, which specifically make great use of textures staying resident in local VRAM for faster access, rather than having to optimize and cache/load/discard them.
Uh, they absolutely do push 4GB. It's not all framebuffer, but they use it as a texture cache, which absolutely leads to a smoother gaming experience. I've seen SoM, FC4 and AC: Unity all use the entire 4GB on my 980 at 1440p Ultra settings (textures most important, of course), even without MSAA.
You can optimize as much as you like, but if you can keep textures buffered locally it is going to result in a better gaming experience.
As for 780 Ti owners not being happy, believe what you like, but these are the folks jumping to upgrade, even to a 980, because that 3GB has crippled the card, especially at higher resolutions like 4K. The 780 Ti beats the 290X in everything and at every resolution, until 4K.
Funny how 3.5GB was just recently a kick to the insufficient groin, a gigantic and terrible lie, and worth a lawsuit due to performance issues... as 4GB was sorely needed. Now 4GB isn't used...
Yes, 4GB isn't needed. It was just 970 seconds ago, but not now!
You always pay extra for the privilege of owning a halo product. Nvidia already rewrote the pricing structure in the consumer's favor when they released the GTX 970 -- a card with $650 performance -- at $329. You can't complain too much that they don't give you the GTX 980 for $400. If you want above the 970 you're going to pay for it. And Nvidia has hit it out of the ballpark with the Titan X. If Nvidia brought the high end of Maxwell down in price AMD would pretty much be out of business considering they'd have to sell housefire Hawaii at $150 instead of being able to find a trickle of pity buyers at $250.
Would it have? No. They could have given it FP64. Could they have given it FP64 without pushing the power and heat up a lot? Nope.
The 390X silicon will be capable of over 3 TFLOPS of FP64 (the 390X is probably locked to 1/8 rate, however) and will be a smaller chip than this. The price to pay will be heat and power. How much? Good question.
Yes, it would've required a lot more transistors and die area with Maxwell's architecture, which relies on separate fp64 and fp32 cores. Comparing the costs associated with double precision performance directly to GCN is inaccurate.
Last I checked, reticle limits are a bit north of 700 mm^2. However, nVidia is already in the crazy realm in terms of economics when it comes to supply/demand/yields/cost. Getting fully functional chips with die sizes over 600 mm^2 isn't easy. Then again, it isn't easy putting down $999 USD for a graphics card.
However, harvested parts should be quite plentiful, and the retail price of such a card should be appropriately lower.
Intel's limit is supposed to be between 750 and 800 mm^2. They released a 699 mm^2 product commercially (Tukwila Itanium 2) a few years ago, so it can be done.
Yes its clear Nvidia had to make sacrifices somewhere to maintain advancements on 28nm and it looks like FP64/DP got the cut. I'm fine with it though, at least on GeForce products I don't want to pay a penny more for non-gaming products, if someone wants dedicated compute, go Tesla/Quadro.
Kepler also has dedicated FP64 cores, and from what I see in Anandtech articles, those cores are not used for FP32 calculations. How does NVIDIA save power with Maxwell by leaving FP64 cores off the die? The Maxwell GPUs seem to still be FP64 capable in proportion to the number of FP64 cores placed on the die. It seems what they save by having fewer FP64 cores is die space and, as a result, the ability to have more FP32 cores. In other words, I haven't seen any information about Maxwell that leads me to believe they couldn't have added more FP64 cores when designing GM200 to make a GPU with superior double precision performance and inferior single precision performance compared with the configuration they actually chose. Maybe they just judged single precision performance to be more important to focus on than double precision, with a performance boost for double precision users having to wait until Pascal is released. Perhaps it was a choice between making a modest performance boost for both single and double precision or making a significant performance boost for single precision by forgoing double precision. Maybe they thought the efficiency gain of Maxwell could not carry sales on its own.
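To put the die-space trade-off in rough numbers (the clocks below are ballpark base clocks, and the unit counts simply follow from the published 1/3 and 1/32 FP64 ratios, so treat this as a sketch rather than spec):

```python
# Rough FP64 throughput: Titan (GK110) vs Titan X (GM200).
# TFLOPS = units * 2 ops (FMA) * clock; clocks are approximate base clocks.

def tflops(units, ghz):
    return units * 2 * ghz / 1000

titan_fp32,  titan_ratio,  titan_ghz  = 2688, 1/3,  0.837  # GK110: 64 FP64 units per 192-core SMX
titanx_fp32, titanx_ratio, titanx_ghz = 3072, 1/32, 1.0    # GM200: 4 FP64 units per 128-core SMM

print(f"Titan   FP64: ~{tflops(titan_fp32 * titan_ratio, titan_ghz):.2f} TFLOPS")    # ~1.5
print(f"Titan X FP64: ~{tflops(titanx_fp32 * titanx_ratio, titanx_ghz):.2f} TFLOPS") # ~0.19
# GK110 carries ~896 dedicated FP64 ALUs on the die; GM200 carries only ~96,
# and that reclaimed area is what went into extra FP32 units instead.
```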
I didn't care either, my $800 FX-9590 loved 320 watts and my "uber" two-niner-zero-X loved double dipping that 320 watts, so I converted my carbon arc Lincoln 220 welder to handle the AMD juice load and my DVD/RW/DL/LS melted tight to my CM heavy tower and dripped bubbling fire plastic drops through my liquid AMD loop... bye bye overclock
You can't take those numbers seriously though, as they are wrong. Anandtech is *STILL* using reference cards for these tests. You have not been able to buy reference cards for over a year now. The current cards run MUCH cooler, MUCH quieter, use less power, and have better performance.
Quick edit, it seems XFX is still selling a reference 290X. No clue why, but they are. You can get custom cooled AIB cards for less. Could just be leftover stock though I suppose.
Using reference cards is consistent with our long-standing policy to use them. Aside from the immediate definition of reference, I believe that it is very important not to cherry-pick results. The results you see in our reviews should be equal to or lower than the results you will get with a retail card - we specifically want to avoid publishing results higher than what the buyer can get.* We don't want to overstate the performance of a card.
* Using the same testbed hardware as us of course.
Still, it shows the 290(X) in a poorer light than is actually warranted. At the very least that should be stated, but better would be to add an AIB card to the reviews.
AMD fanboys went apeshit when AT used non-reference-cooled GTX 460s that showed those cards in a better light than the reference coolers. AMD fanboys need to pick a story and stick to it, tbh.
That had nothing to do with non-reference-cooled 460s. It was using FTW versions, which were heavily overclocked, vs. stock-clocked AMD cards. It had nothing to do with what cooler was on them, just the overclocked cards being portrayed as though they were standard-edition cards. The article was later changed, as I recall, to state that they were overclocked cards and not stock 460s.
It's the same difference: the AMD cards are no more "stock" with custom coolers than those FTW editions were. AMD baked an overclock into their boost that they weren't able to sustain without extravagant cooling. And when I say extravagant cooling, I am talking about the kind of Cadillac aluminum boat we used to look at and say, "Oh wow, maybe one day I will fork out $60 for that massive Arctic Cooling cooler".
So yeah, now you get "stock clocked, stock cooled reference cards". If you want AMD to show better in benchmarks, have them design either 1) better cooling or 2) less power-hungry cards.
@Ryan, we can't revisit all the nerdrage and angst from AMD fanboys over EVGA sending you non-reference cooled GeForce cards because they were too good over reference? Funny what happens when the shoe is on the other foot!
My solution: AMD should design cards that are capable of comfortably fitting within a 250W TDP, with a cooler designed to dissipate that much heat, and not have to resort to a cooler that looks like an Autobot. I'm not kidding, I briefly owned a Sapphire Tri-X 290X and the thing doubles as its own appliance.
Stop stating that it was because of the cooler. As I mentioned above, FTW cards are heavily overclocked and should not be portrayed as standard edition cards.
And custom-cooled, higher clocked cards should? It took months for AMD to bring those to market and many of them cost more than the original reference cards and are also overclocked.
I hope you do realize calling out AMD fanboys in each and every one of your comments essentially paints you as an Nvidia fanboy in the eyes of other readers. I'm here to read some constructive comments, and all I see is you bitching about fanboys while being one yourself.
@Witchunter, the difference is, I'm not afraid to admit I'm a fan of the best, but I'm going to at least be consistent on my views and opinions. Whereas these AMD fanboys are crying foul for the same thing they threw a tantrum over a few years ago, ultimately leading to this policy to begin with. You don't find that ironic, that what they were crying about 4 years ago is suddenly a problem when the shoe is on the other foot? Maybe that tells you something about yourself and where your own biases reside? :)
@chizow either way you don't really offer constructive criticism and you call people dishonest without proving them wrong in any way and offering facts. You are one of the biggest fanboys out there and it kind of makes you lose credibility.
OK, I wanted to add to this. I do like some of the comments you make, but you are so fanboyish I am unable to take much stock in what you say. If you could offer more facts and stop just bashing AMD and praising the all-powerful Nvidia as better in every way (despite the fact that AMD has advantages and has outperformed Nvidia in many ways, just as Nvidia has outperformed AMD; they leapfrog), we might all like to hear what you have to say.
Like I said, I'm not here to sugarcoat things or keep it constructive, I'm here to set the record straight and keep the discussion honest. If that involves bruising some fragile AMD fanboy egos and sensibilities, so be it.
I'm completely comfortable in my own skin knowing I'm a fan of the best, and that just happens to be Nvidia for graphics cards for the last near-decade since G80, and I'm certainly not afraid to tell you why that's the case, backed with my usual facts, references, etc. You're free to verify my sources and references if you like and come to your own conclusion, but at the end of the day, that's the whole point of the internet, isn't it? Lay out the facts, let informed people draw their own conclusions.
In any case, read the entire discussion and you can be the judge of whether my take on the topic is fair. You can clearly see AMD fanboys caused this dilemma for themselves, and many of them are the ones you see crying in this thread. Cue that Alanis Morissette song...
...right, and why would these non reference cards consume less power? Just hypothetically speaking, ignoring for a moment all the benchmarks out there that suggest otherwise.
There is some science behind it: heat results in higher leakage, resulting in higher power consumption. But yes, I agree, the reviews show otherwise; in fact, they show the cards that don't throttle and boost unabated draw even more power, closer to 300W. So yes, that increased perf comes at the expense of higher power consumption, not sure why the AMD faithful believe otherwise.
Yes, some of the new aftermarket designs are cooler and quieter, but they don't use less power; the GPU is what draws the power, and the aftermarket companies can't alter that. They can only tame the beast, so to speak.
Would be a good point if the performance were the same. But the Titan X is 50% faster. The scores are also total system power usage under gaming load, not card usage. Running at 50% faster frame rates is going to tax other parts of the system more, as well.
The main point of my post is that Titan X gets 50% more performance/system watt. But yes, your frame rate should affect your power usage if you are GPU-bound. The CPU, for instance, will be working harder maintaining the higher frame rates. How much harder, I have no idea, but it's a variable that needs to be considered before testbug00's antecedent can be considered true.
Actually frame rates have a lot to do with power usage.
I don't think that needs any further explanation, anyone who's even moderately informed knows this, and even if they didn't could probably figure out why this might be the case in about 10 seconds.
You misunderstand my point. The 290X was labeled as a 300W TDP (unofficial) card by Anandtech when they reviewed it. Everyone refers to it as 300W, sometimes 290W. So, is the 290X a 250W card like the Titan X, or are they both closer to 300W?
I understand that TDPs don't mean power draw, but they are used by everyone to express power draw. Sigh. I'm partially trying to highlight that issue as well. Just like Intel TDP versus AMD TDP, Nvidia's and AMD's TDPs also mean the same thing (I believe Nvidia's is the TDP under heavy load at the base clock, not 100% sure).
Oh right, sorry for the misunderstanding. TDP (Thermal Design Power) deals with the heat that a processor releases when under load. The watt is a unit that can be used to measure the flow of any type of energy, not just electricity. Heat, just like electricity, is a form of energy, so the rate of its flow can also be measured in watts.
Well, the CPU uses more power when the graphics hit higher frame rates, so I'm assuming the Titan X uses less power than the 290X, but since it gets higher frame rates, the CPU has to work harder to feed it, so the CPU power draw goes up.
Really, explain, idiot. The benchmarks show that the 295X2 destroyed the card. So it's better because it's the best single-GPU option available? Who cares. It's 300 dollars less and still wipes the floor. Benchmarks are really what matters.
@packerman "So all I saw was that AMD was wiping the floor with it for 300 dollars less. Am I missing something?" While I agree the new Nvidia card is overpriced, ultimately one cannot disregard the facts that the 295X2:
- is dual GPU, so its added performance is tied to a CrossFire profile
- consumes nearly twice the power under load, inevitably needing a much more expensive PSU
- comes with factory water-cooling, and hence an added space requirement
- is limited to the DX12.0 feature set, compared to DX12.1 for the Titan X
- launched at $500 more
"the original GTX Titan's time as NVIDIA's first prosumer card was short-lived"
I don't know about that. What's the definition of a prosumer card now? It was originally because of FP64 performance. Now, this doesn't have that. Granted, single precision is better, but not astronomically better compared to the original. I'd argue it's not a prosumer part (anymore), just a really good consumer part.
Why is the 290X uber mode not highlighted on the charts? The people this segment aims at would use that. It makes an otherwise good review leave a bad taste in my mouth. Still a nice card for gamers (if you can pay the price) :)
Monster of a card, I was pretty anti-Titan when they first released it but this one actually makes sense now that Nvidia shed all the false pretenses of it doubling as a "Compute" card.
But in comparison we see the Titan X:
1) fully enabled ASIC from the outset
2) first launched GM200
3) quadruple the standard VRAM of the last major flagship GPU
4) nearly double the performance of the previous flagship (GK110)
5) ~1.5x the perf of the same-gen performance 980, and just slower than 2x 980 in SLI ($1100)
Nvidia's sales strategy is odd though, going direct sales first; hopefully that doesn't anger their retailers and partners too much. It made sense though, given Nvidia has been selling self-branded cards at Best Buy for a while now.
I was going to either pick up a 2nd 980 for less or one of these; looks like it will be one of these. I was all set to check out 'til I was hit with sales tax, so I'll have to wait a few weeks for Newegg and I'll just pick up EVGA's SuperClocked version for the same total price.
AMD will most likely launch a comparable performance part in the 390X in a few weeks/months, but it will most likely come with a bunch of caveats and asterisks. Good option for AMD fans though!
I think AMD might actually win this generation due to having a head start on HBM. Hopefully there aren't long delays though. I think AMD's problem isn't their cards, just that they have been late to the dance the last couple of generations.
I guess we will see, but I don't think HBM will make the impact people think it will. The Titan X has what, 30% more bandwidth than the 980, and still seems to scale better with core overclocking (same for the 980).
In any case, changed my mind and placed my order, figure no point in waiting a few weeks to save $60 when I'm already dropping $999 and $30 on next day shipping lol.
Sorry, yeah, I was factoring in likely overclocked speeds. I get 8000MHz on my 980, but I've read the Titan X's RAM doesn't overclock nearly as well, which is no surprise given higher density chips, more memory controllers, etc. I'm only expecting ~30-40% higher effective memory clocks rather than the full 50%.
Totally agree with the rest of your comment. I doubt that HBM will make the real-world performance impact everyone seems to be expecting. Ultimately it's just memory, people, and performance doesn't scale anywhere close to linearly with memory bandwidth unless you're bandwidth constrained to begin with, which AMD is not. So Fiji is expected to have ~45% more compute resources than Hawaii; I would expect at most a ~45% increase in real-world performance if clock speeds are maintained, and if we ignore other potential per-core performance enhancements. But considering we haven't seen anything like that through GCN 1.2, I'm not expecting any big breakthroughs with GCN 1.3.
Yes, it shifts more of the TDP envelope of a card to the GPU. TSVs can also help to reduce GPU die complexity. So yes there are definitely other benefits to HBM besides raw bandwidth that can be used to either improve performance within a given TDP, or reduce power consumption at the same performance.
@chizow, and I'm curious what you will say when Nvidia releases HBM; probably not "I don't think HBM will make the impact people think it will." But I will agree AMD needs to get on the ball and not delay; they need a good head start with HBM to stay competitive, especially with their higher power consumption.
I am very excited to actually see how HBM pans out. DDR/GDDR is old tech now, and pushing new technologies like HBM and Wide I/O will, I believe, be huge for the computer world in general. Nice theoretical bandwidth improvements (hopefully the first gen comes in higher than DDR).
What do you consider winning the generation? An architecture should be faster than one that came out 9 months prior. If AMD doesn't come out with something soon it will seem more like they are skipping the generation than they are winning the generation.
This card doesn't really make much sense. At 1440p the 980 rocks; at 4K even the Titan X struggles. Yes, G-Sync can reduce tearing, but those 40fps will still be 40fps, not nearly as smooth as 60, and 4K monitors tend to have high input lag anyway.
I think it's kind of biased and unfair to publish only Sony Vegas (for video editing) benchmarks, which favor AMD/OpenCL. Aside from being better programs and more popular in the actual industry, you should also publish Adobe Premiere Pro and After Effects benchmarks, as they use CUDA (& OpenCL) and favor Nvidia. This would balance out the AMD Vegas favoritism. Also, having 3000 CUDA cores in Premiere and After Effects is game changing, and the gap would be much greater than AMD's in Vegas.
They can't compare multiple video editors for hardware that is clearly aimed at game playing. Not sure that rendering to XDCAM EX was a good choice for a consumer card though.
Why not? AMD works in Sony, and NV works in Adobe. Many home users use these, just as pros do. They should run the same operations in both (clearly Anandtech chooses Sony Vegas because it favors AMD and OpenCL), or use Adobe since it's more popular and supports both OpenCL and CUDA. They are showing NV in the worst light possible by using Vegas instead of Adobe with CUDA. There is a reason NV owns 75-80% of the workstation market, and it isn't VEGAS. It's CUDA and the fact that you have to dig to find apps in the pro market that don't use it (like Vegas...LOL).
Adobe would work, I suppose, but certainly not After Effects - that's an Nvidia-supported-only feature. A few problems I see are the running subscription cost for GPU testing (since GPUs aren't released consistently throughout the year) and the lack of Adobe support for the latest video cards.
I was helping my friend build a compatible computer for video editing using Adobe 2-3 weeks ago. None of Adobe's white papers etc. listed the 970 or 980 as supported by Adobe Premiere - it's been a while since those were released - and I'm aware there are "mods" to make the 970/980 compatible, but my friend is your average Joe and would have been unable to do the mods. So even though I recognize Adobe is more popular, using Adobe as a comparison wouldn't add much.
Are you serious? The said "mods" are editing a txt file and saving it. I'm pretty sure the average Joe can figure that out, especially one that is a video editor and uses a computer regularly to edit video. It's a really easy process, and in 2015, Adobe not updating their support for the latest cards is a moot point.
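For anyone wondering, the "mod" in question is (as far as I know) just adding your card's name to the CUDA whitelist file that older Premiere Pro builds (CS6 and earlier) check; the path and card string below are only an example, adjust them for your install:

```python
# Sketch of the whitelist edit for older (CS6-era) Premiere Pro builds.
# Path and card name are examples; newer CC builds dropped the whitelist entirely.
from pathlib import Path

whitelist = Path(r"C:\Program Files\Adobe\Adobe Premiere Pro CS6\cuda_supported_cards.txt")
card = "GeForce GTX 970"   # name exactly as the driver / GPUSniffer reports it

lines = whitelist.read_text().splitlines()
if card not in lines:
    whitelist.write_text("\n".join(lines + [card]) + "\n")
    print(f"Added {card} to {whitelist.name}")
```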
Agreed. It boggles my mind why they use something that isn't #1. Adobe is, but then that would make AMD look bad, so as an AMD portal site... they have to use Sony Vegas and act like CUDA doesn't exist (for which this card is a MONSTER). Since you can do the same operation in both Vegas/Premiere, they should use Vegas for AMD and Adobe for NV. If I were buying either, I'd buy the BEST software for MY hardware, i.e. Sony for AMD and Adobe if I was buying NV. It is silly not to compare them. If that is a problem, they should switch to Adobe, as it's used more and can still show OpenCL vs. CUDA. Again, better than this Vegas crap for NV, which they hate and which users complain about problems with. But then that's the point of an AMD Portal site, I guess :)
If you wanted to use Adobe in its stock configuration as a benchmark, currently AT would only be able to test the 780 versus the 295X2 or 290X. I think your results would still be "skewed" (paraphrasing you) toward AMD in what you would want AT to do. AT is trying its best, as it's not easy to do what they do, so please research just a little bit before suggesting "how they failed" and attacking them. The editors are human.
Somewhat disappointed to see that the DisplayPort spec is still 1.2. 4K@60Hz is fine, but a couple of these in SLI should be able to get 60FPS+, which DP 1.2 doesn't have the bandwidth for.
Granted, there are no 4K displays capable of going above 60Hz at the moment, but presumably that will change in the not-too-distant future. I suppose they could do some sort of MST config with dual DP 1.2 to drive 120Hz @ 4K, but when they did that for the first gen of 4K displays there were compatibility and other issues that caused it not to be 100% reliable.
Would have been nice if the "first" 4K GPU would support the next gen 4K display interconnect.
DP v1.3 has been ratified to do 4K 120 Hz over a single cable or 5K at 60 Hz. 8K at 60 Hz is also possible with either a reduced color space or via the Display Stream Compression option.
Considering the timing of the release, I don't think nVidia had time to implement v1.3 after ratification, even if they had laid some of the groundwork from an earlier draft spec.
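Rough numbers on why DP 1.2 is the 4K60 cutoff (the ~5% blanking overhead is an approximation for CVT-R2 reduced timings, so treat these as ballpark figures):

```python
# Ballpark DisplayPort bandwidth check for 4K at 8-bit RGB.
# Link rates are the published ones; blanking overhead is approximated at ~5% (CVT-R2).

def needed_gbps(h, v, hz, bpp=24, blanking=1.05):
    return h * v * hz * bpp * blanking / 1e9

dp12 = 4 * 5.4 * 8 / 10   # 4 lanes * HBR2 with 8b/10b coding -> 17.28 Gbps effective
dp13 = 4 * 8.1 * 8 / 10   # 4 lanes * HBR3 with 8b/10b coding -> 25.92 Gbps effective

for hz in (60, 120):
    print(f"3840x2160@{hz}Hz needs ~{needed_gbps(3840, 2160, hz):.1f} Gbps "
          f"(DP1.2 = {dp12:.2f}, DP1.3 = {dp13:.2f})")
# ~12.5 Gbps fits comfortably in DP 1.2; ~25.1 Gbps only just squeezes into DP 1.3.
```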
Ryan, you get an alibi on the 960 review. I'm sure the 960 review is just a verification of what we expect it to be. Basically, a 1080p workhorse card. Only curiosity is the watts.
I am impressed to see the Titan review today. Gonna get to readin'.
Just so it's noted, we've had the drivers since last Thursday (which actually isn't a lot of time to put together an article this big, especially with travel to GTC in the middle)
Does this card support 10bpp colour like the Quadro cards? You can easily verify this by simply checking out whether the Windows desktop can be set to 40-bit colour ("billions of colors"), when connected to a 10bpp display like the Dell 3011 via DisplayPort.
To all naysayers: Yes, I do have a native 10bpp monitor...two in fact. $999 would be cheap for a 10bpp GPU for me when compared to >$2000 Quadros.
Those results don't seem very accurate. BF4 on Ultra @1440p gives me 75-80 FPS on a single 980 and 130-144 FPS on SLI without any GPU overclock applied.
It depends on how you benchmark. We use an on-rails sequence from the Tashgar level; it's not perfect, but it's highly repeatable. More importantly, from it we can still give accurate advice on how much performance is needed for a given resolution/setting combination to be playable.
A dual-GPU setup will bring stuttering and input lag in pretty much any case. It's not always THAT noticeable, but it usually is at least quite bothersome.
If you look at SLI 980 price and performance, the Titan pricing actually makes sense. If it were cheaper, no one in their right mind would consider SLI 980s anymore. But then the 980 is also way overpriced... In reality the price is a joke. I assume NV did a market study and concluded most Titan buyers are gamers (and not entry-level compute users), so they just removed the FP64 which justified the original Titan's price.
That is insane overclock performance. BF4 at 4K at 70 FPS!
I think this is the first single GPU with which 4K is viable. And that's significant, because not all games support multi-GPU at launch. Also, you can finally get PS4 performance and graphical settings or better at 4K with only 1 card.
Finally, power consumption is pretty good. Same TDP as AMD's 280X, but performance is in a totally different zip code.
Hopefully AMD counters with something good soon. I expect we will get another 290x situation, where AMD's next flagship card will have similar performance to a Titan, but at a lower price and higher power consumption.
I appreciate you including the GTX 580 in the performance chart. It's amazing that since the GTX 580 there's been a 4x increase in graphics performance with only a single process node advancement.
Firestrike: After the NVIDIA 3DMark optimization fiasco, we've sworn off 3DMark as a graphics benchmark. It's no secret that NV and AMD will try to optimize for games that are used as benchmarks, and we would like to not give them (more) incentive to optimize for a workload that is not a real game.
SoM: No texture add-on.
DA: It does, but we don't use it. It has a fixed number of frames and is too short on fast hardware. We're using a save game from early in the game.
Thanks. I guess it makes sense to avoid heavily benchmarked games, to get the real world performance. Anandtech's benchmarking has always been top notch, with consistent results that can be tested at home.
Anything but a compute powerhouse, this seems to be more about play than work. Maybe it should have been more aptly christened Tits X instead of Titan X?
Nice performance/watt, but at $1000, I find the performance/dollar to be unacceptable. Without a double-precision edge, this GPU is essentially a 980 Ti, and Nvidia seems to want to get away with slapping on a Titan decal (and the consequent $1K price tag) by just adding a useless amount of graphics memory.
Take out about 4 gigs of VRAM, hold the "Titan" brand, add maybe 5-10% core clock, with an MSRP of at least $300 less, and I'll be interested. But I guess, for Nvidia to feel the need to do something like that, we'll have to wait for the next Radeon launch.
It's a 980 Ti with double the VRAM, a year earlier, if you are going off previous timelines. Don't undervalue the fact this is the first big Maxwell only 6 months after the #2 Maxwell.
I agree the pricing has gotten ridiculous on these graphics cards, but this is the market we live and play in now. I typically spent $800-$1000 every 2 years on graphics cards, but I would get 2 flagship cards. After the whole 7970/680 debacle where mid-range became flagship, I can now get 2 high-end midrange for that much, or 1 super premium flagship. Going with the flagship, and I'm happy! :D
@chizow "It's a 980 Ti with double the VRAM." Yes, pretty much - a Ti GPU, with more VRAM than necessary, at the price tag of a Titan. "I agree the pricing has gotten ridiculous on these graphics cards, but this is the market we live and play in now." The market is the way it is because we, consumers, let it be that way, through our choices. For us to obediently accept, at any time, overpricing as an acceptable trend of the market is basically like agreeing with the fox who wants to guard our henhouse.
Except the 780 Ti came much later; it was the 3rd GK110 chip to be released, so there is a premium on that time and money. Whereas this is the 1st GM200-based chip, no need to look any further beyond it. Also, how many 780 Ti owners complained about not enough VRAM? Looks like Nvidia addressed that. There's just no compromise with this card, it's Nvidia's best foot forward for this chip, and only 6 months after the GTX 980. No complaints here, and I had plenty when the original Titan launched.
Sure the market is this way partially because we allow it, but the reality is, the demand is overwhelmingly there. I was thoroughly against paying $1000 for what I used to get for $500-$650 for Nvidia's big chip flagship card with the original Titan, but the reality is, Nvidia has raised the bar on all fronts (and AMD has done well also) and they are looking to be rewarded for doing so. I used to buy 2x cards before because 1 just wasn't good enough. Now, 1 is good enough, so I don't mind paying the same amount for that relative level of performance and enjoyment.
@chizow "Except the 780 Ti came much later ... plenty when the original Titan launched." Both the 780 Ti and the Titan X were released exactly when Nvidia needed them in the market. For the 780 Ti, the reason was to challenge the 290X for the top spot. The Titan X was made available sooner because a) Nvidia needed the positive press after the 970 VRAM fiasco and b) Nvidia wanted to take some attention away from the recent 3xx announcements by AMD.
Hence I really can't find any logical reason to agree with your spin that the Nvidia staff were doing overtime as some sort of public service, and so deserve some reward for their noble sacrifices.
"Sure the market is this way partially because we allow it, but the reality is, the demand is overwhelmingly there. I was thoroughly against paying $1000 for what I used to get for $500-$650 for Nvidia's big chip flagship card with the original Titan, but the reality is, Nvidia has raised the bar on all fronts (and AMD has done well also) and they are looking to be rewarded for doing so. I used to buy 2x cards before because 1 just wasn't good enough. Now, 1 is good enough, so I don't mind paying the same amount for that relative level of performance and enjoyment." http://media2.giphy.com/media/13ayyyRnHJKrug/giphy...
Uh, you make a lot of assumptions while trying to dismiss the fact that there is a huge difference in time to market and relative position on Nvidia's release timeline for the Titan X, and that difference carries a premium to anyone who observed, or felt burned by, how the Titan and Kepler launches played out over 2012, 2013 and 2014.
Fact remains, Titan X is the full chip very close to the front of Maxwell's line-up release, while the 780Ti came near the end of Kepler's life cycle. The correct comparison is if Nvidia launched Titan Black in 2013 instead of the original Titan, because that's what Titan X is.
The bolded portion should be pretty easy to digest, not sure why you are having trouble with it. Nvidia's advancement on the 28nm node has been so good (someone showed a 4x increase from the 40nm GTX 480 to the Titan X, which is damn amazing with only a single node shrink in between) and the advancement in game requirements relatively slow, that I no longer need 2 GPUs to push the game resolutions and settings I want. A single, super flagship card is all I need, and Nvidia has provided just that with the Titan X.
For those who don't think it is worth it, you can always wait for something cheaper and faster to come along, but for me, I'm good until Pascal in 2016 (maybe? Oh wait, don't need to worry about that).
Bit of a sidenote, but wow looks like 980 SLI scaling has REALLY improved in the last few months. I don't recall it being that good at launch, but that's not a huge surprise given Maxwell was a new architecture and has gone through a number of big (on paper) driver improvements. Looks really good though, made it harder to go with the Titan X over a 2nd 980 for SLI, but I think I'll be happier this way for now.
You're assuming they'll beat this card, and I doubt you'll see them in June, as the channel is stuffed with AMD's current stuff. I say Q3, and it won't be as good as you think. HBM will cause pricing issues and won't net any perf (it isn't needed, bandwidth isn't a problem, so it's wasted extra cost here), so the GPU will have to win on its own vs. NV. You'd better hope AMD's is good enough to sell like hotcakes, as they really need the profits finally. This quarter is already wasted and will most likely result in a loss, and NV is good for the next 3 months at least until something competitive arrives, at which point NV just drops pricing, eating any chance of AMD profits anyway. AMD has a very tough road ahead, and console sales drop as mobile closes the gap at 16/14nm for Xmas (good enough, that is, for some to say screw a console this gen, and screw $60 game pricing - go Android instead).
Despite their ability to release the 980 Ti at the same time as the Titan X, we know they will try to milk the rich gamers as long as possible first. I just hope the Ti gets released no more than 4 weeks from now and drives the price of the 980 down to mainstream levels.
I'm in Canada right now and the cheapest GTX 970 I can find is $390, and the cheapest R9 290X is $340. The cheapest GTX 980 is $660. (That's with the Canadian dollar at 79 US cents.)
If Nvidia doesn't come out with reasonable pricing for its mid-range and high-end cards, they WILL start to lose market share to AMD. Dammit.
Ideally they want to rely on using chips that couldn't perform well enough to reach the Titan bin for the 980Ti, otherwise they are wasting the potential of those chips. That means making and selling enough Titans and Quadros to produce a large enough supply of the lower-binned chips. So unless they are having yield problems, it makes sense they come out with products based on the fully-enabled chip first and then begin to introduce lower-specified chips later.
As far as losing market share, they are well aware of how things are selling, and if people aren't choosing to buy cards with their chips over the competition's at the current price, they will lower the price or offer incentives. Right now NVIDIA is selling newer, quieter, more efficient chips against AMD's older, louder, less efficient chips so they are able to charge a premium.
This on-time review is interesting and this Titan is all well and good, but I'm still waiting on the review of the card that most people can actually buy: the 960. Or, at least, to see it added to the GPU Bench tool.
This *is* a compute card, but for an application that doesn't need FP64: deep learning. In fact, deep learning would do even better with FP16. What deep learning does need is lots of ALUs (check) and lots of RAM (double check). Deep learning people were asking for more RAM and they got it. I'm considering buying one just for training neural nets.
Yes, I got that idea from the keynote address, and I think that's why they have 12GB of RAM. But how much deep-learning-specific compute demand is there? Are there lots of people who use compute just for deep learning and nothing else that demands FP64 performance? Enough that it warrants building an entire GPU (GM200) just for them? Surely NVIDIA is counting mostly on gaming sales for Titan and whatever cut-down GM200 card arrives later.
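To put the FP16/VRAM point above in rough numbers, here is a minimal back-of-envelope sketch in Python; the 60M-parameter network size is purely illustrative, and only weights, gradients and momentum are counted:

    # Rough training-memory estimate for a hypothetical 60M-parameter network.
    # Assumption: weights + gradients + momentum dominate; activations are ignored.
    params = 60e6
    copies = 3  # weights, gradients, momentum
    for name, bytes_per_value in (("FP32", 4), ("FP16", 2)):
        gb = params * copies * bytes_per_value / 1e9
        print(f"{name}: ~{gb:.2f} GB before activations and batch data")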
Nearly double the performance of a single 780 when heavily OC'd, jesus christ, I wish I had disposable income.
I already got burned by buying a 780 though ($722 before it dropped $200 a month later due to the Ti's release), so I'd much rather at this point extend the lifespan of my system by picking up some cheap second hand 780 and dealing with SLI's issues again (haven't used it since my 2x 460's) while I sit and wait for the 980 Ti to get people angry again or even until the next die shrink.
At any rate, I won't get burned again buying my first ever enthusiast card, that's for damn sure.
Well, Titan X looks like a really mean machine. A bit pricey, but the top dog has always been like that for NV, so you can't ping it too badly on that. I'm really glad NVDA has set their "Big Maxwell" benchmark, because now it's up to the R9 390X to defeat it. This will be flagship vs. flagship, with the winner taking all the honors.
Couldn't you show us a chart of VRAM usage for Shadow of Mordor instead of minimum frames? Argus Monitor charts VRAM usage; it would've been great to see how much average and maximum VRAM Shadow of Mordor uses (of the available 12GB).
I'm surprised how often the ageing 7990 tops this. I had no doubt whatsoever that the 295X2 was going to stomp all over this, and that's what bothered me about everyone claiming the Titan X was going to be the fastest graphics card, blah, blah, blah. Yes, I'm aware those are dual-GPU cards in CrossFire; no, I don't care, because they're single cards and can be found for significantly lower prices if price/performance is the only concern.
So... as a person who has the absolute worst timing ever when it comes to purchasing technology, I ordered a brand new PC - FOR THE FIRST TIME IN NINE YEARS - just three days ago with 2 x GTX 980s. I haven't even received them yet, and I run across several reviews for this - today. Now, the question is: do I attempt to return the two 980s, saving $100 in the process? Or is it just better to keep the 980s? (Thankfully I haven't built the system yet, and consequently haven't opened them already, or I'd be livid.) Thanks for any advice, and sorry for any arguments I spark, yikes.
It feels nVidia are just taking the pee out of us now. I was semi-miffed at the 970 controversy, I know for business reasons etc. it doesn't make sense to truly trounce the competition (and your own products) when you can instead hold something back and keep it tighter, and have something to release in case they surprise you.
And I was semi-miffed when I heard it would be more like a 33% improvement over the current cream of the crop, instead of the closer to 50% increase the Titan was over the 680, because they have to worry about the 390x, and leave room for a Titan X White Y Grey SuperHappyTime version.
But to still charge $1000 even though they are keeping the DP performance low, this is just too far. The whole reasoning for the high price tag was you were getting a card that was not only a beast of a gaming card, but it would hold its own as a workstation card too, as long as you didn't need the full Quadro service. Now it is nothing more than a high end card, a halo product...that isn't actually that good!
When it comes down to it, you're paying 250% the cost for 33% more performance, and that is disgusting. Don't even bring RAM into it, it's not only super cheap and in no way a justification for the cost, but in fact is useless, because NO GAMER WILL EVER NEED THAT MUCH, IT WAS THE FLIM FLAMMING WORKSTATION CROWD WHO NEEDING THAT FLIM FLAMMING AMOUNT OF FLOOMING RAM YOU FLUPPERS!
This feels like a big juicy gob of spit in our faces. I know most people bought these purely for the gaming option and didn't use the DP capability, but that's not the point - it was WORTH the $999 price tag. This simply is not, not in the slightest. $650, $750 tops because it's the best, after all..but $999 ? Not in this lifetime.
I've not had an AMD card since way back in the days of ATi, I am well and truly part of the nVidia crowd, even when they had a better card I'd wait for the green team reply. But this is actually insulting to consumers.
I was never gonna buy one of these, I was waiting on the 980Ti for the 384bit bus and the bumps that come along with it...but now I'm not only hoping the 390x is better than people say because then nVidia will have to make it extra good..I'm hoping it's better than they say so I can actually buy it.
For shame nVidia, what you're doing with this card is unforgivable
No he's not. He's blaming a for-profit company for abusing its position at the expense of its customers. Maxwell is great, and I've got 2 of them in my rig. But Titan X is a bit of a joke. The only justification the previous Titan had was that it could be viewed as a cheap professional card. Now that's gone, but you're still paying the same price. Unfortunately Nvidia will put the highest price they can get away with, and $999 doesn't seem to deter some hardcore fans no matter how much poor value it represents. I certainly hope the sales don't meet their expectations.
I would argue that the VRAM may be needed later on. 4GB is already tight with SoM, and future games will only push that up. People said that 6GB was too much for the OG Titan, but SoM can eat that up at 4K, and other games are not far behind. Especially for SLI setups, that memory will come in handy. That's what really killed the 770. The GPU was fine for me, but 2GB was way too little VRAM.
Not being a gamer, I would like to see a review in which many of these top-of-the-line gaming cards are tested against a different sort of environment. For example, I'd love to see how cards compare handling graphics software packages such as Photoshop, Premier Pro, Lightwave, Cinema 4D, SolidWorks and others. If these cards are really pushing the envelope, then they should compare against the Quadro and FirePro lines.
I think it's safe to say that Nvidia makes technically superior cards compared to AMD, at least as far as the last 2 generations of GPUs are concerned. While the AMD cards consume more power and produce more heat, that is not a determining factor when I upgrade, unlike price and choice.
I will not buy this card, despite the fact that I find it to be a very desirable and technically impressive card, because I don't like being price-gouged and because I want AMD to be competitive.
I will buy the 390X because I prefer a "consumer wins" situation where there are at least 2 companies producing competitive products, and let's be clear, AMD GPUs are competitive, even when you factor in what is ultimately a small increase in heat and noise, not to mention lower prices.
It was a pleasant surprise to see the R295X2 at one point described as "very impressive" yet I think it would have been fair if Ryan had drawn more attention to AMD "wins," even though they are not particularly significant, such as the most stressful Shadow of Mordor benchmarks.
Most people favour a particular brand, but surely even the most ardent supporters wouldn't want to see a situation where there is ONLY Intel and ONLY Nvidia. We are reaping the rewards of this scenario already in terms of successive generations of Intel CPUs offering performance improvements that are mediocre at best.
I can only hope that the 390X gets a positive review at Anandtech.
Looking forward to a 390 with the same performance for 400-500. I certainly got my money's worth out of the r9 290 when it was released. Don't understand how anyone could advocate this $1000 single card price bracket created for "top tier".
I actually really like how the new Titan looks; it shows what can be done. The problem with this card at this price point is that it defeats what the Titan really should be. Without the double precision performance this card becomes irrelevant, I feel (an overpriced gaming card). The original Titan was an entry-level compute card outside of the Quadro lineup. I know there are drawbacks to multi-GPU setups, but I would go for 2 980s or 970s for the same or less money than the Titan X.
I also found these benchmarks very interesting because you can see how much each game can be biased to a certain card. AMDs 290x, an old card, beat out the 980 in some cases, mostly at 4k resolutions and lost in others at the same resolution. Just goes to show that you also have to look at individual game performance as well as overall performance when buying a card.
Can't wait for the 390x from AMD that should be very interesting.
"Unlike the GTX 980 then, for this reason NVIDIA is once again back to skipping the backplate, leaving the back side of the card bare just as with the previous GTX Titan cards."
Don't you mean "again back to SHIPPING the backplate?"
I'm confused as the article doesn't show any pictures of the back of the card. Does it have a backplate or not?
It's a blower cooler. So everything goes out the side of the case, which can be desirable if you have cards right on top of each other as the airflow is unobstructed.
It's just Nvidia. Unless you need PhysX, you're much better off waiting for the R300s.
Spring pricing is a bit off. R9 290Xs go below $300 after rebates quite often now; in February I picked up a 290X for about $240 after rebate, which was the lowest, but I have seen several at or below $300 without a rebate. R9 290s run around $250 and have gone down to $200-$220 recently as a low. 970s have been hovering around $320 but have gone to $290-$300.
Otherwise the Titan X was more for marketing, since the 290X (2-year-old tech) claws at the 980 at 4K and the 970 falls on its face at 4K. This card's a beast, don't get me wrong, especially when it chases the 295X2 after overclocking, but when you can get a 295X2 for $600 after rebates a couple of times a month it just doesn't make sense. At $800 I could see these selling like hotcakes and they'd still pocket a solid chunk; they're probably just going to drop a 980 Ti in a few months after the 390X is released, making these 2nd-place cards like they did with the OG Titans.
I go back and forth between Nvidia and AMD but Nvidia has been extra sketchy recently with their drivers and of course the 970.
I've been a fan of Green Team since i was a young boy, but anymore I usually lean Red team.
Just not satisfied with what I'm paying over on the other side to be honest.
Yes when I'm on the Red side I don't always have the same peak performance as Green. But I had enough money afterwards to pay my car payment and take the old lady out to dinner still. ;)
There is an R9 290X available for nearly half of the 980's price, being only 5-15% slower (and 300W vs 370W total power consumption; I'm sure you can live with it).
There is the R9 295X2, which handily beats the Titan X in all performance benchmarks, with power consumption being the only downside.
@Ryan Smith. For future reviews, as you briefly touched on it with this one, especially at high resolutions, can you start providing how much VRAM is actually in use with each game? For cards such as this, I'd like to see whether 12GB is actually useful, or pointless at this point. Based on the review and some of the results, it's pointless at the moment, even at 4K.
For people thinking that VRAM is unneeded, you must not be heavy into modding. Especially with Fallout 4 and GTA 5 on the horizon, massive amounts of room for texture mods will come in handy.
As is often the case with "doubled RAM" models, by the time that 12GB of VRAM is useful, we'll be a couple of generations down the road, and cards with 12GB of VRAM will be much faster, much cheaper, or both.
Maybe at that point a Titan X owner could pick up a cheap used card and run them in SLI, but even then they're laying out more money than a user who buys a $500 card every couple of years and has the VRAM he/she needs when it's actually useful.
I agree with you, but don't forget how VRAM is used in SLI and CF. The VRAM of GPU 1 mirrors the VRAM of GPU 0, so if you have 2x 4GB you're only taking advantage of 4GB. Anyway, I prefer fast RAM to huge amounts of it.
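A tiny sketch of that mirroring arithmetic, assuming classic AFR-style SLI/CrossFire where each GPU keeps its own full copy of the assets:

    # Effective VRAM under AFR SLI/CrossFire: assets are duplicated per GPU,
    # so usable capacity is the per-card amount, not the sum.
    cards_gb = [4, 4]
    print(f"Installed: {sum(cards_gb)} GB, usable for game assets: {min(cards_gb)} GB")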
We've already had a game which called for 6GB VRAM for an advanced texture pack. Imagine an Elder Scrolls or a Fallout where every single object in the game has a 4k resolution texture. I think it'd be a challenge even for the titan.
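For a sense of scale, here is a rough sketch of what "a 4K texture on everything" costs in VRAM; the figures assume RGBA8 for the uncompressed case and a typical 4:1 block-compressed format, so treat them as approximations:

    # Approximate VRAM cost of one 4096x4096 texture, including a full mip chain (~1.33x).
    texels = 4096 * 4096
    mip_factor = 4 / 3
    uncompressed_mb = texels * 4 * mip_factor / 2**20   # RGBA8, 4 bytes per texel
    compressed_mb = texels * 1 * mip_factor / 2**20     # BC7/DXT5-class, ~1 byte per texel
    print(f"Uncompressed: ~{uncompressed_mb:.0f} MB, block-compressed: ~{compressed_mb:.0f} MB")
    print(f"Unique 4K-textured objects that fit in 6 GB: ~{6 * 1024 // compressed_mb:.0f}")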
The way that RAM works is the worse your system is, the more RAM you end up needing.
There are plateaus, but as GPUs get faster you need less VRAM to store the same amount of information.
The Titan X is much faster than the Titan BE, and thus needs less VRAM, assuming that the application is the same.
Then we get into Direct X 12 and Vulkan. They're supposed to increase efficiency all-around, reducing the demand for resources like RAM and cores even more.
"the card is generally overpowered for the relatively low maximum resolutions of DL-DVI " So I can drive my 1440p 105Hz display with it and get above 105fps? No? So what kind of statement is that then. DL-DVI may be old, but to say that 1440p is a low maximum resolution, especially with 100Hz+ IPS displays which rely on DL-DVI input, is strange to say the least.
The 290X was the original Titan killer. Not only did it kill the original release, it killed its over-inflated price as well. I suspect the next iteration of AMD's flagship card will be a Titan X killer as well. History usually repeats itself over and over again.
The R9 290X only has 4GB at 5GHz and does an awesome job at 4K. The 295X2 only operates with 4GB (the other 4GB are mirrored) and shines at 4K. So I can't understand everybody's concerns about 4K gaming with the upcoming Fiji. This Titan X has 12GB at 7GHz and only shows how GDDR5 is obsolete.
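The raw numbers behind that comparison, as a quick sketch (peak bandwidth = bus width / 8 x effective data rate, using the figures quoted above):

    # Peak memory bandwidth from bus width (bits) and effective GDDR5 data rate (Gbps).
    cards = {"R9 290X (512-bit, 5 Gbps)": (512, 5.0),
             "Titan X (384-bit, 7 Gbps)": (384, 7.0)}
    for name, (bus_bits, gbps) in cards.items():
        print(f"{name}: {bus_bits / 8 * gbps:.0f} GB/s")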
Hello everyone, I would like you to read the final words on the Titan X. They say the performance increase over a single GTX 980 is 33%, except the price is 100% over the GTX 980, and that's if you are lucky enough to pay just $1000 for the Titan X. Please, people, do not waste your money on this card. If you do, then Nvidia will keep releasing extremely overpriced cards. DO NOT BUY THIS CARD. Please instead wait for the GTX 980 Ti if you want DX12. I will certainly pay 1 grand and more for a card, but this card is a particular rip-off at that price point. Don't just throw your money away. Read the performance chart yourself; it is in no way, shape or form worth $1000.
I suppose we can't buy a Rolex, Tesla, a vacation condo, or even a pony? Paying for the best available is always more money. Get a job where another $500 doesn't affect you when you purchase something. Plus price is only perception on worth. People could say $20 is too much for a video card and they would be right.
I wish they would have thrown in 780 SLI, which is what I run. I would like to have more VRAM, but I'm running all the new games pretty much maxed out. I made the mistake of buying them when they first came out and paid over $600 apiece. I will definitely wait for price drops this time.
4K gaming not quite there yet. Not going to pay $500+ for it. And in the mean time still jamming Full HD games like a baws using my old 280X "on my Full HD monitor".
I actually went to the trouble to make an account to say sometimes I come here just to read the comments, some of the convos have me rolling on the floor busting my guts laughing, seriously, this is free entertainment at its best! Aside from that, the cost of this Nvidia e-penis would feed 10 starving children for a month. I mean seriously, at what point is it overkill? By that I mean is there any game out there that would absolutely not run good enough on a slightly lesser card at half the price? When I read this card alone requires 250W, my eyes popped out of my head, holy electric bill batman, but I guess if somebody has a 1G to throw away on an e-penis, they don't have electric bill worries. One more question, what kind of CPU/motherboard would you need to back this sucker up? I think this card would be retarded without at least the latest i7 Extreme(ly overpriced), can you imagine some tool dropping this in an i3? What I'm saying is, this sucker would need an expensive 'bed' too, otherwise, you'd just be wasting your time and money.
So much silly complaining about value. This is an incredible bargain for compute compared to Tesla -- absolutely crushes at single precision for a fraction of the price! For my application the new Titan X is the absolute best that money can buy, and it's comparatively cheap. So, I'll buy 10 of them, and 100 more if they work out.
Or, if you're the kind of person who actually needs CUDA and isn't just using it because they made a mistake in choosing their software and just chose something with a bloated price tag and fancy webpage then you get a Quadro card instead of wasting your money on a Titan.
You know. The sort of people who need Solidworks because they're working for a multimillion or even multibillion dollar corporation that wants 3D models or is using GPU computing, or if you're using Maya to animate a movie for a multimillion dollar studio.
Even if you're an indie on a budget, you don't buy a Titan. Because you won't be using software with CUDA or special Nvidia optimization. Because you won't be using iRay.
With the exception of industry applications (excluding individual/small businesses), Nvidia is currently just a choice for brand loyalists or people who want a big epeen.
Your "in-house project developed by our very own Dr. Ian Cutress" is garbage and is obviously not dividing workloads between multi-GPUs, a very simple task for any programmer with access to Google.
It's plain as day to see, but gives NV the lead in another benchmark - was this the goal of such awful programming?
Umm, I find it pointless to compare the AMD R9 290X with the GTX 980. The R9 290X was built to be competitive with Nvidia's stock 780, not the 780 Ti, and sure as hell not the GTX 980. It's dumb; it's like asking a grandma (R9 290X) to compete with a supermodel (GTX 980) in a beauty pageant. Of course Nvidia is going to win, but it's not like the winning gap is spectacular or something to be astonished about. Last but not least, the GTX 980's lead over the grandma is largest below 2K. Let's not forget that both the GTX 980 and the grandma were built to handle 4K, so given the time Nvidia had to prepare the GTX 980, it should have obliterated the grandma at 4K, but the performance gap is not that fricking big, nor does it deserve to be wowed at, especially in Far Cry 4. Fanboys always bash AMD for their terrible drivers, but it's not like they are ignored, you dimwit; they are slowly improving their drivers. Did AMD ever say, "We are going to pretend that our drivers don't suck and so we are not going to fix them"?
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
276 Comments
Back to Article
stun - Tuesday, March 17, 2015 - link
I hope AMD announces R9 390X fast. I am finally upgrading my Radeon 6870 to either GTX 980, TITAN X, or R9 390X.
joeh4384 - Tuesday, March 17, 2015 - link
I do not think Nvidia will have that long with this being the only mega GPU on the market. I really wish they allowed partner models of the Titan. I think a lot of people would go nuts over a MSI Lightning Titan or something like that.farealstarfareal - Tuesday, March 17, 2015 - link
Yes, a big mistake like the last Titan to not allow custom AIB cards. Good likelihood the 390X will blow the doors off the card with many custom models like MSI Lightning, DCU2 etc.Also $1000 for this ??! lol is the only sensible response, none of the dual precision we saw in the original Titan to justify that price, but all of the price. Nvidia trying to cash in here, 390X will force them to do a card probably with less VRAM so people will actually buy this overpriced/overhyped card.
chizow - Tuesday, March 17, 2015 - link
Titan and NVTTM are just as much about image, style and quality as much as performance. Its pretty obvious Nvidia is proud of the look and performance of this cooler, and isn't willing to strap on a hunking mass of Al/Cu to make it look like something that fell off the back of a Humvee.They also want to make sure it fits in the SFF and Lanboxes that have become popular. In any case I'm quite happy they dropped the DP nonsense with this card and went all gaming, no cuts, max VRAM.
It is truly a card made for gamers, by gamers! 100% GeForce, 100% gaming, no BS compute.
ratzes - Tuesday, March 17, 2015 - link
What do you think they give up when they add DP? Its the same fabrication, was for titan vs 780ti. If I'm mistaken, the only difference between cards are whether the process screwed up 1 or more of the smps, then they get sold as gaming cards at varying decreasing prices...MrSpadge - Tuesday, March 17, 2015 - link
Lots of die space, since they used dedicated FP64 ALUs.
chizow - Wednesday, March 18, 2015 - link
@ratzes, its well documented, even in the article. DP/FP64 requires extra registers for the higher precision, which means more transistors allocated to that functionality. GM200 is only 1Bn more transistors than GK210 on the same process node, yet they managed to cram in a ton more functional units. Now compare to GM204 to GK204 3.5Bn to 5.2Bn and you can see, its pretty amazing they were even able to logically increase by 1.5x over the GM204, which we know is all gaming, no DP compute also.hkscfreak - Wednesday, March 18, 2015 - link
Someone didn't read...
nikaldro - Tuesday, March 17, 2015 - link
fanboysm to the Nth p0waH..
furthur - Wednesday, March 18, 2015 - link
which meant fuck all when Hawaii was released
Denithor - Wednesday, March 18, 2015 - link
Correct, but then they should have priced it around $800, not $1k. The reason they could demand $1k for the original Titan was due to the FP64 compute functionality on board.This is exactly what they did when they made the GTX 560 Ti, chopped out the compute features to maximize gaming power at a low cost. The reason that one was such a great card was due to price positioning, not just performance.
chizow - Monday, March 23, 2015 - link
@Denithor, I disagree, the reason they could charge $1K for the original Titan was because there was still considerable doubt there would ever be a traditionally priced GeForce GTX card based on GK110, the compute aspect was just add-on BS to fluff up the price.Since then of course, they released not 1, but 2 traditional GTX cards (780 and Ti) that were much better received by the gaming market in terms of both price and in the case of the Ti, performance. Most notably was the fact the original Titan price on FS/FT and Ebay markets quickly dropped below that of the 780Ti. If the allure of the Titan was indeed for DP compute, it would have held its price, but the fact Titan owners were dumping their cards for less than what it cost to buy a 780Ti clearly showed the demand and price justification for a Titan for compute alone simply wasn't there. Also, important to note Titan's drivers were still GeForce, so even if it did have better DP performance, there were still a lot of driver limitations related to CUDA preventing it from reaching Quadro/Tesla levels of performance.
Simply put, Nvidia couldn't pull that trick again under the guise of compute this time around, and people like me who weren't willing to pay a penny for compute over gaming weren't willing to justify that price tag for features we had no use for. Titan X on the other hand, its 100% dedicated to gamers, not a single transistor budgeted for something I don't care about, and no false pretenses to go with it.
Samus - Thursday, March 19, 2015 - link
The identity crisis this card has with itself is that for all the effort, it's still slower than two 980's in SLI, and when overclocked to try to catch up to them, ends up using MORE POWER than two 980's in SLI.
So for the price (being identical) wouldn't you just pick up two 980's which offer more performance, less power consumption and FP64 (even if you don't need it, it'll help the resell value in the future)?
LukaP - Thursday, March 19, 2015 - link
The 980 has the same 1/32 DP performance as the Titan X. And the Titan never was a sensible card. No one sensible buys it over an x80 of that generation (which I assume will be 1080 or whatever they call it, based on GM200 with less RAM, and maybe some disabled ROPs).
The Titan is a true flagship: making no sense economically, but increasing your penis size by miles.
chizow - Monday, March 23, 2015 - link
I considered going this route but ultimately decided against it despite having used many SLI setups in the past. There's a number of things to like about the 980 but ultimately I felt I didn't want to be hamstrung by the 4GB in the future. There are already a number of games that push right up to that 4GB VRAM usage at 1440p and in the end I was more interested in bringing up min FPS than absolutely maxing out top-end FPS with 980 SLI.
Power I would say is about the same, 980 is super efficient but once overclocked, with 2 of them I am sure the 980 set-up would use as much if not more than the single Titan X.
naxeem - Saturday, March 21, 2015 - link
You're forgetting three things:
1. NO game uses even close to 8GB, let alone 12
2. $1000/1300€ puts it to exactly double the price of exactly the same performance level you get with any other solution: 970 SLI kicks it with $750, 295x2 does the same, 2x290X also...
In Europe, the card is even 30% more expensive than in US and than other cards so even less people will buy it there.
3. In summer, when AMD releases 390X for $700 and gives even better performance, Nvidia will either have to drop TitanX to the same price or suffer being smashed around at the market.
Keep in mind HBM is seriously a performance kicker for high resolutions, end-game gaming that TitanX is intended for. No amount of RAM can counter RAM bandwidth, especially when you don't really need over 6-7GB for even the most demanding games out there.
ArmedandDangerous - Saturday, March 21, 2015 - link
Or they could just say fuck it and keep the Titan at its exact price and release an x80 GM200 at a lower price with some features cut that will still compete with whatever AMD has to offer. This is the 3rd Titan, how can you not know this by now.
naxeem - Tuesday, March 24, 2015 - link
Well, yes. But without the compute performance of previous Titans, who would, and why, buy a $1000 Titan X when the exact same performance is coming in some 980 Ti or the like?
Those who need 12GB for rendering may as well buy Quadros with more VRAM... When you need 12, you need more anyway... For gaming, 12GB means jack sht.
Thetrav55 - Friday, March 20, 2015 - link
Well, it's only the fastest card in the WORLD, look at it that way, the fastest card in the world for ONLY $1000. I know, I know, $1000 does not justify the performance, but it's the fastest card in the WORLD!!!
agentbb007 - Wednesday, June 24, 2015 - link
LOL had to laugh @ farealstarfareal's comment that the 390X would likely blow the doors off the Titan X; the 390X is nowhere near the Titan X, it's closer to a 980. The almighty R9 Fury X reviews posted this morning and it's not even beating the 980 Ti.
looncraz - Tuesday, March 17, 2015 - link
If the most recent slides (allegedly leaked from AMD) hold true, the 390x will be at least as fast as the Titan X, though with only 8GB of RAM (but HBM!).
A straight 4096SP GCN 1.2/3 GPU would be a close match-up already, but any other improvements made along the way will potentially give the 390X a fairly healthy launch-day lead.
I think nVidia wanted to keep AMD in the dark as much as possible so that they could not position themselves to take more advantage of this, but AMD decided to hold out, apparently, until May/June (even though they apparently already have some inventory on hand) rather than give nVidia a chance to revise the Titan X before launch.
nVidia blinked, it seems, after it became apparent AMD was just going to wait out the clock with their current inventory.
zepi - Wednesday, March 18, 2015 - link
Unless AMD has achieved a considerable increase in perf/W, they are going to have a really hard time tuning those 4k shaders to a reasonable frequency without it being a 450W card.
Not that being a 500W card is necessarily a deal breaker for everyone, but in practice cooling a 450W card without causing ear-shattering levels of noise is very difficult compared to cooling a 250W card.
Let us wait and hope, since AMD really would need to get a break and make some money on this one...
looncraz - Wednesday, March 18, 2015 - link
Very true. We know that with HBM there should already be a fairly beefy power savings (~20-30W vs 290X it seems).That doesn't buy them room for 1,280 more SPs, of course, but it should get them a healthy 256 of them. Then, GCN 1.3 vs 1.1 should have power advantages as well. GCN 1.2 vs 1.0 (R9 285 vs R9 280) with 1792 SPs showed a 60W improvement, if we assume GCN 1.1 to GCN 1.3 shows a similar trend the 390X should be pulling only about 15W more than the 290X with the rumored specs without any other improvements.
Of course, the same math says the 290X should be drawing 350W, but that's because it assumes all the power is in the SPs... But I do think it reveals that AMD could possibly do it without drawing much, if any, more power without making any unprecedented improvements.
Braincruser - Wednesday, March 18, 2015 - link
Yeah, but the question is, how well will the memory survive on top of a 300W GPU?
Because the first part in a graphics card to die from high temperatures is the VRAM.
looncraz - Thursday, March 19, 2015 - link
It will be to the side, on a 2.5d interposer, I believe.GPU thermal energy will move through the path of least resistance (technically, to the area with the greatest deltaT, but regulated by the material thermal conductivity coefficient), which should be into the heatsink or water block. I'm not sure, but I'd think the chips could operate in the same temperature range as the GPU, but maybe not. It may be necessary to keep them thermally isolated. Which shouldn't be too difficult, maybe as simple as not using thermal pads at all for the memory and allowing them to passively dissipate heat (or through interposer mounted heatsinks).
It will be interesting to see what they have done to solve the potential issues, that's for sure.
Xenonite - Thursday, March 19, 2015 - link
Yes, I agree that AMD would be able to absolutely destroy NVIDIA on the performance front if they designed a 500W GPU and left the PCB and waterblock design to their AIB partners.I would also absolutely love to see what kind of performance a 500W or even a 1kW graphics card would be able to muster; however, since a relatively constant 60fps presented with less than about 100ms of total system latency has been deemed sufficient for a "smooth and responsive" gaming experience, I simply can't imagine such a card ever seeing the light of day.
And while I can understand everyone likes to pretend that they are saving the planet with their <150W GPUs, the argument that such a TDP would be very difficult to cool does not really hold much water IMHO.
If, for instance, the card was designed from the ground up to dissipate its heat load over multiple 200W~300W GPUs, connected via a very-high-speed, N-directional data interconnect bus, the card could easily and (most importantly) quietly be cooled with chilled-watercooling dissipating into a few "quad-fan" radiators. Practically, 4 GM200-size GPUs could be placed back-to-back on the PCB, with each one rendering a quarter of the current frame via shared, high-speed frame buffers (thereby eliminating SLI-induced microstutter and "frame-pacing" lag). Cooling would then be as simple as installing 4 standard gpu-watercooling loops with each loop's radiator only having to dissipate the TDP of a single GPU module.
naxeem - Tuesday, March 24, 2015 - link
They did solve that problem with a water-cooling solution. 390X WCE is probably what we'll get.
ShieTar - Wednesday, March 18, 2015 - link
Who says they don't allow it? EVGA have already announced two special models, a superclocked one and one with a watercooling block:
http://eu.evga.com/articles/00918/EVGA-GeForce-GTX...
Wreckage - Tuesday, March 17, 2015 - link
If by fast you mean June or July. I'm more interested in a 980ti so I don't need a new power supply.
ArmedandDangerous - Saturday, March 21, 2015 - link
There won't ever be a 980 Ti if you understand Nvidia's naming schemes. Ti's are for unlocked parts, there's nothing to further unlock on the 980 GM204.
Urizane - Monday, March 23, 2015 - link
660 and 660 Ti are different chips entirely, with 660 Ti not fully enabled.
chizow - Tuesday, March 17, 2015 - link
@stun you're in for a huge upgrade either way. Makes sense to wait though, but I am not sure if 390X will change current pricing if at all. But Nvidia may also launch a cut down GM200 in that timeframe to give you another option in that $500+ range.
Da W - Tuesday, March 17, 2015 - link
Usually, the last one out is the fastest.
furthur - Wednesday, March 18, 2015 - link
You're an absolute idiot if you jump on this crap. Grab a 290 in the meantime and a 390X on release.
Michael Bay - Wednesday, March 18, 2015 - link
Maybe he doesn't need the equivalent of a room heater in his case like you do, brah.
Phartindust - Wednesday, March 18, 2015 - link
At 83C, you're not exactly making ice cubes with the Titan.
cactusdog - Wednesday, March 18, 2015 - link
Im not convinced about this TitanX and the last titan turned out to be a bad investment for the $1,000 asking price. Last time, Titan came out (at $1,000) then a matter of weeks later , the 780TI came out with the same performance for $300 less. This time, we have the 390X soon but no doubt Nvidia have a 980TI up their sleeve, so the value of these highend $1,000 cards disappears quickly making it a bad investment. I expect a $1,000 card to hold the performance crown for at least 6-12 months not a few weeks, then get out performed by a card that costs $300 less.Laststop311 - Wednesday, March 18, 2015 - link
It wasn't weeks later, it was many months later.
D. Lister - Wednesday, March 18, 2015 - link
@cactusdog"Titan came out (at $1,000) then a matter of weeks later , the 780TI came out with the same performance for $300 less."
Actually the 780Ti, having a lot more CUDA cores, destroys the original Titan in gaming performance. The 780Ti equivalent was the "Titan Black", with the same amount of cores, but twice the VRAM, slightly higher default core clock, and fully unlocked compute.
Phartindust - Wednesday, March 18, 2015 - link
^This
nos024 - Wednesday, March 18, 2015 - link
Well, let's see. Even when it launches, will it be readily available and not highly priced like the 290X? If the 290X had been readily available when it launched, I would've bought one.
eanazag - Wednesday, March 18, 2015 - link
Based on leaked slides referencing Battlefield 4 at 4K resolution, the 390X is 1.6x the 290X. In the context of this review's results we could guess it comes up slightly short at 4K ultra and 10 fps faster than the Titan X at 4K medium. Far Cry 4 came in at 1.55x the 290X.
290X non-uber 4K ultra - BF4 - 35.5 fps x 1.6 = 56.8 >> Titan 58.3
290X non-uber 4K medium - BF4 - 65.9 fps x 1.6 = 105.44 >> Titan 94.8
290X non-uber 4K ultra - FC4 - 31.2 fps x 1.55 = 48.36 >> Titan 42.1
290X non-uber 4K medium - FC4 - 40.9 fps x 1.55 = 63.395 >> Titan 60.5
These numbers don't tell the whole story on how AMD arrived with the figures, but it paints the picture of a GPU that goes toe-to-toe with the Titan X. The slides also talk about a water cooler edition. I'm suspecting the wattage will be in the same ball park as the 290X and likely higher.
With the Titan X full breadth compute muscle, I am not sure what the 980 Ti will look like. I suspect Nvidia is holding that back based on whatever AMD releases, so they can unload a smack down trump card. Rumored $700 for the 390X WCE with 8GB HBM (high bandwidth memory - 4096 bit width) and in Q2 (April-June). Titan X and 390X at the same price given what I know at the moment I would go with the Titan X.
Stack your GPU $'s for July.
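The projection above is easy to reproduce; a quick sketch using only the figures quoted in that comment (the 1.6x and 1.55x multipliers come from the leaked slides, so the outputs are speculation, not measurements):

    # Projected 390X fps = 290X (non-uber) fps x scaling factor from the leaked slides.
    cases = [
        ("BF4 4K ultra",  35.5, 1.60, 58.3),   # (290X fps, factor, Titan X fps)
        ("BF4 4K medium", 65.9, 1.60, 94.8),
        ("FC4 4K ultra",  31.2, 1.55, 42.1),
        ("FC4 4K medium", 40.9, 1.55, 60.5),
    ]
    for name, r290x, factor, titan_x in cases:
        print(f"{name}: projected 390X ~{r290x * factor:.1f} fps vs Titan X {titan_x} fps")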
FlushedBubblyJock - Thursday, April 2, 2015 - link
If the R9 390X doesn't come out at $499 months and months from now, it won't be worth it.
shing3232 - Tuesday, March 17, 2015 - link
1/32 FP64? So, this is a big gaming core.
Railgun - Tuesday, March 17, 2015 - link
Exactly why it's not a $999 card.
shing3232 - Tuesday, March 17, 2015 - link
But it was priced at $999.
Railgun - Tuesday, March 17, 2015 - link
What I mean is that it's not worth being a $999 card. Yes, it's priced at that, but its value doesn't support it.
Flunk - Tuesday, March 17, 2015 - link
Plenty of dolts bought the first Titan as a gaming card so I'm sure someone will buy this. At least there's a bigger performance difference between the Titan X and GTX 980 than there was between the Titan and GTX 780.
Kevin G - Tuesday, March 17, 2015 - link
Except the GTX 780 came after the Titan launched. Rather it was the original Titan compared to the GTX 680 and here we see a similar gap between the Titan X and the GTX 980. It is also widely speculated that we'll see a cut down GM200 to fit between the GTX 980 and the Titan X so history looks like it will repeat itself.
chizow - Tuesday, March 17, 2015 - link
@Railgun, I'd disagree and I was very vocal against the original Titan for a number of reasons. Mainly because Nvidia used the 7970 launch as an opportunity to jump their 2nd fastest chip as mainstream. 2ndly, because they held back their flagship chip nearly a full year (GTX 680 launched Mar 2012, Titan Feb 2013) while claiming the whole time there was no bigger chip, they tried to justify the higher price point because it was a "compute" card and lastly because it was a cut down chip and we knew it.Titan X isn't being sold with any of those pretenses and now that the new pricing/SKU structure has settled in (2nd fastest chip = new $500 flagship), there isn't any of that sticker shock anymore. Its the full chip, there's no complaints about them holding anything back, and 12GB of VRAM is a ridiculous amount of VRAM to stick on a card, and that costs money. If EVGA wants to release an $800 Classified 980 and people see value in it, then certainly this Titan X does as well.
At least for me, it is the more appealing option for me now than getting a 2nd 980 for SLI. Slightly lower performance, lower heat, no SLI/scaling issues, and no framebuffer VRAM concerns for the foreseeable future. I game at 2560x1440p on an ROG Swift btw, so that is right in this card's wheelhouse.
Kevin G - Wednesday, March 18, 2015 - link
There was indeed a bigger chip due closer to the GK104/GTX 680's launch: the GK100. However it was cancelled due to bugs in the design. A fixed revision eventually became the GK110, which was ultimately released as the Titan/GTX 780.
After that there have been two more revisions. The GK110B is a quick respin which all fully enabled dies stem from (Titan Black/GTX 780 Ti). Then late last year nVidia surprised everyone with the GK210, which has a handful of minor architectural improvements (larger register files etc.).
The moral of the story is that building large dies is hard and takes lots of time to get right.
chizow - Monday, March 23, 2015 - link
We don't know what happened to GK100, it is certainly possible as I've guessed aloud numerous times that AMD's 7970 and overall lackluster pricing/performance afforded Nvidia the opportunity to scrap GK100 and respin it to GK110 while trotting GK104 out as its flagship, because it was close enough to AMD's best and GK100 may have had problems as you described. All of that led to considerable doubt whether or not we would see a big Kepler, a sentiment that was even dishonestly echoed by some Nvidia employees I got into it with on their forums.Only in October 2012 did we see signs of Big Kepler in the Titan supercomputer with K20X, but still no sign of a GeForce card. Its no doubt that a big die takes time, but Nvidia had always led with their big chip first, since G80 and this was the first time they deviated from that strategy while parading what was clearly their 2nd best, mid-range performance ASIC as flagship.
Titan X sheds all that nonsense and goes back to their gaming roots. It is their best effort, up front, no BS. 8Bn transistors Inspired by Gamers and Made by Nvidia. So as someone who buys GeForce for gaming first and foremost, I'm going to reward them for those efforts so they keep rewarding me with future cards of this kind. :)
Railgun - Wednesday, March 18, 2015 - link
With regards to the price, 12GB of RAM isn't justification enough for it. Memory isn't THAT expensive in the grand scheme of things. What the Titan was originally isn't what the Titan X is now. They can't be seen as the same lineage. If you want to say memory is the key, the original Titan with its 6GB could be seen as more than still relevant today. Crysis is 45% faster in 4K with the X than the original. Is that the chip itself or memory helping? I vote the former given the 690 is 30% faster in 4K with the same game than the original Titan, with only 4GB total memory. VRAM isn't going to really be relevant for a bit other than those that are running stupidly large spans. It's a shame as Ryan touches on VRAM usage in Middle Earth, but doesn't actually indicate what's being used. There too, the 780Ti beats the original Titan sans huge VRAM reserves. Granted, barely, but point being is that VRAM isn't the reason. This won't be relevant for a bit I think.You can't compare an aftermarket price to how an OEM prices their products. The top tier card other than the TiX is the 980, which has been mentioned ad nauseam that the TiX is NOT worth 80% more given its performance. If EVGA wants to OC a card out of their shop and charge 45% more than a stock clock card, then buyer beware if it's not a 45% gain in performance. I for one don't see the benefit of a card like that. The convenience isn't there given the tools and community support for OCing something one's self.
I too game on 25x14 and there've been zero issues regarding VRAM, or the lack thereof.
chizow - Monday, March 23, 2015 - link
I didn't say VRAM was the only reason, I said it was one of the reasons. The bigger reason for me is that it is the FULL BOAT GM200 front and center. No waiting. No cut cores. No cut SMs for compute. No cut down part because of TDP. It's 100% of it up front, 100% of it for gaming. I'm sold and onboard until Pascal. That really is the key factor, who wants to wait for unknown commodities and timelines if you know this will set you within +/-10% of the next fastest part's performance if you can guarantee you get it today for maybe a 25-30% premium? I guess it really depends on how much you value your current and near-future gaming experience. I knew from the day I got my ROG Swift (with 2x670 SLI) I would need more to drive it. 980 was a bit of a sidegrade in absolute performance and I still knew i needed more perf, and now I have it with Titan X.As for VRAM, 12GB is certainly overkill today, but I'd say 6GB isn't going to be enough soon enough. Games are already pushing 4GB (SoM, FC4, AC:U) and that's still with last-gen type textures. Once you start getting console ports with PC texture packs I could see 6 and 8GB being pushed quite easily, as that is the target framebuffer for consoles (2+6). So yes, while 12GB may be too much, 6GB probably isn't enough, especially once you start looking at 4K and Surround.
Again, if you don't think the price is worth it over a 980 that's fine and fair, but the reality of it is, if you want better single-GPU performance there is no alternative. A 2nd 980 for SLI is certainly an option, but for my purposes and my resolution, I would prefer to stick to a single-card solution if possible, which is why I went with a Titan X and will be selling my 980 instead of picking up a 2nd one as I originally intended.
Best part about Titan X is it gives another choice and a target level of performance for everyone else!
Frenetic Pony - Tuesday, March 17, 2015 - link
They could've halved the RAM, dropped the price by $200, and done a lot better without much to any performance hit.
Denithor - Wednesday, March 18, 2015 - link
LOL. You just described the GTX 980 Ti, which will likely launch within a few months to answer the 390X.
chizow - Wednesday, March 18, 2015 - link
@Frenetic Pony, maybe now, but what about once DX12 drops and games are pushing over 6GB? We already see games saturating 4GB, and we still haven't seen next-gen engine games like UE4. Why compromise for a few hundred less? You haven't seen all the complaints from 780Ti users about how 3GB isn't enough anymore? Shouldn't be a problem for this card, which is just 1 less thing to worry about.
LukaP - Thursday, March 19, 2015 - link
Games don't push 4GB... Check the LTT ultrawide video, where he barely got Shadow of Mordor on ultra to go past 4GB on 3 ultrawide 1440p screens.
And as a game dev I can tell you, with proper optimisations, more than 4GB is insane on a GPU, unless you just load stuff in with a predictive algorithm to avoid PCIe bottlenecks.
And please do show me where a 780 Ti user isn't happy with his card's performance at 1080-1600p. Because the card does, and will continue to, perform great at those resolutions, since games won't really advance, due to consoles limiting things again.
LukaP - Thursday, March 19, 2015 - link
Also, DX12 wont make games magically use more VRAM. all it really does is it makes the CPU and GPU communicate better. It wont magically make games run or look better. both of those are up to the devs, and the look better part is certainly not the textures or polycounts. Its merely the amount of drawcalls per frame going up, meaning more UNIQUE objects. (contrary to more objects, which can be achieved through instancing easily in any modern engine, but Ubisoft havent learned that yet)chizow - Monday, March 23, 2015 - link
DX12 raises the bar for all games by enabling better visuals, you're going to get better top-end visuals across the board. Certainly you don't think UE4 when it debuts will have the same reqs as DX11 based games on UE3?Even if you have the same size textures as before 2K or 4K assets as is common now, the fact you are drawing more polygons enabled by DX12's lower overhead, higher draw call/poly capabilities means they need to be textured, meaning higher VRAM requirement unless you are using the same textures over and over again.
Also, since you are a game dev, you would also know Devs are going more and more towards bindless or megatextures that specifically make great use of textures staying resident in local VRAM for faster accesses, rather than having to optimize and cache/load/discharge them.
Dug - Thursday, March 19, 2015 - link
Thank you for pointing this out.
chizow - Monday, March 23, 2015 - link
Uh, they absolutely do push 4GB, its not all for the framebuffer but they use it as a texture cache that absolutely leads to a smoother gaming experience. I've seen SoM, FC4, AC:Unity all use the entire 4GB on my 980 at 1440p Ultra settings (textures most important ofc) even without MSAA.You can optimize as much as you like but if you can keep texture buffered locally it is going to result in a better gaming experience.
And for 780Ti owners not being happy, believe what you like, but these are the folks jumping to upgrade even to 980 because that 3GB has crippled the card, especially at higher resolutions like 4K. 780Ti beats 290X in everything and every resolution, until 4K.
https://www.google.com/?gws_rd=ssl#q=780+ti+3gb+no...
FlushedBubblyJock - Thursday, April 2, 2015 - link
Funny how 3.5GB was just recently a kick to the insufficient groin, a gigantic and terrible lie, and worth a lawsuit due to performance issues... as 4GB was sorely needed; now 4GB isn't used...
Yes, 4GB isn't needed. It was, just 970 seconds ago, but not now!
DominionSeraph - Tuesday, March 17, 2015 - link
You always pay extra for the privilege of owning a halo product.
Nvidia already rewrote the pricing structure in the consumer's favor when they released the GTX 970 -- a card with $650 performance -- at $329. You can't complain too much that they don't give you the GTX 980 for $400. If you want above the 970 you're going to pay for it. And Nvidia has hit it out of the ballpark with the Titan X. If Nvidia brought the high end of Maxwell down in price AMD would pretty much be out of business considering they'd have to sell housefire Hawaii at $150 instead of being able to find a trickle of pity buyers at $250.
MapRef41N93W - Tuesday, March 17, 2015 - link
Maxwell architecture is not designed for FP64. Even the Quadro doesn't have it. It's one of the ways NVIDIA saved so much power on the same node.
shing3232 - Tuesday, March 17, 2015 - link
I believe they could put FP64 into it if they wanted, but power efficiency is a good way to make ads.
MapRef41N93W - Tuesday, March 17, 2015 - link
Would have required a 650mm^2 die, which would have been at the limits of what can be done on TSMC's 28nm node. Would have also meant a $1200 card.
MapRef41N93W - Tuesday, March 17, 2015 - link
And the Quadro, a $4000 card, doesn't have it, so why would a $999 gaming card have it?
testbug00 - Tuesday, March 17, 2015 - link
Would it have? No. They could have given it FP64. Could they have given it FP64 without pushing the power and heat up a lot? Nope.
The 390X silicon will be capable of over 3 TFLOPS FP64 (the 390X is probably locked to 1/8 performance, however) and will be a smaller chip than this. The price to pay will be heat and power. How much? Good question.
dragonsqrrl - Tuesday, March 17, 2015 - link
Yes, it would've required a lot more transistors and die area with Maxwell's architecture, which relies on separate FP64 and FP32 cores. Comparing the costs associated with double precision performance directly to GCN is inaccurate.
Kevin G - Tuesday, March 17, 2015 - link
Last I checked, rectal limits are a bit north of 700 mm^2. However, nVidia is already in the crazy realm in terms of economics when it comes to supply/demand/yields/cost. Getting fully functional chips with die sizes over 600 mm^2 isn't easy. Then again, it isn't easy putting down $999 USD for a graphics card.However, harvested parts should be quiet plentiful and the retail price of such a card should be appropriately lower.
Michael Bay - Wednesday, March 18, 2015 - link
>rectal limits are a bit north of 700 mm^2
Oh wow.
Kevin G - Wednesday, March 18, 2015 - link
@Michael Bay
Intel's limit is supposed to be between 750 and 800 mm^2. They have released a 699 mm^2 product commercially (Tukwila Itanium 2) a few years ago, so it can be done.
Michael Bay - Wednesday, March 18, 2015 - link
>rectal limits
D. Lister - Wednesday, March 18, 2015 - link
lol
chizow - Tuesday, March 17, 2015 - link
Yes, it's clear Nvidia had to make sacrifices somewhere to maintain advancements on 28nm, and it looks like FP64/DP got the cut. I'm fine with it though; at least on GeForce products I don't want to pay a penny more for non-gaming features. If someone wants dedicated compute, go Tesla/Quadro.
Yojimbo - Tuesday, March 17, 2015 - link
Kepler also has dedicated FP64 cores and from what I see in Anandtech articles, those cores are not used for FP32 calculations. How does NVIDIA save power with Maxwell by leaving FP64 cores off the die? The Maxwell GPUs seem to still be FP64 capable with respect to the number of FP64 cores placed on the die. It seems what they save by having less FP64 cores is die space and, as a result, the ability to have more FP32 cores. In other words, I haven't seen any information about Maxwell that leads me to believe they couldn't have added more FP64 cores when designing GM200 to make a GPU with superior double precision performance and inferior single precision performance compared with the configuration they actually chose for GM200. Maybe they just judged single precision performance to be more important to focus on than double precision, with a performance boost for double precision users having to wait until Pascal is released. Perhaps it was a choice between making a modest performance boost for both single and double precision calculations or making a significant performance boost for single precision calculations by forgoing double precision. Maybe they thought the efficiency gain of Maxwell could not carry sales on its own.testbug00 - Tuesday, March 17, 2015 - link
If this is a 250W card using about the same power as the 290x under gaming load, what does that make the 290x?
Creig - Tuesday, March 17, 2015 - link
A very good value for the money.
shing3232 - Tuesday, March 17, 2015 - link
Agree.
joeh4384 - Tuesday, March 17, 2015 - link
Old news.
joeh4384 - Tuesday, March 17, 2015 - link
I bet if you overclock the crap out of this, its TDP shoots north of 300 watts.
cmdrdredd - Tuesday, March 17, 2015 - link
People buying this don't care about TDP.
Kutark - Tuesday, March 17, 2015 - link
Which means even with an OC it would still be at or under a 290X. I'm failing to see the problem here.
TDP is really only super important for compute cards that will be running for hours on end at 100% load.
FlushedBubblyJock - Thursday, April 2, 2015 - link
I didn't care either, my $800 FX9590 loved 320 watts and my "uber" two niner zeroX loved double dipping that 320 watts, so I converted my carbon arc Lincoln 220 welder to handle the AMD juice load and my DVD/RW/DL/LS melted tight to my CM heavy tower and dripped bubbling fire plastic drops through my liquid AMD loop... bye bye overclock
shing3232 - Tuesday, March 17, 2015 - link
It does not make anything, because the 290X is an old card.
Stuka87 - Tuesday, March 17, 2015 - link
You can't take those numbers seriously though, as they are wrong. Anandtech is *STILL* using reference cards for these tests. You have not been able to buy reference cards for over a year now. The current cards run MUCH cooler, MUCH quieter, use less power, and have better performance.
Stuka87 - Tuesday, March 17, 2015 - link
Quick edit, it seems XFX is still selling a reference 290X. No clue why, but they are. You can get custom cooled AIB cards for less. Could just be leftover stock though I suppose.
Ryan Smith - Tuesday, March 17, 2015 - link
Using reference cards is consistent with our long-standing policy to use them. Aside from the immediate definition of reference, I believe that it is very important not to cherry pick results. The results you see in our reviews should be equal to or lower than the results you will get with a retail card - we specifically want to avoid publishing results higher than what the buyer can get.
* We don't want to overstate the performance of a card.
* Using the same testbed hardware as us, of course.
beginner99 - Tuesday, March 17, 2015 - link
Still, it shows the 290(X) in a poorer light than is actually warranted. At the very least that should be stated, but better would be to add an AIB card to the reviews.
chizow - Tuesday, March 17, 2015 - link
AMD fanboys went apeshit when AT used non-ref cooled GTX 460s that showed those cards in a better light than the reference coolers. AMD fanboys need to pick a story and stick to it tbh.
evolucion8 - Tuesday, March 17, 2015 - link
Ahh, we can never miss your daily dose of AMD bashing under any GPU article, thanks for the fun.
shing3232 - Tuesday, March 17, 2015 - link
Yep, it is quite entertaining.Kutark - Tuesday, March 17, 2015 - link
Entertaining but still accurate. Fanbois are fanbois, they're stupid regardless of what side of the aisle they're on.
chizow - Tuesday, March 17, 2015 - link
Like I said, happy to keep the AMD fanboys honest, as I did again here.
Stuka87 - Tuesday, March 17, 2015 - link
That had nothing to do with non-ref cooled 460's. It was using FTW versions, which were heavily overclocked vs stock clocked AMD cards. It had nothing to do with what cooler was on them, just the overclocked cards being portrayed as though they were standard edition cards. The article was later changed, as I recall, to state that they were overclocked cards and not stock 460s.
chizow - Wednesday, March 18, 2015 - link
It's the same difference: the AMD cards are no more "stock" with custom coolers than those FTW editions. AMD baked an overclock into their boost that they weren't able to sustain without extravagant cooling. And when I say extravagant cooling, I am talking about the Cadillac aluminum boat that used to have us saying "Oh wow, maybe one day I will fork out $60 for that massive Arctic Cooling cooler."
So yeah, now you get "stock clocked, stock cooled reference cards". If you want AMD to show better in benchmarks, have them design either 1) better cooling or 2) less power hungry cards.
chizow - Tuesday, March 17, 2015 - link
@Ryan, can't we revisit all the nerdrage and angst from AMD fanboys over EVGA sending you non-reference-cooled GeForce cards because they were too good compared to reference? Funny what happens when the shoe is on the other foot!
My solution: AMD should design cards that are capable of comfortably fitting within a 250W TDP, with a cooler designed to dissipate that much heat, and not have to resort to a cooler that looks like an Autobot. I'm not kidding; I briefly owned a Sapphire Tri-X 290X and the thing doubles as its own appliance.
Stuka87 - Tuesday, March 17, 2015 - link
Stop stating that it was because of the cooler. As I mentioned above, FTW cards are heavily overclocked and should not be portrayed as standard edition cards.MrSpadge - Tuesday, March 17, 2015 - link
Go back and read that article. The factory-OC was clearly stated.chizow - Wednesday, March 18, 2015 - link
And custom-cooled, higher clocked cards should? It took months for AMD to bring those to market and many of them cost more than the original reference cards and are also overclocked.http://www.newegg.com/Product/ProductList.aspx?Sub...
Like I said, AMD fanboys made this bed, time to lie in it.
Witchunter - Wednesday, March 18, 2015 - link
I hope you do realize calling out AMD fanboys in each and every one of your comments essentially paints you as an Nvidia fanboy in the eyes of other readers. I'm here to read some constructive comments and all I see is you bitching about fanboys and being one yourself.
chizow - Wednesday, March 18, 2015 - link
@Witchunter, the difference is, I'm not afraid to admit I'm a fan of the best, but I'm going to at least be consistent on my views and opinions. Whereas these AMD fanboys are crying foul for the same thing they threw a tantrum over a few years ago, ultimately leading to this policy to begin with. You don't find that ironic, that what they were crying about 4 years ago is suddenly a problem when the shoe is on the other foot? Maybe that tells you something about yourself and where your own biases reside? :)Crunchy005 - Wednesday, March 18, 2015 - link
@chizow either way you don't really offer constructive criticism and you call people dishonest without proving them wrong in any way and offering facts. You are one of the biggest fanboys out there and it kind of makes you lose credibility.Crunchy005 - Wednesday, March 18, 2015 - link
Ok, wanted to add to this: I do like some of the comments you make, but you are so fanboyish I am unable to take much stock in what you say. If you could offer more facts and stop just bashing AMD and praising the all-powerful Nvidia as better in every way (AMD has its advantages and has outperformed Nvidia in many ways, just as Nvidia has outperformed AMD; they leapfrog each other), then we might all like to hear what you have to say.
FlushedBubblyJock - Thursday, April 2, 2015 - link
I know what the truth is so I greatly enjoy what he says.If you can't handle the truth, that should be your problem, not everyone else's, obviously.
chizow - Monday, March 23, 2015 - link
Like I said, I'm not here to sugarcoat things or keep it constructive, I'm here to set the record straight and keep the discussion honest. If that involves bruising some fragile AMD fanboy egos and sensibilities, so be it.I'm completely comfortable in my own skin knowing I'm a fan of the best, and that just happens to be Nvidia for graphics cards for the last near-decade since G80, and I'm certainly not afraid to tell you why that's the case backed with my usual facts, references etc. etc. You're free to verify my sources and references if you like to come to your own conclusion, but at the end of the day, that's the whole point of the internet, isn't it? Lay out the facts, let informed people make their own conclusions?
In any case, the entire discussion is linked below and you can be the judge of whether my take on the topic is fair; you can clearly see AMD fanboys caused this dilemma for themselves, and many of them are the ones you see crying in this thread. Cue that Alanis Morissette song....
http://anandtech.com/comments/3987/amds-radeon-687...
http://anandtech.com/show/3988/the-use-of-evgas-ge...
Phartindust - Wednesday, March 18, 2015 - link
Um, AMD doesn't manufacture after market cards.dragonsqrrl - Tuesday, March 17, 2015 - link
"use less power"...right, and why would these non reference cards consume less power? Just hypothetically speaking, ignoring for a moment all the benchmarks out there that suggest otherwise.
squngy - Tuesday, March 17, 2015 - link
Undervolting?dragonsqrrl - Tuesday, March 17, 2015 - link
Had no idea that non reference Hawaii cards were generally undervolted resulting in lower power consumption. Source?chizow - Tuesday, March 17, 2015 - link
There is some science behind it: heat results in higher leakage, which results in higher power consumption. But yes I agree, the reviews show otherwise; in fact, they show the cards that don't throttle and boost unabated draw even more power, closer to 300W. So yes, that increased perf comes at the expense of higher power consumption, not sure why the AMD faithful believe otherwise.
FlushedBubblyJock - Saturday, March 21, 2015 - link
Duh. It's because they hate Physx.Kutark - Tuesday, March 17, 2015 - link
Yes, some of the new aftermarket designs are cooler and quieter, but they don't use less power; the GPU is what draws the power, and the aftermarket companies can't alter that. They can only tame the beast, so to speak.
Yojimbo - Tuesday, March 17, 2015 - link
Would be a good point if the performance were the same. But the Titan X is 50% faster. The scores are also total system power usage under gaming load, not card usage. Running at 50% faster frame rates is going to tax other parts of the system more, as well.Kutark - Tuesday, March 17, 2015 - link
You're kidding right. Your framerate in no way affects your power usage.nevcairiel - Tuesday, March 17, 2015 - link
Actually, it might. If the GPU is faster, it might need more CPU power, which in turn can increase power draw from the CPU.DarkXale - Tuesday, March 17, 2015 - link
Of course. It's the entire point of DX12/Mantle/Vulkan/Metal to reduce per-frame CPU work, and as a consequence per-frame CPU power consumption.
Yojimbo - Tuesday, March 17, 2015 - link
The main point of my post is that Titan X gets 50% more performance/system watt. But yes, your frame rate should affect your power usage if you are GPU-bound. The CPU, for instance, will be working harder maintaining the higher frame rates. How much harder, I have no idea, but it's a variable that needs to be considered before testbug00's antecedent can be considered true.dragonsqrrl - Wednesday, March 18, 2015 - link
Actually frame rates have a lot to do with power usage. I don't think that needs any further explanation; anyone who's even moderately informed knows this, and even if they didn't, they could probably figure out why in about 10 seconds.
D. Lister - Tuesday, March 17, 2015 - link
D. Lister - Tuesday, March 17, 2015 - link
@testbug00: "If this is a 250W card using about the same power as the 290x under gaming load, what does that make the 290x?"
Considering the several tens of percent of performance lead over the 290x, I would say it makes the 290x appear laughably inefficient by comparison.
testbug00 - Wednesday, March 18, 2015 - link
You misunderstand my point. The 290x was labeled as a 300W TDP (unofficial) card by Anandtech when they reviewed it. Everyone refers to it as 300W, sometimes 290W. So, is the 290x a 250W card like the Titan X, or are they both closer to 300W?
I understand that TDPs don't mean power draw, but they are used to express power draw by everyone. Sigh. Partially trying to highlight that issue also. Just like Intel TDP versus AMD TDP, Nvidia's and AMD's TDPs also mean the same thing (I believe Nvidia's is the TDP under heavy load at the base clock, not 100% sure).
D. Lister - Wednesday, March 18, 2015 - link
D. Lister - Wednesday, March 18, 2015 - link
Oh right, sorry for the misunderstanding. TDP (Thermal Design Power) deals with the heat that a processor releases when under load. Watt is a unit that can be used to measure the flow of any type of energy, not just electricity. "Heat", just like electricity, is a form of energy, so the rate of its flow can also be measured in Watts.
You may want to check out the following wiki for further reading about TDP: http://en.wikipedia.org/wiki/Thermal_design_power
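One thing worth adding on the TDP-vs-measured-power confusion: review power numbers are usually taken at the wall for the whole system, so a card-level difference is a smaller fraction of what you actually see on the meter. A rough sketch (the rest-of-system draw and PSU efficiency are assumed numbers, for illustration only):

```python
# Why wall-socket measurements blur card-level TDP differences:
# the meter sees GPU + rest of system, divided by PSU efficiency.
# All numbers are assumptions for illustration only.
def wall_power(gpu_watts, rest_of_system_watts=150, psu_efficiency=0.90):
    """Approximate AC draw at the wall for a given DC GPU power."""
    return (gpu_watts + rest_of_system_watts) / psu_efficiency

for gpu_watts in (250, 290):
    print(f"{gpu_watts} W card -> ~{wall_power(gpu_watts):.0f} W at the wall")
# 250 W -> ~444 W, 290 W -> ~489 W: a 40 W card-level gap is only about a
# 10% difference in the total-system numbers a review actually reports.
```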
will54 - Wednesday, March 18, 2015 - link
Well, the CPU uses more power when the graphics hit higher frame rates, so I'm assuming that the Titan X uses less power than the 290x, but since it gets higher frames the CPU has to work harder to feed it, so the CPU power draw goes up.
packerman - Tuesday, March 17, 2015 - link
So all I saw was that AMD was wiping the floor with it for 300 dollars less. Am I missing something.Intervenator - Tuesday, March 17, 2015 - link
Didnt you read the review? It was explained *multiple* times...smilingcrow - Tuesday, March 17, 2015 - link
Yes, a brain.packerman - Tuesday, March 17, 2015 - link
Really? Explain, idiot. The benchmarks reflect that the 295x2 destroyed the card. So it's better because it's the best single-GPU option available? Who cares. It's 300 dollars less and still wipes the floor. Benchmarks are really what matters.
dragonsqrrl - Tuesday, March 17, 2015 - link
Benchmarks are what really matter now? That's kind of ironic.Kutark - Tuesday, March 17, 2015 - link
Man, that guys comment was a serious /facepalm.D. Lister - Tuesday, March 17, 2015 - link
@packerman: "So all I saw was that AMD was wiping the floor with it for 300 dollars less. Am I missing something."
While I agree with the new Nvidia card being overpriced, ultimately one cannot disregard the facts that the 295x2,
- is Dual GPU, so its added performance is tied to a crossfire profile.
- consumes nearly twice the power under load, inevitably needing a much more expensive PSU.
- comes with factory water-cooling, and hence the added space requirement.
- is limited to the DX12.0 feature set, compared to the DX12.1 for the Titan X.
- launched at $500 more.
Railgun - Tuesday, March 17, 2015 - link
"the original GTX Titan's time as NVIDIA's first prosumer card was short-lived"I don't know about that. What's the definition of a prosumer card now? It was originally because of FP64 performance. Now, this doesn't have that that. Granted, single precision is better, but not astronomical compared to the original. I'd argue it's not a prosumer part (anymore), just a really good consumer part.
dragonsqrrl - Tuesday, March 17, 2015 - link
I think that's precisely what he's trying to say.testbug00 - Tuesday, March 17, 2015 - link
Why is the 290x uber mode not highlighted on the charts? The people this segment aims at would use it. That puts a bad taste in my mouth over an otherwise good review. Nice card for gamers (if you can pay the price) still :)
testbug00 - Tuesday, March 17, 2015 - link
On a side note, if you do use both 290x versions, please note so under "the test" as to be more clear. Thanks.FlushedBubblyJock - Thursday, April 2, 2015 - link
So the super-rebranded, overclocked, tricked-out, cranked-to-the-max housefire card, not a new card? Why don't we just strap on a liquid nitrogen tank below a block of dry ice and compare?
chizow - Tuesday, March 17, 2015 - link
chizow - Tuesday, March 17, 2015 - link
Monster of a card, I was pretty anti-Titan when they first released it but this one actually makes sense now that Nvidia shed all the false pretenses of it doubling as a "Compute" card.But in comparison we see Titan:
1) fully enabled ASIC from the outset
2) first launched GM200
3) Quadruple standard VRAM of last major flagship GPU
4) Nearly double performance of previous flagship (GK210)
5) ~1.5x the perf of the same-gen 980, and just slower than 2x 980 in SLI ($1100).
Nvidia's sales strategy is odd though, going direct sales first, hopefully that doesn't anger their retailers and partners too much. Made sense though given Nvidia has been selling self-branded cards at BestBuy for awhile now.
I was going to either pick up a 2nd 980 for less or one of these, looks like it will be one of these. Was all set to check out til I was hit with sales tax, I'll have to wait a few weeks for Newegg and I'll just pick up EVGA's SuperClocked version for the same total price.
AMD will most likely launch a comparable performance part in the 390X in a few weeks/months, but it will most likely come with a bunch of caveats and asterisks. Good option for AMD fans though!
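For what it's worth, here's the rough performance-per-dollar math for the options mentioned above; the 980 price and the SLI scaling factor are assumptions on my part, so treat this as a sketch rather than benchmark data:

```python
# Rough perf-per-dollar comparison: single 980, Titan X, and 980 SLI.
# The 980 price and SLI scaling are assumed; Titan X perf is the ~1.5x
# figure quoted above, normalized to a single GTX 980 = 1.0.
GTX_980_PRICE = 550        # $ (assumed)
TITAN_X_PRICE = 999        # $
SLI_SCALING = 0.8          # assumed average scaling of the second 980

configs = {
    "GTX 980":    (1.0,               GTX_980_PRICE),
    "Titan X":    (1.5,               TITAN_X_PRICE),
    "2x GTX 980": (1.0 + SLI_SCALING, 2 * GTX_980_PRICE),
}
for name, (perf, price) in configs.items():
    print(f"{name}: {perf:.2f}x perf for ${price} -> "
          f"{perf / price * 1000:.2f} perf per $1000")
```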
joeh4384 - Tuesday, March 17, 2015 - link
I think AMD might actually win this generation due to having a head start on HBM. Hopefully there arent long delays though. I think AMD's problem isn't their cards, just that they have been late to the dance the last couple of generations.chizow - Tuesday, March 17, 2015 - link
I guess we will see, I don't think HBM will make the impact people think it will. Titan X has what 30% more bandwidth than the 980 and still seems to scale better with core overclocking (same for 980).In any case, changed my mind and placed my order, figure no point in waiting a few weeks to save $60 when I'm already dropping $999 and $30 on next day shipping lol.
dragonsqrrl - Tuesday, March 17, 2015 - link
... Titan X has 50% more bandwidth than the 980.chizow - Tuesday, March 17, 2015 - link
Sorry yeah I was factoring in likely overclocked speeds, I get 8000MHz on my 980, but I've read the Titan X RAM doesn't overclock nearly as well, which is no surprise given higher density chips, more memory controllers etc. I'm only expecting ~30-40% higher effective memory bandwidth rather than the full 50%.
dragonsqrrl - Tuesday, March 17, 2015 - link
Totally agree with the rest of your comment. I doubt that HBM will make the real world performance impact everyone seems to be expecting. Ultimately it's just memory people, and performance doesn't scale anywhere close to linearly with memory bandwidth unless you're bandwidth constrained to begin with, which AMD is not. So fiji is expected to have ~45% more compute resources than Hawaii, I would expect at most ~45% increase in real world performance if clockspeeds are maintained, and if we ignore other potential per-core performance enhancements. But considering we haven't seen anything like this through GCN 1.2, I'm not expecting any big breakthroughs with GCN 1.3.testbug00 - Wednesday, March 18, 2015 - link
The impact it makes is that HBM will allow you to save power from the VRAM, letting you make a bigger GPU, and/or clocking it higher.inderect benefits help quite a bit!
testbug00 - Wednesday, March 18, 2015 - link
indirect* oops :)dragonsqrrl - Wednesday, March 18, 2015 - link
Yes, it shifts more of the TDP envelope of a card to the GPU. TSVs can also help to reduce GPU die complexity. So yes there are definitely other benefits to HBM besides raw bandwidth that can be used to either improve performance within a given TDP, or reduce power consumption at the same performance.Crunchy005 - Wednesday, March 18, 2015 - link
@chizow I'm curious what you will say when Nvidia releases HBM; probably not "I don't think HBM will make the impact people think it will." But I will agree AMD needs to get on the ball and not delay; they need a good head start with HBM to stay competitive, especially with their higher power consumption.
I am very excited to actually see how HBM pans out. DDR/GDDR is old tech now, and pushing new technologies like HBM and wide I/O I believe will be huge for the computer world in general. Nice theoretical (hopefully they are higher than DDR first gen) bandwidth improvements.
Intervenator - Tuesday, March 17, 2015 - link
Intervenator - Tuesday, March 17, 2015 - link
Do they? Source?TallestJon96 - Tuesday, March 17, 2015 - link
If AMD has HBM and a die shrink they will be in good shape. Maxwell's power consumption is a good selling point though.extide - Tuesday, March 17, 2015 - link
No die shrink but HBM is just about confirmed.TEAMSWITCHER - Tuesday, March 17, 2015 - link
Don't you think AMD should actually ship a product before you make such a claim?Yojimbo - Wednesday, March 18, 2015 - link
What do you consider winning the generation? An architecture should be faster than one that came out 9 months prior. If AMD doesn't come out with something soon it will seem more like they are skipping the generation than they are winning the generation.testbug00 - Wednesday, March 18, 2015 - link
Er, the only time AMD has been late since 2009 or so has been with Hawaii and this time. 0.o? Maybe I'm missing something. Thanks.nikaldro - Tuesday, March 17, 2015 - link
This card doesn't really make much sense. At 1440p the 980 rocks, at 4K even the titan X struggles. Yes, Gsync can reduce tearing, but those 40fps will still be 40fps. Not nearly as smooth as 60, and 4K monitors tend to have high input lag anyway.mlkmade - Tuesday, March 17, 2015 - link
I think it's kind of biased and unfair to publish only Sony Vegas (for video editing) benchmarks, which favor AMD/OpenCL. Aside from being a better program and more popular in the actual industry, you should also publish Adobe Premiere Pro and After Effects benchmarks, as they use CUDA (& OpenCL) and favor Nvidia. This would balance out the AMD Vegas favoritism. Also, having 3000 CUDA cores in Premiere and After Effects is game changing, and the gap would be much greater than AMD's in Vegas.
smilingcrow - Tuesday, March 17, 2015 - link
They can't compare multiple video editors for hardware that is clearly aimed at game playing.Not sure that rendering to XDCAM EX was a good choice for a consumer card though.
TheJian - Tuesday, March 17, 2015 - link
Why not? AMD works in Sony, and NV works in Adobe. Many home users use these, just as pros do. They should run the same operations in both (clearly Anandtech chooses Sony Vegas because it favors AMD and OpenCL), or use Adobe since it's more popular and supports both OpenCL and CUDA. They are showing NV in the worst light possible by using Vegas instead of Adobe with CUDA. There is a reason NV owns 75-80% of the workstation market, and it isn't VEGAS. It's CUDA and the fact that you have to dig to find apps in the pro market that don't use it (like Vegas...LOL).
Sushisamurai - Tuesday, March 17, 2015 - link
Adobe would work I suppose, but certainly not After Effects - that's an Nvidia-only supported feature. A few problems I see are the running subscription for GPU testing (since GPUs aren't released consistently throughout the year) and the lack of Adobe support for the latest video cards.
I was helping my friend build a compatible computer for video editing using Adobe 2-3 weeks ago. None of Adobe's white papers etc. listed the 970 or 980 as supported by Adobe Premiere - it's been a while since those were released, and I'm aware there are "mods" to make the 970/980 compatible, but my friend is your average Joe and would have been unable to do the mods. Even recognizing Adobe is more popular, using Adobe as a comparison wouldn't add much.
mlkmade - Thursday, March 19, 2015 - link
mlkmade - Thursday, March 19, 2015 - link
Are you serious? The said "mods" are editing a txt file and saving it. I'm pretty sure the average Joe can figure that out, especially one that is a video editor and uses a computer regularly to edit video. It's a really easy process, and in 2015, Adobe not updating their support for the latest cards is a moot point.
TheJian - Tuesday, March 17, 2015 - link
Agreed. Boggles my mind why they use something that isn't #1. Adobe is, but then that would make AMD look bad, so as an AMD portal site... They have to use Sony Vegas and act like CUDA doesn't exist (for which this card is a MONSTER). Since you can do the same operation in both Vegas/Premiere, they should use Vegas for AMD and Adobe for NV. If I were buying either, I'd buy the BEST software for MY hardware, i.e. Sony for AMD and Adobe if I was buying NV. It is silly not to compare them. If that is a problem they should switch to Adobe as it's used more and can still show OpenCL vs. CUDA. Again, anything is better than this Vegas crap for NV, which they hate and which users complain causes problems. But then that's the point of an AMD portal site I guess :)
Sushisamurai - Tuesday, March 17, 2015 - link
If you wanted to use Adobe in its stock configuration as a benchmark, currently AT would only be able to test the 780 versus the 295X or 290X. I think your results would still be "skewed" (paraphrasing you) toward AMD in what you would want AT to do. AT is trying its best, as it's not easy to do what they do, so please research just a little bit before suggesting "how they failed" and attacking them. The editors are humans.
testbug00 - Wednesday, March 18, 2015 - link
Er, the AMD portal site which doesn't exist anymore? Lol! Intel is the one sponsoring an area on Anandtech currently.Morawka - Tuesday, March 17, 2015 - link
Disappointing it doesn't do FP64. That's the whole point of calling it a supercomputer card, and the whole point of why you buy a Titan IMO.
Otherwise just wait a month or two for the 980 Ti at half the cost.
MrSpadge - Tuesday, March 17, 2015 - link
MrSpadge - Tuesday, March 17, 2015 - link
I'd expect GTX980 to stay at ~500$ and a future GTX980Ti to occupy ~700$ until AMD R390X arrives.chizow - Monday, March 23, 2015 - link
Exactly my expectations, and that is even if the 980Ti and 390X edge out the Titan X by 5-10% in raw performance.Novacius - Tuesday, March 17, 2015 - link
601mm². Oh my.HammerStrike - Tuesday, March 17, 2015 - link
Somewhat disappointed to see that the DisplayPort spec is still 1.2. 4K@60Hz is fine, but a couple of these in SLI should be able to get 60FPS+, which DP 1.2 doesn't have the bandwidth for.
Granted, there are no 4K displays capable of going above 60Hz at the moment, but presumably that will change in the not-too-distant future. I suppose they could do some sort of MST config with dual DP 1.2 to drive 120Hz @ 4K, but when they did that for the first gen of 4K displays there were compatibility and other issues that caused it not to be 100% reliable.
Would have been nice if the "first" 4K GPU would support the next gen 4K display interconnect.
Kevin G - Tuesday, March 17, 2015 - link
@HammerStrike: DP v1.3 has been ratified to do 4K 120 Hz over a single cable or 5K at 60 Hz. 8K at 60 Hz is also possible with either a reduced color space or via the Display Stream Compression option.
Considering the timing of the release, I don't think nVidia had time to implement v1.3 after ratification even if they laid some of the work from an earlier draft spec.
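For anyone who wants to sanity-check those figures, a quick back-of-envelope bandwidth calculation (this ignores blanking overhead, which adds a few percent, and assumes standard 24-bit colour):

```python
# Back-of-envelope DisplayPort bandwidth check for 4K at 60 and 120 Hz.
# Link rates are the usual effective figures after 8b/10b coding;
# blanking overhead (a few percent) is ignored for simplicity.
DP12_GBPS = 17.28   # 4 lanes x 5.4 Gb/s (HBR2), 8b/10b
DP13_GBPS = 25.92   # 4 lanes x 8.1 Gb/s (HBR3), 8b/10b

def video_gbps(width, height, hz, bits_per_pixel=24):
    return width * height * hz * bits_per_pixel / 1e9

for hz in (60, 120):
    need = video_gbps(3840, 2160, hz)
    fits_12 = "fits" if need < DP12_GBPS else "does not fit"
    fits_13 = "fits" if need < DP13_GBPS else "does not fit"
    print(f"4K @ {hz} Hz: ~{need:.1f} Gb/s -> DP 1.2 {fits_12}, DP 1.3 {fits_13}")
# 4K@60 ~11.9 Gb/s (fits DP 1.2); 4K@120 ~23.9 Gb/s (needs DP 1.3).
```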
eanazag - Tuesday, March 17, 2015 - link
Ryan, you get an alibi on the 960 review. I'm sure the 960 review is just a verification of what we expect it to be. Basically, a 1080p workhorse card. Only curiosity is the watts.I am impressed to see the Titan review today. Gonna get to readin'.
Sushisamurai - Tuesday, March 17, 2015 - link
Yeah man, you got the review out a few hours post Nvidia live blog on AT, I'm guessing you guys must have got it in earlier - those pesky NDA'sRyan Smith - Wednesday, March 18, 2015 - link
Just so it's noted, we've had the drivers since last Thursday (which actually isn't a lot of time to put together an article this big, especially with travel to GTC in the middle)kaelynthedove78 - Tuesday, March 17, 2015 - link
Does this card support 10bpp colour like the Quadro cards? You can easily verify this by simply checking out whether the Windows desktop can be set to 40-bit colour ("billions of colors"), when connected to a 10bpp display like the Dell 3011 via DisplayPort.To all naysayers: Yes, I do have a native 10bpp monitor...two in fact. $999 would be cheap for a 10bpp GPU for me when compared to >$2000 Quadros.
dragonsqrrl - Tuesday, March 17, 2015 - link
Doubt it, every GeForce card I've seen has been limited to 8bit.Vandalism - Tuesday, March 17, 2015 - link
Those results don't seem very accurate. BF4 on Ultra @1440p gives me 75-80 FPS on a single 980 and 130-144 FPS on SLI without any GPU overclock applied.Nfarce - Tuesday, March 17, 2015 - link
Yeah Guru3D's benchmark of BF4 at 1440/Ultra with 2xMSAA (which was removed in this Anandtech's test) is 74fps with a 980. Something isn't right.Nfarce - Tuesday, March 17, 2015 - link
Addendum: I will also say that Guru3D's overclock test of the X really makes it a beast however.Ryan Smith - Wednesday, March 18, 2015 - link
It depends on how you benchmark. We use an on-rails sequence from the Tashgar level; it's not perfect, but it's highly repeatable. More importantly from it we can still give accurate advice on how much performance is needed for a given resolution/setting to be playable.medi03 - Tuesday, March 17, 2015 - link
So, which games do not run well in dual GPU mode please?nikaldro - Tuesday, March 17, 2015 - link
a dual gpu setup will bring stuttering and input lag in pretty much any case.It's not always THAT noticeable, but it usually is quite bothersome at least.
Ryan Smith - Wednesday, March 18, 2015 - link
Right now Far Cry 4 and Total War: Attila are particularly picky about multi-GPU.beginner99 - Tuesday, March 17, 2015 - link
If you look at SLI 980 price and performance, the Titan pricing actually makes sense. If it was cheaper, no one in their right mind would consider SLI 980 anymore. But then the 980 is also way overpriced....
In reality the price is a joke. I assume NV made a market study and concluded most Titan buyers are gamers (and not entry-level compute users), so they just removed the FP64 which justified the original Titan's price.
Refuge - Thursday, March 19, 2015 - link
They've been charging flagship prices for mid range cards since the beginning of Kepler.
TallestJon96 - Tuesday, March 17, 2015 - link
That is insane overclock performance. BF4 at 4K at 70 FPS!
I think this is the first single GPU that 4K is viable with. And that's significant, because not all games support multi-GPU at launch. Also, you can finally get PS4 performance and graphical settings or better at 4K with only 1 card.
Finally, power consumption is pretty good. Same TDP as AMD's 280x, but performance is in a totally different zip code.
Hopefully AMD counters with something good soon. I expect we will get another 290x situation, where AMD's next flagship card will have similar performance to a Titan, but at a lower price and higher power consumption.
The Von Matrices - Tuesday, March 17, 2015 - link
I appreciate you including the GTX 580 in the performance chart. It's amazing that since the GTX 580 there's been a 4x increase in graphics performance with only a single process node advancement.TallestJon96 - Tuesday, March 17, 2015 - link
A few questions about Anandtech's new benchmarks.
Any chance we could get Fire Strike added as a synthetic test? Lots of people use it, and it'd be cool to see it added.
Does Shadow of Mordor on Ultra use the optional Ultra texture settings that are an additional download?
Does Dragon Age have a built in Benchmark? If not, what is being tested?
Ryan Smith - Wednesday, March 18, 2015 - link
Firestrike: After the NVIDIA 3DMark optimization fiasco, we've sworn off 3DMark as a graphics benchmark. It's no secret that NV and AMD will try to optimize for games that are used as benchmarks, and we would like to not give them (more) incentive to optimize for a workload that is not a real game.
SoM: No texture add-on.
DA: It does, but we don't use it. It has a fixed number of frames and is too short on fast hardware. We're using a save game from early in the game.
TallestJon96 - Wednesday, March 18, 2015 - link
Thanks. I guess it makes sense to avoid heavily benchmarked games, to get the real world performance. Anandtech's benchmarking has always been top notch, with consistent results that can be tested at home.Tylanner - Tuesday, March 17, 2015 - link
This has the makings of a legendary GPU. Stunning.
yannigr2 - Tuesday, March 17, 2015 - link
Well. We can thank AMD's 390X for that $999, even if it is 2-3 months away.Daniel Egger - Tuesday, March 17, 2015 - link
Anything but a compute powerhouse, this seems to be more about play than work. Maybe the name should have been more aptly christened Tits X instead of Titan X?Refuge - Thursday, March 19, 2015 - link
Honestly this looks more like a Ti than a Titan.D. Lister - Tuesday, March 17, 2015 - link
Nice performance/watt, but at $1000, I find the performance/dollar to be unacceptable. Without a double-precision edge, this GPU is essentially a 980Ti, and Nvidia seems to want to get away with slapping on a Titan decal (and the consequential 1K price tag) by just adding a useless amount of graphics memory.Take out about 4 gigs of VRAM, hold the "Titan" brand, add maybe 5-10% core clock, with an MSRP of at least $300 less, and I'll be interested. But I guess, for Nvidia to feel the need to do something like that, we'll have to wait for the next Radeon launch.
chizow - Tuesday, March 17, 2015 - link
It's 980Ti with double the VRAM, a year earlier, if you are going off previous timelines. Don't undervalue the fact this is the first big Maxwell only 6 months after #2 Maxwell.I agree the pricing has gotten ridiculous on these graphics cards, but this is the market we live and play in now. I typically spent $800-$1000 every 2 years on graphics cards, but I would get 2 flagship cards. After the whole 7970/680 debacle where mid-range became flagship, I can now get 2 high-end midrange for that much, or 1 super premium flagship. Going with the flagship, and I'm happy! :D
D. Lister - Tuesday, March 17, 2015 - link
@chizow: "It's 980Ti with double the VRAM"
Yes, pretty much - a Ti GPU, with more VRAM than necessary, with the price tag of a Titan.
I agree the pricing has gotten ridiculous on these graphics cards, but this is the market we live and play in now.
The market is the way it is because we, consumers, let it be that way, through our choices. For us to obediently accept, at any time, overpricing as an acceptable trend of the market, is basically like agreeing with the fox who wants to be a guard for our henhouse.
chizow - Wednesday, March 18, 2015 - link
Except the 780Ti came much later; it was the 3rd GK210 chip to be released, so there is a premium on that time and money. While this is the 1st GM200 based chip, no need to look any further beyond it. Also, how many 780Ti owners complained about not enough VRAM? Looks like Nvidia addressed that. There are just no compromises with this card, it's Nvidia's best foot forward for this chip and only 6 months after GTX 980. No complaints here, and I had plenty when Titan launched.
Sure the market is this way partially because we allow it, but the reality is, the demand is overwhelmingly there. I was thoroughly against paying $1000 for what I used to get for $500-$650 for Nvidia's big chip flagship card with the original Titan, but the reality is, Nvidia has raised the bar on all fronts (and AMD has done well also) and they are looking to be rewarded for doing so. I used to buy 2x cards before because 1 just wasn't good enough. Now, 1 is good enough, so I don't mind paying the same amount for that relative level of performance and enjoyment.
D. Lister - Wednesday, March 18, 2015 - link
@chizow: "Except the 780Ti came much later, ...... plenty when Titan launched."
Both the 780Ti and the Titan X were released exactly when Nvidia needed them in the market. For the 780Ti, the reason was to challenge the 290X for the top spot. The Titan X was made available sooner because a) Nvidia needed the positive press after the 970 VRAM fiasco and b) because Nvidia wanted to take some attention away from the recent 3xx announcements by AMD.
Hence I really can't find any logical reason to agree with your spin that the Nvidia staff was doing overtime as some sort of a public service, and so deserve some reward for their noble sacrifices.
Sure the market is this way partially because we allow it, but the reality is, the demand is overwhelmingly there. I was thoroughly against paying $1000 for what I used to get for $500-$650 for Nvidia's big chip flagship card with the original Titan, but the reality is, Nvidia has raised the bar on all fronts (and AMD has done well also) and they are looking to be rewarded for doing so. I used to buy 2x cards before because 1 just wasn't good enough. Now, 1 is good enough, so I don't mind paying the same amount for that relative level of performance and enjoyment.
http://media2.giphy.com/media/13ayyyRnHJKrug/giphy...
chizow - Monday, March 23, 2015 - link
Uh, you make a lot of assumptions while trying to dismiss the fact there is a huge difference in time to market and relative position on Nvidia's release timeline for Titan X, and that difference carries a premium to anyone who observed or felt burned by how the Titan and Kepler launches played out over 2012, 2013, 2014.
Fact remains, Titan X is the full chip very close to the front of Maxwell's line-up release, while the 780Ti came near the end of Kepler's life cycle. The correct comparison is if Nvidia launched Titan Black in 2013 instead of the original Titan, because that's what Titan X is.
The bolded portion should be pretty easy to digest, not sure why you are having trouble with it. Nvidia's advancement on the 28nm node has been so good (someone showed a 4x increase from the 40nm GTX 580 to the Titan X with only a single node shrink, which is damn amazing) and the relatively slow advancement in game requirements means I no longer need 2 GPUs to push the game resolutions and settings I need. A single, super flagship card is all I need, and Nvidia has provided just that with the Titan X.
For those who don't think it is worth it, you can always wait for something cheaper and faster to come along, but for me, I'm good until Pascal in 2016 (maybe? Oh wait, don't need to worry about that).
chizow - Tuesday, March 17, 2015 - link
Bit of a sidenote, but wow looks like 980 SLI scaling has REALLY improved in the last few months. I don't recall it being that good at launch, but that's not a huge surprise given Maxwell was a new architecture and has gone through a number of big (on paper) driver improvements. Looks really good though, made it harder to go with the Titan X over a 2nd 980 for SLI, but I think I'll be happier this way for now.mdriftmeyer - Tuesday, March 17, 2015 - link
Buy these like hotcakes. And when the R9 390/390X arrives in June I pick either up and laugh at all that used hardware being dumped on EBay.TheJian - Tuesday, March 17, 2015 - link
You're assuming they'll beat this card, and I doubt you'll see them in June as the channel is stuffed with AMD's current stuff. I say Q3, and it won't be as good as you think. HBM will cause pricing issues and won't net any perf (it isn't needed; bandwidth isn't a problem, so it's wasted extra cost here), so the GPU will have to win on its own vs. NV. You'd better hope AMD's is good enough to sell like hotcakes, as they really need the profits finally. This Q is already wasted and will most likely result in a loss, and NV is good for the next 3 months at least until something competitive arrives, at which point NV just drops pricing, eating any chance of AMD profits anyway. AMD has a very tough road ahead, and console sales drop due to mobile closing the gap at 16/14nm for Xmas (good enough, that is, to have some say screw a console this gen, and screw $60 game pricing - go Android instead).
garry355 - Tuesday, March 17, 2015 - link
After some extensive tests, it turns out this video card has 11.5GB of VRAM.
Consumers beware!
H3ld3r - Tuesday, March 17, 2015 - link
Does anyone have an idea what profit margins / manufacturing costs / yields Nvidia is getting on this?
madwolfa - Wednesday, March 18, 2015 - link
With the 28nm process being so mature, I'd think the yields are pretty good...
deeps6x - Tuesday, March 17, 2015 - link
Despite their ability to release the 980Ti at the same time as the Titan X, we know they will try to milk the rich gamers as long as possible first. I just hope the Ti gets released no more than 4 weeks from now and drives the price of the 980 down into mainstream levels.
I'm in Canada right now and the cheapest GTX 970 I can find is $390 and the cheapest R9 290x is $340. GTX 980 is $660 for the cheapest one. (At a 79-cent US dollar.)
If Nvidia doesn't come out with reasonable pricing for its mid-range and high-end cards, they WILL start to lose market share to AMD. Dammit.
Yojimbo - Wednesday, March 18, 2015 - link
Ideally they want to rely on using chips that couldn't perform well enough to reach the Titan bin for the 980Ti, otherwise they are wasting the potential of those chips. That means making and selling enough Titans and Quadros to produce a large enough supply of the lower-binned chips. So unless they are having yield problems, it makes sense they come out with products based on the fully-enabled chip first and then begin to introduce lower-specified chips later.As far as losing market share, they are well aware of how things are selling, and if people aren't choosing to buy cards with their chips over the competition's at the current price, they will lower the price or offer incentives. Right now NVIDIA is selling newer, quieter, more efficient chips against AMD's older, louder, less efficient chips so they are able to charge a premium.
HisDivineOrder - Tuesday, March 17, 2015 - link
I really wish nVidia would allow 8GB versions of the 980. I think most of us wouldn't mind some of those 7GB versions of the 970, either.Either would be a lot more bang for your buck than this expensive, if admittedly awesome, card.
althaz - Tuesday, March 17, 2015 - link
Ryan, this sentence bugged me: "In other words this is the “purist” flagship graphics GPU in 9 years."
Either you meant "purest" or you meant "...this is the first "purist" flagship graphics..."
Otherwise, great article :).
Ryan Smith - Tuesday, March 17, 2015 - link
This is why Microsoft Word cannot be trusted...Fixed, thanks!
pvgg - Tuesday, March 17, 2015 - link
This on time review is interesting and this Titan is all well and good, but I'm still waiting on the review of the card that must people can actually buy. The 960 one. Or, at least, to see it added to the gpu bench tool..pvgg - Tuesday, March 17, 2015 - link
*most peoplemodeless - Tuesday, March 17, 2015 - link
This *is* a compute card, but for an application that doesn't need FP64: deep learning. In fact, deep learning would do even better with FP16. What deep learning does need is lots of ALUs (check) and lots of RAM (double check). Deep learning people were asking for more RAM and they got it. I'm considering buying one just for training neural nets.Yojimbo - Tuesday, March 17, 2015 - link
Yes, I got that idea from the keynote address, and I think that's why they have 12GB of RAM. But how much deep-learning-specific compute demand is there? Are there lots of people who use compute just for deep learning and nothing else that demands FP64 performance? Enough that it warrants building an entire GPU (M200) just for them? Surely NVIDIA is counting mostly on gaming sales for Titan and whatever cut-down M200 card arrives later.Yojimbo - Wednesday, March 18, 2015 - link
Oh, and of course also counting on the Quadro sales in the workstation market.DAOWAce - Tuesday, March 17, 2015 - link
Nearly double the performance of a single 780 when heavily OC'd, jesus christ, I wish I had disposable income.
I already got burned by buying a 780 though ($722 before it dropped $200 a month later due to the Ti's release), so I'd much rather at this point extend the lifespan of my system by picking up some cheap second hand 780 and dealing with SLI's issues again (haven't used it since my 2x 460's) while I sit and wait for the 980 Ti to get people angry again, or even until the next die shrink.
At any rate, I won't get burned again buying my first ever enthusiast card, that's for damn sure.
Will Robinson - Wednesday, March 18, 2015 - link
Well Titan X looks like a really mean machine.A bit pricey but Top Dog has always been like that for NV so you can't ping it too badly on that.I'm really glad NVDA has set their "Big Maxwell" benchmark because now it's up to R390X to defeat it.
This will be flagship V flagship with the winner taking all the honors.
poohbear - Wednesday, March 18, 2015 - link
Couldn't you show us a chart of VRAM usage for Shadow of Mordor instead of minimum frames? Argus Monitor charts VRAM usage; it would've been great to see how much average and maximum VRAM Shadow of Mordor uses (of the available 12GB).
Meaker10 - Wednesday, March 18, 2015 - link
They only show paged ram, not actual usage.ChristopherJack - Wednesday, March 18, 2015 - link
I'm surprised how often the ageing 7990 tops this. I had no doubt what so ever that the 295x2 was going to stomp all over this & that's what bothered me about everyone claiming the Titan X was going to be the fastest graphics card, blah, blah, blah. Yes I'm aware those are dual GPU cards in xfire, no I don't care because they're single cards & can be found for significantly lower prices if price/performance is the only concern.Pc_genjin - Wednesday, March 18, 2015 - link
So... as a person who has the absolute worst timing ever when it comes to purchasing technology, I ordered a brand new PC build - FOR THE FIRST TIME IN NINE YEARS - just three days ago with 2 x GTX 980s. I haven't even received them yet, and I run across several reviews for this - today. Now, the question is: do I attempt to return the two 980s, saving $100 in the process? Or is it just better to keep the 980s? (Thankfully I haven't built the system yet, and consequently haven't opened them, or I'd be livid.) Thanks for any advice, and sorry for any arguments I spark, yikes.
D. Lister - Wednesday, March 18, 2015 - link
The 2x980s would be significantly more powerful than a single Titan X, even with 1/3rd the total VRAM.Braincruser - Wednesday, March 18, 2015 - link
The titan was teased 10 days ago...Tunnah - Wednesday, March 18, 2015 - link
It feels nVidia are just taking the pee out of us now. I was semi-miffed at the 970 controversy, I know for business reasons etc. it doesn't make sense to truly trounce the competition (and your own products) when you can instead hold something back and keep it tighter, and have something to release in case they surprise you.And I was semi-miffed when I heard it would be more like a 33% improvement over the current cream of the crop, instead of the closer to 50% increase the Titan was over the 680, because they have to worry about the 390x, and leave room for a Titan X White Y Grey SuperHappyTime version.
But to still charge $1000 even though they are keeping the DP performance low, this is just too far. The whole reasoning for the high price tag was you were getting a card that was not only a beast of a gaming card, but it would hold its own as a workstation card too, as long as you didn't need the full Quadro service. Now it is nothing more than a high end card, a halo product...that isn't actually that good!
When it comes down to it, you're paying 250% the cost for 33% more performance, and that is disgusting. Don't even bring RAM into it, it's not only super cheap and in no way a justification for the cost, but in fact is useless, because NO GAMER WILL EVER NEED THAT MUCH, IT WAS THE FLIM FLAMMING WORKSTATION CROWD WHO NEEDING THAT FLIM FLAMMING AMOUNT OF FLOOMING RAM YOU FLUPPERS!
This feels like a big juicy gob of spit in our faces. I know most people bought these purely for the gaming option and didn't use the DP capability, but that's not the point - it was WORTH the $999 price tag. This simply is not, not in the slightest. $650, $750 tops because it's the best, after all..but $999 ? Not in this lifetime.
I've not had an AMD card since way back in the days of ATi, I am well and truly part of the nVidia crowd, even when they had a better card I'd wait for the green team reply. But this is actually insulting to consumers.
I was never gonna buy one of these, I was waiting on the 980Ti for the 384bit bus and the bumps that come along with it...but now I'm not only hoping the 390x is better than people say because then nVidia will have to make it extra good..I'm hoping it's better than they say so I can actually buy it.
For shame nVidia, what you're doing with this card is unforgivable
Michael Bay - Wednesday, March 18, 2015 - link
So you're blaming a for-profit company for being for-profit.
maximumGPU - Wednesday, March 18, 2015 - link
No he's not. He's blaming a for-profit company for abusing its position at the expense of its customers.
Maxwell is great, and I've got 2 of them in my rig. But the Titan X is a bit of a joke. The only justification the previous Titan had was that it could be viewed as a cheap professional card. Now that's gone, but you're still paying the same price.
Unfortunately nvidia will put the highest price they can get away with, and 999$ doesn't seem to deter some hardcore fans no matter how much poor value it represents.
I certainly hope the sales don't meet their expectations.
TheinsanegamerN - Wednesday, March 18, 2015 - link
I would argue that the VRAM may be needed later on. 4GB is already tight with SoM, and future games will only push that up.
People said that 6GB was too much for the OG Titan, but SoM can eat that up at 4K, and other games are not far behind. Especially for SLI setups, that memory will come in handy.
That's what really killed the 770. The GPU was fine for me, but 2GB was way too little VRAM.
Tal Greywolf - Wednesday, March 18, 2015 - link
Tal Greywolf - Wednesday, March 18, 2015 - link
Not being a gamer, I would like to see a review in which many of these top-of-the-line gaming cards are tested against a different sort of environment. For example, I'd love to see how cards compare handling graphics software packages such as Photoshop, Premier Pro, Lightwave, Cinema 4D, SolidWorks and others. If these cards are really pushing the envelope, then they should compare against the Quadro and FirePro lines.Ranger101 - Wednesday, March 18, 2015 - link
I think it's safe to say that Nvidia make technically superior cards as compared to AMD, at least as far as the last 2 generations of GPUs are concerned. While the AMD cards consume more power and produce more heat, this issue is not a determining factor when I upgrade, unlike price and choice.
I will not buy this card, despite the fact that I find it to be a very desirable and technically impressive card, because I don't like being price-raped and because I want AMD to be competitive.
I will buy the 390X because I prefer a "consumer wins" situation where there are at least 2 companies producing competitive products, and let's be clear, AMD GPUs are competitive, even when you factor in what is ultimately a small increase in heat and noise, not to mention lower prices.
It was a pleasant surprise to see the R295X2 at one point described as "very impressive", yet I think it would have been fair if Ryan had drawn more attention to AMD "wins," even though they are not particularly significant, such as the most stressful Shadow of Mordor benchmarks.
Most people favour a particular brand, but surely even the most ardent supporters wouldn't want to see a situation where there is ONLY Intel and ONLY Nvidia. We are reaping the rewards of this scenario already in terms of successive generations of Intel CPUs offering performance improvements that are mediocre at best.
I can only hope that the 390X gets a positive review at Anandtech.
Mystichobo - Wednesday, March 18, 2015 - link
Looking forward to a 390 with the same performance for 400-500. I certainly got my money's worth out of the r9 290 when it was released. Don't understand how anyone could advocate this $1000 single card price bracket created for "top tier".Geforce man - Wednesday, March 18, 2015 - link
What still frustrates me is the lack of a modern aftermarket R9 290/X being used.
Crunchy005 - Wednesday, March 18, 2015 - link
I actually really like how the new Titan looks; it shows what can be done. The problem with this card at this price point is it defeats what the Titan really should be. Without the double precision performance this card becomes irrelevant, I feel (an overpriced gaming card). The original Titan was an entry level compute card outside of the Quadro lineup. I know there are drawbacks to multi-GPU setups, but I would go for 2 980's or 970's for the same or less money than the Titan X.
I also found these benchmarks very interesting because you can see how much each game can be biased to a certain card. AMD's 290x, an old card, beat out the 980 in some cases, mostly at 4K resolutions, and lost in others at the same resolution. Just goes to show that you also have to look at individual game performance as well as overall performance when buying a card.
Can't wait for the 390x from AMD that should be very interesting.
BurnItDwn - Wednesday, March 18, 2015 - link
So it's like 50% faster vs an R9 290, but costs 3x as much ... awesome card, but expensive.
uber_national - Thursday, March 19, 2015 - link
I think there's something strange going on in your benchmark if the 7990 is only 3 fps slower than the 295x2 in the 2560x1440 chart...Samus - Thursday, March 19, 2015 - link
"Unlike the GTX 980 then, for this reason NVIDIA is once again back to skipping the backplate, leaving the back side of the card bare just as with the previous GTX Titan cards."Don't you mean "again back to SHIPPING the backplate?"
I'm confused as the article doesn't show any pictures of the back of the card. Does it have a backplate or not?
xchaotic - Thursday, March 19, 2015 - link
Nope. A $999 card and it doesn't have a backplate. This is possibly due to easier cooling in SLI configsAntronman - Thursday, March 19, 2015 - link
It's a blower cooler. So everything goes out the side of the case, which can be desirable if you have cards right on top of each other as the airflow is unobstructed.It's just Nvidia. Unless you need PhysX, you're much better off waiting for the R300s.
Mikmike86 - Thursday, March 19, 2015 - link
Spring pricing is a bit off.R9 290x's go below $300 after rebates quite often now, Febuary I picked up a 290x for about $240 after rebate which was the lowest but have seen several at or below $300 without a rebate.
R9 290s run around $250 and have gone down to $200-$220 recently as a low.
970s have been hovering around $320 but have gone to $290-$300.
Otherwise the Titan X was more for marketing, since the 290x (2-year-old tech) claws at the 980 at 4K and the 970 falls on its face at 4K.
This card's a beast, don't get me wrong, especially when it chases the 295x2 after overclocking, but when you can get a 295x2 for $600 after rebates a couple times a month it just doesn't make sense.
At $800 I could see these selling like hotcakes, and they'd still pocket a solid chunk. They'll probably just drop a 980ti in a few months after the 390x is released, making these 2nd place cards like they did with the OG Titans.
I go back and forth between Nvidia and AMD but Nvidia has been extra sketchy recently with their drivers and of course the 970.
Refuge - Thursday, March 19, 2015 - link
I just don't appreciate their price premiums.
I've been a fan of Green Team since I was a young boy, but anymore I usually lean Red team.
Just not satisfied with what I'm paying over on the other side to be honest.
Yes when I'm on the Red side I don't always have the same peak performance as Green. But I had enough money afterwards to pay my car payment and take the old lady out to dinner still. ;)
sna1970 - Saturday, March 21, 2015 - link
Nvidia intentionally made the GTX 970 with only 4GB of RAM... why? So no one uses them in 4K for cheap SLI.
I hate Nvidia's ways.
imagine 3x GTX 970 in SLI for only $900 (300 each)
or 2x GTX 970 , which will be slightly faster than Titan X for $600
but noooooooooo, nvidia will never allow 8G GTX 970 , keep it at 4G so people buy Titan X ...
disgusting . AMD wake up .. we need competition.
medi03 - Thursday, March 26, 2015 - link
There is the R9 290x available for nearly half of the 980's price, being only 5-15% slower (and 300W vs 370W total power consumption; I'm sure you can live with it).
There is the R9 295x2, which handily beats the Titan X in all performance benchmarks, with power consumption being the only downside.
Railgun - Thursday, March 19, 2015 - link
@Ryan Smith. For future reviews, as you briefly touched on it with this one, especially at high resolutions, can you start providing how much VRAM is actually in use with each game? For cards such as this, I'd like to see whether 12GB is actually useful, or pointless at this point. Based on the review and some of the results, it's pointless at the moment, even at 4K.Antronman - Thursday, March 19, 2015 - link
The Titan has always been marketed as a hybrid between a gaming and graphics development card.H3ld3r - Thursday, March 19, 2015 - link
Agree 100%H3ld3r - Thursday, March 19, 2015 - link
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_Titan...Evarin - Thursday, March 19, 2015 - link
For people thinking that VRAM is unneeded, you must not be heavy into modding. Especially with Fallout 4 and GTA 5 on the horizon, massive amounts of room for texture mods will come in handy.Black Obsidian - Thursday, March 19, 2015 - link
6-8GB would seem to meet that requirement nicely.As is often the case with "doubled RAM" models, by the time that 12GB of VRAM is useful, we'll be a couple of generations down the road, and cards with 12GB of VRAM will be much faster, much cheaper, or both.
Maybe at that point a Titan X owner could pick up a cheap used card and run them in SLI, but even then they're laying out more money than a user who buys a $500 card every couple of years and has the VRAM he/she needs when it's actually useful.
H3ld3r - Thursday, March 19, 2015 - link
I agree with you, but don't forget how VRAM is used in SLI and CF. The VRAM of GPU 1 mirrors the VRAM of GPU 0, so if you have 2x 4GB you're only taking advantage of 4GB. Anyway, I prefer fast RAM over huge amounts of it.
Evarin - Thursday, March 19, 2015 - link
We've already had a game which called for 6GB VRAM for an advanced texture pack. Imagine an Elder Scrolls or a Fallout where every single object in the game has a 4k resolution texture. I think it'd be a challenge even for the titan.Antronman - Sunday, March 22, 2015 - link
The way that RAM works is the worse your system is, the more RAM you end up needing.There are plateaus, but as GPUs get faster you need less VRAM to store the same amount of information.
The Titan X is much faster than the Titan BE, and thus needs less VRAM, assuming that the application is the same.
Then we get into Direct X 12 and Vulkan. They're supposed to increase efficiency all-around, reducing the demand for resources like RAM and cores even more.
Death666Angel - Thursday, March 19, 2015 - link
"the card is generally overpowered for the relatively low maximum resolutions of DL-DVI "So I can drive my 1440p 105Hz display with it and get above 105fps? No? So what kind of statement is that then. DL-DVI may be old, but to say that 1440p is a low maximum resolution, especially with 100Hz+ IPS displays which rely on DL-DVI input, is strange to say the least.
H3ld3r - Thursday, March 19, 2015 - link
Based on what I saw in Ryan's review, 4K games aren't that memory-demanding. If so, how can anyone explain the R9's performance?
Jdubo - Thursday, March 19, 2015 - link
The 290X was the original Titan killer. Not only did it kill the original release, it killed its over-inflated price as well. I suspect the next iteration of AMD's flagship card will be a Titan X killer as well. History usually repeats itself over and over again.
jay401 - Thursday, March 19, 2015 - link
You say this is not the same type of pro-sumer card as the previous Titan, yet the price is the same. No thanks.
Ballist1x - Thursday, March 19, 2015 - link
No GTX 970/970 SLI in the review ;) Anand, you let the consumers down...
H3ld3r - Thursday, March 19, 2015 - link
The R9 290X only has 4GB at 5GHz and does an awesome job at 4K. The 295X2 only operates with 4GB (the other 4GB are mirrored) and shines at 4K. So I can't understand everybody's concerns about 4K gaming on the upcoming Fiji. This Titan X has 12GB at 7GHz and only shows how obsolete GDDR5 is.
oranos - Friday, March 20, 2015 - link
The ratio of comments on this article to potential buyers is astronomical.
leignheart - Friday, March 20, 2015 - link
Hello everyone, I would like you to read the final words on the Titan X. They say the performance increase over a single GTX 980 is 33%, yet the price is 100% over the GTX 980, and that's if you are lucky enough to pay just $1000 for the Titan X. Please, people, do not waste your money on this card. If you do, then Nvidia will keep releasing extremely overpriced cards. DO NOT BUY THIS CARD. Please instead wait for the GTX 980 Ti if you want DX12. I will certainly pay a grand and more for a card, but this card is a particular rip-off at that price point. Don't just throw your money away. Read the performance chart yourself; it is in no way, shape, or form worth $1000.
Dug - Monday, March 30, 2015 - link
I suppose we can't buy a Rolex, a Tesla, a vacation condo, or even a pony? Paying for the best available is always more money. Get a job where another $500 doesn't affect you when you purchase something. Plus, price is only a perception of worth. People could say $20 is too much for a video card and they would be right.
themac79 - Friday, March 20, 2015 - link
I wish they would have thrown in 780 SLI, which is what I run. I would like to have more VRAM, but I'm running all the new games pretty much maxed out. I made the mistake of buying them when they first came out and paid over $600 apiece. I will definitely wait for price drops this time.
H3ld3r - Friday, March 20, 2015 - link
What you need is more transistors, memory speed, stream processors, bus width, ROPs, and TMUs, not memory amount.
Archetype - Friday, March 20, 2015 - link
4K gaming is not quite there yet. Not going to pay $500+ for it. In the meantime, still jamming Full HD games like a baws using my old 280X "on my Full HD monitor".
FlushedBubblyJock - Saturday, March 21, 2015 - link
Wow, it's stomping all over two of AMD's best GPUs combined. It's a freaking monster.
cykodrone - Saturday, March 21, 2015 - link
I actually went to the trouble of making an account to say that sometimes I come here just to read the comments; some of the convos have me rolling on the floor busting my gut laughing. Seriously, this is free entertainment at its best! Aside from that, the cost of this Nvidia e-penis would feed 10 starving children for a month. I mean seriously, at what point is it overkill? By that I mean, is there any game out there that would absolutely not run well enough on a slightly lesser card at half the price? When I read this card alone requires 250W, my eyes popped out of my head; holy electric bill, Batman! But I guess if somebody has a grand to throw away on an e-penis, they don't have electric bill worries. One more question: what kind of CPU/motherboard would you need to back this sucker up? I think this card would be wasted without at least the latest i7 Extreme(ly overpriced); can you imagine some tool dropping this in an i3 system? What I'm saying is, this sucker would need an expensive 'bed' too; otherwise, you'd just be wasting your time and money.
sna1970 - Saturday, March 21, 2015 - link
What dual GTX 980, Anand? For 2 x $300 GTX 970s you will get the same or better performance than a Titan X, for only $600.
Almost the same power draw as well.
$1000 for this card is too much, just toooooo much.
rolfaalto - Saturday, March 21, 2015 - link
So much silly complaining about value. This is an incredible bargain for compute compared to Tesla; it absolutely crushes at single precision for a fraction of the price! For my application the new Titan X is the absolute best that money can buy, and it's comparatively cheap. So I'll buy 10 of them, and 100 more if they work out.
rolfaalto - Saturday, March 21, 2015 - link
... and the 12GB is the deal maker; 6GB on the previous Titans was way too little.
yiling cao - Sunday, March 22, 2015 - link
For people using CUDA, there is just no AMD option. I upgrade with every new Nvidia release.
Antronman - Sunday, March 22, 2015 - link
Or, if you're the kind of person who actually needs CUDA, and isn't just using it because you made a mistake in choosing your software and picked something with a bloated price tag and a fancy webpage, then you get a Quadro card instead of wasting your money on a Titan. You know, the sort of people who need SolidWorks because they're working for a multimillion or even multibillion dollar corporation that wants 3D models or is using GPU computing, or who are using Maya to animate a movie for a multimillion dollar studio.
Even if you're an indie on a budget, you don't buy a Titan. Because you won't be using software with CUDA or special Nvidia optimization. Because you won't be using iRay.
With the exception of industry applications (excluding individual/small businesses), Nvidia is currently just a choice for brand loyalists or people who want a big epeen.
r13j13r13 - Sunday, March 22, 2015 - link
Titan X vs R9 295X2
MyNuts - Sunday, March 22, 2015 - link
I'll take 2, please.
Xsjado Koncept - Sunday, March 22, 2015 - link
Your "in-house project developed by our very own Dr. Ian Cutress" is garbage and is obviously not dividing workloads between multi-GPUs, a very simple task for any programmer with access to Google.It's plain as day to see, but gives NV the lead in another benchmark - was this the goal of such awful programming?
cmoney408 - Tuesday, March 24, 2015 - link
Can you please post the settings you used for the 295X2? Not the in-game settings, but what you used in Catalyst.
FlushedBubblyJock - Thursday, April 2, 2015 - link
" and the Radeon R9 295X2, the latter of which is down to ~$699 these days and "I knew it wouldn't be $699 when i clicked the link...
It's frikkin' $838, $1,176, $990, $978...
Yep, that's the real AMD card price, not the fantasy one.
gianluca - Sunday, April 5, 2015 - link
Hi! Just a question: would you suggest I buy the R9 295X2?
Thx
Kyururin - Wednesday, April 8, 2015 - link
Umm, I find it pointless to compare the AMD R9 290X with the GTX 980. The R9 290X was built to be competitive with Nvidia's stock 780, not the 780 Ti, and sure as hell not the GTX 980. It's dumb; it's like asking a grandma (R9 290X) to compete with a supermodel (GTX 980) in a beauty pageant. Of course Nvidia is going to win, but it's not like the winning gap is spectacular or something to be astonished about. Last but not least, the GTX 980's lead over the grandma is largest below 2K. Let's not forget that both the GTX 980 and the grandma were built to handle 4K, so given the time Nvidia had to prepare the GTX 980, it should have obliterated the grandma at 4K, but the performance gap is not that big and not worth being wowed by, especially in Far Cry 4. Fanboys always bash AMD for their terrible drivers, but it's not like the drivers are ignored, you dimwit; they are slowly improving them. Did AMD ever say, "We are going to pretend our drivers don't suck, so we are not going to fix them"?
alexreffand - Monday, May 18, 2015 - link
Why is the GTX 580 in the tests? Why not the Titan Z or even the 970?
ajboysen - Monday, July 25, 2016 - link
I'm not sure if the specs have changed since this post, but they list the boost clock speed as 1531MHz, not 1002.