I've been saying it for a while, but NVIDIA wants to be Apple. Walled-garden "ecosystem" (Shield/GeForce Experience/GameStream/G-Sync), planned obsolescence (poor support for previous-gen GPUs), and obscene markups on halo products.
That said, I bet this Titan X (2016) will be the first ever truly 4K/Ultra/60+ fps GPU. That's an accomplishment worth celebrating. When the 1080 Ti comes out for half this price, that will be an accomplishment worth buying.
A lot of what they're doing suggests that, but thankfully they've so far taken the middle path: a semi-open ecosystem, which I think is best for both innovation and consumers.
Semi-open? GameStream/Shield only works with NVIDIA hardware. G-Sync only works with NVIDIA hardware. CUDA? PhysX? HairWorks? SmokeWorks? I don't know if that last one is real, but I mean, the list goes on and on. They have done very little to pursue anything "open". Not that they are required to, of course.
For the record, I'm not suggesting that AMD is altruistic in this regard either, but at least their pursuit of OpenCL, FreeSync, and HSA is in contrast to much of what NVIDIA is doing.
You are aware HairWorks (and all of GameWorks, AFAIK) works fine on AMD hardware, correct? The only problem comes when you jack tessellation up above 4x (8x in some cases?) in games like Witcher 3. There is nothing wrong with NV exploiting their hardware to the max for users each gen, and AMD should do the same. Even Nvidia's older hardware (980 and below) gets hit at 64x the same way AMD's does; that level was seemingly targeted at Maxwell 2 and up, which admittedly doesn't change things much. The 4-8x settings look fine, are easily adjustable, and you still get to see the hair effects (which are pretty cool).

CUDA is different: they have spent ~$8 billion+ developing it for the specific architecture in their GPUs. Why anyone would share those gains from the last decade with the competition is beyond me. The fact that AMD can't afford to do the same is, again, a benefit to NV buyers. If AMD hadn't gone console/APU they would have had the R&D to spend on CPU/GPU (their CORE tech) instead of low-margin stuff, and there are many more management decisions that have screwed them (paying 3x the price for ATI, etc.). Both Vega and Zen were pushed back because AMD chose consoles first. Nvidia passed, stating consoles would rob R&D from core products. People scoffed and said they were butthurt... LOL. No, it was looking out for their core customers!

You're complaining that Nvidia isn't helping AMD. With ~80% share, they don't have to (at least you got that part).
https://www.pcper.com/reviews/Graphics-Cards/Borde... PhysX does run on the CPU, just not as well, since it's single-threaded (last I checked; not sure how things stand today with the GameWorks integration). Consoles use it too. Again, it isn't Nvidia's job to help AMD.
I already gave solutions for GameStream in a previous response to you, but of course YMMV (though people supposedly use them fine, low lag, etc.). I just think NV doing their own solution is the best way to go when talking timings, lag, etc., as they know their own hardware. The feature would likely lose quality if done by a third party for AMD or NV. You may think AMD has a better approach, but it's only because they are weak; it's also the reason they don't draw many users with "features" that are literally game changers. AMD pursued OpenCL because they couldn't afford to fund a CUDA alternative on their own, or the schools/certification system to get people to learn and use it ;) Same story with everything else. If you can't afford your own CUDA, you either lack the feature or go in with a group hoping for at least some success. OpenCL was their only option IMHO, but again all of these issues are due to a lack of yearly profits and losing ~$8B over the last 15 years, which ironically is about the amount NV spent getting CUDA to where it is.

CUDA is one of the main reasons I may go NV this time, as I have aspirations past just games (at some point soon) and have delayed this purchase far longer than normal (usually every 3 years; I was waiting on the G-Sync monitor I wanted, a die shrink, etc.). But if Vega is awesome, cool, and uses fewer watts... I could go that route now and upgrade when the need for pro stuff really hits me, and just hand Vega to my dad :)

Whine about what AMD is doing to themselves; this isn't an NV problem. Put out a great product and quit low-balling. I hate that as a consumer, as I like cheap stuff too, but they need 5-10 years of profits, not 15 years of losses (well, they had a few profitable years, but c'mon).
Will it come out for half the price? I'm guessing not this year; early next year at best. But considering this price, current 10x0 pricing, and AMD's schedule (or rather the lack of competition at the high end)... it seems likely NV could get away with an $800 GTX 1080 Ti.
I'd love to be wrong... I'm sitting on a pair of R9 290s right now (2x 6950 before that, largely NV cards before that). I feel like we're finally at a point where a single card would satisfy me at 4K/Surround resolutions, and this would be it, but I'm not feeling like paying over $750 for the privilege.
Otherwise I might as well just stick to SLI/CF and go 2x GTX 1070.
I've been drawing this comparison for a while as well. Nvidia dominates the high-end performance segment, is massively popular, highly priced, and uses closed technology. AMD is much like the Android platform: more open-source, doesn't compete at the high end in terms of raw performance, and isn't viewed as favorably by most people. I'm just holding off on a 1070 for now and hoping to see AMD offer something in the $300-$400 range to replace my 770, which has not aged well at all since Maxwell landed.
I don't think the markup is that obscene. This is a very expensive GPU, but a very cheap compute card. It's a hard line to walk in terms of pricing and marketing.
Except it's only AMD who has poor support for previous-gen GPUs (no money for DX11 driver work, etc.) :(
http://www.pcmag.com/article2/0,2817,2458186,00.as... And I seem to remember Radeon halo products going for $1500 (and an even steeper $3K from NV, though that didn't last long... LOL; it was a price the market wouldn't bear). So both sides do this. As long as cards are leaving the shelves FASTER than they can be made, why would you set a price lower than whatever you can get? A business is not in business to be nice and make everyone happy (AMD should learn more from Nvidia here; it's about making money, fools), but to make a profit. This is simply supply and demand at work, and a company that is pretty good at figuring out what will help its bottom line.

You seem to not understand that an M6000 goes for $5K at launch. The people who can't buy that but want to do content creation (games, 4K video, etc.) will see this as a massively discounted card. If you're struggling with the price of this card, you're not the target audience... LOL. These will fly off the shelves to the people who can't swallow a $5000 Quadro and/or don't need 24GB. Many times before, people have been seen buying 2-4... ROFL. To a prosumer, your version of obscene markup is a downright obscene markdown. Your comment only makes sense if you're a PURE gamer with no other intentions, and even then it's still the #1 card, and we all know what you pay at the top end. HEDT Intel chips go for $1730, for instance.
But yes, if you're gaming without needing the pro-side perks, by all means wait for the card with GTX in the name (1080 Ti) and save $500 (not $600; the 1080 Ti will not be $600, it will be $650-750 depending on Vega, most likely). There's no need to push down the 1080 with no competition. They will cherry-pick the crap out of this GPU for the next 4-5 months and launch the 1080 Ti with faster clocks, 8-10GB, and a stack of cards on the shelf on day 1. It would be plumb crazy to put out a $600 1080 Ti if AMD takes until December to put out Vega, and HBM2 will drive up their price vs. a GDDR5X card for no gain in perf, unfortunately (NV doesn't even need their current bandwidth).
I really wish AMD had chosen to go with GDDR5X for Vega. They got screwed by HBM1, and it looks like it's going to happen again: too small a market to lower cost, the 4GB limit on HBM1, production more difficult than other memory leading to shortages, etc. The only thing I see fixed this time is the 4GB limit. It doesn't even matter if your card is fast if you can't produce nearly enough to meet demand. You should be limited by YOUR production of your product, not by some part ON your product, on top of your own issues.
I still can't wait for Vega vs. Titan X/1080 Ti, but it sucks to see AMD might be set up to fail profit-wise yet again with their halo product. Samsung HBM2 will likely go to HPC first, and SK Hynix doesn't hit mass production until Q3, so AMD will be lucky to get a Christmas card out, let alone ramp it by then. Nvidia, meanwhile, will be using GDDR5X that will have been in production for a year by the time the 1080 Ti ships and was already the cheaper move up from GDDR5.
As far as the walled garden goes, it's no different than AMD with Mantle, which was never designed for others; they just failed to push their own API due to small market share, etc. How would NV make GameStream/G-Sync work with GPUs from AMD? That would require a lot of effort on timings, etc., to help your competition (uh, silly). A Shield can be bought by anyone; it just gets better if you have an NV GPU, which inspires sales. The only reason AMD is even sort of friendly is that they are always behind and have no share. When they were ahead, they had $1000 CPUs too, and all of their chips were more expensive than today across the product line, and I'd know as a reseller for 8 years. Today they're cheap because nobody wants them.
http://moonlight-stream.com/ You can, however, get NV stuff to stream to devices not made by NV. It's a work in progress, but still... Again, why would NV want to do that, and how much work would it take when the video is encoded on NV's GPU? It's not as easy to control another company's GPU in this case, IMHO.
There are also a few other solutions for AMD, such as Remotr or Splashtop. Again, it's AMD (or someone else) who needs to do their homework here, not NV. The lack of an AMD-based solution is an AMD funding problem. Nvidia is doing you a favor by offering this feature, and since it's on their GPUs it will be the best experience they can offer. AMD should be doing the same (adding features!). Adding value to your product is a GOOD thing for customers. It's just one more reason I might buy NV's GPU vs. AMD's, unless Vega is VERY good on power, perf, and price. The only one I'd waver on for AMD is price. I'll pay more as long as you win on perf and watts/heat. I won't pay more if you lose either of those, as your product is then inferior IMHO, so I'd expect at least some price discount. Though IMHO AMD gives far too much away and kills their bottom line repeatedly. Their CPUs suck, but their GPUs are usually close enough to charge more.
It's taken NV almost 10 years to get back to 2007 profits. AMD should quit the price war, as it's only hurting them and their R&D for CPU/GPU, DX11 driver support, a GameStream competitor, etc. AMD hasn't made 2006-level profits since, well, 2006, and they had three quarters of 58% margins then too! I can't believe management went with the 480 instead of Vega first. Vega is 58% margins or more, while the 480 is probably ~30%. Since NV still can't keep the 1080 in stock, there was plenty of room for AMD to be in there making a mint from Vega. Now it will be facing a 1080 Ti (serving gamers) and a Titan X (serving rich gamers + prosumers), and will likely miss most of the Christmas sales with a small supply, if it even hits before Christmas. I believe AMD has good products in the pipeline (Zen/Vega), but they are useless if they're late and pushed off for low-margin stuff instead.
Vega should be out now (GDDR5X!), and Zen should be out for back to school; actually, Zen should have been out last year. But instead... we got new consoles (the original Xbox One/PS4 started the delays of Zen, etc.) and the Radeon 480. ~10-15% margins on the consoles (the last ones started in single digits and moved to almost 15%, according to AMD), and probably 25-30% on 480 chips. Both bad decisions. AMD's next 12 months could have been a billion-dollar year like NV's (maybe even better, as Zen has the potential to pull down HEDT prices at the top end), but not now.
Note for AMD fanboys: I'm telling AMD to make more money! I'm telling them to choose products that have HIGHER margins FIRST, and low-margin stuff second! AMD can't afford to keep losing money year after year. I own a 5850... LOL. Still waiting for Vega to show me something, then I'll jump. But I have no qualms about saying I'm leaning NV. The days of me "donating" to AMD for an inferior product are over. I still have to rebuild my dad's PC (giving him my Devil's Canyon chip and likely the GPU too), so AMD still has a shot at that CPU as well. I won't take "close enough". They have to win or I buy Intel and Nvidia again. The only thing I'll fudge on is price, not perf, watts, or heat. You have to win at least one of those and basically tie the others, or bust. I have no trouble paying AMD for a WINNER. Other than a few driver issues (not a problem for me as long as they're fixed quickly), I got exactly what I expected from my GPU (low heat/noise with perf).
Titan X is nothing more than a great example of how much Nvidia are cunts.
Instead of simply releasing the 1080 Ti, they first milk as much money as they can out of the idiots who will buy the Titan X, because those people are too impatient to wait for the PURPOSELY delayed 1080 Ti, which will no doubt offer the same performance with a slight overclock.
None of these cards should exist in the first place though. They should just release the top-end cards first. Like what used to happen years ago (excluding dual-GPU cards). A 1080 Ti straight out of the gate at the price point of a 1080.
But Nvidia just want to keep ripping off the consumer, and it will continue to happen until AMD can get their act together and make a good GPU for once that actually matches or beats Nvidia's best offering.
It's exactly the same thing happening with Intel, because AMD is even worse at competing on CPU performance there, so you get these ridiculously priced Intel CPUs.
So fuck AMD for incompetence, and fuck Nvidia for just being all-round anti-consumer pro-proprietary cunts.
The market is what determines the price of this product, not Nvidia. If AMD can come out with something that even remotely competes with this, that's all it would take for Nvidia to price things more competitively.
It's like complaining about BMWs being too expensive because you can't afford one. There's a market for them; you're just not a part of it.
Hardcovers cost only a fraction more to manufacture, but the books are much pricier.
This is because the fans will jump on it ASAP even if they have to pay $30. Then the other people, who wouldn't buy it at $30, will buy the pocket edition for $15 a few months later.
If you choose not to buy nVidia, you are also choosing not to buy a high-end GPU.
AMD haven't been competitive in this space for years.
So I continue to think nVidia are a pack of dickheads - but I'll still buy their product, because it's the best. If AMD has caught up next time I need a graphics card (I just got my GTX1080), I'll look very hard at them.
What's with all this self-entitled whining? It's a freakin' super high-end gaming device - if you don't like the price, don't buy it. They are a business, they don't owe you anything. They do owe their shareholders something though, and that is to extract as much money as possible out of the market.
How does AMD not have high-end cards? The Fury X is keeping up pretty well with a 1080 in the upcoming APIs, and let's all embrace the fact that we've reached the stage where every major game release will be Vulkan or DX12 from now on.
While I agree with you, you have to look at how far behind AMD really is... most times, it's just a few percentage points. Vulkan is shaping up to be real big. I built an Intel 6600K system and have it running overclocked at 4.5GHz, and bought an 8GB RX 480. I'm playing Doom now on Ultra settings at 2K without a single hiccup. I haven't tried playing it at 4K, but I will now.
Maybe in some different dimension where NVIDIA is a charity NGO. What the hell is wrong with some people? Where does this idea even come from? No one owes you anything, little princess; it's a business. Complain about their shady anti-competitive practices vs. AMD, but enough with this "oooh it's sooo expensive, that's unfair" whining. Vote with your wallet and don't buy it then; no one is forcing you. Geeez.
Whilst I wouldn't have phrased it like you did, I completely agree with the sentiment; Nvidia are in full-on rinse mode right now. And it's not only them: Intel are just as bad. Both companies are up against sub-standard competition and are squeezing ever more money from their customers. That's pretty much what always happens when anyone has a monopoly. This is capitalism in its purest, unchecked form.
Agreed. "Don't buy it" doesn't work as a solution when the problem is caused by the /market/ buying it, not the individual. Some people are happy to dig their own graves monetarily speaking it and it's messing up gaming hardware for the rest of us. The only financially sound solution is to stay behind the curve and buy second-hand.
And yes I know that buying a high-end graphics card has never been "sensible" but $1200 is patently absurd. I can build a whole gaming PC for that!
If the market is buying it, why is that a problem then? Do you not understand how a market works? If a company can sell a desirable quantity of X for $1200, why in the world should they ever do that for less? I have news for you, there is no such thing as a company that is purely "pro-consumer." All companies are "pro-their own bottom line." Any that isn't is incompetent and will not last long.
You guys are all blowing smoke. Microsoft will release a gaming console next year called Scorpio. At about $600 it will be doing VR and 4K gaming in style. For another $600 you can get an Oculus Rift and you're all set. PC gaming has been going down for years for a few basic reasons:
- PC gaming rigs are way too expensive compared to consoles
- most games, if not all, are made and optimized for consoles
- physical copies of console games (disc in a box) can be traded or exchanged for credit or cash; with PC games you're screwed
Last week I bought a game for $70 on Steam and played it for a few hours while it kept crashing. Good luck trying to get a refund. With Microsoft releasing such a powerful console, expect Sony to follow suit. So you guys keep buying those expensive Porsches. Fine with me. I'll settle for a Tesla Model 3 and keep the change. Don't get me wrong, I've been playing PC games for years. But enough is enough.
It looks like you haven't been following the updated Steam return policy: you're well within your rights to get a refund if you've only played the game for a few hours. Key phrase being "a few hours" (the number is specified clearly in their terms, but I can't recall it). When the Nvidia-centric Batman: Arkham Origins refused to play ball with my 295X2, the money was back in my PayPal account in 5 days (and without using PayPal protection).
Consoles will use outdated hardware from day one. Their big problem is that they have to survive for years to come as console players don't want to buy a new one every 2 years.
The current-gen consoles (Xbox One, PS4) use hardware that was at best mid-range when it came out, and today you get to play at 720p and 30 fps because they just don't cut it. The new consoles will also not use high-end hardware, because that's just too expensive.
There will always be a market for actual powerful gaming systems.
Not to mention that there are a bunch of game genres that just don't work well on consoles.
They may release FPS games for consoles, but the entire genre just isn't fun on a controller, especially in competitive play. Not to mention complex strategy games like Civilization.
Your thought is that "gaming" hardware should be cheap. That's what the Xbox and PlayStation are for.
This is a supercomputer. 11 teraflops is not easy to achieve, let alone in a 2-slot card. This kind of computing power cost millions of dollars just a few years ago. Stop complaining that $1,200 is expensive.
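For what it's worth, that ~11 TFLOPS figure falls straight out of the spec sheet: shader count times boost clock, with each fused multiply-add counting as two floating-point ops. A quick sanity check (using the published core counts and boost clocks; real sustained throughput depends on workload and actual clocks):

```python
# Peak single-precision TFLOPS ~= shader ALUs x boost clock x 2 (one FMA = 2 FLOPs).
def sp_tflops(alus: int, boost_mhz: float) -> float:
    return alus * boost_mhz * 2 / 1e6

# Published specs: Titan X (GP102) 3584 ALUs @ ~1531 MHz boost,
# GTX 1080 (GP104) 2560 ALUs @ ~1733 MHz boost.
print(f"Titan X:  {sp_tflops(3584, 1531):.1f} TFLOPS")  # ~11.0
print(f"GTX 1080: {sp_tflops(2560, 1733):.1f} TFLOPS")  # ~8.9
```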
The computer you want will always cost about $3,000. This has held true for 30 years.
.....except most of the comments completely ignore the fact that this isn't necessarily a gaming card? As if dropping the GTX tag wasn't enough of a clue. Most of these will probably end up in render farms, with a tiny niche going to top-end gaming PCs.
There's a difference between not being able to afford a $1200 card and being annoyed at the market failure that leads to the card being priced that way to begin with.
I wouldn't think twice about buying a $1200 video card, but that's not what the Titan X is; it's an $800 video card that requires you to put another $400 in a pile and light it on fire. And yeah, that's going to piss people off.
Exactly. Folks don't understand that larger chips have higher chances of defects. The yields are lower. There are reasons, like you said, for why 66% more cores costs 240% more dollars.
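To put rough numbers on that intuition, here's a toy die-cost sketch. It uses a simple Poisson yield model with the published GP104/GP102 die sizes; the defect density is a made-up illustrative value, not NVIDIA's actual economics:

```python
import math

D0 = 0.15  # defects per cm^2 -- purely illustrative assumption

def die_yield(area_mm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-D0 * area_mm2 / 100.0)

def cost_per_good_die(area_mm2: float) -> float:
    """Relative cost: silicon area consumed per working die."""
    return area_mm2 / die_yield(area_mm2)

gp104, gp102 = 314.0, 471.0  # published die sizes in mm^2
print(f"GP104 yield ~{die_yield(gp104):.0%}, GP102 yield ~{die_yield(gp102):.0%}")
print(f"GP102 is {gp102 / gp104:.1f}x the area but "
      f"{cost_per_good_die(gp102) / cost_per_good_die(gp104):.1f}x the cost per good die")
```

The point isn't the exact numbers (real defect densities are proprietary) but the shape: because yield falls off exponentially with area, a die 1.5x larger can easily cost ~2x per good chip, before binning is even considered.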
Nvidia only builds these huge top-end designs so they can sell a $1000+ Titan card. If they need to compete, they release a cost-reduced version, but the intent was never to release this at a lower price point.
What you have to understand is that this card has uses beyond gaming. NVIDIA needs to be careful not to incentivize the deep learning community to spend their time and money trying to figure out ways to do their learning tasks with these far cheaper cards rather than spend money on the more expensive Tesla cards. The $1200 price tag is meant to give more reason for people to buy their $5000 Tesla cards with more memory instead of trying to cobble together something more exotic with the 12 GB Titan.
The 8800 GTX was $600 in 2006, which would be about $720 in today's dollars. Clearly Nvidia is inflating the prices of their top-end cards, which runs contrary to the established idea that tech continually gets better AND cheaper. This would imply that Nvidia is taking advantage of its market position and its ability to set pricing. That's what companies do: raise prices when they can. All that said, that would mean the 1080, which is more of a direct successor to the 8800 GTX, is actually cheaper adjusted for inflation than the 8800 GTX was. So maybe Nvidia isn't quite as bad as you think; they just introduced an additional tier of cards. I am not so sure it's as simple as "Nvidia is gouging, what a rotten awful company!"
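The inflation adjustment above roughly checks out. A minimal sketch, assuming ~20% cumulative US CPI growth from 2006 to 2016 (an approximation, not an official series lookup):

```python
CPI_FACTOR_2006_TO_2016 = 1.20  # rough cumulative US inflation (assumption)

launch_price_2006 = {"8800 GTX": 600}
msrp_2016 = {"GTX 1080": 599, "Titan X (2016)": 1200}

for card, price in launch_price_2006.items():
    adjusted = price * CPI_FACTOR_2006_TO_2016
    print(f"{card}: ${price} in 2006 ~= ${adjusted:.0f} in 2016 dollars")
# The $599 GTX 1080 undercuts the inflation-adjusted 8800 GTX (~$720),
# while the $1200 Titan X sits in a tier that didn't exist in 2006.
```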
PS: I rock an HD 5850, and had a 512MB 8800 GTS before that, so I am brand-agnostic; I buy the best value.
I hope in 10 years we're getting much better than this performance for $200. Hopefully we'll have at least 25 TFLOPS for $200 by then, unless inflation becomes an issue between now and then.
There is no market failure here; they're not actively locking out AMD or anybody else. The GPU market has high entry barriers, and that's an issue, but it's not Nvidia's decision to make it that way. They're pursuing legitimate objectives with legitimate means.
It's not a monopoly, with integrated Intel GPUs, Nvidia, and AMD all present in the market.
They need something to offset the low yields on big chips, hence the price point. Even GP104 supply is MIA; when yields improve, then comes the 1080 Ti, as with previous generations!
Dude, welcome to the free market and Business 101. Why should Nvidia leave money on the table? I have most recently purchased AMD cards, but you have to give it to Nvidia: they are firing on all GPU cylinders. Nvidia could be doing FAR worse on performance/$, considering AMD has absolutely NO answer for high performance, at least not for quite a long while, and honestly the outlook is not so good. You could still be paying top dollar for Fury X performance right now if it wasn't for Nvidia.
I think your assumption that they're going to release a 1080 Ti is pretty premature. If AMD doesn't release anything competitive, I don't see them doing it. It will be the $1200 Titan or the $599 GTX 1080.
Ehh, they'll release it because at some point the market of people willing to shell out well over a grand will be tapped, but the gap between those two cards is still sizeable, and plenty of people would see a new 1080 Ti at $700 or even $800 as a good deal at that point... even if it's six months down the line. Heck, if I'm in the market for a new card by then (and I likely will be), I wouldn't pass it up if it's enough better than the 1080.
I don't think you know what a Ti card really is. The Titan X most likely has 3840 cores on the die with 256 cores disabled due to manufacturing defects. The chips that have no defects (all cores functional) are not sold as Titan Xs, but saved for later. When yields improve enough that many fully functional 3840-core chips have been produced, they'll come out with a Ti card. It's never a day-1 thing.
It also works in the other direction: they'll hold onto chips with more than 256 defective cores, which cannot be sold as a Titan X. Once they have enough to sell that have, say, 512 cores disabled, a Ti card comes out at a discount.
Yep... they wait until there's a huge pile of defective GP102 chips that are no good for the Titan X, then choose how much to cut from those chips so they can use as many of the junk dies as possible and sell them as 1080 Ti cards. They'll most probably also use GDDR5 on the 1080 Ti to save more money. The price will be somewhere between $850-1050, depending on how much Vega costs and how fast or slow it is compared to this. So the 1080 Ti will come out sometime in Q1 2017, after Vega has been released, or even before if they already know its speed and price in advance.
Oh get over it. This is nothing more than the video card equivalent of a million dollar "Supercar". It's purely for show and the tiny few who don't care at all about value and have a bunch of money burning a hole in their pocket. There's plenty of great options for gamers that have been released already by Nvidia and AMD and more to come.
Nvidia is a for-profit company that has to make money for its investors; what's your problem with that?
They're not ripping off anybody, you don't have to buy it if you don't think it's worth your money. Other people will, that's why they're doing this without a care for your opinion.
Intel won't be able to make us jump through hoops for too much longer. When Zen hits and gives us 8 cores with Haswell-level performance, Intel will have to compete and start offering 8 Skylake cores for under $350.
You are only ripping yourself off when you buy something you don't really want - and then whine about it.
The Titan has always been very much a niche card, and even more so this time around as they are strongly re-targeting it towards compute tasks instead of gaming. It was never a good economy decision to buy it.
So in short, you think the Titan is too expensive? Just don't buy it.
If the price sucks, it will be left on shelves until they lower it. But since it is NOT a gaming card (though it's still probably the fastest out there for that task too), these will fly off the shelves vs. M6000s that run $4000-5000. The PROSUMER is who this is aimed at. They are wisely expanding their market to less wealthy people who'd like to get into content creation (games, videos, etc.) but can't afford it at $5K. You don't seem to understand the article: it says point-blank that this is a prosumer card; heck, they even removed the GTX from the name... LOL. Nobody can rip you off; they do not force you to buy anything. If the price is too high, the market will let them know and the price will come down. It's that simple.
Get a better job. I could easily afford this, and I'm not rich by any standard. It's a tough pill as a pure gamer, but as I move on to other things (pro stuff) it's a massive discount vs. a Quadro. If these cards didn't exist, an indie dev, etc., would be forced to pay $5000 even with no need for support, ECC, etc. No thanks. Thank you, Nvidia; please ignore the ignorant and keep releasing prosumer cards! It sounds like the card for you will come a few months from now for $700 or so.
BTW, what they are doing right now is cherry-picking chips for the 1080 Ti's launch. Again, you don't seem to understand how this stuff works. The 1080 Ti will likely be a salvaged or cherry-picked Titan die with higher clocks and maybe a bad part disabled. The higher clocks will make it faster in either case and save dies. It's quite possible yields will be good enough to disable nothing other than the deep-learning stuff and keep the gaming side fully enabled while jacking up clocks. I guess their strategy depends on Vega's perf.
"NVIDIA this morning has stated that the primary market is FP32 and INT8 compute, not gaming."
Did you miss that part? So content creation and INT8 work. Sure, it will game too, but why would you buy it just for that? Wait a few months if you're a gamer only. AMD will have to release more than a single card, as NV has multiple market segments here: pro, prosumer, and gamer. Everyone gets the best of the best for their tasks this way. The price would go up if they tried to do it all in ONE solution. You also seem unaware that NV has taken nearly a decade to get back to 2007 profits. They aren't ripping anyone off. If you take out Intel's payment, they would still be struggling to match 2007... LOL. Jeez, read some quarterly reports and balance sheets.
I'm feeling a mixed combination of 1) Holy cow, Nvidia is killing it with their Pascal timeline, 2) Is anybody else wondering how much fun it's going to be to try and buy the 5 of these they can make every week (jkjk), and 3) These prices.
I know the best performance comes at a premium, but we could really use an AMD spoiler right about now. With the amount of development time it's going to get, Vega 10 needs to be notably better than GP104, and Vega 11 better be able to compete with the Titan X and get ready for a fully enabled GP102 (presumably with HBM?). Nvidia is absolutely phenomenal at making quality GPUs, but it's also highlighting how badly we need flagship level competition.
Agreed, NV's timing is probably putting a ton of pressure on AMD's timeline. I'd like to upgrade my 2x R9 290 already but until I see AMD's hand and a 1080 Ti I'm not sure I wanna bother.
Oops, misread the GP102 part. Still, why is the TFLOPS figure so low? Isn't the full-fat compute Pascal close to 16-18 TFLOPS? I'd expect the consumer version to be lower, maybe 14-15, but not 11.
GP100 officially has only 10.6 TFLOPS of SP performance as well, which is pretty logical considering it has the same number of ALUs and a similar frequency.
It's the lower clock rate: 40% more cores at only 88% of the clock rate works out to only ~24% faster overall. I wish nVidia would've bumped the TDP a bit on this one to get the clocks up; the gap between it and the 1080 is narrow enough that there's not much room to fit a 1080 Ti in if they wanted to.
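The arithmetic in that comment, spelled out with the published specs (assuming throughput scales with cores x clock, which is an idealization):

```python
cores_1080, boost_1080 = 2560, 1733    # GTX 1080: ALUs, boost MHz
cores_txp,  boost_txp  = 3584, 1531    # Titan X (Pascal): ALUs, boost MHz

core_ratio  = cores_txp / cores_1080   # ~1.40 -> 40% more cores
clock_ratio = boost_txp / boost_1080   # ~0.88 -> 88% of the clock

speedup = core_ratio * clock_ratio     # ~1.24 -> ~24% faster
print(f"{core_ratio:.2f}x cores at {clock_ratio:.2f}x clock -> {speedup:.2f}x throughput")
```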
OTOH at least there's not much reason to wait for one either. I'll be getting a 1080 whenever the price/availability stops being crazy.
Eh, it's smart; they get three things. It's still faster anyway. They get to say, "Look! Our fastest GPU ever still only consumes 250W!" And then they get an even better reputation when it turns out this thing can be clocked higher, for being a great overclocker.
The "full fat" Pascal is already out. If you want one, buy a Tesla P100. This card, the Titan X is the top of their gaming line and I don't see them cannibalizing their Tesla line by releasing a GP100 Titan or Geforce.
I think they are trying to release as quick as possible to make the most of their lack of competition at the high end. Also the high sales of the 1080 may have encouraged them that the demand is there for something even more pricey.
I agree, it's just good business decision making, utilising the good position they're in. Really need a leak from AMD to put a spanner in the works and get some folk to wait for the red team.
It might if it's positioned against a GPU that's "released" now but probably won't actually be in stock for the same 3-4 months, if the stock levels of the 1080 are any indication.
It has more CUDA cores, but its clock speed is lower. Together they combine to be only slightly faster than the 1080. Maybe the yields on the larger chip aren't as good, so they have to set more conservative clock speeds?
More like it's a prosumer card meant to satisfy gamers and the compute market together. Having higher clocks would damage Tesla sales, so they're striking a balance. Wouldn't be surprised if nVidia locked down those clocks too, to reinforce my point; hence why only nVidia has selling rights. No custom boards allowed!!
That is not how it works at all. If anything the longest signal path is what limits the clock rate. Adding more cores adds more transistors while affecting the longest signal path by a very small amount. There is probably a heat issue with the die size that prevents higher clocks. A low thermal resistance cooler could probably pump a GP102 chip to 2 GHz. We'll see.
38% more TDP, 66% more transistors; it has to run at lower power per transistor (probably a slightly lower max voltage as well as the lower clock speeds). What puzzles me is why they left the TDP stuck at 250W; going to 275 would give enough TDP headroom to leave the clocks alone.
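The same back-of-envelope math, assuming a GTX 1080 at 180 W / 7.2B transistors versus 250 W / 12B for this card (assumed published specs):

```python
# Power-per-transistor comparison; TDP and transistor counts are assumptions.
tdp_ratio = 250 / 180                                # ~1.39 -> "38% more TDP"
transistor_ratio = 12.0 / 7.2                        # ~1.67 -> "66% more transistors"
power_per_transistor = tdp_ratio / transistor_ratio  # ~0.83, i.e. lower power per transistor
print(f"{power_per_transistor:.2f}x power per transistor vs GTX 1080")
```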
The reason for the 250W cap might be more for the deep learning market than the gaming market. I've read something that stated Baidu were already leaving their cabinets half-full when using the Maxwell Titan X for deep learning because it was too expensive to cool them otherwise.
Yes. $900+ 1080s are selling so well that the next-generation 1180 can sell at $999, the 1170 at $700, and the 1160 will take the $500 slot! That is progress, my fellow gamers! I have to wonder, though, how much higher than $1500 the next-generation Titan X could go before people think it is too much... Most probably $1500 is just fine. It will still sell well enough.
This might actually be nVidia separating Titan into its own brand. nVidia loves brands - GeForce: GTX and GT, Tesla, Quadro, GRID, Tegra, CUDA, 3D Vision, etc.
That's bang on to how it's worked previously. They'll restrict compute performance slightly and shave a suitably large-looking chunk of money off to make it look like a "bargain", but not until they have made sure the early adopters have had their fill of overpriced madness.
Or possibly faster for gaming. If Titan is no longer part of the GeForce product line, the 1080Ti doesn't have to slot in below it in the gaming hierarchy.
1200 fucking dollars and still no HBM2. Also, wow, another price hike over the previous generation; sure seems like there are a lot of idiots purchasing these overpriced GPUs.
Yes. No HBM2, which should theoretically give us over 100% more bandwidth compared to the 384-bit GDDR5X in Titan X.
That's a big disappointment for some with bandwidth heavy workloads.
I understand the price. This is a premium product, not meant for children's overclocking machines.
Also, market positioning states that they can gouge the price and they will.
In a year, prices will have come down - hopefully - when full AMD Vega has shipped.
I'd love to have a pair of Titan X's in the interim and have the cash at hand, but have no real reason to spend. I don't need that amount of computing power. It'd be a pair of very expensive space heaters...
I never heard about INT8 format for neural networks. Total news to me. More important question - what is FP16 performance? Is it as crippled as 1080, or does it have the same FP16 cores as the big chip?
I'm kinda disappointed in the lack of HBM2 for a card with that price tag. Though I'm not surprised about the name, since they also went with GTX 1080 when people thought they wouldn't. I also expected at least a 1.5GHz core clock, and I wanted closer to 4000 CUDA cores. But I might still buy this later on this year (big upgrade from my 750Ti).
Later this year there will be better options (namely the Ti version of this, hopefully brought to sane pricing by something maaaybe approaching real competition from AMD)
GP100 has a 15.3B transistor count. GP102 has a 12B transistor count. Coupled with the fact that they use different memory interfaces, and they are without a doubt different GPUs.
Interesting. Sounds like they are putting more investment into the compute side than I think. Taping out two separate SKUs for high end compute is a not inconsiderable investment (I guess much of the design would be similar but you still need to pay for a whole new set of masks?)
They are definitely diverging their compute business from their GeForce business more and more. Their compute efforts have grown to the point that a large number of the improvements made to Pascal seem geared towards compute workloads, and they are willing to come out with what is likely a compute-only GPU (GP100). They see the potential for large growth in their data center business.
They'll sell many of these as Quadros and Teslas for people not needing FP64. Not sure a separate chip wouldn't make sense. And the fundamental building blocks of these GPUs are so similar, nVidia might be able to reduce the tape-out cost of additional chips by using clever methods.
Anybody else notice the correlation between NVidia's recent price hikes for every product over the previous generation, and the utter lack of competition from AMD/ATI?
We are all screwed if AMD doesn't pull off a miracle with Polaris. Not only will we be paying $400 for mainstream gaming cards, but there will inevitably be a generational slowdown in performance for lower-cost cards. It doesn't appear that way because since Maxwell, NVidia has been crushing GCN in just about every category except perhaps performance/dollar, but that's irrelevant because the only tier where AMD is dollar-competitive is entry-level.
And after vega comes out we'll be saying just wait till vega2 comes out, that'll teach nvidia. Also wait till zen comes out, that'll teach intel. This "wait till the next gen comes out" saying is getting pretty sad for amd fans.
I've had way more NV cards than AMD cards (starting with a Riva 128), tho I've gone AMD with my last two upgrades (CF 6950 & CF 290)... It definitely feels like I'll be paying more than ever next time to maintain the same relative level of performance.
This generation looked like it should be the first to achieve solid 4K performance on a single card, but at these prices and with AMD struggling...
People tend to exaggerate Nvidia's advantage, IMO. Last gen AMD cards 290/390X were on-par with the performance of Nvidia's offerings, they just used more power. They even managed to just rebrand existing cards and stay performance competitive.
Fury X was a bit of a disappointment compared to the 980 Ti, but if the current DX/Vulkan games are any indication, Fury X (and Fury) might ironically turn out to be the better buy in the end.
A) AMD cards use more power. B) They have a disappointing high-end proposition. And I'll add... C) Their drivers, even with the wash and rinse of Catalyst, are buggier and more problematic than NVidia's.
So for the same price and the same performance, why would I go with AMD over NVidia when there are still cons?
NVidia knows this, which is why they are beginning to charge more.
I think NVIDIA's 10 Series pricing has more to do with the cost of the 16nm process than anything else. Go back to the 700 series and you'll see prices even higher than for the 10 series equivalents. I think the new Titan X pricing has to do with the potential for the card to cut into the market of their $5000 Tesla cards. AMD has never really had any competition for the Titan cards, have they?
No HBM2? At $1200 I was expecting memory bandwidth to knock the socks off the Fury X, especially if they're positioning it as a prosumer card suitable for neural net applications.
Please, please, PRETTY PLEASE, do not buy this card. The price of these things is getting out of hand and, unless you and I STOP buying the top end card, that price will carry on. They're basically shafting each of us.
You're being silly. NVIDIA prices their products at what people are willing to pay (and in this case they don't want to go too low, to prevent the card from undermining their Tesla line), just as any other company does. Do you try the same thing with all your products? "PLEASE PLEASE stop buying gasoline until they put the price down by 30 cents a gallon!" I'm sure they'll come out with a 1080Ti eventually that'll be quite a bit cheaper, but possibly with less RAM and/or some compute-oriented features disabled.
While I agree with what you are saying, your analogy is completely wrong. While gasoline is a much-needed commodity, a high-end GPU is not. Telling people to stop buying gasoline is like telling people to stop getting electricity to their house because electric bills are too high. Everyone (who has a car) needs gas, and everyone needs electricity in this day and age. No one absolutely needs a high-end GPU, or any GPU for that matter.
While I try to stay away from car analogies, the better analogy is asking people to stop buying BMW's i8 because their highest end car is too much money. People will buy the Titan X just as much as people will buy an i8, just like how people will buy a 1080 like people will buy a BMW 7 series car, or how people will buy a 1070 or 1060 just as much as people will buy a BMW 5 series car. Or at the bottom end, you can just buy a used Honda Accord or Toyota Corolla just like how some people can live with the iGPU in their Intel/AMD CPUs in their crappy sub $400 laptop.
Yeah, I was aware of that, but I don't think it really matters much. Besides, although gasoline, unlike the Titan X, is necessary for many things in the economy that rely on it, it doesn't have any direct replacement available on the market, much like the Titan X. The BMW i8 does have direct competition on the market. In that way gasoline is a better analogy than the i8.
In any case, pick any product and the request is just as silly, regardless of whether there are replacements or how vital the product is to people's lives.
Nice. Black like a black Amex. If I owned one I would go to all the chicks and say "Hey Baby. Want to see my big black Titan?". Of course, if she has a boyfriend in the football team that is where things get interesting...
I'm still laughing at a post I read whereby some nVidiot (see what I did there) actually bought four Titan X's at over a grand each for some massive SLI rig that doesn't even scale well in games. A week later the Titan-killer 1080 came out.
Impressive as these cards are; performance at all costs is resulting in crazy prices. To bastardise Oscar Wilde... "Nvidia users who buy this card know the price of everything and the value of nothing".
Nvidia are effectively killing AMD by matching performance at each price point (no benefit to the consumers thirsting for high end) and shafting the high end performance consumers (that AMD can't provide for).
Indeed. 2016 (so far) didn't turn out to be the price-competitive HBM2 upgrade love-fest everybody was hoping for.
But at least we got a lot faster cards. Prices will come down.
If one has no use for it, one can always postpone purchase.
I can remember the number of $5000 devices I've bought that turned to worthless crapola in a couple of years, and I practically had to pay to get rid of them.
For most, unless this level of performance is required, it's just better to wait another 6-12 mo.
Nvidia is NOT a charity. Unlike the government that taxes you and you are forced to pay, Nvidia doesn't "force" you to buy their products. If you feel this product is overpriced, don't buy it.
Titans (thus their name) have always been at the top of the price food chain for Nvidia, so the price is not a shock to me.
Titan is 30% faster and 100% pricier than the 1080; a poor offer. 1070 SLI will be much faster and almost half the price of the new Titan, especially since SLI scales pretty well at high resolutions like 2560 or 3840.
I wonder how many folks have bought multiple cards over the last 3-4 years? Kepler, Maxwell, now Pascal...
In my case, I bought the equivalent of all of these with two OG Titans for basically the same price, and have only needed to do it once. Now I will do it again, and it will probably last me another several years with no need to immediately jump to Volta.
You guys have no idea how to run a business, let alone understand how the market works. Nvidia are cunts? Great language from children who don't know their asses from their elbows.
You're not going to buy it. You don't need it. You don't even really want it. Leave the adults who know what they're doing to their own devices, and we'll sell you our second-hand stuff in a couple of years. Who cares what they sell it for? If you're happy with the 1080 price, then what's the point of crapping over here?
It works out about the same either way. You have the top end for a year at most and enjoy it, then watch its value and relative performance drop in half each year. On the other hand, you could just always be midrange. Either way it is the same amount of money, and that is how it works because of the way Nvidia charges.
Due to the lack of fanfare, I am guessing even Nvidia does not shy away from the fact that the target audience is very small for this card. Also, shouldn't they have called it Titan X 2016 or Titan X2?
While this card is hugely faster than the original Titan at lots of things, I'm guessing that it's going to be the same 1/32 compute performance.
So I guess for the actual jobs that need it, I'd be better off keeping my original Titan that is 1/3 and possibly seeing how to run that as a secondary compute card and put in a 1080 or wait for a 1080ti for the gaming part.
I wish Nvidia would just stop playing all these games and let us have at least one card that can actually do everything, even if it is around the $3k to $4k mark. Just stop screwing around with us and let the highest-end Quadro cards do full gaming as well as full compute; it's not like their drivers couldn't have different profile modes. But no, they think you will buy an M6000, a K80, and a 1080... nope. I bet if the highest-end Quadro cards did it all, they would actually get more money overall.
Being that it only has 12 GB of RAM and that they are pricing it at $1200, I think it could have FP16x2 units. The Tesla M40 is available with 24 GB of RAM, and the successor to the M40 should have at least that amount. That allows a decent amount of market differentiation, I think. The question is, does it have FP16x2? In terms of deep learning, they seem to be pushing this as an inference card judging by the 44 TOPS int8 spec they list. The M40 is marketed as a training card, and for that they would want FP16x2. Do they plan on using a GP100 chip in a card to replace the M40? If not, then GP102 should have FP16x2, unless they don't plan on giving that market segment as large of a speed boost as they are able to.
I wish I knew how many transistors the ROPs and the register files use. Then it would be possible to tell whether GP102 likely has DP cores on it, but I'm guessing it doesn't. It has the same number of SP cores as the cut-down GP100 in the P100 while using 3.3 billion fewer transistors (it may use a fully enabled die, however, which the GP100 in the P100 doesn't). I'm assuming the GP100 does not have ROPs but has double the register files of the GP102.
Nvidia are greedy fuckers but AMD are useless arseholes therefore we get this pricing, it's annoying but it's what you get from a capitalist system with little competition. Looking forward to the 1080Ti though. I suspect Nvidia are trying to hoover up sales before AMD drop big Vega which may be sooner than we think.
Woah, the sodium content is unhealthily high in this comment section. It is like I wandered into a bloody salt mine. :D
Where in the woodwork were all these frugal consumers hiding when the "Pro Duo" was announced for $1500 a little while back (April 26th '16, here at AT)? A dual-GPU card, with 4GB usable VRAM, a 350W TDP, and a performance of <85%* (i.e., when Crossfire is working, otherwise <50%) of this single-GPU, 12GB VRAM, 250W TDP product.
Wait, they weren't ALL hiding. Some of them were justifying the price tag by calling it a "content creation" card... one of course, with only 4GB VRAM. But hey, it is HBM so that's kewl, right? <hyperbole alert> And in a glorious future, DX12 will make it a full 8GB, and Vulkan will make it go faster than a rocket on crack, and 200 years from now it will be beating God himself in the AoTS bench, 'cuz AMD stuff gets better with time (duh). 1000 years from now, all AMD GPUs will unite to form a whole new God that all will fear, and all images of Jesus will have him look like Raja Koduri wearing a "Gaming Evolved(tm)" t-shirt. While only a few years from now, this Nvidia GPU will be obsolescence-d into the very depths of hell, and will probably jump out of your system eventually and rape your dog, and so on and so forth. Yeah sure, keep at it guys - don't ever let facts slow you down :P. If only the actual employees of AMD were as dedicated to the brand as you are, the company would be unstoppable.
* - estimated values based on specs, though I wouldn't be off by more than +/-5%.
Does the Pro Duo still sell? Even the red fans thought it was too expensive and outright dumb to buy for gaming given the ill timing (if we are reading the same comments section). I sport 2x 295s and no way would I recommend it to anyone given how 16nm was just around the corner. Daniel himself called it out for commanding a $500 premium over 2x Nano (although I haven't seen the same re this vs a very sensible 2x 1070). I agree with the sentiment regarding the rabid fanboyism here. When I first visited the site in 2006 the comment section was much smaller but had content as useful as the article, and I learnt a lot. I guess it's the price we all pay for a wider demographic. Thanks to all the sparks that keep pumping some sense and value into the comment section. Can anyone point me in the direction of a useful article detailing nVidia core naming? I find the GP100, 104, 106 all confusing, and Ryan's roadmap (http://www.anandtech.com/show/7900/nvidia-updates-... didn't really get into it at all?
I've had Nano crossfire for a year. It cost me $1000. It still beats a 1080 in Time Spy. Not a bad investment for me, IMO. The Pro Duo was way too late.
So what happened to HBM2? Did nVidia give up on waiting for it and save it for next gen? I'd think these high-end cards would be the ones that could actually take advantage of the bandwidth it offers.
As I understand it, AMD's Vega GPUs will include HBM2, and that is a big part of why they won't be out until later this year or early next.
I find it odd that they didn't see fit to include it on a $1,200 card. Makes me think that either they are rushing this to market so they can get the 1080ti out whenever AMD's Vega arrives, like they did last year, or it's not offering as much of a performance advantage as hyped... at least not for their GPUs.
Pascal doesn't need as much memory bandwidth as Maxwell; the 1080 already outperforms the Titan X, and the new Titan will have a 50% wider bus to play with. If the new Titan is only 50% faster than a 1080 (being charitable here), then it has no need of pricey, difficult-to-work-with HBM2, as opposed to readily available GDDR5X.
Obama and the Feds have been printing money so long that inflation is finally starting to hit us in our wallets. Video cards that used to be $800 are now $1200. This is what happens when interest rates are kept at 0% for 10 years so the left can guilt us into emptying our bank accounts so everyone can live the American dream, albeit only until the dollar comes crashing down.
Don't blame the businesses that are just having to deal with the BS that zerohedge have been exposing for years. If we were using the gold standard, this card would only be $500 and 1060GTX would be $99. Imagine 980GTX performance for $99!
Obama and the Feds had, and still have, no choice. If they stop printing, the US will go into a deep recession. Who wants that? There is more to it. Even the mighty US has a tough time competing on a global stage with countries where state capitalism is a religion. Japan and South Korea are prime examples, and China, Brazil, and others have been going the same route for some time. Unless the US makes some changes, the situation will only get worse.
In your analysis, labor would be priced accordingly as well. Assuming $10/hr after-tax wages now, it would still take you 25 hrs of work to buy your hypothetical $99 card.
That's the issue: salaries haven't changed in 10+ years, but the cost of living/goods has gone up significantly. The middle class is being left to fight among themselves while the fat cats get richer through their inflation-immune assets like Trump towers.
If people were making $100/hr, then a $600 or $1200 video card wouldn't raise any eyebrows. But now you've got the masses in an uproar because they can't buy their precious toy.
Aaaannd wages would be something like $2/hr, and you would be here bellyaching about how, if the feds hadn't forced us to move from the copper standard, the 1060 would only be $9.99.
Nvidia should just get an x86 licence and build an APU. Not sure Intel would want that; given that NVIDIA is better at design than AMD, we may actually have some competition.
I wouldn't go that far. Remember their custom ARM core that got tangled up in spaghetti code, and generally wasn't as good as the stock 32-bit ARM variant? Not to mention them constantly overpromising and underdelivering on performance and power consumption. Kinda like AMD.
Much as I would love an APU with a 750 Ti built into it, it might take some time before nvidia would be competitive.
Awesome! They had me at 44 TOPs INT8 compute performance!! FP32 is spectacular as well, and my simulations have no need for FP64. Only question is whether I buy 2 or 4 on August 2nd -- either way, it's still under the cost of a TESLA P100, which isn't available anyway ...
I found a comparison for int8 performance for some older-generation processors. http://www.ieee-hpec.org/2014/Presentations/126.pd... Page 4 shows some int8 performance numbers for Kepler, Knights Corner, and Sandy Bridge processors/accelerators. It seems that Kepler performs int8 calculations at a throughput of 1 operation/core/clock cycle. The K40 has 2.5 TOPS int8 (I used the boost clock for comparison, since NVIDIA is using the boost clock for the Pascal Titan X number). The Pascal Titan X seems to perform them at 8 operations/core/clock cycle, resulting in the 44 TOPS int8. Presumably the Maxwell Titan X has 3.3 TOPS int8 unless there was a previous change in int8 performance made for Maxwell that I haven't heard about.
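A sketch of that throughput math (the ops/core/clock figures come from the comparison above; the boost clocks are my assumptions: K40 = 2880 cores at ~0.875 GHz, Maxwell Titan X = 3072 cores at ~1.075 GHz, Pascal Titan X = 3584 cores at 1.531 GHz):

```python
def int8_tops(cores, boost_ghz, ops_per_core_per_clock):
    # TOPS = cores * clock (GHz) * int8 ops per core per clock / 1000
    return cores * boost_ghz * ops_per_core_per_clock / 1000.0

k40           = int8_tops(2880, 0.875, 1)  # ~2.5 TOPS (Kepler: 1 op/core/clock)
titan_maxwell = int8_tops(3072, 1.075, 1)  # ~3.3 TOPS (assuming no Maxwell change)
titan_pascal  = int8_tops(3584, 1.531, 8)  # ~43.9 TOPS (8 ops/core/clock)
```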
Thanks, helpful summary there. Yes, the architectural change for Pascal results in a mind-blowing increase in these sorts of metrics. I sure hope that FP16 and INT16 are also greatly increased. Also note that the memory bandwidth is about as fast as TESLA P100- 12GB, and for a fraction of the cost.
Well, the update to this article confirms minimal FP16x2 support, for compatibility purposes only. Someone on another message board claimed that the ultra-slow FP16 performance on the GTX 1080 is a driver bug, and that NVIDIA intends FP16 to run at the same throughput and memory usage as FP32 on that card. Apparently the FP16x2 cores must be targeted with special instructions, so it makes sense for NVIDIA to include a small number on other chips for compatibility purposes, even though running through the FP32 cores makes more sense on such chips.
But I am surprised the GP102 doesn't use FP16x2 cores. I would have thought they'd use that GPU for an M40 successor. It seems they are pushing the P100 for learning. Baidu uses the Maxwell Titan X for training, as far as I know. The Pascal Titan X has significant FP32 performance and memory bandwidth improvement over the Maxwell Titan X. Maybe NVIDIA think that improvement is enough. Still makes me wonder about an M40 successor though.
Yojimbo, do you have a reference for the claim that throttled FP16 performance is a driver bug? I'd be overjoyed to learn that the GTX 1080 is not artificially crippled in that regard, especially with so many up-and-coming deep learning enthusiasts trying to advance the field with that kind of horsepower. I don't know if it could be modded to access full-speed FP16 without an alteration to the die?
It's 8 ops. And floating point FMA is 2 ops. But it's not clear to me why we are talking about it in terms of floating point FMA. Maybe there's a reason to do so because of the hardware implementation. But regardless, it seems true that the GPU performs 8 8-bit integer operations per core per clock cycle under the right conditions. It's (8 ops)*(3584 cores)*(1.531 GHz) = 43.9 TOPS.
"Integer Arithmetic Instructions: dp4a
Description
Four-way byte dot product which is accumulated in 32-bit result. Operand a and b are 32-bit inputs which hold 4 byte inputs in packed form for dot product."
So if a = (a.w, a.x, a.y, a.z) and b = (b.w, b.x, b.y, b.z) then the dp4a operation computes c = a(dot)b + c = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z + c, resulting in 4 multiplications and 4 additions for a total of 8 operations.
Sorry for my confusing notation there. The "=" after the c is meant to be an assignment and the "=" after that is meant as equality of the two expressions, excluding the assignment.
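For anyone unfamiliar with it, here's a pure-Python model of what that quoted dp4a instruction computes; this is an illustrative sketch of the semantics, not NVIDIA code:

```python
def dp4a(a, b, c):
    # Four-way byte dot product accumulated into a 32-bit result:
    # 4 multiplications + 4 additions = 8 integer operations per instruction.
    assert len(a) == len(b) == 4
    return c + sum(ai * bi for ai, bi in zip(a, b))

result = dp4a([1, 2, 3, 4], [5, 6, 7, 8], 0)  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```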
I'm confused (or maybe it really is as written/posted).
FP16 is 1/64 (presumably) the FP32 rate. FP64 is 1/32.
So...FP16 is actually SLOWER than FP64? Wow. Didn't expect that.
Also, with FP32 rate being a FMA rate, I wonder what say a LINPACK FP32 speed would be (native/raw/non-FMA).
That said, it's sad (in a way) how my GTX Titan FP64 is still faster than their newest, latest, flagship card, despite the fact that its primary market IS FP32. (And I care because I think that my MATLAB code is primarily FP64 by default...)
They use separate cores for FP16, FP32, and FP64 ops in the new Titan X and in 1080. That's why performance numbers for those data types are independent. In GP100, on the other hand, each FP32 core can compute 2 FP16 ops, so that's why FP16 performance is off the charts there (unfortunately the price is off the charts as well).
1200 $ for a 471 mm2 GPU, what a joke. There is barely any performance increase over the 1080, at double the price. This FinFET generation is pathetic so far.
FinFET is great! The prices are not, but when are the first products on a new technology ever cheap? Almost never. Next year we can see cheaper prices, because the production technology matures and yields get better. In two years we will be at the point where this generation of FinFET is in its prime: the yields are good, there is enough production capacity to get enough products to market to allow competition, and manufacturers have also improved their designs to better utilise FinFET in their products. After that there will be smaller upgrades in technology, but bigger upgrades in the size of the chips, because fewer defective chips come off the production line. And at about the same time, the next generation of process tech will be coming out, again with not-so-great prices but better energy consumption.
So bye-bye, 1080 Ti. If this Titan had been 384-bit, 24 GB, 12 Gbps GDDR5X (and a fully enabled chip), there would still be a place for a Ti, but as it stands, this generation's Ti is this Titan X.
And, of course, if the competition had been fierce, the Ti/Titan would have had 16 GB of GDDR5X on a 512-bit bus on a fully enabled chip, even with NVIDIA's great memory compression techniques. Until HBM2 is in volume production...
Pretty sure their forum is being seeded by NVidia in typical NV marketing fashion. People think it should be $3000? Laugh. Nobody would say that unless they're a complete idiot tool. NVidia is trying to fluff their own feathers on their own site!
"buying NVIDIA’s best card just got a bit more expensive" - $200, or 20%, is not "a bit more"; it is CONSIDERABLY more expensive. A BIT more would be 2%. But keep brushing it off; in a couple of years they will reach $1000 for an ordinary desktop GPU. That's what we shareholders pray for.
Dobson123 - Friday, July 22, 2016 - link
Holy shit, that was fast.
Dobson123 - Friday, July 22, 2016 - link
Confusing name btw.
nathanddrews - Friday, July 22, 2016 - link
I've been saying it for a while, but NVIDIA wants to be Apple. Closed garden "ecosystem" (Shield/GeForce Experience/GameStream/G-Sync), planned obsolescence (poor support for previous gen GPUs), and obscene markup of halo products.
That said, I bet this Titan X (2016) will be the first ever, truly 4K Ultra 60+fps GPU. That's an accomplishment worth celebrating. When the 1080Ti comes out for half this price, that will be an accomplishment worth buying.
ImSpartacus - Friday, July 22, 2016 - link
I was surprised that they didn't just keep using the "Titan" name, so we end up with 2014's Titan, 2015's Titan, etc.
Ian Cutress - Monday, July 25, 2016 - link
A lot of cars work this way. Ford ShootingBrake 2015, 2016, 2017.
Rock1m1 - Friday, July 22, 2016 - link
A lot of what they are doing suggests that, but thankfully so far they are going by the middle path, a semi-open ecosystem, which I think is the best for both innovation and consumers.
nathanddrews - Friday, July 22, 2016 - link
Semi-open? GameStream/Shield only works with NVIDIA hardware. G-Sync only works with NVIDIA hardware. CUDA? PhysX? HairWorks? SmokeWorks? I don't know if that last one is real, but I mean, the list goes on and on. They have done very little to pursue anything "open". Not that they are required to, of course.
For the record, I'm not suggesting that AMD is altruistic in this regard either, but at least their pursuit of OpenCL, FreeSync, and HSA is in contrast to much of what NVIDIA is doing.
Murloc - Saturday, July 23, 2016 - link
yeah but AMD may be doing that open stuff because they're in a weak position and they would just garden wall themselves out of the market.
nevcairiel - Monday, July 25, 2016 - link
AMD just didn't have much of a choice. They don't have the market share to push for their own methods, as much as they would want to.
TheJian - Tuesday, July 26, 2016 - link
You are aware Hairworks (and all of gameworks AFAIK) works fine on AMD hardware correct? The only problem comes when you jack up tessellation to above 4x (8x in some stuff?) in games like Witcher 3. There is nothing wrong with NV exploiting their hardware to the max for users each gen and AMD should do the same. Even Nvidia's older hardware gets hit (980 and below), the same way as AMD's with 64x. It was seemingly targeted at maxwell2 and up for that level which admittedly doesn't change things much. The 4-8x settings look fine and is easily adjustable but you still get to see the hair stuff (which is pretty cool). Cuda is different and they have spent ~8Billion+ on developing it for the specific architecture in their gpus. Why anyone would share those gains over the last decade with the competition is beyond me. The fact that AMD can't afford to do it, is a benefit again to NV buyers. If AMD hadn't gone console/apu they would have the R&D to spend on CPU/GPU (their CORE tech) instead of low margin stuff, and there are many more management decisions that have screwed them (paying 3x price for ATI etc). Both VEGA/ZEN were pushed off due to AMD's own selection of consoles first. Nvidia passed stating consoles would rob R&D from CORE products. People scoffed saying they were butthurt...LOL. NO, it was looking out for their core customers! You're complaining that Nvidia isn't helping AMD. With ~80% share, they don't have to (at least you got that part).
https://www.pcper.com/reviews/Graphics-Cards/Borde...
PhysX does run on the CPU, just not as well, since it's single-threaded (last I checked; not sure today with the GameWorks integration). Consoles use it too. Again, it isn't Nvidia's job to help AMD.
I already gave solutions for GameStream in a previous response to you, but of course YMMV (though people supposedly use them fine, low lag etc). I just think NV doing their own solution is the best way to go when talking timings, lag etc, as they know their hardware. The feature would likely lose quality if done by a 3rd party for AMD or NV. You may think AMD has a better approach to their stuff, but it's only because they are weak. It is also the reason they don't draw many users for "features" that are literally game changers. AMD pursued OpenCL because they couldn't afford to fund a CUDA alternative on their own, or the funding for the schools/certification system to get people to use it and learn it ;) Same story with everything else. If you can't afford your own CUDA, you either lack the feature, or go in with a group hoping to at least have some success. OpenCL was their only option IMHO, but again all of these issues are due to a lack of profits yearly and losing ~$8B in the last 15yrs, which ironically is about the amount NV spent on getting CUDA to where it is. CUDA is one of the main reasons I may go NV this time, as I have aspirations past just games (at some point soon) and have delayed this purchase far longer than normal (usually every 3yrs; was waiting on the G-Sync monitor I wanted, die shrink etc). But if Vega is awesome, cool and uses less watts... I could go that route now and upgrade when the need for pro stuff really hits me & just toss Vega to my dad :) Whine about what AMD is doing to themselves; this isn't an NV problem. Put out a great product and quit low-balling them. I hate that as a consumer as I like cheap stuff too, but they need 5-10yrs of profits, not 15yrs of losses (well, they had a few profit years, but c'mon).
Impulses - Friday, July 22, 2016 - link
Will it come out for half the price? I'm guessing it wouldn't happen this year but early next year at best; but considering the price, along with current 10x0 pricing, and AMD's schedule (or lack of competition at the high end)... It seems likely NV could get away with an $800 GTX 1080 Ti.
I'd love to be wrong... I'm sitting on a pair of R9 290 right now (2x 6950 before that, largely NV cards before that). I feel like we're finally at a point where a single card would satisfy me for 4K/Surround resolutions, and this would be it, but I'm not feeling like paying over $750 for the privilege.
Otherwise I might as well just stick to SLI/CF and go 2x GTX 1070.
nathanddrews - Friday, July 22, 2016 - link
The pricing of the 1080Ti will completely depend upon the price and performance of Big Vega, I'm just making assumptions. Time will reveal all.
JeffFlanagan - Monday, July 25, 2016 - link
For the price you could go with 2x 1080, but SLI comes with a lot of headaches
ptown16 - Friday, July 22, 2016 - link
I've been drawing this comparison for a while as well. Nvidia dominates the high-end performance area, are massively popular, highly priced, and use closed-end technology. AMD is much like the Android platform... more open-source, does not compete at the high end in terms of raw performance, and is not viewed as favorably by most people. I'm just holding off on a 1070 for now and hoping to see AMD offer something in the $300-$400 range to replace my 770, which has not aged well at all ever since Maxwell landed.
Lolimaster - Friday, July 22, 2016 - link
The 1080 still struggles in many games at 4K, even to sustain a 30 fps average. 30% more performance is not going to change that.
We need Vega10 and VULKAN.
otherwise - Monday, July 25, 2016 - link
I don't think the markup is that obscene. This is a very expensive GPU, but a very cheap compute card. It's a hard line to walk in terms of pricing and marketing.
damianrobertjones - Monday, July 25, 2016 - link
I bet that nVidia could have done 60+ fps last gen but... that wouldn't have brought in the $$$$$
TheJian - Tuesday, July 26, 2016 - link
Except it's only AMD who has poor support for previous gen GPUs (no money for dx11 etc) :(
http://www.pcmag.com/article2/0,2817,2458186,00.as...
And I seem to remember Radeon halo products going for $1500 (and an even steeper NV $3k, though that didn't last long...LOL - it was a price the market wouldn't bear). So both sides do this. As long as they are leaving the shelves FASTER than they can be made, why should you set a price lower than whatever you can get? Business is not in business to be nice and make everyone happy (AMD should learn more from Nvidia here; it's about making money, fools), but rather to make a profit. This is simply supply and demand at work, and a company that is pretty good at figuring out what will help their bottom line. You seem to not understand that an M6000 goes for $5k at launch. The people who are unable to buy that but want to do content creation (games, 4K video etc) will see this as a massively discounted card. If you're struggling with the price of this card, you're not the target audience...LOL. These will fly off the shelves to the people who can't swallow a $5000 Quadro and/or don't need 24GB. Many times before, people have been seen buying 2-4...ROFL. To a prosumer, your version of obscene markup is downright obscene markdown. Your comment only makes sense if you're a PURE gamer with no other intentions, and even then it's still #1 and we all know what you pay on the top end. HEDT Intel chips for $1730, for instance.
But yes, if you're gaming without needing the pro-side perks, by all means wait for the card with GTX in the name (1080 Ti) and save $500, not $600. The 1080 Ti will not be $600. It will be $650-750, depending on Vega, most likely. No need to push down the 1080 with no competition. They will cherry-pick the crap out of this GPU for the next 4-5 months and launch the 1080 Ti with faster clocks and 8-10GB and a stack of cards on the shelf on day 1. It would be plumb crazy to put out a $600 1080 Ti if AMD takes until Dec to put out Vega, and HBM2 will drive up their price vs. a GDDR5X card for no gain in perf, unfortunately (NV doesn't even need their current bandwidth).
I really wish AMD had chosen to go with GDDR5X for Vega. They got screwed by HBM1, and it looks like they're going to do it again. IE too small of a market to lower cost, 4GB limit on HBM1, production more difficult than other memory leading to shortage, etc. The only thing I see fixed this time is the 4GB limit. It doesn't even matter if your card is fast if you can't produce near enough to meet the demand. You should be limited by YOUR production of your product, not some part ON your product on top of your own issues.
I still can't wait for Vega vs. Titan X/1080 Ti, but it sucks to see AMD might be set up to fail profit-wise yet again with their halo product. Samsung HBM2 will likely go to HPC first, and SK Hynix starts mass production in Q3, so AMD will be lucky to get an Xmas card out, let alone ramp it for then. Nvidia meanwhile will be using GDDR5X that will have been in production for a year by the time the 1080 Ti ships and was already cheaper to move to from GDDR5.
As far as the closed garden goes, it's no different than AMD with Mantle, which was never designed for others; they just failed to push their own API due to small market share etc. How would NV make GameStream (of games)/G-Sync work with GPUs from AMD? That would require a lot of effort on timings etc to help your competition (uh, silly). Shield can be bought by anyone; it just gets better if you have an NV GPU, which inspires sales. The only reason AMD is even sort of friendly is they are always behind and have no share. When they were ahead, they had $1000 CPUs also, and all of their chips were more expensive vs. today across the product line, and I'd know as a reseller for 8yrs. Today they're cheap because nobody wants them.
http://moonlight-stream.com/
You can however get NV stuff to stream to other devices not owned by NV. It's a work in progress, but still... Again, why would NV want to do that, and how much work would it be, when the video is encoded via NV's GPU? It's not as easy to control another company's GPU in this case, IMHO.
There are also a few other solutions for AMD, such as Remotr or Splashtop. Again, it's AMD (or someone else) who needs to do their homework here, not NV. The lack of an AMD-based solution is an AMD funding problem. Nvidia is doing you a favor by offering this feature, and since it's on their GPUs it will be the best experience they can offer. AMD should be doing the same (adding features!). Adding value to your product is a GOOD thing for customers. It's just one more reason I might buy their GPU vs. AMD unless Vega is VERY good on power, perf and price. The only one I'd waver on there for AMD is price. I'll pay more as long as you win on perf and watts/heat. I won't pay more if you lose either of these, as your product is inferior IMHO if that is the case, so I'd expect at least some price discount. Though IMHO AMD gives far too much and kills their bottom line repeatedly. Their CPUs suck, but their GPUs are usually close enough to charge more.
It's taken NV almost 10yrs to get back to 2007 profits. AMD should quit the price war, as it's only hurting them and R&D for CPU/GPU, driver support for DX11, a GameStream competitor, etc. AMD hasn't made 2006 profits since, well, 2006, and they had 3 quarters of 58% margins then too! I can't believe management went 480 instead of Vega first. Vega is 58% margins or more, while the 480 is ~30% probably. Since NV still can't keep the 1080 in stock, there was plenty of room for AMD to be in there making a mint from Vega. Now it will be facing a 1080 Ti (serving gamers) and Titan X (serving rich gamers + prosumers), and likely miss most of Xmas sales with a small supply, if it even hits before Xmas. I believe AMD has good products in the pipeline (Zen/Vega) but they are useless if they're late and pushed off for low-margin stuff instead.
Vega should be out now (GDDR5X!), and Zen should be out for back to school; actually, Zen should have been out last year. But instead... we got new consoles (and the old Xbox One/PS4 started the delays of Zen etc) and the Radeon 480. ~10-15% margins on the consoles (the last ones started in single digits, moved to almost 15% according to AMD), and probably 25-30% on 480 chips. Both bad decisions. AMD's next 12 months could have been a billion-dollar year like NV (maybe even better, as Zen has potential to pull down HEDT prices on the top end), but not now.
Note for AMD fanboys: I'm telling AMD to make more money! I'm telling them to choose products that have HIGHER margins FIRST, and low-margin stuff second! AMD can't afford to keep losing money year after year. I own a 5850...LOL. Still waiting for Vega to show me something, then I'll jump. But I have no qualms about saying I'm leaning NV. The days of me "donating" to AMD with an inferior product are over. I still have to rebuild my dad's PC (giving him my Devil's Canyon chip and likely the GPU too), so AMD still has a shot at that CPU too. I won't take "close enough". They have to win or I buy Intel again and Nvidia. The only thing I'll fudge on is price, not perf, watts or heat. You have to win at least one of those and tie the others, basically, or bust. I have no trouble paying AMD for a WINNER. Other than a few driver issues (not a problem for me as long as it's fixed quickly), I got exactly what I expected from my GPU (low heat/noise with perf).
Gastec - Tuesday, September 6, 2016 - link
Half the price of Titan X? Sorry but if any 1080Ti comes out it will be $800 :)
Beerfloat - Friday, July 22, 2016 - link
We look forward to the Anandtech review sometime in October, then... 2018.
I kid, I kid. Nice job on the GP104, Ryan.
ImSpartacus - Friday, July 22, 2016 - link
That's not a bad estimate, unfortunately... I'd aim for September though. No need to dive deep into the architecture. Just a bigger GP104.
B3an - Friday, July 22, 2016 - link
Titan X is nothing more than a great example of how much Nvidia are cunts.
Instead of simply releasing the 1080 Ti, they first milk as much money out of idiots who will buy the Titan X, because these people are too impatient to wait for the PURPOSELY delayed 1080 Ti version - which will no doubt offer the same performance with a slight overclock.
None of these cards should exist in the first place though. They should just release the top-end cards first. Like what used to happen years ago (excluding dual-GPU cards). A 1080 Ti straight out of the gate at the price point of a 1080.
But Nvidia just want to keep ripping off the consumer, and it will continue to happen until AMD can get their act together and make a good GPU for once that actually matches or beats Nvidia's best offering.
It's exactly the same thing happening with Intel, because AMD are even worse at competing on CPU performance there, so you get these ridiculously priced Intel CPUs.
So fuck AMD for incompetence, and fuck Nvidia for just being all-round anti-consumer pro-proprietary cunts.
Beerfloat - Friday, July 22, 2016 - link
Don't buy it, then. What's it to you that they run their business cleverly?
damianrobertjones - Friday, July 22, 2016 - link
Things like this NEED to be said.
smorebuds - Friday, July 22, 2016 - link
The market is what determines the price of this product, not Nvidia. If AMD can come out with something that even remotely competes with this, that's all it would take for Nvidia to price things more competitively.
It's like complaining about BMWs being too expensive because you can't afford one. There's a market for them, you're just not a part of it.
fanofanand - Friday, July 22, 2016 - link
Apparently early-adopters is a foreign phrase to B3an... you have ALWAYS paid a price premium to be among the first to grab any new tech.
Murloc - Saturday, July 23, 2016 - link
the same goes for books. Hard covers cost a fraction more to manufacture, but the books are much pricier.
This is because the fans will jump on it ASAP even if they have to pay $30.
Then the other people who don't buy it at $30 will buy the pocket edition for $15 a few months later.
althaz - Friday, July 22, 2016 - link
If you choose not to buy nVidia, you are also choosing not to buy a high-end GPU.
AMD haven't been competitive in this space for years.
So I continue to think nVidia are a pack of dickheads - but I'll still buy their product, because it's the best. If AMD has caught up next time I need a graphics card (I just got my GTX1080), I'll look very hard at them.
qwerty dvorak - Friday, July 22, 2016 - link
What's with all this self-entitled whining? It's a freakin' super high-end gaming device - if you don't like the price, don't buy it. They are a business, they don't owe you anything. They do owe their shareholders something though, and that is to extract as much money as possible out of the market.
Ahnilated - Friday, July 22, 2016 - link
Actually they DO owe the customer a lot. Without the customer they would have no business or shareholders to worry about.
lunarmit - Friday, July 22, 2016 - link
It's a luxury item. They don't owe you low prices any more than Porsche does.
Eidigean - Friday, July 22, 2016 - link
Well said. Whiners are not entitled to a SUPER COMPUTER on the cheap! This is 11 teraflops in a 2-slot card!
In the year 2000, IBM ASCI White was the fastest computer in the world at 7.226 teraflops (12.3 theoretical) and cost $110 million.
That's $110,000,000. Stop whining about $1,200.
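(For what it's worth, that 11 TFLOPS figure falls straight out of cores × clock × 2 FLOPs per fused multiply-add; the boost clock below is the commonly reported spec, so treat it as an assumption rather than a measurement.)

```python
# Theoretical peak FP32 throughput = CUDA cores * clock * 2 (one FMA = 2 FLOPs)
cuda_cores = 3584        # Titan X (Pascal)
boost_clock_ghz = 1.531  # commonly reported boost clock; real clocks vary
tflops = cuda_cores * boost_clock_ghz * 2 / 1000.0
print(f"~{tflops:.1f} TFLOPS FP32")
```

By the same arithmetic you can see why "11 teraflops in a 2-slot card" was a whole-machine-room number in 2000.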
dedu - Monday, July 25, 2016 - link
How does AMD not have high-end cards? Fury X is keeping up pretty well with a 1080 in the upcoming APIs, and let's all embrace the fact that we've reached the stage where every major game release will be Vulkan or DX12 from now on.
ACE76 - Monday, July 25, 2016 - link
While I agree with you, you have to look at how far behind AMD really is... most times, it's just a few percentage points behind... Vulkan is shaping up to be real big... I built an Intel 660K system and have it running OC'd to 4.5GHz and bought an 8GB RX 480... I'm playing Doom now on Ultra settings at 2K without a single hiccup... I haven't tried playing it at 4K but I will now.
Agent Smith - Friday, July 22, 2016 - link
Try and drop the hard language, mate, this ain't Fudzilla. If your point is good enough you'll not need gutter talk here!
ZeGuy - Friday, July 22, 2016 - link
He's fucking right though.
qwerty dvorak - Friday, July 22, 2016 - link
Maybe in some different dimension where NVIDIA is a charity NGO. What the hell is wrong with some people, where does this idea even come from? No one owes you anything little princess, it's a business. Complain about their shady anti-competitive practices vs AMD but enough with this "oooh it's sooo expensive, that's unfair" whining. Vote with your wallet, don't buy it then, no one is forcing you. Geeez.
fanofanand - Friday, July 22, 2016 - link
Don't scare the snowflakes with your micro-aggressions, man.
Murloc - Saturday, July 23, 2016 - link
you forgot a TRIGGER WARNING
ingwe - Friday, July 22, 2016 - link
I see what you did there. You know.
bug77 - Friday, July 22, 2016 - link
You must confuse Nvidia with Santa.
Why would they compete with their own GTX 1080 when AMD can't?
colonelclaw - Friday, July 22, 2016 - link
Whilst I wouldn't have phrased it like you did, I completely agree with the sentiment; Nvidia are in full-on rinse mode right now.
It's not only them, Intel are just as bad. Both companies are up against sub-standard competition and are squeezing ever-more money from their customers. That's pretty much what always happens when anyone has a monopoly. This is capitalism in its purest and unchecked form.
Spunjji - Friday, July 22, 2016 - link
Word. Bad time to be a consumer.
Spunjji - Friday, July 22, 2016 - link
Agreed. "Don't buy it" doesn't work as a solution when the problem is caused by the /market/ buying it, not the individual. Some people are happy to dig their own graves, monetarily speaking, and it's messing up gaming hardware for the rest of us. The only financially sound solution is to stay behind the curve and buy second-hand.
And yes, I know that buying a high-end graphics card has never been "sensible", but $1200 is patently absurd. I can build a whole gaming PC for that!
TemjinGold - Friday, July 22, 2016 - link
If the market is buying it, why is that a problem then? Do you not understand how a market works? If a company can sell a desirable quantity of X for $1200, why in the world should they ever do that for less? I have news for you, there is no such thing as a company that is purely "pro-consumer." All companies are "pro-their own bottom line." Any that isn't is incompetent and will not last long.
cocochanel - Friday, July 22, 2016 - link
You guys are all blowing smoke. Microsoft will release a gaming console by next year called Scorpio. At about $600 it will be doing VR and 4K gaming in style. For another $600 you can get an Oculus Rift and you're all set. PC gaming has been going down for years for a few basic reasons:
-PC gaming rigs are way too expensive compared to consoles
-most games if not all are made and optimized for consoles
-physical copies of console games ( disc in a box ) can be traded or exchanged for credit or cash. With PC games you're screwed. Last week I bought a game for $70 on Steam, played it for a few hours while it kept crashing. Good luck trying to get a refund.
With Microsoft releasing such a powerful console, expect Sony to follow suit. So you guys keep buying those expensive Porsches. Fine with me. I'll settle for a Tesla Model 3 and keep the change. Don't get me wrong, I've been playing PC games for years. But enough is enough.
K_Space - Friday, July 22, 2016 - link
It looks like you haven't been following the updated Steam return policy: you're well within your rights to a refund if you've only played the game for a few hours. Key phrase being a few hours (the number is specified clearly in their terms but I can't recall it). The Nvidia-centric Batman: Arkham Origins refused to play ball with my 295X2; in 5 days, the money was back in my PayPal account (and not using PayPal protection).
cocochanel - Saturday, July 23, 2016 - link
I did ask for a refund and I checked the refund policy. They said the game was played for more than two hours. What a ripoff.
Murloc - Saturday, July 23, 2016 - link
if what you say is true, the market will move to consoles and prices will have to fall.
But you're wrong, it's not happening, people love MOBAs or mostly play in 1080p and they don't need a $1200 card for those.
nevcairiel - Monday, July 25, 2016 - link
Consoles will use outdated hardware from day one. Their big problem is that they have to survive for years to come, as console players don't want to buy a new one every 2 years.
The current-gen consoles (XBOne, PS4) use something that was at best mid-range when it came out, and today you get to play 720p at 30 fps because they just don't cut it. The new consoles will also not use high-end hardware, because that's just too expensive.
There will always be a market for actual powerful gaming systems.
nevcairiel - Monday, July 25, 2016 - link
Not to mention that there are a bunch of genres of games that just don't work well on consoles.
They may release FPS games for consoles, but the entire genre just is no fun on a controller, especially competitive play.
Not to mention complex strategy games, like Civilization.
Eidigean - Friday, July 22, 2016 - link
Your thought is that "gaming" hardware should be cheap. That's what Xbox and PlayStation are for.
This is a super computer. 11 teraflops is not easy to achieve, let alone in a 2-slot card. This kind of computing power cost millions of dollars just a few years ago. Stop complaining that $1,200 is expensive.
The computer you want will always cost about $3,000. This has held true for 30 years.
K_Space - Friday, July 22, 2016 - link
.....except most of the comments completely ignore the fact that this isn't necessarily a gaming card? As if dropping the GTX tag wasn't enough of a clue. Most of these will probably end up in render farms, with a tiny niche going to top-end gaming PCs.
Gothmoth - Friday, July 22, 2016 - link
Tell us what you really think. You should have stayed in school and gotten a job that makes you not think twice about paying $1,200 for a card.
I have no problem paying $1,200 for the fastest card I can get.
The best is always more expensive.
Black Obsidian - Friday, July 22, 2016 - link
There's a difference between not being able to afford a $1200 card and being annoyed at the market failure that leads to the card being priced that way to begin with.
I wouldn't think twice about buying a $1200 video card, but that's not what the Titan X is; it's an $800 video card that requires you to put another $400 in a pile and light it on fire. And yeah, that's going to piss people off.
Gothmoth - Friday, July 22, 2016 - link
$800..... how do you get to that number? I have to pay $1,700 for a 10-core Intel.... and $500 for a 6-core; that's how it is.
I paid much more for my car but I could have bought a VW Golf to get from A to B.
But this whining from people who can't afford something is getting on my nerves.
Eidigean - Friday, July 22, 2016 - link
Exactly. Folks don't understand that larger chips have higher chances of defects. The yields are lower. There are reasons, like you said, for why 66% more cores cost 240% more dollars.
SlyNine - Tuesday, July 26, 2016 - link
Yes, lack of competition plays no role here. It's all just innocent ol' Nvidia, and Intel just can't make 'em better.
Flunk - Friday, July 22, 2016 - link
Nvidia only builds these huge top-end designs so they can sell a $1000+ Titan card. If they need to compete, then they release a cost-reduced version, but the intent was never to release this at a lower price point.
Yojimbo - Friday, July 22, 2016 - link
What you have to understand is that this card has uses beyond gaming. NVIDIA needs to be careful not to incentivize the deep learning community to spend their time and money trying to figure out ways to do their learning tasks with these far cheaper cards rather than spend money on the more expensive Tesla cards. The $1200 price tag is meant to give more reason for people to buy their $5000 Tesla cards with more memory instead of trying to cobble together something more exotic with the 12 GB Titan.
fanofanand - Friday, July 22, 2016 - link
The 8800GTX was $600 in 2006, which would be about $720 in today's dollars. Clearly Nvidia is inflating the prices of their top-end cards, which is contrary to the established idea that tech continually gets better AND cheaper. This would imply that Nvidia is certainly taking advantage of their market position and their ability to set pricing. That's what companies do: increase prices when they can. All that said, that would mean that the 1080, which is more of a direct successor to the 8800GTX, is actually cheaper adjusted for inflation than the 8800GTX was. So maybe Nvidia isn't quite as bad as you think; they just introduced an additional tier of cards? I am not so sure it's as simple as "Nvidia is gouging, what a rotten awful company!"
PS I rock an HD 5850, and had a 512MB 8800 GTS, so I am brand agnostic; I buy the best value.
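(Quick sanity check on that inflation figure; the CPI-U annual averages below are rough assumed values, not official numbers to the decimal.)

```python
# Rough CPI adjustment: price_then * (CPI_now / CPI_then)
cpi_2006 = 201.6   # approximate US CPI-U annual average, 2006 (assumed)
cpi_2016 = 240.0   # approximate US CPI-U annual average, 2016 (assumed)
adjusted = 600.0 * cpi_2016 / cpi_2006
print(f"$600 in 2006 is roughly ${adjusted:.0f} in 2016 dollars")
```

That lands in the low $700s, consistent with the "about $720" ballpark above.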
D. Lister - Friday, July 22, 2016 - link
"I wouldn't think twice about buying a $1200 video card, but that's not what the Titan X is; it's an $800 video card"
10 years from now, you could probably buy the same performance level for ~$200. I think you should hold on to your money.
Yojimbo - Friday, July 22, 2016 - link
I hope in 10 years we're getting much better than this performance for $200. Hopefully we'll have at least 25 TFLOPS for $200 by then, unless inflation becomes an issue between now and then.
D. Lister - Friday, July 22, 2016 - link
Aye, lol, 10 years was a bit of a stretch.
Murloc - Saturday, July 23, 2016 - link
there is no market failure here, they're not actively locking out AMD or anybody else. The GPU market has high entry barriers, and that's an issue, but it's not Nvidia's decision to make it that way. They're pursuing legitimate objectives with legitimate means.
It's not a monopoly, with integrated Intel GPUs, Nvidia, and AMD all present on it.
SlyNine - Tuesday, July 26, 2016 - link
$600 for 25-30% more performance. Yeah, that's a hard pill to swallow.
godrilla - Friday, July 22, 2016 - link
They need something to offset the low yields with big chips, hence the price point. Even GP104 is MIA; when yields improve, then comes the 1080 Ti, as with previous generations!
SunnyNW - Friday, July 22, 2016 - link
Dude, welcome to the free market and business 101. Why should Nvidia leave money on the table? I have most recently purchased AMD cards, but you have to give it to Nvidia: they are firing on all GPU cylinders.
Nvidia could be doing FAR worse when it comes to performance/$, considering AMD has absolutely NO answer for high performance, at least not for quite a long while, and honestly the outlook is not so good. You could still be paying top dollar for Fury X performance right now if it wasn't for Nvidia.
Flunk - Friday, July 22, 2016 - link
I think your assumption that they're going to release a 1080 TI is pretty premature. If AMD doesn't release anything competitive I don't see them doing that. It will be $1200 Titan or $599 GTX 1080.
Impulses - Friday, July 22, 2016 - link
Ehh, they'll release it just because at some point the market for people willing to shell out well over a grand will be tapped, but the gap between those two is still sizeable, and plenty of people would see a new 1080 Ti at $700 or even $800 as a good deal at that point... Even if it's six months down the line. Heck, if I was in the market for a new card by then (and I likely will be) I wouldn't pass it up if it's better enough than the 1080.
Eidigean - Friday, July 22, 2016 - link
I don't think you know what a Ti card really is. The Titan X most likely has 3840 cores with 256 cores disabled due to manufacturing defects. The chips that have no defects (all cores functional) are not sold as Titan Xs, but saved for later. When the yields improve enough that many 3840-core chips have been produced, they'll come out with a Ti card. It's never a day-1 thing.
Eidigean - Friday, July 22, 2016 - link
It also works in the other direction. They'll also hold onto chips with more than 256 cores defective, which cannot be sold as a Titan X. Once they have enough chips that have, say, 512 cores disabled, a Ti card comes out at a discount.
haukionkannel - Tuesday, July 26, 2016 - link
Yep... they wait until there's a huge pile of defective GP102 chips that are no good for the Titan X. Then they choose how much to cut from those chips so they can use as many of those junk chips as possible and sell them as 1080 Ti cards. They'll most probably also use GDDR5 in the 1080 Ti to save more money. The price will be somewhere between $850-1050, depending on how much Vega costs and how fast or slow it is compared to this. That means the 1080 Ti will come out sometime in Q1 2017 after Vega has been released, or even before if they already know its speed and price in advance.
Jumangi - Friday, July 22, 2016 - link
Oh get over it. This is nothing more than the video card equivalent of a million dollar "Supercar". It's purely for show and the tiny few who don't care at all about value and have a bunch of money burning a hole in their pocket. There's plenty of great options for gamers that have been released already by Nvidia and AMD and more to come.
ImSpartacus - Friday, July 22, 2016 - link
Why does AMD need to beat Nvidia in the halo GPU market? Who actually cares about those cards?
I'm not spending $500 on a graphics card, so I don't really care who is "winning" in that segment.
SlyNine - Tuesday, July 26, 2016 - link
With good competition this might not be the $500 market.
Murloc - Saturday, July 23, 2016 - link
nvidia is a private company that has to make money for its investors, what's your problem with that?
They're not ripping off anybody, you don't have to buy it if you don't think it's worth your money.
Other people will, that's why they're doing this without a care for your opinion.
SlyNine - Tuesday, July 26, 2016 - link
I don't think anyone is condemning Nvidia here. It's just a shame that better competition doesn't exist.
Morawka - Monday, July 25, 2016 - link
Intel won't be able to make us jump through hoops for too much longer. When Zen hits and gives us 8 cores with Haswell-level performance, Intel will have to compete and start offering 8 Skylake cores at under $350.
ACE76 - Monday, July 25, 2016 - link
Don't count on it... Zen won't beat Skylake or Kaby Lake... although it would really be nice to see AMD at least put up a fight...
nevcairiel - Monday, July 25, 2016 - link
You are only ripping yourself off when you buy something you don't really want - and then whine about it.
The Titan has always been very much a niche card, and even more so this time around as they are strongly re-targeting it towards compute tasks instead of gaming. It was never a good economy decision to buy it.
So in short, you think the Titan is too expensive? Just don't buy it.
TheJian - Tuesday, July 26, 2016 - link
If the price sucks it will be left on shelves until they lower it. But since it IS NOT a gaming card (though it's still probably the fastest out there for that task too), they will fly off the shelves vs. M6000s that run $4000-5000. PROSUMER is who this is aimed at. They are wisely expanding their market to poorer people who'd like to get into content creation (games, vids etc) but can't afford it at $5k. You don't seem to understand the article. It says point blank that it's a prosumer card; heck, they even removed the GTX from it...LOL. Nobody can rip you off; they do not force you to buy anything. If the price is too high, the market will let them know and the price will come down. It's that simple.
Get a better job. I could easily afford this and I'm not rich by any standard. It's a tough pill as a pure gamer, but as I move on to other things (pro stuff) it's a massive discount vs. Quadro. If these cards didn't exist, an indie dev etc would be forced to pay $5000 even if they had no need for support, ECC etc. No thanks. Thank you Nvidia, please ignore the ignorant and keep releasing prosumer cards! It sounds like the card for you will come a few months from now for $700 or so.
BTW, what they are doing is cherry-picking chips for the 1080 Ti right now for its launch. Again, you don't seem to understand how this stuff works. The 1080 Ti will likely be a salvaged or cherry-picked Titan die with higher clocks and maybe a bad part disabled. The higher clocks will make it faster in either case, and save dies. It's quite possible yields will be good enough to not disable anything other than the deep learning crap and keep the gaming side fully enabled while jacking up clocks. I guess their strategy depends on Vega's perf.
"NVIDIA this morning has stated that the primary market is FP32 and INT8 compute, not gaming."
Did you miss that part? So content creation and int8 work. But sure it will game too, but why would you buy it? Wait a few months if you're a gamer only. AMD will have to release more than a single card, as NV has multiple market segments here. PRO, Prosumer, & gamer. Everyone gets the best of the best this way for their tasks. Price would go up if they tried to do it all in ONE solution. You also seem unaware of the fact that NV has taken nearly a decade to get back to 2007 profits. They aren't ripping anyone off. If you take out Intel's payment they still would be struggling to match 2007...LOL. Jeez, read some quarterly reports and balance sheets.
Drumsticks - Friday, July 22, 2016 - link
I'm feeling a mixed combination of 1) Holy cow, Nvidia is killing it with their Pascal timeline, 2) Is anybody else wondering how much fun it's going to be to try and buy the 5 of these they can make every week (jkjk), and 3) These prices. I know the best performance comes at a premium, but we could really use an AMD spoiler right about now. With the amount of development time it's going to get, Vega 10 needs to be notably better than GP104, and Vega 11 better be able to compete with the Titan X and get ready for a fully enabled GP102 (presumably with HBM?). Nvidia is absolutely phenomenal at making quality GPUs, but it's also highlighting how badly we need flagship level competition.
Impulses - Friday, July 22, 2016 - link
Agreed, NV's timing is probably putting a ton of pressure on AMD's timeline. I'd like to upgrade my 2x R9 290 already, but until I see AMD's hand and a 1080 Ti I'm not sure I wanna bother.
Mondozai - Friday, July 22, 2016 - link
Full fat Pascal this is not (still GP104). My guess is that they will release the full fat Pascal GPU when AMD releases Vega.
Mondozai - Friday, July 22, 2016 - link
Oops, misread the GP102 part. Still, why is TFLOPS so low? Isn't the compute full fat Pascal close to 16-18 TFLOPS? I'd expect the consumer version to be lower, maybe 14-15, but not 11.
Dobson123 - Friday, July 22, 2016 - link
GP100 officially has only 10.6 TFLOPS SP performance as well. Which is pretty logical considering it has the same number of ALUs and a similar frequency.
MrSpadge - Friday, July 22, 2016 - link
The number you're remembering is at half precision (16 bit).
DanNeely - Friday, July 22, 2016 - link
It's the lower clock rate. 40% more cores, but only 88% of the clock rate, results in only 24% faster overall. I wish nVidia would've bumped the TDP a bit on this one to get the clocks up; the gap between it and the 1080 is narrow enough there's not much room to fit a 1080 Ti in if they wanted to. OTOH at least there's not much reason to wait for one either. I'll be getting a 1080 whenever the price/availability stops being crazy.
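That scaling arithmetic checks out; a quick sketch (the core counts and boost clocks are assumptions pulled from public spec sheets, not from this thread):

```python
# Rough shader-throughput scaling: cores x clock.
# Spec figures below are assumed, not quoted from the article.
gtx1080 = {"cores": 2560, "boost_mhz": 1733}
titan_x = {"cores": 3584, "boost_mhz": 1531}

core_ratio = titan_x["cores"] / gtx1080["cores"]            # 1.40 -> 40% more cores
clock_ratio = titan_x["boost_mhz"] / gtx1080["boost_mhz"]   # ~0.88 of the clock
speedup = core_ratio * clock_ratio                          # ~1.24, i.e. ~24% faster
print(f"{core_ratio:.2f} x {clock_ratio:.2f} = {speedup:.2f}")
```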
Drumsticks - Friday, July 22, 2016 - link
Eh, it's smart. They get three things. It's still faster anyway. They get to say "look! Our fastest GPU ever still only consumes 250W!" And then they get an even better reputation when it turns out this thing can be clocked higher, for how great of an overclocker it is.
Flunk - Friday, July 22, 2016 - link
The "full fat" Pascal is already out. If you want one, buy a Tesla P100. This card, the Titan X, is the top of their gaming line and I don't see them cannibalizing their Tesla line by releasing a GP100 Titan or GeForce.
extide - Saturday, July 23, 2016 - link
Nope, even those are not 'full fat' GP100 -- they have disabled cores too.
Nagorak - Friday, July 22, 2016 - link
I think they are trying to release as quickly as possible to make the most of their lack of competition at the high end. Also, the high sales of the 1080 may have convinced them that the demand is there for something even more pricey.
Agent Smith - Friday, July 22, 2016 - link
I agree, it's just good business decision making, utilising the good position they're in. Really need a leak from AMD to put a boomerang in the works and get some folk to wait for the red team.
zepi - Friday, July 22, 2016 - link
A leak of a GPU that will be released at some distant point (3-4 months in the future, minimum) probably doesn't change anything.
Black Obsidian - Friday, July 22, 2016 - link
It might if it's positioned against a GPU that's "released" now but probably won't actually be in stock for the same 3-4 months, if the stock levels of the 1080 are any indication.
djayjp - Friday, July 22, 2016 - link
Obviously that (or the GTX 1080's) FMA/TFLOPS number is way off.
Dobson123 - Friday, July 22, 2016 - link
No, the Nvidia Titan X only has around 24% more shader performance.
Nagorak - Friday, July 22, 2016 - link
It has more CUDA cores, but its clock speed is lower. Together they combine to only being slightly higher than the 1080. Maybe the yields on the larger chip aren't as good, so they have to set more conservative clock speeds?
Agent Smith - Friday, July 22, 2016 - link
More like it's a prosumer-related card to satisfy gamers and the compute market together. Having higher clocks would damage Tesla sales, so they're striking a balance. Wouldn't be surprised if nVidia locked down those clocks too, to reinforce my point; hence why only nVidia have selling rights. No custom boards allowed!!
Xanavi - Friday, July 22, 2016 - link
You can only go as fast as the slowest transistor, there are more of them, therefore...
willis936 - Saturday, July 23, 2016 - link
That is not how it works at all. If anything the longest signal path is what limits the clock rate. Adding more cores adds more transistors while affecting the longest signal path by a very small amount. There is probably a heat issue with the die size that prevents higher clocks. A low thermal resistance cooler could probably pump a GP102 chip to 2 GHz. We'll see.
DanNeely - Friday, July 22, 2016 - link
38% more TDP, 66% more transistors; it has to run at a lower power per transistor (probably slightly lower max voltage as well as the lower clock speeds). What puzzles me is why they left the TDP sticky at 250W; going to 275W would give enough TDP headroom to leave the clocks alone.
Yojimbo - Sunday, July 24, 2016 - link
The reason for the 250W cap might be more for the deep learning market than the gaming market. I've read something that stated Baidu were already leaving their cabinets half-full when using the Maxwell Titan X for deep learning because it was too expensive to cool them otherwise.
poohbear - Friday, July 22, 2016 - link
So they named it the same as the previous gen Titan X? I mean, really? They couldn't think of an easier way to confuse us?
TallestJon96 - Friday, July 22, 2016 - link
Well, true 4K60 PC gaming begins August 2nd, I guess.
Eden-K121D - Friday, July 22, 2016 - link
No. Unless overclocked, this still won't hit 4K 60 fps at ultra in some games.
Kvaern1 - Friday, July 22, 2016 - link
I can tell you this much, it will be a challenge for it to sustain 50fps in World of Warcraft maxed out @3440*1440.
beck2050 - Friday, July 22, 2016 - link
Exactly. This is going to be close to 1080 SLI on a single chip. That is phenomenal.
Beararam - Friday, July 22, 2016 - link
The Nvidia Nvidia Titan X. Rolls right off the tongue.
Ian Cutress - Friday, July 22, 2016 - link
Ferrari The Ferrari doesn't sound any better.
Impulses - Friday, July 22, 2016 - link
They're both dumb; the first few times I heard of the latter I was genuinely confused... I guess this speaks to the market NV is going for tho. :p
fanofanand - Friday, July 22, 2016 - link
Just like the Apple Apple TV. Naming schemes have gone completely absurd.
Yojimbo - Sunday, July 24, 2016 - link
We're not computers. We don't need to take field entries so literally.
chuttney1 - Friday, July 22, 2016 - link
I would not be surprised if Nvidia decides to scrap the 1080 Ti name in favor of a Titan X Ti model. But it's gonna be expensive.
Midwayman - Friday, July 22, 2016 - link
Nvidia is really pushing the limits with their pricing this generation...
ikjadoon - Friday, July 22, 2016 - link
Not really. There was already a huge market of people buying GTX 1080s for $900 to $1000. It's simple supply/demand. If nobody bought $1000 GPUs, I doubt NVIDIA would make them. :P
haukionkannel - Tuesday, July 26, 2016 - link
Yes. $900+ 1080s are selling so well that the next generation 1180 can sell at $999, the 1170 at $700, and the 1160 will take the $500 slot! That is progress, my fellow gamers! Have to wonder, though, how much higher than $1500 the next generation Titan X could go until people think that it is too much... Most probably $1500 is just OK. It will still sell well enough.
Flunk - Friday, July 22, 2016 - link
You sell what the market will bear.
xrror - Friday, July 22, 2016 - link
Well, at least we know where a 1080 Ti will come from, if nVidia feels it's needed.
xrror - Friday, July 22, 2016 - link
This might actually be nVidia separating Titan as its own brand. nVidia loves brands - GeForce: GTX and GT, Tesla, Quadro, GRID, Tegra, CUDA, 3D Vision, etc. For some real brand fun check this out: (PDF warning) http://international.download.nvidia.com/partnerfo...
Nagorak - Friday, July 22, 2016 - link
This doesn't look like it's enough faster than the 1080 to allow for a 1080 Ti in between the two.
maximumGPU - Friday, July 22, 2016 - link
The 1080 Ti won't exactly be in between; it would be so close to the Titan as to be almost indistinguishable.
Spunjji - Friday, July 22, 2016 - link
That's bang on to how it's worked previously. They'll restrict compute performance slightly and shave a suitably large-looking chunk of money off to make it look like a "bargain", but not until they have made sure the early adopters have had their fill of overpriced madness.
Kvaern1 - Friday, July 22, 2016 - link
Or possibly faster for gaming. If Titan is no longer part of the GeForce product line, the 1080 Ti doesn't have to slot in below it in the gaming hierarchy.
Flunk - Friday, July 22, 2016 - link
I expect there won't be a "1080 TI".
Chaitanya - Friday, July 22, 2016 - link
1200 fucking dollars and still no HBM2. Also, wow, another price hike over the previous generation; sure seems like there are a lot of idiots purchasing these overpriced GPUs.
jimbo2779 - Friday, July 22, 2016 - link
Or people with enough disposable income that want the absolute best available. I couldn't justify the cost, but to say people are idiots for buying something they see value in is not right at all.
halcyon - Friday, July 22, 2016 - link
Yes. No HBM2, which should theoretically give us over 100% more bandwidth compared to the 384-bit GDDR5X in the Titan X. That's a big disappointment for some with bandwidth-heavy workloads.
I understand the price. This is a premium product, not meant for children's overclocking machines.
Also, market positioning states that they can gouge the price and they will.
In a year, prices will have come down - hopefully - when full AMD Vega has shipped.
I'd love to have a pair of Titan X's in the interim and have the cash at hand, but have no real reason to spend. I don't need that amount of computing power. It'd be a pair of very expensive space heaters...
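The ">100% more bandwidth" claim above roughly pencils out; a sketch, where the bus width, data rate, and per-stack HBM2 figures are assumptions from public specs rather than anything stated in this thread:

```python
def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a GDDR-style memory bus."""
    return bus_bits * rate_gbps / 8

# Titan X (Pascal), assumed: 384-bit bus @ 10 Gbps GDDR5X -> 480 GB/s
gddr5x = bandwidth_gbs(384, 10.0)
# Hypothetical HBM2 config, assumed: 4 stacks @ 256 GB/s per stack
hbm2 = 4 * 256.0
print(gddr5x, hbm2, hbm2 / gddr5x)  # HBM2 would be a bit over 2x, i.e. >100% more
```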
fanofanand - Friday, July 22, 2016 - link
If they put HBM2 in this thing, what would they do for the Titan X Black Giga Edition?
p1esk - Friday, July 22, 2016 - link
I've never heard of the INT8 format for neural networks. Total news to me. More important question - what is FP16 performance? Is it as crippled as the 1080, or does it have the same FP16 cores as the big chip?
Ryan Smith - Friday, July 22, 2016 - link
"Is it as crippled as 1080, or does it have the same FP16 cores as the big chip?"
I would hope to know for sure in time for the card's launch. But I suspect that it's going to be like GP104.
Kvaern1 - Friday, July 22, 2016 - link
I bet you it will be able to do something GP104 can't in that regard; I'd think that's why it's no longer a GeForce branded card.
guachi - Friday, July 22, 2016 - link
What a joke of a card this is. The existence of this card only makes nVidia look worse as a company and a bunch of rip off artists.
Ranger1065 - Friday, July 22, 2016 - link
The word "Nvidia" has long been synonymous with rapacious greed. (Yes, I'm aware they aren't the only ones...) It's gratifying to note the release of this card heralds increasing public awareness of this fact.
This seems like an appropriate moment to FULLY ENDORSE the sentiments of Linus Torvalds when he announced,
"Nvidia, Fuck you!"
I would like to add, the same goes for Jen-Hsun Huang.
On a more positive note it was good to see Anandtech quick on the draw with this article.
Michael Bay - Friday, July 22, 2016 - link
You guys always serve the most delicious butthurt.
HighTech4US - Friday, July 22, 2016 - link
To each other - Eew
PJ_ - Friday, July 22, 2016 - link
I'm kinda disappointed in the lack of HBM2 for a card with that price tag. Though I'm not surprised about the name, since they also went with GTX 1080, which people thought they wouldn't do.
And I also expected at least a 1.5 GHz core clock, and I also wanted closer to 4000 CUDA cores.
But I might still buy this later on this year (Big upgrade from my 750Ti)
Spunjji - Friday, July 22, 2016 - link
Later this year there will be better options (namely the Ti version of this, hopefully brought to sane pricing by something maaaybe approaching real competition from AMD).
vladx - Friday, July 22, 2016 - link
No way will Nvidia release a 1080 Ti this year. And unless AMD pulls a miracle, I wager there won't be one at all.
Jon Tseng - Friday, July 22, 2016 - link
> These products are based on very different GPUs, but I bring this up because
> Tesla P100 did not use a fully enabled GP100 GPU
I'd be stunned if P100 and P102 aren't the same die. The tape-out costs for 610mm² of 16nm GPU just don't make sense to do P102 as a different chip.
Obviously they may fuse of bits or muck around with the memory interface to differentiate. But I'm 99% sure it's the same chip.
Ryan Smith - Friday, July 22, 2016 - link
GP100 has a 15.3B transistor count. GP102 has a 12B transistor count. Coupled with the fact that they use different memory interfaces, they are without a doubt different GPUs.
Jon Tseng - Friday, July 22, 2016 - link
Ha, I stand corrected. Interesting. Sounds like they are putting more investment into the compute side than I thought. Taping out two separate SKUs for high end compute is a not inconsiderable investment (I guess much of the design would be similar, but you still need to pay for a whole new set of masks?)
Yojimbo - Friday, July 22, 2016 - link
They are definitely diverging their compute business from their GeForce business more and more. Their compute efforts have grown to the point that a large number of the improvements made to Pascal seem geared towards compute workloads, and they are willing to come out with what is likely a compute-only GPU (GP100). They see the potential for large growth in their data center business.
MrSpadge - Friday, July 22, 2016 - link
They'll sell many of these as Quadros and Teslas for people not needing FP64. Not sure a separate chip wouldn't make sense. And the fundamental building blocks of these GPUs are so similar, nVidia might be able to reduce the tape-out cost of additional chips by using clever methods.
Samus - Friday, July 22, 2016 - link
Anybody else notice the correlation between NVidia's recent price hikes for every product over the previous generation, and the utter lack of competition from AMD/ATI? We are all screwed if AMD doesn't pull off a miracle with Polaris. Not only will we be paying $400 for mainstream gaming cards, but there will inevitably be a generational slowdown in performance for lower-cost cards. It doesn't appear that way because, since Maxwell, NVidia has been crushing GCN in just about every category except perhaps performance/dollar, but that's irrelevant because the only tier where AMD is dollar-competitive is entry-level.
HrD - Friday, July 22, 2016 - link
"We are all screwed if AMD doesn't pull off a miracle with Polaris."
*Vega
shabby - Friday, July 22, 2016 - link
And after Vega comes out we'll be saying just wait till Vega 2 comes out, that'll teach Nvidia. Also wait till Zen comes out, that'll teach Intel. This "wait till the next gen comes out" saying is getting pretty sad for AMD fans.
Impulses - Friday, July 22, 2016 - link
It's pretty sad for fans of good value, period. I've had way more NV cards than AMD cards (starting with a Riva 128), tho I've gone AMD with my last two upgrades (CF 6950 & CF 290)... It definitely feels like I'll be paying more than ever next time to maintain the same relative level of performance.
This generation looked like it should be the first to achieve solid 4K performance on a single card, but at these prices and with AMD fledgling...
Nagorak - Friday, July 22, 2016 - link
People tend to exaggerate Nvidia's advantage, IMO. Last gen AMD cards (290/390X) were on par with the performance of Nvidia's offerings; they just used more power. They even managed to just rebrand existing cards and stay performance-competitive. Fury X was a bit of a disappointment compared to the 980 Ti, but if the current DX12/Vulkan games are any indication, Fury X (and Fury) might ironically turn out to be the better buy in the end.
Samus - Saturday, July 23, 2016 - link
So basically you admit:
A) AMD cards use more power
B) They have a disappointing high-end proposition
and I'll add...
C) Their drivers, even with the wash and rinse of Catalyst, are buggier and more problematic than NVidia's.
So for the same price for the same performance, why would I go with AMD over NVidia when there are still cons?
NVidia knows this, which is why they are beginning to charge more.
extide - Saturday, July 23, 2016 - link
It's not wait till the next gen comes out -- it's wait till the high end stuff of this gen comes out. MAJOR difference.
Spunjji - Friday, July 22, 2016 - link
You have a weird definition of "entry level" but otherwise I'm with you here.
Yojimbo - Friday, July 22, 2016 - link
I think NVIDIA's 10 Series pricing has more to do with the cost of the 16nm process than anything else. Go back to the 700 series and you'll see prices even higher than for the 10 series equivalents. I think the new Titan X pricing has to do with the potential for the card to cut into the market of their $5000 Tesla cards. AMD has never really had any competition for the Titan cards, have they?
r3loaded - Friday, July 22, 2016 - link
No HBM2? At $1200 I was expecting memory bandwidth to knock the socks off the Fury X, especially if they're positioning it as a prosumer card suitable for neural net applications.
damianrobertjones - Friday, July 22, 2016 - link
Please, please, PRETTY PLEASE, do not buy this card. The price of these things is getting out of hand and, unless you and I STOP buying the top end card, that price will carry on. They're basically shafting each of us.
Yojimbo - Friday, July 22, 2016 - link
You're being silly. NVIDIA prices their products at what people are willing to pay (and in this case they don't want to go too low, to prevent the card from undermining their Tesla line), just as any other company does. Do you try the same thing with all your products? "PLEASE PLEASE stop buying gasoline until they put the price down by 30 cents a gallon!" I'm sure they'll come out with a 1080 Ti eventually that'll be quite a bit cheaper, but possibly with less RAM and/or some compute-oriented features disabled.
metayoshi - Friday, July 22, 2016 - link
While I agree with what you are saying, your analogy is completely wrong. While gasoline is a much-needed commodity, a high end GPU is not. Telling people to stop buying gasoline is like telling people to stop getting electricity to their house because electric bills are too high. Everyone (who has a car) needs gas, and everyone needs electricity in this day and age. No one absolutely needs a high end GPU, or any GPU for that matter. While I try to stay away from car analogies, the better analogy is asking people to stop buying BMW's i8 because their highest end car is too much money. People will buy the Titan X just as much as people will buy an i8, just like how people will buy a 1080 like people buy a BMW 7 Series car, or how people will buy a 1070 or 1060 just as much as people will buy a BMW 5 Series car. Or at the bottom end, you can just buy a used Honda Accord or Toyota Corolla, just like how some people can live with the iGPU in the Intel/AMD CPUs in their crappy sub-$400 laptop.
Yojimbo - Friday, July 22, 2016 - link
Yeah, I was aware of that, but I don't think it really matters much. Besides, although gasoline, unlike the Titan X, is necessary for many things in the economy that rely on it, it doesn't have any direct replacement available in the market, much like the Titan X. The BMW i8 does have direct competition on the market. In that way gasoline is a better analogy than the i8. In any case, pick any product and the request is just as silly, regardless of whether there are replacements or how vital the product is to people's lives.
vladx - Friday, July 22, 2016 - link
Sorry, but I think I'll be buying two instead of one. Nvidia, shut up and take my money!!!
Amandtec - Friday, July 22, 2016 - link
Nice. Black like a black Amex. If I owned one I would go to all the chicks and say "Hey Baby. Want to see my big black Titan?". Of course, if she has a boyfriend in the football team that is where things get interesting...
Agent Smith - Friday, July 22, 2016 - link
I'm still laughing at a post I read whereby some nVidiot (like what I did there) actually bought four Titan Xs at over a grand each for some massive SLI rig that doesn't even scale well in games. A week later the Titan-killer 1080 came out. Mutha ha ha, ...makes me laugh so much.
Yojimbo - Sunday, July 24, 2016 - link
And that guy probably didn't give a damn, because he has money falling out of his pockets.
beck2050 - Friday, July 22, 2016 - link
What crushing domination of the high end. One percenters are drooling already. They'll sell every one they make.
EasterEEL - Friday, July 22, 2016 - link
Impressive as these cards are, performance at all costs is resulting in crazy prices. To bastardise Oscar Wilde... "Nvidia users who buy this card know the price of everything and the value of nothing". Nvidia are effectively killing AMD by matching performance at each price point (no benefit to the consumers thirsting for high end) and shafting the high end performance consumers (whom AMD can't provide for).
Eden-K121D - Friday, July 22, 2016 - link
"Nvidia users who buy this card know the price of everything and the value of nothing"
On point.
halcyon - Friday, July 22, 2016 - link
Indeed. 2016 (so far) didn't turn out to be the price-competitive HBM2 upgrade love-fest everybody was hoping for. But at least we got a lot faster cards. Prices will come down.
If one has no use for it, one can always postpone purchase.
I can remember the number of $5000 devices I've bought that turned to worthless crapola in a couple of years, and I practically had to pay to get rid of them.
For most, unless this level of performance is required, it's just better to wait another 6-12 mo.
guskline - Friday, July 22, 2016 - link
Nvidia is NOT a charity. Unlike the government, which taxes you and forces you to pay, Nvidia doesn't "force" you to buy their products. If you feel this product is overpriced, don't buy it. Titans (thus their name) have always been at the top of the price food chain for Nvidia, so the price is not a shock to me.
Eden-K121D - Friday, July 22, 2016 - link
.
TristanSDX - Friday, July 22, 2016 - link
The Titan is 30% faster and 100% pricier than the 1080; a poor offer. 1070 SLI will be much faster and almost half the price of the new Titan, especially since SLI scales pretty well at high resolutions like 2560 or 3840.
sna1970 - Friday, July 22, 2016 - link
Not really... you should wait for the 1080 Ti, which will be priced around $800. The Titan is not a gaming card.
Spectrophobic - Friday, July 22, 2016 - link
"The Titan is not a gaming card"
Hogwash.
That statement stopped being true with the Maxwell Titan X and I highly doubt this Pascal Titan X is a FP64 card.
Adm_SkyWalker - Friday, July 22, 2016 - link
This was unexpected on multiple levels. 6-9 months early, no GeForce label, no 3rd party manufacturing, direct distribution only.
TemjinGold - Friday, July 22, 2016 - link
I wonder if that last part is to make the market believe that FE cards are better than non-FE cards, since effectively there is no non-FE Titan.
Impulses - Friday, July 22, 2016 - link
Or to maintain a firmer grasp over stock and pricing while they still can and the demand is there...
fanofanand - Friday, July 22, 2016 - link
And to keep the profits on their highest margin product all to themselves.
Railgun - Friday, July 22, 2016 - link
I wonder how many folks have bought multiple cards over the last 3-4 years? Kepler, Maxwell, now Pascal... In my case, I've bought the equivalent of all of these with two OG Titans for basically the same price, and have only needed to do it once. Now I will do it again, and it will probably last me another several years with no need to immediately jump to Volta.
You guys have no idea how to run a business, let alone understand how the market works. Nvidia are cunts? Great language from children who don't know their asses from their elbows.
You're not going to buy it. You don't need it. You don't even really want it. Leave the adults who know what they're doing to their own devices and we'll sell you our second-hand stuff in a couple of years. Who cares what they sell it for. If you're happy with the 1080 price, then what's the point of crapping over here?
halcyon - Friday, July 22, 2016 - link
Well said.
HighTech4US - Friday, July 22, 2016 - link
T H I S
MarkieGcolor - Monday, July 25, 2016 - link
It works out about the same either way. You have the top end for a year at most and enjoy it, then watch its value and relative performance drop in half each year. On the other hand you could just always be midrange. Either way it is the same amount of money, and that is the way it works because of the way Nvidia charges.
Rock1m1 - Friday, July 22, 2016 - link
Due to the lack of fanfare, I am guessing even Nvidia does not shy away from the fact that the target audience is very small for this card. Also, shouldn't they have called it Titan X 2016, or Titan X2?
JamesAnthony - Friday, July 22, 2016 - link
While this card is hugely faster than the original Titan at lots of things, I'm guessing that it's going to have the same 1/32 compute performance. So I guess for the actual jobs that need it, I'd be better off keeping my original Titan that is 1/3, possibly seeing how to run that as a secondary compute card, and putting in a 1080, or waiting for a 1080 Ti for the gaming part.
I wish Nvidia would just simply stop playing all these games and let us have at least 1 card that can actually do everything, even if it is around the $3k to $4k mark, just stop screwing around with us and let the highest end Quadro cards do full gaming as well as full compute, it's not like their drivers couldn't have different profile modes.. but no they think you will buy an M6000, a K80 and a 1080... nope... but I bet if the highest end Quadro cards did it all, they would actually get more money overall.
Yojimbo - Friday, July 22, 2016 - link
Being that it only has 12 GB of RAM and that they are pricing it at $1200, I think it could have FP16x2 units. The Tesla M40 is available with 24 GB of RAM and the successor to the M40 should have at least that amount. That allows a decent amount of market differentiation, I think. The question is, does it have FP16x2? In terms of deep learning, they seem to be pushing this as an inference card, judging by the 44 TOPS INT8 spec they list. The M40 is marketed as a training card, and for that they would want FP16x2. Do they plan on using a GP100 chip in a card to replace the M40? If not, then GP102 should have FP16x2, unless they don't plan on giving that market segment as large of a speed boost as they are able to. I wish I knew how many transistors the ROPs and the register files use. Then it would be possible to tell if GP102 likely has DP cores on it, but I'm guessing it doesn't. It has the same number of SP cores as the cut-down GP100 in the P100 but uses 3.3 billion fewer transistors (it may use a fully enabled die, however, which the GP100 in the P100 doesn't). I'm assuming the GP100 does not have ROPs but has double the register files of the GP102.
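The 44 TOPS INT8 figure cited above is consistent with a 4-wide dot-product-with-accumulate instruction (dp4a-style); a back-of-envelope sketch, where the core count and boost clock are assumptions from public specs rather than anything in this thread:

```python
# dp4a-style op: a 4-element INT8 dot product with accumulate per core per
# clock = 4 multiplies + 4 adds = 8 integer ops. (Assumed mechanism.)
cores = 3584              # assumed CUDA core count
boost_hz = 1.531e9        # assumed boost clock
ops_per_core_clock = 8

tops = cores * boost_hz * ops_per_core_clock / 1e12
print(round(tops, 1))     # lands close to the quoted 44 TOPS
```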
Jackie60 - Friday, July 22, 2016 - link
Nvidia are greedy fuckers but AMD are useless arseholes; therefore we get this pricing. It's annoying, but it's what you get from a capitalist system with little competition. Looking forward to the 1080 Ti though. I suspect Nvidia are trying to hoover up sales before AMD drop big Vega, which may be sooner than we think.
D. Lister - Friday, July 22, 2016 - link
Woah, the sodium content is unhealthily high in this comment section. It is like I wandered into a bloody salt mine. :D
Where in the woodwork were all these frugal consumers hiding when the "Pro Duo" was announced for $1500 a little while back (April 26th '16, here at AT)? A dual-GPU card, with 4GB usable VRAM, a 350W TDP, and a performance of <85%* (i.e., when Crossfire is working, otherwise <50%) of this single-GPU, 12GB VRAM, 250W TDP product.
Wait, they weren't ALL hiding. Some of them were justifying the price tag by calling it a "content creation" card... one, of course, with only 4GB VRAM. But hey, it is HBM so that's kewl, right? <hyperbole alert> And in a glorious future, DX12 will make it a full 8GB, and Vulkan will make it go faster than a rocket on crack, and 200 years from now it will be beating God himself in the AoTS bench, 'cuz AMD stuff gets better with time (duh). 1000 years from now, all AMD GPUs will unite to form a whole new God that all will fear, and all images of Jesus will have him look like Raja Koduri wearing a "Gaming Evolved(tm)" t-shirt. While only a few years from now, this Nvidia GPU will be obsolescence-d into the very depths of hell, and will probably jump out of your system eventually and rape your dog, and so on and so forth. Yeah sure, keep at it guys - don't ever let facts slow you down :P. If only the actual employees of AMD were as dedicated to the brand as you are, the company would be unstoppable.
* - estimated values based on specs, though I wouldn't be off by more than +/-5%.
roc1 - Friday, July 22, 2016 - link
Hilarious. And true :-)
K_Space - Friday, July 22, 2016 - link
Does the Pro Duo still sell? Even the red fans thought it was too expensive and outright dumb to buy for gaming, given the ill timing (if we are reading the same comments section). I sport a 2x 295 and no way would I recommend it to anyone given how 16nm was just around the corner. Daniel himself called it out for commanding a $500 premium over 2x Nano (although I haven't seen the same re this vs a very sensible 2x 1070). I agree with the sentiment regarding the rabid fanboyism here. When I first visited the site in 2006 the comment section was much smaller but had as useful content as the article, and I learnt a lot. I guess it's the price we all pay for a wider demographic. Thanks to all the sparks that keep pumping some sense and value into the comment section.
Can anyone point me in the direction of a useful article detailing nVidia core naming? I find GP100, 104, 106 all confusing, and Ryan's roadmap article (http://www.anandtech.com/show/7900/nvidia-updates-... didn't really get into it at all?
K_Space - Friday, July 22, 2016 - link
Please ignore, I'm reminded that Ryan mentioned in the latest 1080 review that he'll be detailing it in a separate article. Looking forward to it.
K_Space - Friday, July 22, 2016 - link
In fact totally ignore, the section on FP16 Throughput on GP104 has completely answered my questions. Thanks Ryan
MarkieGcolor - Monday, July 25, 2016 - link
I've had Nano crossfire for a year. It cost me $1000. It still beats a 1080 in Time Spy. Not a bad investment for me IMO. The Pro Duo was way too late
Morg72 - Saturday, July 23, 2016 - link
So what happened to HBM2? Did nVidia give up on waiting for it and go with next gen? I'd think that these high-end cards would be the ones that could actually take advantage of the bandwidth it offers.
As I understand it, AMD's Vega GPUs will include HBM2, and that is a big part of why they won't be out until later this year or early next.
Ryan Smith - Saturday, July 23, 2016 - link
HBM2 is in use on GP100, which in turn is used for the Tesla P100.
Morg72 - Saturday, July 23, 2016 - link
I find it odd that they didn't see fit to include it on a $1,200 card. Makes me think that either they are rushing this to market so they can get the 1080 Ti out whenever AMD's Vega arrives, like they did last year, or it's not offering as much of a performance advantage as hyped...at least not for their GPUs.
TheinsanegamerN - Monday, July 25, 2016 - link
I'd go with option 2 there. Pascal doesn't need as much memory bandwidth as Maxwell, the 1080 already outperforms the (Maxwell) Titan X, and the new Titan will have a 50% wider bus to play with. If the new Titan is only 50% faster than a 1080 (being charitable here) then it would have no need of pricey, difficult-to-work-with HBM2, as opposed to readily available GDDR5X.
haukionkannel - Tuesday, July 26, 2016 - link
It also allows them to use normal GDDR5 in the 1080 Ti instead of GDDR5X, to reduce cost.
webdoctors - Saturday, July 23, 2016 - link
Obama and the Feds have been printing money so long that inflation is finally starting to hit us in our wallets. Video cards that used to be $800 are now $1200. This is what happens when interest rates are kept at 0% for 10 years so the left can guilt us into emptying our bank accounts so everyone can live the American dream, albeit only until the dollar comes crashing down.
Don't blame the businesses that are just having to deal with the BS that zerohedge has been exposing for years. If we were using the gold standard, this card would only be $500 and the 1060 GTX would be $99. Imagine 980 GTX performance for $99!
Just something to consider.
trackgoon - Saturday, July 23, 2016 - link
You have a funny way of looking at money. Who cares if it's 99 or 999, it's the same value. Inflation isn't emotional.
cocochanel - Saturday, July 23, 2016 - link
Obama and the Feds had and still have no choice. If they stop printing, the US will go into a deep recession. Who wants that?
There is more to it. Even the mighty US has a tough time competing on a global stage with countries where state capitalism is a religion. Japan and South Korea are prime examples, and China, Brazil and others have been going the same route for some time. Unless the US makes some changes, the situation will only get worse.
stardude82 - Saturday, July 23, 2016 - link
In your analysis, labor would be priced accordingly as well. Assuming $10/hr after-tax wages now, it would still take you 25 hrs of work to buy your hypothetical $99 card.
webdoctors - Monday, July 25, 2016 - link
That's the issue: salaries haven't changed in 10+ years but the cost of living/goods has gone up significantly. The middle class is being left to fight among themselves while the fat cats get richer through their inflation-immune assets like Trump towers. If ppl were making $100/hr then a $600 or $1200 vid card wouldn't raise any eyebrows. But now you've got the masses in an uproar because they can't buy their precious toy.
stardude82 - Friday, July 29, 2016 - link
Still a lousy analysis; you could go back to 1971 and you couldn't buy a Titan X for all the money in the world.
Of course, in 1971 top marginal tax rates were 87%; that might have had something to do with wealth distribution rather than monetary policy.
TheinsanegamerN - Monday, July 25, 2016 - link
Aaaannd wages would be something like $2/hr, and you would be here bellyaching about how if the feds hadn't forced us to move from the copper standard then the 1060 would only be $9.99.
Murloc - Saturday, July 23, 2016 - link
misleading naming scheme
stardude82 - Saturday, July 23, 2016 - link
Who remembers the $992 AMD FX-9590 that couldn't be counted on to reliably beat a $120 i3?
http://www.anandtech.com/show/8316/amds-5-ghz-turb...
sharath.naik - Saturday, July 23, 2016 - link
Nvidia should just get an x86 license and build an APU. Not sure Intel would want that, but given that NVIDIA is better at design than AMD, we might actually get some competition.
TheinsanegamerN - Monday, July 25, 2016 - link
I wouldn't go that far. Remember their custom ARM core that got tangled up in spaghetti code, and generally wasn't as good as the stock 32-bit ARM variant? Not to mention them constantly overpromising and underdelivering on performance and power consumption. Kinda like AMD. Much as I would love an APU with a 750 Ti built into it, it might take some time before nvidia would be competitive.
rolfaalto - Sunday, July 24, 2016 - link
Awesome! They had me at 44 TOPS INT8 compute performance!! FP32 is spectacular as well, and my simulations have no need for FP64. Only question is whether I buy 2 or 4 on August 2nd -- either way, it's still under the cost of a Tesla P100, which isn't available anyway ...
versesuvius - Sunday, July 24, 2016 - link
Right! Scientists don't play games. Games play the scientists.
Yojimbo - Sunday, July 24, 2016 - link
I found a comparison of int8 performance for some older-generation processors:
http://www.ieee-hpec.org/2014/Presentations/126.pd...
Page 4 shows some int8 performance numbers for Kepler, Knights Corner, and Sandy Bridge processors/accelerators. It seems that Kepler performs int8 calculations at a throughput of 1 operation/core/clock cycle. The K40 has 2.5 TOPS int8 (I used the boost clock for comparison, since NVIDIA is using the boost clock for the Pascal Titan X number). The Pascal Titan X seems to perform them at 8 operations/core/clock cycle, resulting in the 44 TOPS int8. Presumably the Maxwell Titan X has 3.3 TOPS int8 unless there was a previous change in int8 performance made for Maxwell that I haven't heard about.
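The arithmetic above is easy to sanity-check. Here's a back-of-the-envelope sketch (the ops/core/clock figures, core counts, and boost clocks are the ones quoted in this thread; `int8_tops` is just an illustrative helper):

```python
# Back-of-the-envelope peak int8 throughput, using the figures quoted above.
# TOPS = ops/core/clock * cores * boost clock (GHz) / 1000

def int8_tops(ops_per_core_per_clock, cores, boost_ghz):
    """Peak int8 throughput in TOPS (tera-operations per second)."""
    return ops_per_core_per_clock * cores * boost_ghz / 1000.0

# Kepler K40: 1 int8 op/core/clock, 2880 cores, 0.875 GHz boost -> ~2.5 TOPS
k40 = int8_tops(1, 2880, 0.875)

# Pascal Titan X: 8 int8 ops/core/clock (via dp4a), 3584 cores, 1.531 GHz boost
titan_x_pascal = int8_tops(8, 3584, 1.531)

print(round(k40, 1), round(titan_x_pascal, 1))  # -> 2.5 43.9
```

This matches the 2.5 TOPS and 44 TOPS (rounded) numbers quoted above.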
rolfaalto - Monday, July 25, 2016 - link
Thanks, helpful summary there. Yes, the architectural change for Pascal results in a mind-blowing increase in these sorts of metrics. I sure hope that FP16 and INT16 are also greatly increased. Also note that the memory bandwidth is about as fast as the Tesla P100 12GB's, and for a fraction of the cost.
Yojimbo - Monday, July 25, 2016 - link
Well, the update to this article confirms minimal FP16x2 support, for compatibility purposes only. Someone on another message board claimed that the ultra-slow FP16 performance on the GTX 1080 is a driver bug, and NVIDIA intends FP16 to run at the same throughput and memory usage as FP32 on that card. Apparently the FP16x2 cores must be targeted with special instructions. So it makes sense for NVIDIA to include a small number on other chips for compatibility purposes, even though running through the FP32 cores makes more sense on such chips.
But I am surprised the GP102 doesn't use FP16x2 cores. I would have thought they'd use that GPU for an M40 successor. It seems they are pushing the P100 for learning. Baidu uses the Maxwell Titan X for training, as far as I know. The Pascal Titan X has significant FP32 performance and memory bandwidth improvements over the Maxwell Titan X. Maybe NVIDIA thinks that improvement is enough. Still makes me wonder about an M40 successor, though.
pygosceles - Thursday, August 4, 2016 - link
Yojimbo, do you have a reference for the claim that throttled FP16 performance is a driver bug? I'd be overjoyed to learn that the GTX 1080 is not artificially crippled in that regard, especially with so many up-and-coming deep learning enthusiasts trying to advance the field with that kind of horsepower. I don't know if it could be modded to access full-speed FP16 without an alteration to the die?
p1esk - Monday, July 25, 2016 - link
INT8 is implemented as a dp4a operation, so it's 4 ops instead of one FP32 op, which means 4x11=44.
It's 8 ops. And floating point FMA is 2 ops. But it's not clear to me why we are talking about it in terms of floating point FMA. Maybe there's a reason to do so because of the hardware implementation. But regardless, it seems true that the GPU performs 8 8-bit integer operations per core per clock cycle under the right conditions. It's (8 ops)*(3584 cores)*(1.531 GHz) = 43.9 TOPS.
"Integer Arithmetic Instructions: dp4a
Description
Four-way byte dot product which is accumulated in 32-bit result. Operand a and b are 32-bit inputs which hold 4 byte inputs in packed form for dot product."
So if a = (a.w, a.x, a.y, a.z) and b = (b.w, b.x, b.y, b.z) then the dp4a operation computes c = a(dot)b + c = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z + c, resulting in 4 multiplications and 4 additions for a total of 8 operations.
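The operation described above can be modeled in a few lines. This is an illustrative pure-Python sketch only (the real dp4a instruction operates on four bytes packed into 32-bit registers, which is glossed over here):

```python
# Pure-Python model of the dp4a operation described above:
# a four-way byte dot product accumulated into a 32-bit result.

def dp4a(a, b, c):
    """Return c + a(dot)b for two 4-element byte vectors:
    4 multiplications + 4 additions = 8 operations per instruction."""
    assert len(a) == 4 and len(b) == 4
    return c + sum(x * y for x, y in zip(a, b))

# Example: dot product = 1*5 + 2*6 + 3*7 + 4*8 = 70, plus accumulator 10
print(dp4a((1, 2, 3, 4), (5, 6, 7, 8), 10))  # -> 80
```

Counting the 4 multiplies and 4 adds as 8 operations per instruction is what yields the 8 ops/core/clock figure used in the TOPS calculation above.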
Yojimbo - Tuesday, July 26, 2016 - link
Sorry for my confusing notation there. The "=" after the c is meant to be an assignment and the "=" after that is meant as equality of the two expressions, excluding the assignment.
tipoo - Monday, July 25, 2016 - link
I think the question marks on the int ratios for the other cards may be in this article:
http://www.extremetech.com/computing/153467-amd-de...
p1esk - Monday, July 25, 2016 - link
That article is from 2013, and has no information about INT8 operations.
alpha754293 - Monday, July 25, 2016 - link
I'm confused (or maybe it really is as written/posted).
FP16 is (presumably) 1/64 the FP32 rate.
FP64 is 1/32.
So...FP16 is actually SLOWER than FP64? Wow. Didn't expect that.
Also, with FP32 rate being a FMA rate, I wonder what say a LINPACK FP32 speed would be (native/raw/non-FMA).
That said, it's sad (in a way) how my GTX Titan FP64 is still faster than their newest, latest, flagship card, despite the fact that its primary market IS FP32. (And I care because I think that my MATLAB code is primarily FP64 by default...)
Hmmm...interrresting.
p1esk - Monday, July 25, 2016 - link
They use separate cores for FP16, FP32, and FP64 ops in the new Titan X and in the 1080. That's why performance numbers for those data types are independent. In GP100, on the other hand, each FP32 core can compute 2 FP16 ops, so that's why FP16 performance is off the charts there (unfortunately the price is off the charts as well).
Harry Lloyd - Tuesday, July 26, 2016 - link
$1200 for a 471 mm² GPU, what a joke. There is barely any performance increase over the 1080, at double the price. This FinFET generation is pathetic so far.
haukionkannel - Tuesday, July 26, 2016 - link
FinFET is great! The prices are not, but when are the first products on a new technology ever cheap? Almost never. Next year we can see cheaper prices, because the production technology matures and yields get better. In two years we will be at the point where this generation of FinFET is in its prime: yields are good, there is enough production capacity to get enough products to market to allow competition, and manufacturers have also improved their designs to better utilize FinFET in their products.
After that there will be smaller upgrades in technology, but bigger upgrades in the size of the chips, because fewer defective chips come off the production line. And at about the same time the next generation of process tech will be coming out, with not-so-great prices but better energy consumption.
MarkieGcolor - Tuesday, July 26, 2016 - link
What is going to change that makes yields better?
HeavyHemi - Wednesday, July 27, 2016 - link
The process matures. Pretty much the same as every process cycle.
Mugur - Tuesday, July 26, 2016 - link
So bye, bye 1080 Ti. If this Titan had been 384-bit, 24 GB, 12 GHz GDDR5X (and a fully enabled chip), there would still be a place for a Ti, but actually this generation the Ti is this Titan X.
And, of course, if the competition had been fierce, the Ti/Titan would have 16 GB GDDR5X on a 512-bit bus on a fully enabled chip, even with NVIDIA's great memory compression techniques. Until HBM2 is in volume production...
Hxx - Wednesday, July 27, 2016 - link
So it's pretty much settled that the Ti will have GDDR5X instead of HBM.
JonnyDough - Monday, August 1, 2016 - link
So I clicked the link to the NVidia site... Pretty sure their forum is being astroturfed by NVidia in typical NV marketing fashion. People think it should be $3000? Laugh. Nobody would say that unless they're a complete idiot tool. NVidia is trying to fluff their own feathers on their own site!
Gastec - Wednesday, August 17, 2016 - link
"buying NVIDIA’s best card just got a bit more expensive"
$200, or 20%, is not "a bit more", it's CONSIDERABLY more expensive. A BIT more would be 2%.
But you keep brushing them, in a couple of years they will reach $1000 for an ordinary desktop GPU. That's what we shareholders pray for.
bn880 - Friday, August 26, 2016 - link
Thanks for the great posts TheJian. Very informative.