Desktop graphics game is lost. AMD should just close all non-gpu divisions and outright reorient towards producing specific ASIC solutions for miners, this newfangled machine bullshit and such. DATS WHERE DEM MONES BE BRAH
People are confusing "armchair analyst" with "idiot" :)
While it is disappointing for me, as I am predominantly interested in compute, Vega has the makings of a good gaming GPU. It disappoints me because this comes from sacrificing high-precision performance for the sake of efficiency and optimizing for low-precision performance, which is the precision that game graphics utilize.
Well, the only bit about DP FP on the slides mentioned that it was 'configurable' -- so we don't know what we will see: 1/2, 1/3, 1/4... rate vs SP? AMD has always been more generous about not disabling that higher-end compute on consumer parts -- but we won't know what's actually in the silicon until later. I wouldn't be surprised to see at least one 1/2-rate DP FP part ship at some point; whether it be sold as a consumer card or only as a pro card, only time will tell.
AMD has stated that Vega 20 (due in 2H18) will have 1/2-rate DPFP, while Vega 10 will be 1/16th. "Configurable" means that different chips can have different double-precision rates. There is at least one additional Vega chip in the works (Vega 11) plus Navi. However, I think that 1/2 is as fast as we will see. ;-)
Why not, his armchair is the brains of the operation! Seriously, he's totes right though... they should just throw away GPU revenue. Take chips that could otherwise be sold in several product categories, and only sell them as "newfangled machine bullshit" chips. Plus some low-margin ephemeral ASICs for miners. See, it's brilliant. Cuts down volume which saves on shipping costs. Plus it means fewer wafers, so you don't get those pesky economies of scale benefits. Nvidia would obviously go this route too but they just don't have the balls to stop selling graphics cards. Excelsior!!
You should really add <sarcasm> tags cuz someone might think you are being serious :)
AMD stepping out of consumer products would have a terrible effect on us as consumers, and on AMD as a business entity, even as lousy as it is. Sure, if they COULD sell all their chips in enterprise products, they would have done it already.
With grammar like that, how can anyone not take you seriously? You're obviously incredibly articulate. I mean, never mind such brilliant statements like "newfangled machine bullshit"; the "DATS WHERE DEM ..." etc. phrase solidifies your rank as a profoundly gifted scholar.
You appear to be assuming a brand bias based on a comment someone is making about word choice and grammar. You're then using that assigned bias as a fulcrum to toss out an insult. It raises two concerns, the first of which is:
-Why would you feel so strongly about one particular video card manufacturer that you find it worth the effort to insult someone on the internet that you've never met and hasn't made a brand biased statement?
The second is:
-Does the irony of calling someone an idiot under these circumstances escape your notice?
Don't mind him, he's likely a democrat. Accusing everyone of racism to hide their own is par for the course. He sees "fanboys" everywhere, like the rest of the NVidia sheep.
Not sure why you think "the desktop graphics game is lost". In the market segments AMD decided to compete in, they are winning: taking market share and outperforming Nvidia on price and performance. If that scales into the top end, then perhaps it's Nvidia who should consider becoming a maker of mobile graphics chips and abandoning the desktop graphics market?
Well, maybe you are right, but if desktop/laptop gaming dies completely, your solutions for professionals will be fucked up, and will no longer evolve properly. In fact, it is in the interest of the "professionals" not to let gaming die, if they are smart enough.
Not only will every new solution be far more expensive, it will also be much more incremental, to the point of complete stalling. The gaming market offers, on the one hand, economies of scale and, on the other, the most demanding playground for researchers to develop new solutions.
If you have completely bought the myth that there's serious computing for everyday problems and amateur computing for gaming, think again. It may not seem like it right now, but gaming trends towards simulating reality, albeit with small steps every time. Believe me, there's nothing more computationally demanding than simulating reality, and gaming is the ideal platform for it.
If those who are in charge don't get it, and let it degrade only to gain a tiny and insignificant competitive edge, they will regret it exponentially later.
Aha, it's clear then! This is what gamers are supposed to do now: 1. Buy graphics cards from Nvidia, no matter the price. 2. Stop gaming. Or better yet, we should just lie down and die and let the trolls rule the world.
If they have TitanXP performance, it's going to be more expensive than the 1080. You know NVIDIA is just waiting for a chance to release a $699 or $799 1080Ti, so whatever AMD brings out, you can bet there's gonna be a clash of titans. Er, titans and stars, that is.
$599 might be too aggressive, but it depends on where the die size lands and what Nvidia does. Vega has almost 15% more FLOPS than Titan XP, so it remains to be seen how well they utilize that computing power and how much silicon it takes. The likely 8GB of HBM2 does help on the cost side. It also depends on what other SKUs they've got and when. A high price limits volumes, but if they also have lesser SKUs at launch, they can afford to price the best SKU higher.
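For anyone who wants to sanity-check that "almost 15% more FLOPS" claim, here's the usual back-of-envelope math. The Titan XP shader count and boost clock are the commonly quoted spec-sheet numbers, so treat them as assumptions:

```python
# Peak FP32 throughput = 2 FLOPs per FMA * shader count * clock.
# Spec numbers below are the commonly reported ones, not measurements.
def tflops(shaders, clock_ghz, ops_per_clock=2):
    return shaders * clock_ghz * ops_per_clock / 1000.0

titan_xp = tflops(3584, 1.531)   # Titan X (Pascal), 3584 shaders @ ~1.53 GHz boost
vega_10 = 12.5                   # AMD's quoted peak for Vega 10

print(round(titan_xp, 1))                      # ~11.0 TFLOPS
print(round((vega_10 / titan_xp - 1) * 100))   # ~14 percent more
```

Which lines up with the "almost 15%" figure, give or take the clock you assume.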
True! Nvidia has guaranteed that the fastest Vegas can be sold at over $1000... Let's hope that the cheaper options are near $500-600.
The interesting part is when AMD will use these next-generation GPU units in their mid- and low-range products. Maybe next summer or next autumn? Then we will get interesting devices in the $150-350 slots! Most probably with GDDR5 in the low end and maybe GDDR5X in the high mid-range GPUs.
There is a new Polaris chip (Polaris 12) in the works. It may be intended only for APUs where it would be mounted on an interposer with a Ryzen chip. It is not clear what AMD is going to do in the gap between RX 480 and Vega 10. Vega 11 is expected to replace the RX 480*
* Understand what "replace" means here. It doesn't mean that AMD will stop selling Polaris GPUs. It means that AMD expects Vega 11 to have better price/performance than the RX 480, and that the price range where the RX 480 currently sells will be starved of oxygen. I do expect a dual Polaris 10 card to ship, and there is also an RX 490 design floating around. (It may be a Polaris 10 chip with a higher clock speed, more power of course, and 8 GB of GDDR5X memory.)
Always remember that marketing gets the last say, not engineering. So only one or none of these products may show up. It is also not clear when Vega 11 will arrive. If it is late in this year, or early in next year, there will be enough time to market the additional Polaris parts.
So, by the time this hits the streets, Nvidia will already have another hardware iteration out? It's likely too late, but if you're still holding on, sell your AMD stock.
Likely Nvidia will have an answer; that has always been the case and is barely even worth mentioning anymore, at least until AMD gets the upper hand one of these days. They did it to Intel during the Athlon days. It is very much possible; they have smart engineers, they just don't have enough of them, but that often does not matter if they can work more efficiently. They have one thing going for them: a team that is 2x larger often gets less than 2x the work done.
One of my former bosses while I worked in the silicon industry said, "AMD has good technology, they just have terrible luck being the underdog in both industries they compete in at the same time." I wholly agree; with some luck they actually can come out on top. Nvidia is spending a lot of money diversifying.
Honestly, I for one don't think nVidia will have an answer. I feel that their expectations of what AMD could do were so incredibly low that they felt GP100 and all of its derivatives would be enough to lay the smack down on AMD for good. Even with Volta, which seems like it's going to be a slight tweak to Pascal, it seems that Vega might just come out on top, which would make more sense of AMD's "Poor Volta" slide, a slide that would be a rather idiotic move unless AMD truly had something to be that cocky about.
NVIDIA isn't trying to lay the smackdown on AMD for good. NVIDIA has been evolving in a different direction from AMD. AMD, probably because they have been cash-strapped, has not been able to invest the money necessary to become a platform-based company the way NVIDIA has.
Also, Volta will not be a small tweak to Pascal. Pascal was a die shrink and small tweak to Maxwell (from the point of view of the underlying architecture, not the features that it enables). Volta is supposed to have ~1.7 times the performance and efficiency of Pascal on the same process technology. It won't be out until 6 to 9 months after Vega, however. But I'm very leery about taking AMD's promises at face value. Even if Vega is as high performance and efficient as AMD claims, it still uses HBM2 which adds significant cost to the manufacture of the chip. That means they will only be able to put a limited amount of pricing pressure on NVIDIA.
There is, unfortunately, another element to this. It's the sheep mentality of team green supporters (I hate the word "fanboy"). I watched a pretty good video on YouTube from AdoredTV talking about the past decade of AMD engineering prowess vs. NVidia sheep mentality. They have been buying lesser cards for more money for years, and they're feeding a vicious cycle in which they (and we) will be slaves to NVidia's R&D laziness and pricing. All the while, AMD has had little to show for their investment and research. Thanks to NVidia sheep, buying lesser cards for much more money: https://www.youtube.com/watch?v=uN7i1bViOkU.
Because "market analysts" like that guy think that somehow in every field there's only room for one. And since there's only one CPU manufacturer, one car manufacturer, one phone manufacturer, etc. why not have only one GPU manufacturer, right? Oh wait...
Um, you sell high, buy low. That's how you make money. So $2 to $11.50 is the perfect reason to sell.
Then the question is, will it go higher or will it drop again? If it drops, then you wait until then to buy again and you'll have made $140k. If it goes higher then, well, you'll have still made $140k.
Selling (in the money) calls a year out may be the best strategy. You guarantee the money you get from selling the calls, even if the stock drops to nothing. If the stock goes up and the calls are exercised? You made $140k plus the income from selling the calls. Best situation for you with this strategy may be if the stock drops to where the calls don't get exercised, then bounces back. ;-)
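The covered-call payoff described above is easy to model. All the numbers here (strike, premium, share count) are made-up illustrations, and taxes/fees are ignored:

```python
# Toy payoff for selling covered calls against shares you already own.
# All numbers are hypothetical; this ignores taxes, fees, and early exercise.
def covered_call_payoff(shares, cost_basis, strike, premium, price_at_expiry):
    premium_income = shares * premium          # kept no matter what
    if price_at_expiry >= strike:              # call exercised: shares sold at strike
        stock_gain = shares * (strike - cost_basis)
    else:                                      # call expires: still hold the shares
        stock_gain = shares * (price_at_expiry - cost_basis)
    return stock_gain + premium_income

# e.g. 10,000 shares bought at $2, $10 strike, $2.50 premium per share
print(covered_call_payoff(10_000, 2.0, 10.0, 2.50, 14.0))  # 105000.0 (exercised)
print(covered_call_payoff(10_000, 2.0, 10.0, 2.50, 1.0))   # 15000.0 (keeps shares)
```

The second case shows the point made above: even if the stock craters, the premium income is locked in, and you still hold the shares for any bounce.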
Don't forget that Vega plus Ryzen will significantly outperform Vega plus Intel. (Virtual memory will be usable between CPU and GPU.) Since AMD uses MOESI vs. Intel's MESIF for virtual memory, I suspect that even if nVidia wants to share memory with an Intel CPU, it will take a special card to do so. Of course, nVidia might go ahead and support AMD's virtual memory scheme, but I doubt it.
It would be so beautiful for the 1080Ti to be released after AMD's Vega and to underperform. We desperately need a breath of fresh air and a break from the trap we're in right now.
Now we get a countdown to the date/time of the reveal of the launch date and time. What we need next is a countdown to the reveal of the countdown to the reveal of the launch date... or something.
It is kind of sad that the tile based rasterization obviously caught AMD completely by surprise. These guys are chumps. It took a 3rd party to even clue them in on how to capture huge efficiency gains. At any rate, congrats AMD for catching up to the two year old Nvidia designs!
Looks that way, but the timeline is impossible. What drove both Nvidia and AMD to tiling was that HBM memory wasn't working out and they needed serious bandwidth to the framebuffer. Tiling is an obvious way around that issue.
AMD just found out that HBM wasn't going to work a generation late (because the Fury X never took off), and they will probably *still* need to tile if they use HBM2.
Pre-Dreamcast, actually. PowerVR's second-generation video processor was in the Dreamcast, but the first generation was released to market on a 32-bit PCI card, and chips were out on the market in 1996, making the concept of tiling about 20 years old.
I had a first-generation PowerVR add-in card (paired with a generic 2D card) and later a PowerVR Kyro (I think it was called). Was it Imagination Technologies? Can't be bothered to look it up LOL
Yup the first PowerVR card didn't work as a standalone graphics adapter. I had one too from Matrox, IIRC. It worked pretty well in original Unreal (non-tournament version). I had a lot of fun on Deck 16 with that thing until I replaced it and the S3 ViRGE DX with a Diamond Viper V550 and a Voodoo 2. It was good for 640x480.
You're right that their later iterations were sold under the Kyro branding. I sold one of them from my computer shop inside of a custom built desktop...well actually, it was a Kyro II not the original. The techs tinkered with it a little during build, but I didn't get a chance to mess with it. The other partner and I were pulling several miles of wire in one of our business client's new offices so it was out the door before I got more than a glance in passing at what it could do. It was fairly competitive with a GeForce 2 if you put enough CPU power behind it (I think it lacked hardware TnL so it needed the processor power).
Formerly Videologic, changed in 1999 to Imagination Technologies.
I had a Kyro II. The first Kyro was clocked at a mere 115MHz for both core and memory, but Kyro II was a whole 60MHz faster. It really worked well in UT when paired with my XP 1700+. :) There was a Kyro II SE with 200MHz clocks but was as rare as rocking horse excrement. I was also very excited about its supposed TnL, DDR-powered successor, but it never launched, which meant bye-bye on the desktop for PowerVR, rather than the bloody nose that ATi and NVIDIA deserved at that time.
If my poor memory is remotely accurate (don't bet on it), the chip to run Unreal Tournament was the S3 chip (probably the last they made). Not sure if the "TnL" it supposedly had was fraudulent or defective, but the texture compression (probably only used in Unreal Tournament before S3 died and it became a standard feature) gave the S3 the best visuals in the game.
I know I had one of those boards (simply because it was cheap, long after its day) but can't remember how it performed in Unreal Tournament. (I may not have bothered putting Windows on that box. In those days the best way to connect Windows to the internet was to connect a Linux machine to a DSL modem, then connect Windows to Linux. Connecting Windows directly to DSL was asking for it to blindly reset all your configuration files at random intervals.)
You're probably thinking about S3's Savage video card series with the S3 MeTaL API. It's been ages, but I'm pretty sure UT supported MeTaL out of the box. Epic was pretty good about that back when there was a plethora of competing APIs before we settled into DirectX and OpenGL. When I tried to diversify my PC shop by adding an after-hours LAN arcade, we used those graphics cards because they were good for the multiplayer stuff we wanted to run. I do recall they were a bit glitchy under Descent: FreeSpace. There were artifacts around ship engine glow effects, and I think it was due to poor DirectX support, but don't quote me on that, it's been years.
Yeah, MeTaL was there at retail, as long as you installed the texture pack from the second disc. I had a Savage 4 (Diamond S540) which worked flawlessly... until I flashed the BIOS one day, thereafter it would hang during every session at a random point, thus requiring a reboot. That was a stupid idea, especially considering it looked excellent. Luckily, somebody got the OpenGL mode to work with those textures.
I believe the T&L on the Savage 2000 was unfit for purpose; it was broken in hardware so you couldn't coax its true performance out.
I always liked the infinite planes concept (no polygons!) of the PowerVR 1 series. I remember being disappointed at the time, and still am, that infinite planes never caught on. The entire paradigm gave many operations 'for free' that today require a lot of hardware and software support to implement.
I remember excitedly reading about TBR in an article about the Kyro II on this very website oh so many years ago, like you're saying. http://www.anandtech.com/show/735
Funnily enough, it didn't do well in the market because the APIs at the time weren't very flexible, and it pretty much required a lot of tweaking to get it working in each engine.
The games that it did work well in, the card punched WAY above its cost bracket if I remember correctly.
So far it looks likely that they have implemented that famous patent everybody hoped for, one way or another. It seems that higher efficiency was a focus area everywhere, so they might catch up, or better? Hard to figure out where Vega is from a perf density point of view; that matters too.
Any clue if it's on 16FF or they stick with 14nm?
If this is what it looks like, it's gonna be huge step forward and that includes APUs. Can you imagine how those APUs would perform vs the competition?
Meant to add... So not sure if TSMC or GloFo, but I guess it could be either since GloFo has been running their 14nm for quite a while now. I would put my money on TSMC though, because GloFo's history makes me second-guess them.
There are perf differences between the 2 foundries. Due to HBM I'm expecting TSMC, but we'll see.
Looking at the pic published here I would say 480mm2, but the pic is poor so the guess is likely unreliable. In any case, I am expecting it in the 400-500mm2 range.
The HBM2 die is about 8x12mm. So, the long (12mm) sides of the HBM dies are facing the GPU die, and 2 of them together are a bit shorter than the GPU die. I am going to guess that the GPU die is about 25mm wide. It looks like it is a bit shorter than it is wide -- so say 20mm. 25x20mm makes a 500mm2 chip, and that seems about right. Smaller die than Fiji, much smaller interposer because of only 2 stacks of HBM; this thing should end up costing a lot less than Fiji to produce once the volume gets ramped, and hopefully it should smoke it performance-wise as well.
^^ Based on this pic http://techreport.com/r.x/amdvegapreview/vegachip.... which is the most square-on and clear pic I can find right now. I was initially thinking it was 400mm^2 and change, but now I am thinking it is closer to 500mm^2 and change.
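Spelling out the ruler trick from the post above: the HBM2 die has a known footprint, so you can scale everything else in the package photo off it. Every dimension below is eyeballed from the photo, so these are assumptions, not measurements:

```python
# Estimate GPU die area using the HBM2 die (~8 x 12 mm) as a ruler.
# All ratios are eyeballed from the package photo: assumptions, not measurements.
hbm2_long_side_mm = 12.0      # the side facing the GPU die

gpu_width_mm = 25.0           # guessed: a bit more than two HBM long sides side by side
gpu_height_mm = 20.0          # guessed: die looks slightly shorter than it is wide

area_mm2 = gpu_width_mm * gpu_height_mm
print(area_mm2)  # 500.0
```

A millimeter or two of error on each edge swings the result by tens of mm^2, which is why the estimates in this thread range from ~400 to ~500mm^2.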
I get to some 500mm2 too. As for cost, Fiji was on 28nm, which is much cheaper on an area basis. They will get better yields for packaging, but the overall costs will be much higher than with Fiji. They could have a SKU with 25% of the CUs disabled at $499 and the full Vega 10 at $699 and up... the upper limit depends on where perf lands vs the Titan X.
One must also consider the difference in cost of the process that each chip is made with as well as the price of the larger capacity of HBM2 compared to smaller amount of HBM1. But a smaller interposer will definitely help.
It'd better smoke Fiji performance-wise. I think it's a foregone conclusion that it will, though. Judging by their released benchmarks I'm guessing it'll be modestly faster than the 1080, on average, but significantly slower than a forthcoming 1080Ti. So it'll probably be priced around the same $650 as the Fury X.
You are rushing into judging perf in a big way. AMD is demoing Vega, just showing it up and running, NOT showing perf. Not quite sure why so many don't get that.
You can bet that it is using early software and that it's clocked at only 1-1.2GHz for now. They are not going to show their hand months before retail availability. They are just showing 4K gaming at 60FPS or better.
You can look at the minimum perf Vega 10 should offer in many ways. 1. It is assumed to have the same number of "cores" as the Fury X but almost 50% higher clocks. Then, some 15% architectural and software gains would put it on par with Titan X. So the question is if they can do better, by how much, and at what power. A note here: given the number of cores, the scaling from 28nm to 14/16nm is very poor. They are clearly sacrificing area for huge gains elsewhere; you wouldn't do it otherwise. 16- and 8-bit is one thing, but there must be a lot more. 2. Even the Polaris architecture, scaled to 12.5 TFLOPS and this memory bandwidth, would match the Titan X. So the question is how much better they can do.
One could argue that AMD is nuts and Vega is worse than Polaris, but that's less than reasonable.
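The "same cores, ~50% higher clock" reasoning above can be spelled out. The Fury X numbers are public specs; the assumption that Vega 10 also has 4096 shaders is exactly that, an assumption:

```python
# The "same cores, higher clock" estimate, spelled out.
# Fury X specs (4096 shaders @ 1050 MHz) are public; the Vega 10
# shader count is an assumption based on the rumors in this thread.
def tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz / 1e6   # 2 FLOPs per FMA per shader per clock

fury_x = tflops(4096, 1050)              # ~8.6 TFLOPS
vega10_quoted = 12.5                     # AMD's quoted peak figure
implied_clock_mhz = vega10_quoted * 1e6 / (2 * 4096)

print(round(fury_x, 1))           # 8.6
print(round(implied_clock_mhz))   # ~1526 MHz, i.e. ~45% above Fury X
```

So the quoted 12.5 TFLOPS is consistent with a 4096-shader chip clocked roughly 45-50% higher than Fiji, which is where the "almost 50% higher clocks" in the post above comes from.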
The 12.5 TFLOPS is Vega 10 doing 16-bit floating point, not Polaris. And this, in part, explains why Vega 10 doesn't scale well compared to Fury X. There is a new compute engine with lots of new features. I'm hoping that one of them is a fused multiply-add, but we do know that it can do 16-bit floating point twice as fast as single precision (32 bit).
So remember that, for now, any comparisons you see with Vega will almost certainly use none of the new features. AMD can and should spend most of their driver effort right now on changes that benefit Polaris. Some support for Vega? Sure, but don't expect support for the new features in drivers until just before (or after) Vega ships.
With AMD's countdown on the ve.ga site and their marketing, almost everyone thought there would be a live stream incoming, but instead it was a countdown to an NDA lift. Like, really, AMD? I believe AMD made a huge mistake here because there are A LOT of angry people out there who were waiting with much anticipation. I kinda thought something was odd with the time being 6 a.m. PST and had told my friends as much, but they were still convinced it was for a live stream.
Why has AMD marketing been SO bad for so long? I do think they have gotten a little better recently, but they are way behind Nvidia, for example. It's like AMD has kept the same marketing team for years and years and just does not want to let them go, for who knows what reason. In business, when you fail for so long you are usually replaced. Has AMD just been replacing one under-performer with another, or just keeping the same people? I just don't get it.
And this is coming from someone who is an AMD supporter and would like to see them succeed.
This is exactly my biggest problem with AMD: their marketing is, to be frank, absolute bullshit. It's why I don't trust them to deliver on their promises until I see independent reviewers validate them, while if Intel says their next chip is going to have an extra 10% IPC or NVIDIA says their next GPU is going to be 20% faster, I'm inclined to believe those companies.
AMD marketing needs to understand the difference between hype and lies, and that consistent lies hurt them far more than they help.
Spotted this in the press release: "Data based on AMD Engineering design of Vega. Radeon R9 Fury X has 4 geometry engines and a peak of 4 polygons per clock. Vega is designed to handle up to 11 polygons per clock with 4 geometry engines. This represents an increase of 2.6x. VG-3"
While this is mildly interesting, I have to question why anandtech has, what 8 "live blog" articles today when there's still nothing in bench for the 470, 460 or 1050 cards. These keynote speeches aren't telling us anything that hasn't already been leaked or in press releases...
I would like to see reviews for the GPU releases that have slipped through the cracks too. Anandtech has thus far missed the 460, 470, and 1050. Given the amount of time since those graphics cards were released, it's a disappointment, and it does cause people to grumble about the site's supposedly looming demise or to lash out in CPU articles at Ian over the lateness of GPU publications.
While I personally think live blogs don't have a lot of value, other readers have been quick to point out that they want them and criticize Anandtech for the omission. They're not going away and they do take time to conduct, but that doesn't excuse the months that have passed without reviewing parts of what's arguably the most significant GPU release in recent history, thanks to the move to a smaller manufacturing process.
It is. If you consider packed int8s as an 8-element vector, you can do a dot product with them.
In a neural network you can assign each element of such a vector as an input to the network. The math involved in such a network requires repeated dot products of these elements. That's what the article refers to.
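The packed int8 dot product being described works along the lines of the dp4a-style instructions GPUs expose for inference: several 8-bit values packed into one register, multiplied pairwise, and accumulated in 32 bits. A pure-Python model of the 4-wide version (the exact lane width on Vega is an assumption here):

```python
# Model of a 4-way int8 dot product with 32-bit accumulate,
# i.e. the dp4a-style operation GPUs expose for inference workloads.
def dp4a(a_bytes, b_bytes, acc=0):
    assert len(a_bytes) == len(b_bytes) == 4
    for a, b in zip(a_bytes, b_bytes):
        assert -128 <= a <= 127 and -128 <= b <= 127  # int8 range
        acc += a * b                                  # accumulate in int32
    return acc

# One step of a "neuron": dot(weights, inputs), 4 int8 lanes per instruction
print(dp4a([1, 2, 3, 4], [5, 6, 7, 8]))  # 70
```

Since each such instruction replaces 4 multiplies and 4 adds at full 32-bit width, this is where the big int8 throughput numbers for neural-network inference come from.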
I'm all hyped up for Vega, but could we get a full review of the 480? I always appreciate the in-depth architectural reviews, and that would help me be prepared for Vega when it arrives.
What's the status of the RX 480/470/460 reviews? A teaser was given, but nothing more. And the GTX 1050 (Ti) got an announcement thread which noted the fab differences but has yet to see a review.
I agree; an announcement that a product 6 months out will be faster than today's isn't that exciting. If it also beamed light directly into my eyes or purified the air with its cooling fan, that would be a neat twist, but right now it's a pretty bland announcement.
Don't use URL obfuscation services. They are a major security risk, as they can hide malware attack vectors as you don't know the true final destination of the URL. For all I know that tinyurl could be going to a malicious site that tries to load malware into your browser.
Never click on an obfuscated URL, that's as basic as never opening an email attachment from an unknown source.
I should add that the only practical resolution to the twin problems (too-long URLs and the issues they cause, and whom do you trust) is Anandtech running their own URL-shortening service.
It has become sickening to listen to AMD years in advance about what they are going to do for the next eon, which is nothing but talking anyway. Wise to stop reading about anything GPU until Intel buys NVIDIA out and gets it all sorted out and done with.
It's a fair point and I would not be at all surprised if Intel used its cash to buy Nvidia; Intel is clearly struggling to find the Next Big Thing (5G? Really?); Nvidia is not, with IP ideal for making AI happen.
The "High Bandwidth Cache" could also be a hint of what's to come with future HBM2-capable APUs. Intel has a cache up to what, 128MB of eDRAM? I think even a single HBM2 stack could provide a huge boost to an APU.
Zodiac: "I don't have any solid data, no benchmarks, and no pricing information... but it says AMD so I'm not allowed to express anything but disdain."
Well the only reference we have gaming-wise is that it's between 1080 and titan levels in doom at 4k. So like 1080ti. It's not particularly exciting. I want to find out it's actually way faster than a 1080 in many games or other apps, but I think zodiac is right to be skeptical. Doom runs well on AMD cards, so it's possible Vega might only be 1080 levels overall and that a 1080ti would be faster and easy for nvidia to counter vega with.
I feel the same way about Zen. The benchmarks we've seen or heard so far make it sound fine, but at best as fast as Intel. That's concerning: if the cherry-picked examples are only that good, in my mind it suggests it could be 20% slower overall.
The fact that AMD is very late into the high end of graphics on 14nm means the expectations for them will be higher. I am not sure Vega will meet those expectations at this point in time. With the limited information available, it again shows Vega performing well in very selective DX12 titles against the likes of a GTX 1080, which does not bode well in my view. Moreover, the power requirement for Vega 10 seems to be higher than that of the GTX 1080 they are comparing it with.
I actually wonder if Vega will be uncut in terms of compute performance, making it way more attractive for accelerator cards for servers while being "decent" in gaming performance, thus making it an all-around performer.
The whole reason AMD is still in business is to bring consoles to people who can't afford top-of-the-line components and/or don't have the knowledge to build a PC. As you know, Apple is nowhere near the quality of AAA DirectX games. You cannot get good photorealistic simulation graphics and physics from Intel alone, and NVidia doesn't make CPUs. AMD is the only company to do both, in the Xbox One and PS4. The better they get, the better the consoles will be.
FireSnake - Thursday, January 5, 2017 - link
ve.ga is not accessible .... no Ryzen servers yet, haha :)Michael Bay - Thursday, January 5, 2017 - link
Desktop graphics game is lost. AMD should just close all non-gpu divisions and outright reorient towards producing specific ASIC solutions for miners, this newfangled machine bullshit and such.DATS WHERE DEM MONES BE BRAH
rpmrg - Thursday, January 5, 2017 - link
Here's another armchair analyst.Michael Bay - Thursday, January 5, 2017 - link
At least try to hide your envy next time, honey.lobz - Thursday, January 5, 2017 - link
yeah rpmrg dude, don't you ever be envious of his armchair...ddriver - Thursday, January 5, 2017 - link
People are confusing "armchair analyst" with "idiot" :)While it is disappoint for me, as I am predominantly interested in compute, vega has the makings of a good gaming GPU, disappointing for me as this comes from sacrificing high-precision performance for the sake of efficiency and optimizing for low-prevision performance, which is the precision which game graphics utilize.
extide - Thursday, January 5, 2017 - link
Well, the only bit about DP FP on the slides mentioned that it was 'configurable' -- so we don't know what we will see 1/2, 1/3, 1/4... ? Rate vs SP DP? AMD has always been more gratuitous with not disabling that higher end compute on consumer parts -- but we wont know what's actually in the silicon until later. I wouldn't be surprised to see at least one 1/2 rate DP FP part ship at some point, whether it be sold as a consumer card or only as a pro card, only time will tell.eachus - Saturday, January 14, 2017 - link
AMD has stated that Vega 20 (due in 2H18) will have 1/2 DPFP, while Vega 10 will be 1/16th. The configurable means that different chips can have different double precision rates. There is at least one additional Vega chip in the works (Vega 11) plus Navi. However, I think that 1/2 is as fast as we will see. ;-)Michael Bay - Friday, January 6, 2017 - link
But not in your case. ^_^Alexvrb - Friday, January 6, 2017 - link
Why not, his armchair is the brains of the operation! Seriously, he's totes right though... they should just throw away GPU revenue. Take chips that could otherwise be sold in several product categories, and only sell them as "newfangled machine bullshit" chips. Plus some low-margin ephemeral ASICs for miners. See, it's brilliant. Cuts down volume which saves on shipping costs. Plus it means less wafers so you don't get those pesky economies of scale benefits. Nvidia would obviously go this route too but they just don't have the balls to stop selling graphics cards. Excelsior!!ddriver - Friday, January 6, 2017 - link
You should really add <sarcasm> tags cuz someone might think you are being serious :)
AMD stepping out of consumer products will have a terrible effect, on us as consumers, and on AMD as a business entity, even as lousy as it is. Sure, if they COULD sell all their chips in enterprise products, they would have done it.
svan1971 - Thursday, January 5, 2017 - link
And a moe as well...
Michael Bay - Thursday, January 5, 2017 - link
I`m moe as hell, true.
kaidenshi - Friday, January 6, 2017 - link
Michael Bay is one of the oldest surviving trolls here at AnandTech, don't feel bad that you fell for his crap.
satai - Thursday, January 5, 2017 - link
No.
johnnycanadian - Thursday, January 5, 2017 - link
With grammar like that, how can anyone not take you seriously? You're obviously incredibly articulate. I mean, never mind such brilliant statements like "newfangled machine bullshit"; the "DATS WHERE DEM ..." etc. phrase solidifies your rank as a profoundly gifted scholar.
MapRef41N93W - Thursday, January 5, 2017 - link
It's cute because you don't understand such obvious sarcasm. Typical AMD idiot too stupid to see what's so obviously in front of them.
BrokenCrayons - Thursday, January 5, 2017 - link
You appear to be assuming a brand bias based on a comment someone is making about word choice and grammar. You're then using that assigned bias as a fulcrum to toss out an insult. It raises two concerns, the first of which is:
- Why would you feel so strongly about one particular video card manufacturer that you find it worth the effort to insult someone on the internet that you've never met and who hasn't made a brand-biased statement?
The second is:
- Does the irony of calling someone an idiot under these circumstances escape your notice?
LG25 - Friday, July 14, 2017 - link
Don't mind him, he's likely a democrat. Accusing everyone of racism to hide their own is par for the course. He sees "fanboys" everywhere, like the rest of the NVidia sheep.
WinterCharm - Thursday, January 5, 2017 - link
Lol, this clown.
Michael Bay - Friday, January 6, 2017 - link
Here`s your pity comment.
Outlander_04 - Thursday, January 5, 2017 - link
Not sure why you think "the desktop graphics game is lost". In the market segments AMD decided to compete in they are winning, taking market share and outperforming nvidia on price and performance. If that scales into the top end then perhaps it's nvidia who should consider becoming a maker of mobile graphics chips and abandoning the desktop graphics market?
Michael Bay - Friday, January 6, 2017 - link
Sure, garbage bin can be seen as some kind of a market too, if you`re desperate enough.
negusp - Friday, January 6, 2017 - link
Yeah, no. AMD is actually pretty successful with their current mid-range cards - especially with DX12 and newer drivers, the RX 470 kicks 1060 ass.
You sound like a butthurt nVidia fanboy scared as hell of Vega dGPUs and Ryzen iGPUs.
Outlander_04 - Tuesday, January 17, 2017 - link
If the "garbage" is beating nVidia so handily then I think you are agreeing with my original statement.
LG25 - Friday, July 14, 2017 - link
Watch this video (all of it), and you will see what a sheep you've been:
https://www.youtube.com/watch?v=uN7i1bViOkU
IUU - Tuesday, January 10, 2017 - link
Well, maybe you are right, but if desktop/laptop gaming dies completely, your solutions for professionals will be fucked up and no longer evolve properly. In fact, it is in the interest of the "professionals" to not let gaming die, if they are smart enough.
Not only will every new solution be far more expensive, but it will be much more incremental, to the level of complete stalling. The gaming market offers, on the one hand, economies of scale, and on the other, the most demanding playground for researchers to develop new solutions.
If you have completely bought the myth "that there's the serious computing for everyday problems and the amateur one for gaming", think again. It may not seem like this right now, but gaming trends towards simulating reality, albeit with small steps every time. Believe me, there's nothing more computationally demanding than simulating reality, and gaming is the ideal platform for it.
If those who are in charge don't get it, and let it degrade only to get a tiny and insignificant competitive edge, they will regret it exponentially later.
Gastec - Thursday, January 12, 2017 - link
Aha, it's clear then! This is what gamers are supposed to do now:
1. Buy graphics cards from Nvidia, no matter the price.
2. Stop gaming.
Or better yet we should just lay down and die and let the trolls rule the World.
LG25 - Friday, July 14, 2017 - link
Gee. Yet another NVidia sheep. Baaa-a-a-a
waltsmith - Thursday, January 5, 2017 - link
So, let's talk pricing already!! lol
jjj - Thursday, January 5, 2017 - link
599$ for Titan X Pascal (or better) perf?
nathanddrews - Thursday, January 5, 2017 - link
If they have TitanXP performance, it's going to be more expensive than the 1080. You know NVIDIA is just waiting for a chance to release a $699 or $799 1080Ti, so whatever AMD brings out, you can bet there's gonna be a clash of titans. Er, titans and stars, that is.
jjj - Thursday, January 5, 2017 - link
599$ might be too aggressive but it depends on where the die size lands and what Nvidia does.
Vega has almost 15% more FLOPS than Titan XP, so it remains to be seen how well they utilize that computing power and how much silicon it takes.
The likely 8GB of HBM does help on the cost side.
It also depends on what other SKUs they got and when. A high price limits volumes but if they also have lesser SKUs at launch, they can afford to price the best SKU higher.
haukionkannel - Thursday, January 5, 2017 - link
True! Nvidia has guaranteed that the fastest Vegas can be sold at over 1000$... Let's hope that the cheaper options are near 500-600$.
The interesting part is when AMD will use these next-generation GPU units in their mid- and low-range products. Maybe next summer or next autumn? Then we will get interesting devices in the 150-350$ slots! Most probably with GDDR5 in the low end and maybe GDDR5+ in the high mid-range GPUs.
jjj - Thursday, January 5, 2017 - link
Why not get a 350$ card at launch? They have nothing above Polaris 10 and the 350 price band is important.
eachus - Saturday, January 14, 2017 - link
There is a new Polaris chip (Polaris 12) in the works. It may be intended only for APUs, where it would be mounted on an interposer with a Ryzen chip. It is not clear what AMD is going to do in the gap between RX 480 and Vega 10. Vega 11 is expected to replace the RX 480.
** Understand what replace means here. It doesn't mean that AMD will stop selling Polaris GPUs. It means that AMD expects Vega 11 to have better price/performance than the RX 480, and that the price range where the RX 480 currently sells will be starved of oxygen. I do expect a dual Polaris 10 card to ship, and there is also an RX 490 design floating around. (It may be a Polaris 10 chip with a higher clock speed, more power of course, and 8 gigs of GDDR5X memory.)
Always remember that marketing gets the last say, not engineering. So only one or none of these products may show up. It is also not clear when Vega 11 will arrive. If it is late this year, or early next year, there will be enough time to market the additional Polaris parts.
Jad77 - Thursday, January 5, 2017 - link
So, by the time this hits the streets, Nvidia will already have another hardware iteration out? It's likely too late, but if you're still holding on, sell your AMD stock.
Darkknight512 - Thursday, January 5, 2017 - link
Likely Nvidia will have an answer; that has always been the case and is barely even worth mentioning anymore, at least until AMD gets the upper hand one of these days. They did it to Intel during the Athlon days. It is very much possible, they have smart engineers, they just don't have enough of them, but that often does not matter if they can work more efficiently. They have one thing going for them, and that is that a team larger by 2x often results in <2x the work done.
One of my former bosses while I worked in the silicon industry said "AMD has good technology, they just have terrible luck being the underdog in both industries they compete in at the same time." I wholly agree; with some luck they actually can come out on top. Nvidia is spending a lot of money diversifying.
MLSCrow - Thursday, January 5, 2017 - link
Honestly, I for one don't think nVidia will have an answer. I feel that their expectations of what AMD could do were so incredibly low that they felt GP100 and all of its derivatives would be enough to lay the smack down on AMD for good. Even with Volta, which seems like it's going to be a slight tweak to Pascal, it seems that Vega might just come out on top, which would make more sense out of AMD's slide of "Poor Volta", which would be a rather idiotic move unless AMD truly had something to be that cocky about.
Yojimbo - Thursday, January 5, 2017 - link
NVIDIA isn't trying to lay the smackdown on AMD for good. NVIDIA has been evolving in a different direction from AMD. AMD, probably because they have been cash-strapped, has not been able to invest the money necessary to become a platform-based company the way NVIDIA has.
Also, Volta will not be a small tweak to Pascal. Pascal was a die shrink and small tweak to Maxwell (from the point of view of the underlying architecture, not the features that it enables). Volta is supposed to have ~1.7 times the performance and efficiency of Pascal on the same process technology. It won't be out until 6 to 9 months after Vega, however. But I'm very leery about taking AMD's promises at face value. Even if Vega is as high performance and efficient as AMD claims, it still uses HBM2, which adds significant cost to the manufacture of the chip. That means they will only be able to put a limited amount of pricing pressure on NVIDIA.
LG25 - Friday, July 14, 2017 - link
There is, unfortunately, another element to this. It's the sheep mentality of team green supporters (hate the word "fanboy"). I watched a pretty good video on YouTube from Adoredtv talking about the past decade of AMD engineering prowess vs NVidia sheep mentality. They have been buying lesser cards for more money for years, and they're feeding a vicious cycle in which they (and us) will be slaves to NVidia's R&D laziness and pricing. All the while, AMD has had little to show for their investment and research. Thanks, NVidia sheep, for buying lesser cards for much more money:
https://www.youtube.com/watch?v=uN7i1bViOkU
noBSplz - Thursday, January 5, 2017 - link
Why would anyone sell their AMD stock? LOL It went from $2 to $11.50. I put in 25k last year when they hit their slump and already made 140k.
close - Thursday, January 5, 2017 - link
Because "market analysts" like that guy think that somehow in every field there's only room for one. And since there's only one CPU manufacturer, one car manufacturer, one phone manufacturer, etc. why not have only one GPU manufacturer, right? Oh wait...
Threska - Friday, January 6, 2017 - link
One government. :-D
LG25 - Friday, July 14, 2017 - link
Yes, usually right before they buy as much stock in the one they think should win, right before trashing the other.
euler007 - Thursday, January 5, 2017 - link
Might want to realize some of your profit, and have some stops in place. Unless you're banking on 500% a year for several years...
michael2k - Thursday, January 5, 2017 - link
Um, you sell high, buy low. That's how you make money.
So $2 to $11.50 is the perfect reason to sell.
Then the question is, will it go higher or will it drop again? If it will drop then you wait until then to buy again and you'll have made $140k. If it goes higher then, well, you'll have still made $140k.
eachus - Saturday, January 14, 2017 - link
Selling (in the money) calls a year out may be the best strategy. You guarantee the money you get from selling the calls, even if the stock drops to nothing. If the stock goes up and the calls are exercised? You made $140k plus the income from selling the calls. Best situation for you with this strategy may be if the stock drops to where the calls don't get exercised, then bounces back. ;-)
negusp - Thursday, January 5, 2017 - link
I'm holding out for Ryzen and Vega, ye with little faith.
So what if nVidia has another hardware iteration? This could be better.
eachus - Saturday, January 14, 2017 - link
Don't forget that Vega plus Ryzen will significantly outperform Vega plus Intel. (Virtual memory will be usable between CPU and GPU.) Since AMD uses MOESI vs. Intel's MESIF for cache coherency, I suspect that even if nVidia wants to share memory with an Intel CPU, it will take a special card to do so. Of course, nVidia might go ahead and support AMD's virtual memory scheme, but I doubt it.
Flunk - Thursday, January 5, 2017 - link
Probably not, but the rumors have it that there is a card between the 1080 and Titan X (most assume it will be called 1080 Ti) waiting in the wings.
MajGenRelativity - Thursday, January 5, 2017 - link
There probably is a 1080Ti waiting around. I thought for sure it was going to hit yesterday at CES, but we'll have to wait a while for it I guess :(
Gastec - Friday, January 13, 2017 - link
It would be so beautiful for the 1080Ti to be released after AMD's Vega and to underperform. We desperately need a breath of fresh air and a break from the trap we're in right now.
tipoo - Thursday, January 5, 2017 - link
Volta won't be out for a while, so this will compete with minor Pascal rebrands.
canesin - Thursday, January 5, 2017 - link
Remember the good old days when we had a countdown to a reveal with an actual launch date and price?
FalcomPSX - Thursday, January 5, 2017 - link
Now we get a countdown to the date/time of the reveal of the launch date and time. What we need now is a countdown to the reveal of the countdown to the reveal of the launch date... or something.
Gastec - Friday, January 13, 2017 - link
And while you wait, "consume" some more $1000 cards :)
Michael Bay - Thursday, January 5, 2017 - link
>Instinctive Computing Era
Somebody`s marketing shoah is long overdue.
Shadowmaster625 - Thursday, January 5, 2017 - link
It is kind of sad that the tile-based rasterization obviously caught AMD completely by surprise. These guys are chumps. It took a 3rd party to even clue them in on how to capture huge efficiency gains. At any rate, congrats AMD for catching up to the two-year-old Nvidia designs!
rpmrg - Thursday, January 5, 2017 - link
You really must live in a green glass castle.
Michael Bay - Thursday, January 5, 2017 - link
It`s awesome when your GPU is not burning the whole computer up, true.
TheinsanegamerN - Thursday, January 5, 2017 - link
Hence why you do not buy GTX 480s.
lobz - Thursday, January 5, 2017 - link
u got burned by your fermis, didn't you? =}
LordanSS - Saturday, January 7, 2017 - link
My old FX 5800 was the loudest, hottest card I've ever owned. I didn't own a Fermi but I've read those weren't very nice either.
silverblue - Thursday, January 5, 2017 - link
I'm not sure I follow. Are you saying that David Kanter's discovery forced AMD to bake TBR into Vega?
wumpus - Thursday, January 5, 2017 - link
Looks that way, but the timeline is impossible. What drove both Nvidia and AMD to tiling was that HBM memory wasn't working and they needed serious bandwidth to the framebuffer. Tiling is an obvious way around that issue.
AMD just found out that HBM wasn't going to work a generation late (because the Fury X never took off, and they will probably *still* need to tile if they use HBM2).
xenol - Thursday, January 5, 2017 - link
And congrats to NVIDIA for finally using a 12-year-old design.
Tiled rendering is ancient in computing terms. It's at least as old as the Dreamcast, which had that PowerVR GPU that did tiled rendering.
BrokenCrayons - Thursday, January 5, 2017 - link
Pre-Dreamcast, actually. PowerVR's second-generation video processor was in the Dreamcast, but the first generation was released to market on a 32-bit PCI card, and chips were out on the market in 1996, making the concept of tiling about 20 years old.
TesseractOrion - Thursday, January 5, 2017 - link
I had a first-generation PowerVR add-in card (supporting a generic 2D card) and later, a PowerVR Kyro (I think it was called). Was it Imagination Technologies? Can't be bothered to look it up LOL
BrokenCrayons - Thursday, January 5, 2017 - link
Yup, the first PowerVR card didn't work as a standalone graphics adapter. I had one too, from Matrox, IIRC. It worked pretty well in original Unreal (non-Tournament version). I had a lot of fun on Deck 16 with that thing until I replaced it and the S3 ViRGE DX with a Diamond Viper V550 and a Voodoo 2. It was good for 640x480.
You're right that their later iterations were sold under the Kyro branding. I sold one of them from my computer shop inside of a custom-built desktop... well actually, it was a Kyro II, not the original. The techs tinkered with it a little during build, but I didn't get a chance to mess with it. The other partner and I were pulling several miles of wire in one of our business clients' new offices, so it was out the door before I got more than a glance in passing at what it could do. It was fairly competitive with a GeForce 2 if you put enough CPU power behind it (I think it lacked hardware TnL so it needed the processor power).
silverblue - Thursday, January 5, 2017 - link
Formerly Videologic, changed in 1999 to Imagination Technologies.
I had a Kyro II. The first Kyro was clocked at a mere 115MHz for both core and memory, but the Kyro II was a whole 60MHz faster. It really worked well in UT when paired with my XP 1700+. :) There was a Kyro II SE with 200MHz clocks, but it was as rare as rocking horse excrement. I was also very excited about its supposed TnL, DDR-powered successor, but it never launched, which meant bye-bye on the desktop for PowerVR, rather than the bloody nose that ATi and NVIDIA deserved at that time.
wumpus - Friday, January 6, 2017 - link
If my poor memory is remotely accurate (don't bet on it), the chip to run Unreal Tournament was the S3 chip (probably the last they made). Not sure if the "TnL" it supposedly had was fraudulent or defective, but the texture compression (probably only used in Unreal Tournament before S3 died and it became a standard feature) gave the S3 the best visuals in the game.
I know I had one of those boards (simply because it was cheap, long after its day) but can't remember how it performed in Unreal Tournament. (I may not have bothered putting Windows on that box. In those days the best way to connect Windows to the internet was to connect a Linux machine to a DSL modem, then connect Windows to Linux. Connecting Windows to DSL was asking for it to blindly reset all your configuration files at random intervals.)
BrokenCrayons - Friday, January 6, 2017 - link
You're probably thinking about S3's Savage video card series with the S3 MeTaL API. It's been ages, but I'm pretty sure UT supported MeTaL out of the box. Epic was pretty good about that back when there was a plethora of competing APIs, before we settled into DirectX and OpenGL. When I tried to diversify my PC shop by adding an after-hours LAN arcade, we used those graphics cards because they were good for the multiplayer stuff we wanted to run. I do recall they were a bit glitchy under Descent Freespace. There were artifacts around ship engine glow effects and I think it was due to poor DirectX support, but don't quote me on that, it's been years.
silverblue - Friday, January 6, 2017 - link
Yeah, MeTaL was there at retail, as long as you installed the texture pack from the second disc. I had a Savage 4 (Diamond S540) which worked flawlessly... until I flashed the BIOS one day; thereafter it would hang during every session at a random point, thus requiring a reboot. That was a stupid idea, especially considering it looked excellent. Luckily, somebody got the OpenGL mode to work with those textures.
I believe the T&L on the Savage 2000 was unfit for purpose; it was broken in hardware so you couldn't coax its true performance out.
silverblue - Friday, January 6, 2017 - link
Sorry, to clarify, you could use MeTaL without the texture pack, but what would be the point? :)
Threska - Friday, January 6, 2017 - link
And for the time it was a very nice card. Shame the Kyro didn't take off:
https://en.wikipedia.org/wiki/PowerVR
eldakka - Friday, January 6, 2017 - link
I always liked the infinite planes concept (no polygons!) of the PowerVR 1 series. I remember being disappointed at the time, and still am, that infinite planes never caught on. The entire paradigm gave many operations 'for free' that today require a lot of hardware and software support to implement.
Threska - Saturday, January 7, 2017 - link
Could have been a patent issue.
tarqsharq - Friday, January 6, 2017 - link
I remember excitedly reading about TBR in an article about the Kyro II on this very website oh so many years ago, like you're saying: http://www.anandtech.com/show/735
Funnily enough, it didn't do well in the market because the APIs at the time weren't very flexible and required a lot of tweaking to get it working in each engine, pretty much.
In the games where it did work well, the card punched WAY above its cost bracket, if I remember correctly.
Jtaylor1986 - Thursday, January 5, 2017 - link
This design has probably been in the works for 3+ years. Nvidia just beat them to market; it had nothing to do with David Kanter.
AndrewJacksonZA - Thursday, January 5, 2017 - link
I'm kinda glad that I've procrastinated my RX480 purchase so much. Vega interests me greatly...
AndrewJacksonZA - Thursday, January 5, 2017 - link
(Darn it! Where's the edit button guys?)
Bring on the release date and the pricing!!! :-)
jjj - Thursday, January 5, 2017 - link
So far it looks likely that they have implemented that famous patent everybody hoped for, one way or another.
Seems that higher efficiency was a focus area everywhere, so they might catch up, or even do better?
Hard to figure out where Vega is from a perf density point of view, that matters too.
Any clue if it's on 16FF or they stick with 14nm?
If this is what it looks like, it's gonna be huge step forward and that includes APUs. Can you imagine how those APUs would perform vs the competition?
SunnyNW - Thursday, January 5, 2017 - link
I read that the die size is over 500mm2
SunnyNW - Thursday, January 5, 2017 - link
Meant to add... So not sure if TSMC or GloFo, but I guess it could be either since GloFo has been running their 14nm for quite a while now. I would put my money on TSMC tho, because GloFo's history makes me second-guess them.
jjj - Thursday, January 5, 2017 - link
There are perf differences between the 2 foundries; due to HBM I'm expecting TSMC, but we'll see.
By looking at the pic published here I would say 480mm2, but the pic is poor so the guess is likely unreliable. In any case, I am expecting it in the 400-500mm2 range.
Kepe - Thursday, January 5, 2017 - link
There's a better picture in the gallery here: https://www.io-tech.fi/uutinen/amd-esitteli-seuraa...
Scroll down to the image gallery, it's the first picture.
jjj - Thursday, January 5, 2017 - link
TR has a much better one, but I didn't have time to try to measure it based on the HBM2 package size: http://techreport.com/r.x/amdvegapreview/vegachip....
extide - Thursday, January 5, 2017 - link
HBM2 die is about ~8x12mm. So, the long (12mm) sides of the HBM die are facing the GPU die, and 2 of them are a bit shorter than the GPU die. I am going to guess that the GPU die is about 25mm wide. It looks like it is a bit shorter than it is wide -- so say 20mm. 25x20mm makes a 500mm2 chip; that seems about right. Smaller die than Fiji, much smaller interposer because of only 2 stacks of HBM; this thing should end up costing a lot less than Fiji to produce once the volume gets ramped, and hopefully it should smoke it performance-wise as well.
extide - Thursday, January 5, 2017 - link
^^ Based on this pic http://techreport.com/r.x/amdvegapreview/vegachip.... which is the most square-on and clear pic I can find right now. I was initially thinking it was 400mm^2 and change, but now I am thinking it is closer to 500mm^2 and change.
jjj - Thursday, January 5, 2017 - link
I get to some 500mm2 too.
As for cost, Fiji was on 28nm, much cheaper on an area basis. They will get better yield for packaging, but the overall costs will be much higher than with Fiji. They could have a SKU with 25% of the CUs disabled at 499$ and the full Vega 10 at 699$ to... the upper limit depends on where perf lands vs Titan X.
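The scale-bar method extide and jjj are using (measuring the die against the known ~8x12mm HBM2 stack footprint in the package photo) can be sketched in a few lines. The pixel counts here are hypothetical placeholders for measurements taken off a photo, not real values:

```python
# Back-of-the-envelope die-size estimate using a known reference object in the
# same package photo as a scale bar. The HBM2 stack's long edge (~12mm) is the
# ruler; the pixel counts below are made-up placeholders, not real measurements.

HBM2_LONG_EDGE_MM = 12.0  # approximate long edge of an HBM2 stack

def estimate_die_mm2(hbm_long_edge_px, die_width_px, die_height_px):
    """Convert pixel measurements to mm^2 using the HBM2 stack as a scale bar."""
    mm_per_px = HBM2_LONG_EDGE_MM / hbm_long_edge_px
    return (die_width_px * mm_per_px) * (die_height_px * mm_per_px)

# Hypothetical photo measurements: HBM2 edge spans 120 px, die spans 250 x 200 px,
# i.e. 25mm x 20mm -> ~500 mm^2, in line with the guesses in the thread.
print(round(estimate_die_mm2(120, 250, 200), 1))  # 500.0
```

The accuracy is only as good as how square-on the photo is, which is why the thread's guesses swing between 400 and 500mm^2.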
Yojimbo - Friday, January 6, 2017 - link
One must also consider the difference in cost of the process that each chip is made with, as well as the price of the larger capacity of HBM2 compared to the smaller amount of HBM1. But a smaller interposer will definitely help.
It'd better smoke Fiji performance-wise. I think it's a foregone conclusion that it will, though. Judging by their released benchmarks I'm guessing it'll be modestly faster than the 1080, on average, but significantly slower than a forthcoming 1080Ti. So it'll probably be priced around the same $650 as the Fury X.
jjj - Friday, January 6, 2017 - link
You are rushing into judging perf in a big way.
AMD is demoing Vega, just showing it up and running, NOT showing perf. Not quite sure why so many don't get that.
You can bet that it is using early software and that it's clocked at 1-1.2GHz only for now. They are not gonna show their hand months before retail availability. They are just showing 4K gaming at 60FPS or better.
You can look at min perf Vega 10 should offer in many ways.
1. It is assumed to have the same number of "cores" as the Fury X but almost 50% higher clocks. Then, some 15% architectural and software gains would put it on par with Titan X. So the question is, if they can do better and by how much, and at what power.
A note here: given the number of cores, the scaling from 28nm to 14/16nm is very poor. They are clearly sacrificing area for huge gains elsewhere; you wouldn't do it otherwise. 16- and 8-bit is one thing, but there must be a lot more.
2. Even the Polaris architecture, scaled to 12.5 TFLOPS and this memory bandwidth, would match the Titan X. So the question is, how much better can they do.
One could argue that AMD is nuts and Vega is worse than Polaris but that's less than reasonable.
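jjj's arithmetic above can be checked with a simple shaders-times-clock model. The inputs are the comment's assumptions (Fury X's 4096 shaders at ~1.05GHz, plus a ~50% clock bump for Vega), not official Vega specifications:

```python
# Peak FP32 throughput: shaders x 2 FLOPs/clock (an FMA counts as 2) x clock.
# Inputs are the comment's assumptions, not official specs.

def tflops(shaders, clock_ghz, flops_per_clock=2):
    """Peak throughput in TFLOPS for a simple shaders-times-clock model."""
    return shaders * flops_per_clock * clock_ghz / 1000.0

fury_x = tflops(4096, 1.05)            # Fury X's well-known ~8.6 TFLOPS
vega_guess = tflops(4096, 1.05 * 1.5)  # ~50% higher clock -> ~12.9 TFLOPS
print(round(fury_x, 1), round(vega_guess, 1))  # 8.6 12.9
```

That lands in the same ballpark as the 12.5 TFLOPS figure and Titan X territory, which is the point jjj is making.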
eachus - Saturday, January 14, 2017 - link
The 12.5 TFLOPS is Vega 10 doing 16-bit floating point, not Polaris. And this, in part, explains why Vega 10 doesn't scale well compared to Fury X. There is a new compute engine with lots of new features. I'm hoping that one of them is a fused multiply-add, but we do know that it can do 16-bit floating point twice as fast as single precision (32-bit).
So remember that, for now, any comparisons you see with Vega will almost certainly use none of the new features. AMD can and should spend most of their driver effort right now on changes that benefit Polaris. Some support for Vega? Sure, but don't expect support for the new features in drivers until just before (or after) Vega ships.
SunnyNW - Thursday, January 5, 2017 - link
With AMD's countdown on the ve.ga site and their marketing, almost everyone thought there would be a live stream incoming, but instead it was a countdown to an NDA lift. Like, really, AMD? I believe AMD made a huge mistake here because there are A LOT of angry people out there who were waiting with much anticipation. I kinda thought something was odd with the time being 6 a.m. PST and had told my friends as much, but still they were convinced too that it was for a live stream.
Why has AMD marketing been SO bad for so long? I do think they have gotten a little better in recent history, but they are way behind Nvidia, for example. It's like AMD has kept the same marketing team for years and years and just does not want to let them go for who knows what reason. In business, when you fail for so long you are usually replaced; has AMD just been replacing one under-performer with another, or just keeping the same people? I just don't get it.
And this is coming from someone who is an AMD supporter and would like to see them succeed.
Michael Bay - Thursday, January 5, 2017 - link
Marketing is where careers go to die. If you can even call those things careers.
What else do you expect?
alphasquadron - Thursday, January 5, 2017 - link
Marketing for large companies is usually one of the higher paid positions. The ability to persuade a mass of people gets you paid well.
eldakka - Friday, January 6, 2017 - link
Like a cult leader?
The_Assimilator - Friday, January 6, 2017 - link
This is exactly my biggest problem with AMD: their marketing is, to be frank, absolute bullshit. It's why I don't trust them to deliver on their promises until I see independent reviewers validate them, while if Intel says their next chip is going to have an extra 10% IPC or NVIDIA says their next GPU is going to be 20% faster, I'm inclined to believe those companies.
AMD marketing needs to understand the difference between hype and lies, and that consistent lies hurt them far more than they help.
Meteor2 - Friday, January 6, 2017 - link
This. I'm tiring of AMD 'teasing' products.
When will it be available? What can it do? How much will it cost? This is what I want to know.
GreenMeters - Thursday, January 5, 2017 - link
Are any of Anandtech's writers still based in North Carolina? I can't read HBM2 without thinking HB2. Stupid legislature.
Manch - Friday, January 6, 2017 - link
Having trouble reading the article in your preferred bathroom?
doggface - Thursday, January 5, 2017 - link
Great article Ryan. Looking forward to the launch.
TheFrisbeeNinja - Thursday, January 5, 2017 - link
Awesome job Ryan; I love the architecture articles, even the teaser ones.
jjj - Thursday, January 5, 2017 - link
Spotted this in the press release:
"Data based on AMD Engineering design of Vega. Radeon R9 Fury X has 4 geometry engines and a peak of 4 polygons per clock. Vega is designed to handle up to 11 polygons per clock with 4 geometry engines. This represents an increase of 2.6x. VG-3"
silverblue - Thursday, January 5, 2017 - link
If I'm correct, Polaris can do 2, the 1070 can do 3 and the 1080 can do 4.
Hairs_ - Thursday, January 5, 2017 - link
While this is mildly interesting, I have to question why anandtech has, what, 8 "live blog" articles today when there's still nothing in Bench for the 470, 460 or 1050 cards. These keynote speeches aren't telling us anything that hasn't already been leaked or in press releases...
BrokenCrayons - Thursday, January 5, 2017 - link
I would like to see reviews for the GPU releases that have slipped through the cracks too. Anandtech has thus far missed the 460, 470, and 1050. Given the amount of time since those graphics cards were released, it's a disappointment and does cause people to grumble about the site's supposedly looming demise or to lash out in CPU articles at Ian over the lateness of GPU publications.
While I personally think live blogs don't have a lot of value, other readers have been quick to point out that they want them and criticize Anandtech for the omission. They're not going away and they do take time to conduct, but that doesn't excuse the months that have passed without reviewing parts of what's arguably the most significant GPU release in recent history, thanks to the move to a smaller manufacturing process.
tynopik - Thursday, January 5, 2017 - link
> the first Vega has tapped out
Vega has already quit before it even got in the octagon?
fanofanand - Thursday, January 5, 2017 - link
IT'S ALL OVER!
tynopik - Thursday, January 5, 2017 - link
> The first chip, which tapped out last year
I think you missed the point of my comment ;-)
should be 'taped' (one p)
wiyosaya - Thursday, January 5, 2017 - link
Mathematically speaking, the "dot product" is not defined by the number of bits in the operation. https://en.wikipedia.org/wiki/Dot_product
Anonymous_87 - Thursday, January 5, 2017 - link
It is. If you consider an int8 as an 8-element vector, you can do a dot product with them.
In a neural network you can assign each element of an int as an input to the network. The math involved in such a network requires repeated dot products of each element. That's what the article refers to.
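The packed operation being described here can be emulated in a few lines. This is a rough sketch of a DP4A-style dot-product-accumulate over packed 8-bit lanes; the `dp4a` helper is a hypothetical emulation for illustration, not a real GPU intrinsic:

```python
# Software emulation of a packed 8-bit dot-product-accumulate: four signed
# 8-bit lanes from each operand are multiplied pairwise and the products are
# summed into a wider accumulator. Hardware does this in one instruction on
# one packed register; here we just model the arithmetic.

def dp4a(a_lanes, b_lanes, acc=0):
    """Dot product of two 4-lane int8 vectors, added to an accumulator."""
    assert len(a_lanes) == len(b_lanes) == 4
    return acc + sum(x * y for x, y in zip(a_lanes, b_lanes))

# A neural-net inner product is just this repeated across the whole vector:
print(dp4a([1, 2, 3, 4], [5, 6, 7, 8]))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```

This is why low-precision rates matter for inference: one 32-bit lane's worth of hardware processes four 8-bit multiply-accumulates per clock.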
bigboxes - Thursday, January 5, 2017 - link
Packed math? Well, something did get launched and something definitely got packed!
Colin1497 - Thursday, January 5, 2017 - link
Last page, tapped=>taped.
mr_tawan - Thursday, January 5, 2017 - link
And I just bought Polaris (lol).

Seriously, I don't know how AMD will market these chips. Are they going up against the Titan? Or the 1080/1070? Or will they also replace the RX 4x0?
mr_tawan - Thursday, January 5, 2017 - link
AMD's GPU naming scheme is so Psycho Crusher. (Well, SF2's boss is officially Vega, not M. Bison, anyway.)
Meteor2 - Saturday, January 7, 2017 - link
They'll be RX 5x0.
Achaios - Thursday, January 5, 2017 - link
ARTICLE SUMMARY:
- Vega to hit the shelves in June 2017.
- Speculation: 1080TI to hit the shelves around late Q3 or Q4 2017, if ever.
Bye guys, cya after 6 months.
MajGenRelativity - Thursday, January 5, 2017 - link
I'm all hyped up for Vega, but could we get a full review of the 480? I always appreciate the in-depth architectural reviews, and that would help me be prepared for Vega when it arrives.
Space Jam - Thursday, January 5, 2017 - link
What's the status of the RX 480/470/460 reviews? A teaser was given, but nothing more. And the GTX 1050 (Ti) got an announcement thread which noted the fab differences but has yet to see a review.
MajGenRelativity - Thursday, January 5, 2017 - link
I'm also interested in knowing the answer to these questions :)
Notmyusualid - Thursday, January 5, 2017 - link
I'm growing tired of the details. Show me the basic power requirements, price, and some well-known benchmarks, then I'll make my purchasing decision.
I guess I've had a long day.
Jtaylor1986 - Thursday, January 5, 2017 - link
You get tired pretty quickly then, since these are essentially the first details that have been officially released other than the vague roadmap.
webdoctors - Thursday, January 5, 2017 - link
I agree, an announcement that a product 6 months out will be faster than today's isn't that exciting. If it also beamed light into my eyes directly or purified the air with the cooling fan, that would be a neat twist, but right now it's a pretty bland announcement.
tuxRoller - Thursday, January 5, 2017 - link
I'm not sure we can say that the ISA has changed, since it has long supported i8.
http://gpuopen.com/compute-product/amd-gcn3-isa-ar...
extide - Friday, January 6, 2017 - link
Yeah, why reinvent the wheel? I bet it has additional instructions, but is largely extended rather than totally new.
tuxRoller - Friday, January 6, 2017 - link
Yup. GCN hasn't been given enough credit for how forward-looking it's been; instead all people talk about is how old it is.
https://gcc.gnu.org/wiki/cauldron2016?action=
AttachFile&do=get&target=Porting+GCC+to+GCN.pdf
So this fellow has been working on adding a GCN target to GCC, and you can see from his presentation how general the ISA is.
tuxRoller - Friday, January 6, 2017 - link
Sorry about that link. I had to split it over two lines in order to get past the AT AI's spam protection :)
Threska - Friday, January 6, 2017 - link
TinyURL:
http://preview.tinyurl.com/gtjlxrn
eldakka - Friday, January 6, 2017 - link
Don't use URL obfuscation services. They are a major security risk, as they can hide malware attack vectors: you don't know the true final destination of the URL. For all I know that tinyurl could be going to a malicious site that tries to load malware into your browser.

Never click on an obfuscated URL; that's as basic as never opening an email attachment from an unknown source.
Threska - Saturday, January 7, 2017 - link
That's why it has a preview function built in, where you can see where it goes first before continuing on.
Threska - Saturday, January 7, 2017 - link
I should add that the only practical resolution to the twin problems (too-long URLs and the issues they cause, and who do you trust) is AnandTech running their own URL-shortening service.
tuxRoller - Saturday, January 7, 2017 - link
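For what it's worth, you can check where a shortened URL points without ever visiting the destination: request it and refuse to follow the redirect, then read the Location header. A rough Python sketch using only the standard library (the function name is my own, and the URL you pass in would be whatever shortened link you're suspicious of):

```python
import urllib.error
import urllib.request


class _NoRedirect(urllib.request.HTTPRedirectHandler):
    """Redirect handler that refuses to follow any redirect."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # Returning None means "unhandled", so urllib raises HTTPError
        # carrying the redirect response instead of following it.
        return None


def redirect_target(url):
    """Return the Location header of the first response, or None if the
    URL serves content directly (i.e. no redirect happened)."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        opener.open(url, timeout=10)
        return None
    except urllib.error.HTTPError as e:
        return e.headers.get("Location")
```

This is essentially what TinyURL's preview mode does for you, just done locally so you never touch the obfuscated destination at all.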
Ha! I ALWAYS forget to use those services :)
versesuvius - Friday, January 6, 2017 - link
It has become sickening to listen to AMD years in advance about what they are going to do for the next eon, which is nothing but talk anyway. Wise to stop reading about anything GPU until Intel buys NVIDIA out and gets it all sorted out and done with.
lobz - Friday, January 6, 2017 - link
alright then... just let us know when you're sober again
Meteor2 - Saturday, January 7, 2017 - link
It's a fair point, and I would not be at all surprised if Intel used its cash to buy Nvidia. Intel is clearly struggling to find the Next Big Thing (5G? Really?); Nvidia is not, with IP ideal for making AI happen.
Threska - Sunday, January 8, 2017 - link
Cash isn't the only thing needed for a buyout. Nvidia has to be willing to be bought. An IP exchange would be easier for both companies.
Gastec - Friday, January 13, 2017 - link
There are these little nuisances called antitrust laws, but with Donald Trump as President of the USA anything is possible :)
Alexvrb - Friday, January 6, 2017 - link
The "High Bandwidth Cache" could also be a hint of what's to come with future HBM2-capable APUs. Intel has a cache up to what, 128MB of eDRAM? I think even a single HBM2 stack could provide a huge boost to an APU.
zodiacfml - Saturday, January 7, 2017 - link
Not impressed. Vega will just compete with Nvidia's current high-end.
Alexvrb - Saturday, January 7, 2017 - link
Zodiac: "I don't have any solid data, no benchmarks, and no pricing information... but it says AMD so I'm not allowed to express anything but disdain."
andrewaggb - Monday, January 9, 2017 - link
Well, the only reference we have gaming-wise is that it's between 1080 and Titan levels in Doom at 4K. So like a 1080 Ti. It's not particularly exciting. I want to find out it's actually way faster than a 1080 in many games or other apps, but I think zodiac is right to be skeptical. Doom runs well on AMD cards, so it's possible Vega might only be at 1080 levels overall, and a 1080 Ti would be faster and an easy way for Nvidia to counter Vega.

I feel the same way about Zen. The benchmarks we've seen or heard so far make it sound fine, but at best as fast as Intel. That's concerning if the cherry-picked examples are only that good; in my mind it suggests it could be 20% slower overall.
Ro_Ja - Sunday, January 8, 2017 - link
Michael Bay was put by AnandTech to troll
oranos - Sunday, January 8, 2017 - link
these graphs mean absolutely nothing without a comparison to the competition
supdawgwtfd - Monday, January 9, 2017 - link
What's the bet this is all the info AnandTech will post? A teaser, then never following up with a review, like so many other things they never get back to...
milkod2001 - Tuesday, January 10, 2017 - link
AnandTech has turned from "The Most Trusted in Tech Since 1997" into very late tech previews, plus this pure Purch BS from the web...
Gunbuster - Tuesday, January 10, 2017 - link
Support for HDMI 2.1, or will it be a repeat of the "screw off and wait for a 3rd-party dongle 11 months in the making"?
watzupken - Wednesday, January 11, 2017 - link
The fact that AMD is very late into the high end of graphics with 14nm means the expectations for them will be higher. I am not sure Vega will meet those expectations at this point in time. With the limited information available, it again appears to perform well only in very selective DX12 titles against the likes of a GTX 1080, which does not bode well to me. Moreover, the power requirement for Vega 10 seems to be higher than that of the GTX 1080 they are comparing it with.
tamalero - Wednesday, January 11, 2017 - link
I actually wonder if Vega will be uncut in terms of compute performance, making it way more attractive for server accelerator cards while still being "decent" in gaming performance.
Thus making it an all-around performer.
Vodietsche - Wednesday, January 11, 2017 - link
I really hope Vega performs up to par; we need some high-end competition!
Chaser - Tuesday, January 17, 2017 - link
/golfclap
AntDX316 - Wednesday, January 18, 2017 - link
The whole point AMD is still in business is to bring consoles to people who can't afford top-of-the-line components and/or don't have the knowledge to build a PC. As you know, Apple is nowhere near the quality of AAA DirectX games. You cannot get good photorealistic graphics and physics simulation from Intel alone, and Nvidia doesn't make CPUs. AMD is the only company to do both, in the Xbox One and PS4. The better they get, the better the consoles will be.
NerroEx - Monday, February 6, 2017 - link
Did everybody completely ignore how they said consoles? New console confirmed.