Comments Locked

574 Comments

  • Crono - Thursday, March 2, 2017 - link

    A Hero Has Ryzen
  • Sweeprshill - Thursday, March 2, 2017 - link

    Lived up to the hype. Ryzen is a beast. Intel needs massive price cuts on their 2011-v3 chips. Well done AMD: the best price/performance CPUs on the market, and as fast as or faster than Intel.
  • sans - Thursday, March 2, 2017 - link

    Hey, whatever features you've found improving AMD's crap have been in Intel's products for years.
  • Nem35 - Thursday, March 2, 2017 - link

    Yeah, and it's beating Intel. Funny, right?
  • Sweeprshill - Thursday, March 2, 2017 - link

    Yeah these new AMD chips are monsters. Wondering how large the price cuts are that Intel will bring to their 2011-v3 chips to compete.
  • czerro - Friday, March 3, 2017 - link

    Intel already slashed prices pretty drastically 4 days ago, to kinda deflate Ryzen's release. Before the price cuts, Ryzen had a huge price and performance advantage on all metrics, and Intel would have looked ridiculous.

    I can't believe people aren't reporting more on the price-cutting right before Ryzen's release. Intel only did it to save face on graphs and confuse people. Ryzen definitely had Intel by the balls a week ago, before the price cuts.

    It's great that we all have options now, but this smeared Ryzen's release in a cheap way; anybody can point out that all those Intel chips were 100-200 dollars more expensive less than a WEEK ago.
  • SodaAnt - Saturday, March 4, 2017 - link

    No, Intel hasn't slashed prices. There was a sale at microcenter a few days back, but there's no across the board official price cut on Intel chips.
  • Notmyusualid - Monday, March 6, 2017 - link

    @ SodaAnt

    Agreed, I see no Intel price drops either.
  • Notmyusualid - Friday, March 3, 2017 - link

    @ Nem35

    Incomplete review.

    After seeing a gaming-focused review, I'd say the AMD procs are just OK. I'm glad AMD is back with a fighting chance, but about half my purchase decision will be game-driven.

    Quote:

    "For gaming, it’s a hard pass. We absolutely do not recommend the 1800X for gaming-focused users or builds, given i5-level performance at two times the price."

    I'm not a 'fanboi'; I'd have no trouble fitting a 1700X in a build I wouldn't game on. But otherwise, like another reviewer said, it's a hard pass.
  • Alexvrb - Saturday, March 4, 2017 - link

    For gaming builds the upcoming Ryzen 5 and 3 series will offer a lot more bang for your buck and will compete much more aggressively. However, the Ryzen 7 still offers decent gaming performance and excellent performance everywhere else. The gobs of cores may come in handy in the future too, even in games - as more threads will be available on more rigs, devs will take notice. This year AMD is definitely lowering the pricing for 8-16 thread processors, clearing a path for the future of gaming.

    With that being said I still think that when strictly considering gaming, their Ryzen 3/5 quadcore models will be a far better value, especially as current-gen games aren't often built in such a way that they can take advantage of the Ryzen 7.
  • Notmyusualid - Saturday, March 4, 2017 - link

    Can't disagree with you, pal. They look like exceptional value for money.

    I, on the other hand, am already on the LGA2011-v3 platform, so I won't be changing. But the main point here is: AMD are back. And we welcome them too.
  • Alexvrb - Saturday, March 4, 2017 - link

    Yeah... if the pricing is as good as rumored for the Ryzen 5, I may pick up a quad-core model. Gives me an upgrade path too, maybe a Ryzen+ hexa or octa-core down the road. For budget builds that Ryzen 3 non-SMT quad-core is going to be hard to argue with though.
  • wut - Sunday, March 5, 2017 - link

    You're really optimistically assuming things.

    Kaby Lake Core i5 7400 $170
    Ryzen 5 1600X $259

    ...and single-thread benchmarks show the Core i5 to be firmly ahead, just as the Core i7 is. The story doesn't seem to change much in the mid-range.
  • Meteor2 - Tuesday, March 7, 2017 - link

    @wut spot-on. It also seems that Zen on GloFo 14 nm doesn't clock higher than 4.0 GHz. Zen has lower IPC and lower actual clocks than Intel KBL.

    Whichever way you cut it, however many cores in a chip are being considered, in terms of performance Intel leads. Intel's pricing on >4-core parts is stupid, and AMD gives them worthy price competition there. But at 4C and below, Intel still leads, and AMD isn't price-competitive there either. No wonder Intel hasn't responded to Zen. A small clock bump with Coffee Lake and a slow move to 10 nm starting with Cannon Lake for mobile CPUs (alongside or behind the introduction of 10 nm 'datacentre' chips) is all they need to do over the next year.

    After all, if Intel used the same logic as TSMC and GloFo in naming their process nodes, i.e. quoting the equivalent nanometre number as if FinFETs weren't being used, Intel would say they're on a 10 nm process. They have a clear lead over GloFo and thus over anything AMD can do.
  • Cooe - Sunday, February 28, 2021 - link

    I'm here from the future to tell you that you were wrong about literally everything though. AMD is kicking Intel's ass up and down the block with no end in sight.
  • Cooe - Sunday, February 28, 2021 - link

    Hahahaha. I really fucking hope nobody actually took your "buying advice". The 6-core/12-thread Ryzen 5 1600 was about as fast at 1080p gaming as the 4c/4t i5-7400 ON RELEASE in 2017, and nowadays with modern games/engines it's like TWICE AS FAST.
  • deltaFx2 - Saturday, March 4, 2017 - link

    I think the reviewer you're quoting is Gamers Nexus. He doesn't come across as a particularly erudite person on matters of computer architecture. He throws a bunch of tests at it, and then spews a few untutored opinions, which may or may not be true. Tom's Hardware does a lot of the same things, and more, and their opinions are far more nuanced. Although they too could have tried an AMD graphics card to see if the problems persist there as well; perhaps time was the constraint.

    There's the other question of whether running the most expensive GPU at 1080p is representative of real-world performance. Gaming, after all, is visual and largely subjective. Will you notice a drop of (say) 10 FPS at 150 FPS? How do you measure goodness of output? Let's contrive something.
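
    To put rough numbers on that question (an illustrative back-of-the-envelope, not something from any review): frame time is the reciprocal of frame rate, so the same 10 FPS drop shrinks in significance as the baseline rises.

        # Frame-time cost of a 10 FPS drop at two different baselines.
        for fps in (150, 60):
            before_ms = 1000 / fps          # ms per frame at the baseline
            after_ms = 1000 / (fps - 10)    # ms per frame after the drop
            print(f"{fps} -> {fps - 10} FPS: +{after_ms - before_ms:.2f} ms per frame")

    At 150 FPS the drop costs about half a millisecond per frame; at 60 FPS it costs about seven times that.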

    All CPUs have bottlenecks, including Intel's. The cases where AMD does better than Intel are where AMD doesn't have the bottlenecks Intel has, but nobody had noticed before because there wasn't anything else to stack up against it. The question that needs to be answered in the following weeks and months is: are AMD's bottlenecks fixable with (say) a compiler tweak or library change? I'd expect much of it is, but let's see. There was a comment on some forum (can't remember) that said that back when the Athlon 64 (K8) came out, the gaming community was certain it was terrible for gaming and Netburst was the way to go. That opinion changed pretty quickly.
  • Notmyusualid - Saturday, March 4, 2017 - link

    Gamers Nexus seems 'OK' to me. I don't know the site like I do AnandTech, but since Anand left out the games....

    I am forced to form my opinions elsewhere. And funny you mention Tom's; they seem to back it up to some degree too, and I know those two sites are cross-owned.

    But still, when Anand gets around to benching games with Ryzen, only then will I draw my final conclusions.
  • deltaFx2 - Sunday, March 5, 2017 - link

    @ Notmyusualid: I'm sure Gamers Nexus' numbers are reasonable. I think they and Tom's (and other reviewers) see a valid bottleneck that I can only guess is software-optimization related. The issue with GN was the bizarre and uninformed editorializing. Comments like: the workloads AMD does well at are not important because they can be accelerated on a GPU (not true, but if it were, why on earth did GN use them in the first place?). There are other cases where he drops i5s from evaluation for "methodological reasons" but then says R7 == i5. Even based on the tests he ran, this is not true. Anyway, the reddit link goes over this in far more detail than I could (or would).
  • Meteor2 - Tuesday, March 7, 2017 - link

    @DeltaFX2 in what way was GamersNexus' conclusion (that tasks which can be pushed to GPUs should be) incorrect? Are you saying Premiere and Blender can't be used on GPUs?

    GN's conclusion was:

    "If you’re doing something truly software accelerated and cannot push to the GPU, then AMD is better at the price versus its Intel competition. AMD has done well with its 1800X strictly in this regard. You’ll just have to determine if you ever use software rendering, considering the workhorse that a modern GPU is when OpenCL/CUDA are present. If you know specific in stances where CPU acceleration is beneficial to your workflow or pipeline, consider the 1800X."

    I think that's very fair and a very good summary of Ryzen.
  • deltaFx2 - Wednesday, March 8, 2017 - link

    @Meteor2: No. Consumer GPUs have poor throughput for double-precision FP, so you can't push those workloads to the GPU (unless you own one of those super-expensive Nvidia compute cards). Many rendering/video-editing programs apparently use GPUs for preview but do the final render on the CPU, for quality reasons that might be related to DP FP. I'm not the expert, so if you know otherwise, I'd be happy to be corrected and educated. Also, you could make the same argument about AVX-256.

    The quoted paragraph is probably the only balanced statement in that entire review. Compare the tone of that review with the AT review above.

    On an unrelated note, there's the larger question of running games at low res on top-end GPUs and comparing frame rates that far exceed human perception. I know, they have to do something, so why not just do this. The rationale is: "in future, a faster GPU will create a bottleneck". If this is true, it should be easy to demonstrate, right? Just dig through a history of Intel desktop CPUs paired with increasingly powerful GPUs and see how it trends. Not one reviewer has proven that this is true; it's being taken as gospel. OTOH, plenty of folks seem happy with their Sandy Bridge + Nvidia 1080, so clearly the bottleneck isn't here 5 years after SB. Maybe, just maybe, it's because the differences are imperceptible?

    Ryzen clearly has some bottlenecks but the whole gaming thing is a tempest in a tea-cup.
  • theuglyman0war - Thursday, March 9, 2017 - link

    ZBRUSH

    Probably 90% of all 3D assets that are created from concept (NOT scanned) went through ZBrush at some point.

    Which means no GPU acceleration at all.
    Renderman
    Maxwell
    Vray
    Arnold
    ...still all use CPU rendering, as do a mountain of other renderers.
    Arnold will be getting a GPU option, but the two popular GPU renderers are Otoy Octane and Redshift...
    They have their excellent, expensive place. But the majority of rendering out there is still suffered through in software, and that will always be a valid concern as long as CPU renderers come FREE, built into the major DCC applications.
  • theuglyman0war - Thursday, March 9, 2017 - link

    Saw that same "GPU trumps CPU rendering" validity concern in a comment and had a good laugh.
    I'll remember to spread that around every time I see Renderman, Vray, Arnold, or Maxwell rendering going on sans GPU.
    Or the next time a Mercury engine update negates all non-Quadro GPU acceleration.

    To be fair, a lot of creative pros and tech artists seem to disagree with me, but...
    The only time between pulling verts in Maya and brushing a surface in ZBrush that I really feel I am suffering buckets of tears and desire a new CPU (still on an i7-980X) is when I am cussing out a progress bar teasing me with its slow progress. And that means CORES! Encoding... decompressing... rendering! Otherwise I could probably not notice day to day on a ten-year-old CPU (excluding CPU-bound gaming of course... talking about day-to-day vert pulling).
    I was just as productive in 2007 as I am today.
  • MaidoMaido - Saturday, March 4, 2017 - link

    Been trying to find a review including practical benchmarks for common video editing / motion graphics applications like After Effects, Resolve, Fusion, Premiere, Element 3D.

    In a lot of these tasks the multithreading support is not always the best; as a result, the quad-core 6700K often outperforms the more expensive Xeons and the 5960X, etc.
  • deltaFx2 - Saturday, March 4, 2017 - link

    I would recommend this response to the GamersNexus hit piece: https://www.reddit.com/r/Amd/comments/5xgonu/analy...

    The i5-level performance claim is a lie.
  • Notmyusualid - Saturday, March 4, 2017 - link

    @ deltaFx2

    Sorry, not reading a 4k-word response. I'll wait for AnandTech to finish its Ryzen reviews before I draw any final conclusions.
  • Meteor2 - Tuesday, March 7, 2017 - link

    @deltaFX2 RE: the 4k-word Reddit 'rebuttal': what that person seems to be saying is that once you've converted your $500 Ryzen 1800X into an 8C/8T chip, _then_ it beats a $240 i5, while still falling short of the $330 i7. Out of the box, it has worse gaming performance than either Intel chip.

    That's not exactly a ringing endorsement.

    The analysis in the Anandtech forums, which concludes that in a certain narrow and low power band a heavily down-clocked 1800X happens to get excellent performance/W, isn't exactly thrilling either.
  • deltaFx2 - Wednesday, March 8, 2017 - link

    @ Meteor2: On the Anandtech forum thing: perf/watt matters for servers and laptops. Take a look at the IPC numbers too. His average is that Zen == Broadwell IPC, and ~10% behind Skylake/Kaby Lake (except for AVX-256 workloads). That's not too shabby at all for a $300 part.

    You completely missed the point of the reddit rebuttal. The GN reviewer drops i5s from plenty of tests citing "methodological reasons", but then says R7==i5 in gaming. The argument is that plenty of games use >4 threads and that puts i5 at a disadvantage.
  • tankNZ - Sunday, March 5, 2017 - link

    yes I agree, it's even better than okay for gaming: http://smsh.me/li3a.png
  • deltaFx2 - Monday, March 6, 2017 - link

    You may wish to see this though: https://forums.anandtech.com/threads/ryzen-strictl... Way, way more detailed than any tech media review site can hope to get. No, it's got nothing to do with gaming. Gaming isn't the story here. AMD's current situation in x86 market share has little to do with gaming performance and everything to do with perf/watt.

    I'll quote the author: "850 points in Cinebench 15 at 30W is quite telling. Or not telling, but absolutely massive. Zeppelin can reach absolutely monstrous and unseen levels of efficiency, as long as it operates within its ideal frequency range."
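
    For scale, a rough points-per-watt comparison (illustrative only: the 850 pts at 30 W figure is from the quote above, while the stock figures are my own ballpark assumptions, not the author's):

        # Cinebench R15 points per watt: quoted ideal-range figure vs. assumed stock.
        configs = {
            "Zeppelin in its ideal range (quoted)": (850, 30),
            "stock 1800X (assumed ~1600 pts at ~95 W)": (1600, 95),
        }
        for label, (points, watts) in configs.items():
            print(f"{label}: {points / watts:.1f} pts/W")

    That works out to roughly 28 versus 17 pts/W, which is why the author calls the efficiency "absolutely massive".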
  • nt300 - Saturday, March 11, 2017 - link

    The Ryzen 7 1700 is definitely the gaming choice IMO: the CPU that does well in gaming and amazingly at everything else. Windows 10 hasn't been properly optimized for Zen yet, so any benchmarks, gaming benchmarks included, are not set in stone.
  • A2Ple98 - Monday, May 22, 2017 - link

    Actually Ryzen isn't only for gamers; it's mostly for streamers and professionals. The cores that aren't used for gaming can encode the video you're streaming. As for the pros, they get almost an i7-6900K for half the price.
  • Sweeprshill - Thursday, March 2, 2017 - link

    Doesn't seem to be proper English here?
  • Sweeprshill - Thursday, March 2, 2017 - link

    n/m can't edit comments I suppose
  • nt300 - Saturday, March 11, 2017 - link

    Wrong, ZEN is a new design and quite innovative. Just like in the past, AMD has led this industry for many years, most notably when they launched the Athlon 64 with the IMC, which Intel claimed was useless and a waste of die space. That Athlon 64, at 1000 MHz less clock speed, smoked any Intel chip you put it against.

    My point: ZEN is new, and both ZEN and Intel chips are unique in their own way; they might share some similarities, but nevertheless they are different.
  • nos024 - Thursday, March 2, 2017 - link

    Nope. Ryzen will need to drop in price; the $500 1800X is still too expensive. According to this, even a 7700K at $300-$350 is still a good choice for gamers.

    2011-v3 still offers a platform with more PCIe 3.0 lanes and quad-channel memory. I thought about an 1800X and X370 mobo combo, but that costs about the same as a 6850K with X99.

    Sorry, I'll stick with Intel this time around. Good that Ryzen caused a ripple in the price war, though.
  • Gothmoth - Thursday, March 2, 2017 - link

    gamer... as if the world is full of nothing but idiotic people who waste their lives playing shooters or RPGs.
  • nos024 - Thursday, March 2, 2017 - link

    Ikr? Whatever makes your world go round man.
  • brushrop03 - Thursday, March 2, 2017 - link

    Well played
  • AndrewJacksonZA - Thursday, March 2, 2017 - link

    lol
  • rudolphna - Thursday, March 2, 2017 - link

    Demonizing gamers in your post does nothing for your credibility, and will only turn more well-reasoned people off from listening to, or caring about, your opinion.
  • samer1970 - Friday, March 3, 2017 - link

    Gamers don't buy 8-core chips. If you want a good AMD gaming chip at a very low price, wait for the 6- and 4-core Ryzens and then judge...

    I expect the 4-core/8-thread Ryzen at $150 to blow Intel to pieces... SOON...

    Imagine a 4.5 GHz 4-core AMD Ryzen for $150, then talk.
  • Sttm - Friday, March 3, 2017 - link

    4 cores that are noticeably slower than Intel's 4 cores, which sell in a handsome i5 package for $200. I think they need a software miracle and they need it fast to win over the gaming crowd.
  • Cooe - Sunday, February 28, 2021 - link

    Bet you're feeling like a massive idiot now if you actually got that 4c/4t Kaby Lake i5 over a 6c/12t Ryzen 5 1600. It was about as fast at 1080p gaming in 2017 as the R5, but nowadays isn't even in the same UNIVERSE as the Ryzen chip. Let alone the performance difference for literally EVERYTHING else.
  • Diji1 - Thursday, March 2, 2017 - link

    Hurr durr, you don't like what I like, so you're a dumbo, making me smarter than you! (Yes, I know, but they cannot see it themselves because they're so smart in their own imagination.)
  • JoeyJoJo123 - Thursday, March 2, 2017 - link

    What exactly are you trying to say here?
  • Holliday75 - Thursday, March 2, 2017 - link

    I think it was "Hurr durr".
  • BikeDude - Friday, March 3, 2017 - link

    sounded more like 'hold door' to me?
  • star-affinity - Thursday, March 2, 2017 - link

    I didn't know that what counts as "wasting your life" is objective – please elaborate. What do you do with your life that makes it better than someone who likes to play RPGs?
  • Dug - Friday, March 3, 2017 - link

    I'm so glad you are the one to judge what people are when they play games. Your insight and thought process are inspiring.
    I can only guess that what you do with a computer is going to change the world.
  • BurntMyBacon - Friday, March 3, 2017 - link

    @Gothmoth: "gamer... as if the world is only full with idiotic people who waste their lives playing shooter or RPG´s."

    PC Gaming happens to be one of the few growing areas in the PC market. Not everyone games, but for those that do, the 7700K is still worth considering. Dropping $500 on the 1800X may not be the best call for those that don't take advantage of the parallelism. Of course, the 1800X wasn't really meant for people who can't take advantage of the parallelism. AMD will have lower cost narrower processors to address that gap. I'm curious as to how the performance/price equation will stand once AMD releases their upper end 6c/12t and 4c/8t processors.
  • Beany2013 - Friday, March 3, 2017 - link

    Sod the 1800X - I need a new VM server, and if I want all the threads (sixteen), I can either drop £450 on a Xeon E5 2620 at 2.1-3 GHz (the cheapest Intel 16-thread option I can find), or I can spend £100 less, get a Ryzen 7 1700 (3.0-3.7 GHz), and put that extra money towards more RAM so I can run more VMs and get more work done.

    For those of us who aren't high-end gamers - which is basically almost everyone, and a far more significant market - these chips may well give Intel a bloody nose in the workstation space; AMD have confirmed they'll run ECC RAM quite happily.

    Photographers, videographers, CAD-CAM, developers etc are a bigger market in terms of raw units than high end gamers, and these chips look like being a pretty compelling option as it stands.

    Steven R
  • Beany2013 - Friday, March 3, 2017 - link

    (VM server for home, I should have noted - for work, I'll see how the Ryzen based opterons and supermicro mobos etc pan out - money is important in these factors, but I'm not a moron, and I'm not going to run production gear on gaming hardware, natch....)
  • BurntMyBacon - Friday, March 3, 2017 - link

    @Beany2013: "I need a new VM server, and if I want all the threads (sixteen), I can either drop £450 on a Xeon E5 2620 at 2.1-3ghz (cheapest Intel 16 thread option I can find), or I can spend £100 less, and get a Ryzen 7 1700 (3.0-3.7ghz) and put that extra money towards more RAM so I can run more VMs and get more work done."

    It is clear from this statement that you fall into the category of people who can take advantage of the parallelism. Therefore, my statement doesn't apply to the case you presented in the slightest.

    I don't disagree that the Ryzen 7 series has a lot to offer to a lot of people (myself included). If I were in the market today, I'd be looking long and hard at an R7 1700X. The minor drop in gaming performance is less significant to me than the increase in performance for many other tasks I use my computer for. I do a little bit of dabbling in a lot of different things (most of which benefit from high thread count). I have noticed that for the set of applications I have open simultaneously and the tasks I have running, my computer is more responsive with more cores or threads, but single threaded performance is still important to the individual tasks.
    In my workflow: (i3 < i5/FX-8xxx < i7 <? R7)

    My point was that there is in fact a not so insignificant market of people putting computers together for the primary purpose of gaming. This market appears, by all metrics, to be growing. For this market, Intel's i7-7700K or better yet i5-7600K are still viable options that provide better performance/price than AMD's current options. I'll repeat: "AMD will have lower cost narrower processors to address that gap. I'm curious as to how the performance/price equation will stand once AMD releases their upper end 6c/12t and 4c/8t processors."
  • Cooe - Sunday, February 28, 2021 - link

    "or better yet i5-7600K"
    Arguably the most short-sighted statement in this entire comments section lol. The 4c/4t i5s had roughly equal gaming performance to Ryzen at launch but with ZERO headroom left for the future. This is why the i5-7600K gets absolutely freaking ROFLSTOMPED by the R5 1600 in modern titles/game engines.
  • JMB1897 - Friday, March 3, 2017 - link

    Compelling, but I don't think it's totally there yet. I'd be worried about the memory issues: increased latency as you add more DIMMs, and dual versus quad channel. I'd spend that extra 100 on a Xeon, personally.
  • Sttm - Friday, March 3, 2017 - link

    That's who buys off-the-shelf CPUs that cost $$$: gamers. That's who AMD needs to please with their product. GAMERS. That's why AMD's stock has been tanking since the Ryzen reviews went up, because GAMERS are the demographic that matters when it comes to performance CPU sales.
  • deltaFx2 - Saturday, March 4, 2017 - link

    @Sttm: You have an inflated opinion of the impact of gamers. No, AMD's stock isn't tanking because of gamers. I suggest you also look at Nvidia's stock, which is well down from its high of ~$120 to ~$98. Wednesday through Friday, Nvidia dropped from 105 to 98, and it dipped below that to ~96 at one point. That's roughly 7-8%. The two stocks are often correlated on drops, with AMD amplifying Nvidia's drops. Both do GPUs, see? Some people make tonnes of money shorting AMD (and in recent times have lost their shirts doing so).

    Here's the truth: all desktop, as per Lisa Su, is a $5 bn TAM, and gaming is part of this (let's say 50%). Nothing to scoff at, sure, but compared to laptop and server it's a rounding error. There's NOTHING in these tests/reviews to suggest that AMD will suck in those markets; in fact, quite the opposite: power looks good, perf looks good. AMD's stock (long term) won't tank on the whims of gamers. They help get the mindshare, which is the only reason they're worth catering to (they tend to be a vocal, passionate, and sometimes irrational lot. You won't see datacenter gurus doing the stuff that gamers do. They certainly won't shoot each other over whose GPU is best).
  • cmdrdredd - Saturday, March 4, 2017 - link

    Believe it or not there are millions of people worldwide who pretty much use their PC for two things. The internet (web browsing, email etc) and gaming. You don't need 16 threads to check email and read forums either so gaming performance is going to be critical. It's not just the CPU performance, it's the entire platform that contributes to Gaming related performance.
  • sans - Thursday, March 2, 2017 - link

    Yeah, stick with Intel because Intel is the standard and its products are the best for each respective market. AMD is a total failure.
  • EchoWars - Thursday, March 2, 2017 - link

    No, apparently the failure was in your education, since it's obvious you did not read the article.
  • Notmyusualid - Friday, March 3, 2017 - link

    Ha...
  • sharath.naik - Thursday, March 2, 2017 - link

    I think you missed the biggest news in this information dump. The TDP is the biggest advantage AMD has, which means that for a 150 W server CPU they should be able to cram in a lot more cores than Intel will be able to.
  • Meteor2 - Friday, March 3, 2017 - link

    ^^^This. I think AMD's strength with Zen is going to be in servers.
  • Sttm - Friday, March 3, 2017 - link

    Yeah I can see that.
  • UpSpin - Thursday, March 2, 2017 - link

    According to a German site, in games Ryzen is equal (sometimes higher, sometimes lower) to the Intel i7-6900K at high resolutions (WQHD). Once the resolution is set very low (720p), Ryzen gets beaten by the Intel processor, but honestly, who cares about low resolutions? For games, the best bet is probably the i7-7700K, mainly because of its higher clock rate, for now. Once games get better optimized for 8 cores, the 4-core i7-7700K will be beaten for sure, because in multi-threaded applications Ryzen is on par with the twice-as-expensive Intel processor.

    I doubt it makes sense to buy the Core i7-6850K; it has the same low turbo boost frequency the 6900K has, thus low single-threaded performance, but only 6 cores. So I expect it's the worst of both worlds: poor multi-threaded performance compared to Ryzen, poor single-threaded performance compared to the i7-7700K.

    We also have to see how well Ryzen can get overclocked, thus improving single core performance.
  • fanofanand - Thursday, March 2, 2017 - link

    That is a well reasoned comment. Kudos!
  • ShieTar - Thursday, March 2, 2017 - link

    Well, the point of low-resolution testing is that at normal resolutions you will always be GPU-restricted. So not only are Ryzen and the i7-6900K equal in this test; so are all other modern and half-modern CPUs, including any old FX-8...

    The most interesting question will be how Ryzen performs on those few modern games which manage to be CPU-restricted even in relevant resolutions, e.g. Battlefield 1 Multiplayer. But I think it will be a few more days, if not weeks, until we get that kind of in-depth review.
  • FriendlyUser - Thursday, March 2, 2017 - link

    This is true, but at the same time it artificially magnifies differences that no one would notice in a real-world scenario. I saw reviews with a Titan X at 1080p, while many will be playing at 1440p with a 1060 or RX 480.

    The test case must also approximate real life.
  • khanikun - Friday, March 3, 2017 - link

    They aren't testing to show what it's like in real life though. The point of testing is to show the difference between the CPUs. Hence why they are gearing their benchmarking to stress the CPU, not other portions of the system.
  • BurntMyBacon - Friday, March 3, 2017 - link

    @ShieTar: "Well, the point of low-resolution testing is, that at normal resolutions you will always be GPU-restricted."

    If this statement is accepted as true, then by deduction, for people playing at normal (or high) resolutions, gaming is not a differentiator and is therefore unimportant to the CPU selection process. If gaming is your only criterion for CPU selection, that means you can get the cheapest CPU possible that still leaves you GPU-restricted.

    @ShieTar: "The most interesting question will be how Ryzen performs on those few modern games which manage to be CPU-restricted even in relevant resolutions, e.g. Battlefield 1 Multiplayer."

    I agree here fully. Show CPU-heavy titles to tease out the differences between CPUs; artificially low resolutions are academic at best. That said, according to Steam surveys, just over half of respondents play at resolutions below 1080p, and over a third play at 1366x768 or less. Though I suspect the overlap between people playing at these resolutions and people using high-end processors is pretty small.

    Average frame rate is fairly uninteresting in most games for high end CPUs, due to being GPU bound or using unrealistic settings. Some, more interesting, metrics are min frame rate, frame time distribution (or simply graph it), frame time consistency, and similar. These metrics do more to show how different CPUs will change the experience for the player in a configuration the player is more likely to use.
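
    As a sketch of how those metrics fall out of a raw frame-time log (the sample numbers below are invented for illustration, not measured):

        # Derive the metrics above from a frame-time log in milliseconds.
        import statistics

        frame_times_ms = [6.9, 7.1, 6.8, 7.3, 16.4, 7.0, 6.7, 7.2, 6.9, 24.8, 7.1, 7.0]

        avg_fps = 1000 / statistics.mean(frame_times_ms)
        min_fps = 1000 / max(frame_times_ms)  # worst single frame
        p99_ms = sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1))]
        jitter = statistics.stdev(frame_times_ms)  # frame-time consistency

        print(f"average: {avg_fps:.0f} FPS, minimum: {min_fps:.0f} FPS")
        print(f"~99th-percentile frame time: {p99_ms:.1f} ms, stdev: {jitter:.2f} ms")

    Two CPUs with identical averages can look very different on the percentile and stdev lines, which is exactly the point.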
  • Lord-Bryan - Thursday, March 2, 2017 - link

    Who buys a $500 CPU to play games at 720p? All that talk is just BS.
  • JMB1897 - Friday, March 3, 2017 - link

    That test is not done for real world testing reasons. At that low resolution, you're not GPU bound, you're CPU bound. That's why the test exists.

    Now advance a few years into the future when you still have your $500 Ryzen 7 CPU and a brand new GPU - you may suddenly become CPU bound even at QHD or 4k, whereas a 7700k might not quite be CPU bound just yet.
  • MAC001010 - Saturday, March 4, 2017 - link

    Or a few years in the future (when you get your new GPU) you find that games have become more demanding but better multi-threaded, in which case your Ryzen 7 CPU works fine and the 7700k has become a bottleneck despite its high single-threaded performance.

    This illustrates the inherent difficulty of comparing high freq. CPUs to high core count CPUs in regards to future potential performance.
  • cmdrdredd - Saturday, March 4, 2017 - link

    "Or a few years in the future (when you get your new GPU) you find that games have become more demanding but better multi-threaded, in which case your Ryzen 7 CPU works fine and the 7700k has become a bottleneck despite its high single-threaded performance."

    Maybe, but the overclocking scenario is also important. Most gamers will overclock to get a bit of a boost. I have yet to replace my 4.5 GHz 3570K; even though new CPUs offer more raw performance, the need hasn't been there yet.

    One other interesting thing is how Microsoft's PlayReady 3.0 will be supported for 4k HDR video content protection. So far I know Kaby Lake supports it, but haven't heard about any of AMD's offerings unless I missed it somewhere.
  • Cooe - Sunday, February 28, 2021 - link

    Lol, except here in reality the EXACT OPPOSITE thing happened. A 6-core/12-thread Ryzen 5 1600 still holds up GREAT in modern titles/game engines thanks to the massive advantage in extra CPU threads. A 4c/4t i5-7600K otoh? Nowadays it performs absolutely freaking TERRIBLY!!!
  • basha - Thursday, March 2, 2017 - link

    All the reviews I read use an Nvidia GTX 1080 graphics card. My understanding is that AMD graphics have a better DX12 implementation, with the ability to use multiple cores. I would like to see benchmarks of something like RX 480 CrossFire with a 1700X; that would be in a similar budget to an i7-7700 + GTX 1080.
  • Notmyusualid - Friday, March 3, 2017 - link

    http://www.gamersnexus.net/hwreviews/2822-amd-ryze...
  • cmdrdredd - Saturday, March 4, 2017 - link

    Overclocking will be interesting. I don't use my PC for much besides gaming and lately it hasn't been a lot of that either due to lack of compelling titles. However, I would still be interested in seeing what it can offer here too for whenever I finally break down and decide I need to replace my 3570k @ 4.5Ghz.
  • Midwayman - Thursday, March 2, 2017 - link

    Here's hoping the 1600X hits the same gaming benches as the 1800X when OC'd. $500 for the 1800X is fine; it's just not the best value for gaming, just like the i5s have been the better-value gaming chips in the past.
  • FriendlyUser - Thursday, March 2, 2017 - link

    True. The 1600X will be competitive with the i5 at gaming and probably much faster in anything multithreaded. The crucial point is the price... $200 would be great.
  • MrSpadge - Thursday, March 2, 2017 - link

    "Ryzen will need to drop in price. $500 1800x is still too expensive. According to this even a 7700k @ $300 -$350 is still a good choice for gamers."

    That's what the 1700X is for.
  • lilmoe - Thursday, March 2, 2017 - link

    +1
    And for that, I'd say the 1700 (non-X) is the best consumer CPU available ATM. BUT, if someone just wants to game, I'd say get the Core i5... For me though, screw Intel. Never going with them again.
  • fanofanand - Thursday, March 2, 2017 - link

    The 1700 is the sweet spot for anyone not trying to eke out a few more fps or drop their encode/decode times by a couple of seconds. You save $170 and lose a couple hundred MHz; I know which chip seems like the best all-around for price/performance, and that's the 1700.
  • lilmoe - Thursday, March 2, 2017 - link

    Yep. You get both efficiency and performance when needed. This should allow for super quiet and very performant builds. Just take a look at the idle system power draw of these chips. Super nice.

    Everything is going either multi-threaded or GPU-accelerated, even compiling code. What I'm really waiting for is Raven Ridge. I've got lots of stock $$ and high hopes for a low-power 4-6 core Zen APU with HBM and some bonus blocks for video encode (akin to QuickSync). I have a feeling they'll be much better on idle power and have better support for Microsoft's connected standby.
  • khanikun - Friday, March 3, 2017 - link

    The i5 is a good gaming and all-around CPU for the majority of users. If all you plan to do is game and you're on a tight budget, the i3-7350K is a great CPU for just that. Once the workload goes a bit more multithreaded, that's where you'll want to move up to an i5.
  • Valis - Friday, March 3, 2017 - link

    I game now and then, but I do a lot of other things too: video rendering, crypto coins, Folding@home, VMs, etc. So any Zen, perhaps even a 4-core later this year, with a good GPU will suit me fine. :)
  • nos024 - Thursday, March 2, 2017 - link

    So the 1800x is pointless?
  • lilmoe - Thursday, March 2, 2017 - link

    I don't think pointless is the right word. I'd say it's the worst value for the dollar of the three.
  • tacitust - Thursday, March 2, 2017 - link

    Not at all pointless if you do a lot of video transcoding or other CPU-intensive tasks well suited to multiple cores. The price premium for the 1800X is still way lower than the price premium for the Intel processors.
  • Meteor2 - Friday, March 3, 2017 - link

    ...In which case you'd be better off with a 7700K, looking at the benchmark results. Cheaper too.
  • ddriver - Thursday, March 2, 2017 - link

    Ryzen offers the same performance at half the cost. More PCIe lanes are good for IO, but quad-channel memory is pretty much pointless, aside from pointless synthetic benches. Ryzen might not make it into my personal workstation due to the low PCIe lane count, but it has enough to replace my aging 3770K farm nodes, for which it will be a significant upgrade, provided the chip and platform turn out to be stable and bug-free.

    Intel has gotten lazy and sloppy: bricked products, chipset bugs. They haven't really done anything new architecture-wise for years, milking the same old cow.

    It is rather silly to assume that gaming dictates CPU prices; this IS NOT a gaming product. If your ass-logic is to be followed, then Intel needs to drop the 7700K price to $168, because in games it is barely any faster than the i3-7350K, and it has the same pathetic (even lower than Ryzen) number of PCIe lanes.

    This is a chip for HPC, which gaming is NOT. Go back to kindergarten, eight-core chips are for grown-ups ;)
  • imaheadcase - Thursday, March 2, 2017 - link

    People compare it to gaming because that's the main driver for this type of CPU; it's not even gaming-specific but VR, graphics modeling, etc. You honestly think people are buying these for offices, or for industry to do complex math problems? lol
  • ddriver - Thursday, March 2, 2017 - link

    It is not "people" but "fanboys", and they cling to gaming because it is the only workload where intel can offer better performance for the price, albeit by comparing products from different tiers, which is quite frankly moronic.

    Cars are faster than trucks, so who in the world needs to spend money on trucks? That's the kind of retarded logic you are advocating...

    Smart people buy whatever suits their needs. Obviously, if all you do is play games you wouldn't be buying ryzen or a lga2011 system. Just get an unlocked i5 and overclock it, best bang for the buck. You must realize that even if you don't, other people use computers for tasks other than gaming. And for a large portion of them ryzen will be the best deal, because it is versatile - it is good enough for gaming too, while still offering significant performance advantage compare to an intel quad in tasks that are time staking, and are very much competitive with intel's 8 and 10 core chips while delivering more than twice the value, which is important for everyone who doesn't have money to throw away.

    Claiming that "gaming is the main driving for these type of CPUs" is foolish to say the least, because games don't benefit from that particular type of CPUs. Most of the games can't even property utilize 4 threads. And this is not likely to change soon, because the overhead of complexity and thread synchronization is not worth it for non-performance demanding tasks such as games.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    That's one really well thought out argument
  • rarson - Thursday, March 2, 2017 - link

    Ryzen's versatility and price are the two biggest factors that make it so good. It might not beat the very best gaming CPU that Intel has, or the very best multi-threaded monster that Intel has in every scenario, but it's competitive with both at half the price of the high-end stuff. Hence, while I do game some and want to build a computer to use for gaming, I also do other stuff like audio production that benefits greatly from Ryzen's multi-threaded performance. To me, it's a no-brainer: Ryzen right now is the best bang-for-the-buck chip for someone who wants all-around high-end performance, by far. Maybe not the 1800X, I kind of think the 1700X is a better value, but still, for most people who want multiple-use performance instead of absolute maximum gaming performance, Ryzen is the clear choice.

    Ryzen's max clock speeds seem, like Intel's, to be hindered by the total number of cores on chip, so it should be extremely interesting to see how the 4- and 6-core chips overclock once they arrive, and what kind of performance they'll achieve. I actually think that, like Intel, a 4-core Ryzen might be a better gaming chip than the 8-core, and if that's the case, then it might be really darn close to Intel's best Kaby Lake, because like you pointed out, most games aren't threaded well at all.

    Additionally, from a gaming perspective, it seems like AMD has done more to push technology forward in that respect than anyone else. They've worked on Mantle, Vulkan, FreeSync, TrueAudio, and others. They've always tried to give performance value by offering more cores, but software has been slow to take advantage of them. Intel is content to stagnate by offering extremely incremental increases because performance is "good enough", so developers have no reason to really try to take advantage of extra cores aside from niche use cases. With Ryzen, AMD is pushing chips towards higher core counts (much like they did with the Athlon X2), but this time they're trying harder to get developers on board and help them achieve good results. So while it always takes forever for software to better utilize the hardware, once the hardware becomes more common the software will start to follow, and you'll see actual gaming performance improve. Is that a valid reason to buy Ryzen today if your sole focus is gaming? Of course not, but it does bode well for Ryzen owners in the future. The performance can only get better. Can the same be said about Intel? Well, probably not if you're using one of the 4-core chips. It's pretty much a known quantity.

    I had high hopes for Bulldozer and Ryzen is the exact opposite of what Bulldozer was. I feel like the CPU market has been stagnant for years and now suddenly there's a reason to be excited. This makes AMD competitive again, which will be good for pricing even if you're an Intel fan. It's been a long wait, but it was worth it, this is a good product.
  • Notmyusualid - Friday, March 3, 2017 - link

    http://www.gamersnexus.net/hwreviews/2822-amd-ryze...
  • Makaveli - Thursday, March 2, 2017 - link

    +1 ddriver you destroyed that kid with your logic well done.
  • khanikun - Friday, March 3, 2017 - link

    Gaming definitely isn't the main driving force for CPUs, as use cases change. I bought a 7700K for my gaming rig. I'd get a 1700 for a VM host, as I'd like to start building a lab again. It won't be this round, though; I'd rather wait for AMD to iron out any kinks and buy the next generation. It's something the 7700K could do, but more cores would definitely make for a much better lab.
  • Meteor2 - Friday, March 3, 2017 - link

    Nobody buys mid/high-end consumer chips for HPC. They buy them for gaming. A few for video production. That's it.
  • Cooe - Sunday, February 28, 2021 - link

    Find me these so-called people buying Intel HEDT CPU's (aka OG Ryzen 7's direct competition) for gaming & never for HPC uses.... Oh wait. They don't exist.
  • Haawser - Thursday, March 2, 2017 - link

    Yeah, but if you're a gamer who streams, Ryzen is waaaay better than anything Intel offers for $499. Especially if you're gaming at 4K, or going to be. Different people have different needs, even gamers.
  • Jimster480 - Thursday, March 2, 2017 - link

    Yes but no, because the Broadwell-E and Haswell-E HEDT platforms are in the same boat as Ryzen.

    But this is what the Ryzen 7 release is meant to do: compete with the HEDT platforms, not against the "APU" chips. Those chips will come later, albeit with much higher clock speeds, to compete with Intel. For now you have Intel with a 10-20% advantage in clockspeed-dependent applications.
  • Meteor2 - Saturday, March 4, 2017 - link

    I hope you're right, but there's no indication they will be clocked higher. AMD has access to processes which are a generation behind Intel's, at least for a couple of years. We can't expect miracles.
  • nos024 - Thursday, March 2, 2017 - link

    Lol, butthurt? Why even bother running gaming benchmarks? You even said it yourself that Ryzen won't make it into your so-called grown-up workstation because of the low PCIe count.

    So tell me, who is this $500 Ryzen chip designed for? Not grown-ups running workstations, or pathetic kiddie gamers... so they're for wannabes?
  • Tunnah - Thursday, March 2, 2017 - link

    He literally said it is ideal to replace his aging 3770K; he gave an example of how it will be used. Try more reading and less being a turd.
  • ddriver - Thursday, March 2, 2017 - link

    Ryzen is so much more affordable that with the price difference I could have built another whole system dedicated to running the 2 HBA adapters, thus saving the need for 16 lanes. 40 - 16 is exactly 24, which is what Ryzen has. If it had been available a year ago I would have simply built two systems, getting a good 50-60% more CPU performance and double the GPU performance, with enough lanes to accommodate my IO needs; even split between two systems, that wouldn't have been much of an issue.

    The PCIe lane count is lower than Intel's E-series chips, but it is still 50% higher than what you can get from Intel outside the E series. It will actually suffice in most workstation scenarios, even if you end up running graphics at x8, which is not really a big deal.
  • ddriver - Thursday, March 2, 2017 - link

    "you even said it yourself that ryzen wont make it to your so called grown-up workstation because if low pcie count"

    I did not say that. Not all workstations require 40 PCIe lanes; most could do with 24. I was talking about my workstation in particular, which has plenty of PCIe hardware. For the vast majority of HPC scenarios that would not be necessary; furthermore, as already mentioned, with the saved money you can build additional systems dedicated to specific tasks, offloading both the need for more PCIe lanes and the CPU time the attached hardware consumes.

    It remains to be seen how much IO the server Zen parts will have. Ryzen is not particularly a workstation-grade chip; it just happens to be GOOD ENOUGH to do the job. AMD gives you 50% more performance and 50% more IO at the same or better price point, and I think they will do the same with the chips they actually design for workstations.

    It looks like the 16-core workstation chip will have 64 PCIe lanes, and the 32-core a whopping 128 lanes. So the Intel E series looks like a sad little orphan with its modest 40 lanes... And no, Xeons aren't much better; they are in fact worse: the 24-core E7-8894 v4 has a modest 32 lanes.

    So no, while I will not be replacing my main 10-core workstation with a Ryzen, because that would win me nothing, I am definitely looking forward to replacing it next year with a Naples system, and I definitely wish Ryzen had been available last year, as I could have spent my money much better than buying Intel.
  • Intel999 - Thursday, March 2, 2017 - link

    "So tell me who is this $500 Ryzen chip designed for?"

    Logic would imply it is aimed at anyone who works in an environment where they need superior multithreading performance. For instance, anyone who has bought a 6900K or 6950X, but more importantly it is for those individuals who "wanted" to buy either of Intel's multi-core champs but couldn't due to ridiculous prices.

    I'd dare to make a bet there are more people that wanted to buy a 6900k than there are people that actually did. Now they can buy one and still put food on the table this month.
  • FriendlyUser - Thursday, March 2, 2017 - link

    Exactly right. I was always tempted by the 6850K, but the price of the CPU+platform was simply ridiculous. For much less I got a faster CPU and a high-end MB. I won't miss the 40 PCIe lanes.
  • lakerssuperman - Thursday, March 2, 2017 - link

    People like me. I was previously running a 2600k overclocked. Nice chip. Still runs great, but I was looking for an upgrade about a year ago as one of the things I do a lot of is Handbrake conversion for my HTPC. Going to even the newest Intel 4 core got me maybe 20% improvement on one of my major workloads for insane amounts of money and going to the high end to get 8-10 cores was just not justifiable.

    I ended up buying a used Xeon/X79 motherboard combo for around $300 off ebay. 8 cores/16 threads and it works great for Handbrake. I lost some clock speed in the move so single thread performance took a bit of a hit, but was more than made up for in multi-thread performance. I can still game on this CPU just fine and I don't play the newest stuff right away anyway just because of time constraints.

    The X79 platform is fine for what I'm doing with it. Would I like the new stuff? Sure. And if I was in the position I was last year looking for an upgrade I don't see how I wouldn't get an 1800x. It gives me the right balance of features for what I do with my computer.

    If I was just gaming, I'd look at Intel currently because their 4 core i5 is the sweet spot for this. But I'm not just gaming so this chip is infinitely more attractive to someone like me. With the price and features I can't see how it isn't a winner and when the 4 and 6 core parts come out at likely higher frequencies, I think they are going to be the real winners for gaming.
  • rarson - Thursday, March 2, 2017 - link

    Ryzen is clearly well-suited to anyone who values high performance in a multitude of usage scenarios over one single usage scenario, especially if one cares about how much money they need to spend to achieve those results.
  • injurer - Friday, March 3, 2017 - link

    The 1800X is definitely designed for enthusiasts and AMD fans, but when you go to the 1700X, it's a price killer targeting the mainstream. The 1700 is in the same boat, at an even lower price. All three are 8-core chips and are quite close to the 6900K, but at 2-4 times lower price.

    In the end, I really believe AMD has yet to show us the real potential of this architecture. These chips are just the start. Remember, the Ryzen design is new from the ground up, so they definitely have room to expand and enhance it.
  • bill.rookard - Thursday, March 2, 2017 - link

    Well, thing to remember is that for those looking for a new build, they now have a legitimate choice. I still do see in the future that things will only go more multithreaded, and even though the i7-7700k is still a great chip, having more physical cores and resources to throw at it will only help.

    To that end, again, anyone planning a NEW build from the ground up will be able to seriously consider a Ryzen system.

    Worst case, think about it. In the deep dive they mentioned "competitive resource sharing" with SMT enabled. If you were to disable SMT on Ryzen, it would give you 8 PHYSICAL cores versus the 4 physical/4 logical cores of the 7700K. Without those resources being partially shared across 16 threads, all resources would be allocated to the physical cores instead, potentially allowing more processing power per physical core.

    There's still quite a bit to be checked out and dug through.
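
    If anyone runs that SMT-off experiment, a quick way to sanity-check what the OS actually sees (a sketch using the third-party psutil package; this is my illustration, not something from the article):

        # Report physical vs. logical core counts to confirm the SMT state.
        import psutil

        physical = psutil.cpu_count(logical=False)
        logical = psutil.cpu_count(logical=True)
        print(f"physical cores: {physical}, logical cores: {logical}")
        print("SMT/HT appears", "enabled" if logical > physical else "disabled")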
  • lilmoe - Thursday, March 2, 2017 - link

    This. I want 2 things dug deeper in follow ups:
    1) Single/multi threaded performance with SMT disabled VS SMT enabled.
    2) Game comparisons with more sensible GPUs (which actually ship and sell in volume, IE: the ones people actually buy), like the GTX 1060 and/or RX 480.
  • BurntMyBacon - Friday, March 3, 2017 - link

    @lilmoe

    I agree with 1). Intel had HT for several generations before it was universally better to leave it enabled (it still needs to be disabled sometimes, but those are more the edge cases now).

    Not so sure I'm onboard with 2). Pairing a $200 GPU with a $500 processor for gaming purposes seems a little backwards. I'd like to see that (GTX1060 / RX480) gaming comparison on a higher clocked R5 or R3 processor when they are released.
  • Meteor2 - Friday, March 3, 2017 - link

    I'd rather see tests paired with a 1080 Ti. At RX480/1060 level, it's well known the bottleneck is GPU performance not CPU. A 1080 Ti should be fast enough to show up the CPU.
  • lilmoe - Friday, March 3, 2017 - link

    @BurntMyBacon @Meteor2

    Lots of people, like me, are more into CPU power. I'm OK with a mid-range GPU; gaming is not my top priority, and when I do game, it's never above 1080p.

    It'd be interesting to see if there are differences. I wouldn't be so quick to dismiss it by saying the GPU would be the bottleneck.
  • bigboxes - Sunday, March 5, 2017 - link

    I'm with you on that. Gaming is way down my priority list; I do it occasionally just because I love to see what my hardware can do. I currently have an ultrawide 1080p monitor. When I move to 4K, hopefully a midrange GPU will cover that. My CPU is a 4790K, which is great for most tasks. I've been wanting to go to 6/8 cores for some time, but the cost of the platform was too high. I think in a couple of years I will seriously think about Ryzen when building a new workstation.
  • rarson - Thursday, March 2, 2017 - link

    I am interested in seeing potential improvement due to BIOS updates. Additionally, I'm interested in seeing potential improvement due to better multi-threaded software. My hunch is that AMD is either on-par or better than Intel, or maybe damn near that prediction, so I think the 4-core parts will compare well to the current Skylake SKUs. I also expect them to overclock better than the 8-core chips. I guess we'll just have to wait for them to release.

    8 physical cores is definitely better than 4 cores with SMT/HTT/whatever you want to call it.
  • zangheiv - Thursday, March 2, 2017 - link

    Hard to believe how a company like Intel, which repeatedly and knowingly engaged in illegal acts and other tactics to monopolize the market and cheat consumers into high prices, can still have dumb, happy customers after Ryzen.
  • lmcd - Thursday, March 2, 2017 - link

    Some people like 256-bit vector ops I guess :-/ who would've guessed?
  • Ratman6161 - Thursday, March 2, 2017 - link

    Have to agree. To me, the i7-7700K seems like the better bargain right now. Then again, I'm looking at a $329 I7-6700K motherboard and CPU bundle and the 7700K isn't really all that much of an upgrade from the 6700K. But in the final analysis, after all this reading, I'm still not seeing anything that makes me want to rush out and replace my trusty old i7-2600K.
  • Meteor2 - Friday, March 3, 2017 - link

    +1. Maybe, as Rarson says above, a 4C/8T Zen might clock fast enough to challenge the 7700K. But in the workloads run at home, the 1800X does not challenge the (cheaper) 7700K.

    HPC and data centre are completely different and here Zen looks like it has real promise.
  • Meteor2 - Friday, March 3, 2017 - link

    ...Sadly the R5s are clocked equally low.

    https://www.google.co.uk/amp/wccftech.com/amd-ryze...

    Limited by process, I guess.
  • Cooe - Sunday, February 28, 2021 - link

    Again, you're an absolute idiot for thinking that the only "workloads done at home" are 1080p gaming & browsing the web.... You are so out of touch with the desktop PC market it's almost unbelievable. Here's hoping you were able to acquire some common sense over the past 4 years.
  • cmdrdredd - Saturday, March 4, 2017 - link

    " I'm still not seeing anything that makes me want to rush out and replace my trusty old i7-2600K."

    I agree with you. I have an overclocked 3570k and I don't see anything that makes me feel like it's too old. I'm mostly gaming on my system when I use it heavily, otherwise it's just general internet putzing around
  • Jimster480 - Thursday, March 2, 2017 - link

    Sorry, but this is not the case. This is competing against Intel's HEDT line, not against the 7700K.

    2011-v3 offers more PCIe lanes only if you buy a top-end CPU (which of course isn't noted in most places); a cheaper chip like the 5820K, for example, only offers like 24 lanes TOTAL. Meaning that in a price comparison there is no actual comparison.
  • Ratman6161 - Thursday, March 2, 2017 - link

    Well, whoever it is trying to compete against, the i7-7700K is about the top of the price range I am willing to spend, so Intel's 2011-v3 lineup isn't in the cards for me either. AMD really isn't offering anything much for the mid-range or regular desktop user, though. In web browsing, office tasks, etc., their $499 CPU is often beaten by an i3. Now, the i3 is just as good as an i7-6900K at those too, and in at least one test the i3-7350K is top of the charts. Why does this matter? Well, where does AMD go from here? If the i3 outperforms the 1800X in office tasks, what will happen when they cut it down to 4 cores to make a cheaper variant? It seems like they are set up for very expensive CPUs and for CPUs they have to sell for next to nothing. Where will their mid-range come from?
  • silverblue - Thursday, March 2, 2017 - link

    Something tells me that if I decide to work on something complicated in Excel, that i3 isn't going to come anywhere near an R7. Besides, the 4- and 6-core variants may end up clocked higher, we don't know for sure yet.
  • mapesdhs - Thursday, March 2, 2017 - link

    It would be bizarre if they weren't clocked a lot higher, since there'll be more thermal headroom per core. That's why the 4820K is such a fun CPU (high-TDP socket, 40 PCIe lanes, but only 4 cores, so overclocking isn't really limited by thermals compared to 6-core SB-E/IB-E) that can beat the 5820K in some cases (multi-GPU/compute).
  • Meteor2 - Friday, March 3, 2017 - link

    ...Silverblue, look at the PDF opening test. What comes top? It's not an AMD chip.
  • Cooe - Sunday, February 28, 2021 - link

    Lol, because opening PDFs is where people need/will notice more performance? -_-

    CPUs have been able to open PDFs fast enough for it to be irrelevant since around the turn of the century...
  • rarson - Thursday, March 2, 2017 - link

    "AMD really isn't offering anything much for the mid range or regular desktop user either."

    So I'd HIGHLY recommend you wait 3 months rather than overpay for Intel stuff, because the lower-core Zen chips will no doubt provide the same performance-per-dollar that the high-end Ryzen chips are offering right now.
  • rarson - Thursday, March 2, 2017 - link

    "their $499 CPU is often beaten by an i3."

    It's clear that you're looking at raw benchmark numbers and not real-world performance for what the chip is designed for. If all you need is i3 performance, then why the hell are you looking at an 8-core processor that runs $329 or more?
  • Ratman6161 - Friday, March 3, 2017 - link

    It's all academic to me. As I posted elsewhere, my i7-2600K is still offering all the performance I need, so I'm just reading this out of curiosity. I also really, really want to like AMD CPUs, because I still have a lot of nostalgia for the good old days of the Athlon 64, when AMD was actually beating Intel in both performance and price. And sometimes I like to tinker with the latest toys even if I don't particularly need them. I have a home lab with two VMware ESXi systems built on FX-8320s, because at the time they were the cheapest way to get to 8 threads, running a lot of VMs with each VM doing light work.
    I also run an IT department, so I'm always keeping tabs on what might be coming down the pike when I get ready to update desktops. But there is a sharp divide between what I buy for myself at home and what I buy for users at work. At work, most of our users would actually do fine with an i3. But I'm also keeping an eye out for what AMD has on offer in this range.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    @ Jimster480

    Sorry pal, but that is false, or at least inaccurate, information.

    ALL BUT the lowest model of CPU on the 2011-v3 platform have 40 PCIe lanes. Only the entry-level chip (6800K) has 28 lanes:

    http://www.anandtech.com/show/10337/the-intel-broa...

    But I do agree with you, that this is competing against the HEDT line.

    Peace.
  • slickr - Thursday, March 2, 2017 - link

    I'm sorry, but that sounds just like Intel PR. I don't usually call people shills, but your reply seems to be straight out of Intel's PR book! First of all, more and more games are taking advantage of more cores; you can easily see this especially with DX12 titles, which will take advantage of even 16 cores if you have them.

    So having 8 cores for $330 to $500 is incredible value! We also see that the Ryzen chips are all competitive with the $1100 6900K, which is where the comparison should be: performance on 8 cores.

    And as I've found out, real-world performance on 8 cores compared to 4 cores is like night and day. Have you tried running a demanding game, streaming it through OBS to Twitch, with the browser open to read Twitch chat and check other stuff, while also having MusicBee open playing your songs and a separate program reading out Twitch donations and text, etc.?

    This is where a 4-core struggles a lot, while an 8-core's responsiveness is perfect. I can't use my PC if I decide to transcode a video down to a smaller size on a 4-core. Even 8 cores get fully used, but there's always a core free for other stuff, like watching a movie or surfing the internet, without the machine struggling.

    But even if games are your holy grail and what you base your opinion on, then Ryzen does really well. It's equal to or slightly slower than the much, much better-optimized-for Intel processors. But you have to keep in mind a lot of game code is optimized solely for Intel. That is what most gamers use (in fact over 80% of gamers are on Intel), but developers will optimize for AMD now that they have a competitor on their hands.

    We see this all the time, with game developers optimizing a lot for the RX 400 series even though Nvidia has the large majority of market share. So I expect to see anywhere from 10% to 25% more performance in games and programs that are also optimized for AMD hardware.
  • lmcd - Thursday, March 2, 2017 - link

    How can you call someone a shill and post this without any self-awareness? Your real-world task is GPU-constrained anyway, since you should be using a GPU capable of both video encode and rendering simultaneously. If not, you can consider excellent features like Intel's Quick Sync, which works even with a primary GPU in use these days.
  • Meteor2 - Friday, March 3, 2017 - link

    Game code is optimised for x86.
  • Cooe - Sunday, February 28, 2021 - link

    Absolute nonsense. Game code is optimized specifically for the Intel Core pipeline and ESPECIALLY its ring-bus interconnect. There's no such thing as "optimizing for x86". Code is either written for the x86 ISA or it's not...
  • FriendlyUser - Thursday, March 2, 2017 - link

    The 1700X with a premium motherboard is cheaper and faster than the 6850K. If you absolutely need the extra PCIe lanes or the 8 DIMM slots, then x99 is better, otherwise you are getting less perf/$.
  • mapesdhs - Thursday, March 2, 2017 - link

    Or a used X79. I'm still rather surprised how close my 3930K @ 4.8 results are to the test results shown here (CB10/ST = 7935, CB10/MT = 42389, CB11.5/MT = 13.80, CB R15 MT = 1241). People are selling used 3930Ks for as little as 80 UKP now, though finding a decent motherboard is a bit more tricky.

    I have an ASUS R5E/6850K setup to test, alongside a used-parts ASUS P9X79-E WS/4960X which cost scarily less than the new X99 setup. It'll be interesting to see how these behave against the KL/BW-E/Ryzen numbers shown here.

    Ian.
  • Aerodrifting - Thursday, March 2, 2017 - link

    "$500 1800x is still too expensive. According to this even a 7700k @ $300 -$350 is still a good choice for gamers."
    Same thing can be said for every Intel extreme platform processors, $1000 5960X/6900K is still too expensive, $1600 6950X is too expensive, Because 7700K is better for gaming.
    Then you said "2011-v3 still offers a platform with more PCIe3 lanes and quad memory channel. ", Which directly contradict what you said earlier about gaming, How does more PCIe3 lanes and quad channel memory improve your FPS when video cards run fine with x8.
    Your are too idiotic to even run coherent argument.
  • lmcd - Thursday, March 2, 2017 - link

    What on earth are you talking about? PCIe3 lanes and quad channel memory are helpful for prosumer workloads. It's not contradictory at all?
  • mapesdhs - Thursday, March 2, 2017 - link

    Yup, quad GPU for After Effects RT3D, and fast RAM makes quite a difference.
  • Notmyusualid - Friday, March 3, 2017 - link

    @mapesdhs:

    Indeed.

    Also, I can actually 'feel' the difference going from dual to quad channel ram performance.

    I checked, and I hadn't correctly seated one of my four 16GB modules...

    Shutdown, reseat, reboot, and it 'felt' faster again.
  • Aerodrifting - Thursday, March 2, 2017 - link

    Learn to read a complete sentence, please.
    "nos024" was complaining about gaming performance, then pulled out extra PCIe 3.0 lanes and quad-channel memory to defend the X99 platform, even though those chips are also inferior to the 7700K in gaming (just like Ryzen). That makes him sound like a complete moron, because games don't care about those extra PCIe lanes or quad-channel memory.
  • Notmyusualid - Friday, March 3, 2017 - link

    X99 'inferior'?

    I just popped over to 3DMark 11's results page, selected the GPU as a 1080, and I had to scroll down to 199th place (a 7700K clocked to a likely-LN2 5.8GHz) to find a system that wasn't triple- or quad-channel equipped.

    Here: http://www.3dmark.com/search#/?url=/proxycon/ajax/...

    So I guess those lanes don't help us multi-GPU people after all?

    Swallow.
  • Aerodrifting - Saturday, March 4, 2017 - link

    Because a 3DMark 11 hall-of-fame ranking equals real-life gaming performance?

    Are you a moron or just trolling? Everyone knows when it comes to gaming, a high-frequency i7 (such as the 7700K) beats everything else, including the 8-core Ryzen or the 10-core i7 Extreme 6950X.
  • Notmyusualid - Saturday, March 4, 2017 - link

    Then why was the first 7700K in 199th place?

    The results don't tie in with your retort.
  • Notmyusualid - Saturday, March 4, 2017 - link

    And furthermore, 3DMark 11 is a respected benchmark that I, and other PC gamers I know, use regularly.

    If you visit the 3DMark website, you will see LOADS of 3DMark 11 results 'ticking in' on the global map.

    And personally, every PC I've played on with high 3DMark 11 numbers has played games better than ones with lower numbers, so you can't discount it. It works, whether you like it or not.

    7700K, 199th place for the 1080. Look it up. All above it are X79/X99-type boards.
  • Aerodrifting - Sunday, March 5, 2017 - link

    That just proves you really don't know anything about gaming. No point wasting my time.
  • Notmyusualid - Monday, March 6, 2017 - link

    Then don't.

    Blissful in ignorance...
  • Notmyusualid - Monday, March 6, 2017 - link

    @ Aerodrifting

    In addition to MY comments on 3dMark11, try this from Guru3d, unless of course YOU always know better:

    Quote:

    3DMark 11 is the latest version of what is probably the most popular graphics card benchmark series. Designed to measure your PC's gaming performance, 3DMark 11 makes extensive use of all the new features in DirectX 11, including tessellation, compute shaders and multi-threading. Trusted by gamers worldwide to give accurate and unbiased results, 3DMark 11 is the best way to consistently and reliably test DirectX 11 under game-like loads. We test 3DMark 11 in performance mode, which gives us a good indication of graphics card performance in the low, mid-range and high-end graphics card segments.

    And Link:

    https://www.guru3d.com/articles_pages/geforce_gtx_...

    And with that, I'm done here.
  • divertedpanda - Saturday, March 4, 2017 - link

    "Everyone Know" ....... Except... Numbers don't lie.
  • tmach - Thursday, March 2, 2017 - link

    Yes and no. The $340 7700k beats the $1,000 6900k in gaming at stock speeds, too. That just goes to show that 8-cores still aren't for gaming, no matter how much some people (AMD included) want to make it a thing.
  • GatesDA - Thursday, March 2, 2017 - link

    They're not aimed at gamers; they're undercutting Intel's 6- and 8-core chips. Multithreading's the only reason to get a Ryzen 7, since the 6-core Ryzen 5 1600X will have the same clock speeds as the Ryzen 7 1800X for way less. Their unrevealed 4-core chips will probably be the gaming sweet spot.
  • nader_21007 - Thursday, March 2, 2017 - link

    Well, yeah, you could stick more PCIe 3.0 lanes and quad memory channels up your @$$.
    What use are these if the performance of your $1000 CPU is inferior to a $500 CPU? Tell me.
  • prisonerX - Thursday, March 2, 2017 - link

    Wow, gamers, that self-important 5% of the market. They'll never leave Intel anyway. When your customers are such rubes they'll pay top dollar for chips that contain 65% useless functionality, you can abuse them as much as you like. Gamers will still be buying 2% performance gains for generations to come.
  • Notmyusualid - Friday, March 3, 2017 - link

    Not true, my friend, and this generalization is ridiculous.

    Give me a faster chip (for gaming), and I'll jump ship immediately!

    I don't know why fanbois even exist. It's not like they are 'pals' of the companies.
  • MarkieGcolor - Thursday, March 2, 2017 - link

    I agree 100%. Why no more PCIe 3.0 lanes? Does it cost that much to add more? Nothing exciting at all for me because of this.
  • Notmyusualid - Friday, March 3, 2017 - link

    Especially running multiple GPUs, PCIe based storage, and more.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    Why should AMD slash prices? Ryzen is already competitive, performance-wise, with Intel HEDT at less than half the price. Ryzen is a boon for productivity-focused tasks like rendering, video editing, encoding, etc.
    Gamers are not the only people who buy CPUs; there are people out there who do useful jobs with theirs.
  • synchrome - Friday, March 3, 2017 - link

    Not everyone is a gamer, bud.
  • Meteor2 - Friday, March 3, 2017 - link

    But if you're not gaming, why do you need high performance? Video production? I think there are far more gamers than vloggers. What else?
  • bji - Friday, March 3, 2017 - link

    Software development can use every core you can throw at it for large, long-running compilation jobs.

    This is a niche, no doubt, but it's an example of another use case where Ryzen should really shine.
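    To put that in concrete terms: a parallel build just fans independent compile jobs out across every core, the same way 'make -j16' does. A minimal Python sketch of the idea (the file names and compiler flags are placeholders, purely for illustration):

        # Run independent compile jobs across all cores, as 'make -jN' does.
        import multiprocessing
        import subprocess

        SOURCES = ["a.c", "b.c", "c.c", "d.c"]  # placeholder file names

        def compile_one(src):
            # Each job is independent, so an 8C/16T chip can run 16 at once.
            return subprocess.call(["cc", "-O2", "-c", src])

        if __name__ == "__main__":
            # Pool() defaults to os.cpu_count() worker processes.
            with multiprocessing.Pool() as pool:
                results = pool.map(compile_one, SOURCES)
            print("failed jobs:", sum(1 for r in results if r != 0))

    With hundreds of translation units, wall-clock build time scales close to linearly with core count, which is exactly where 8C/16T earns its keep.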

    Gamers don't need "high performance" either, unless you meant, more specifically and correctly, "the 0.5% of gamers who run high-end PC games and care about the difference between 90 fps and 100 fps". And that is not a large market segment either.
  • Notmyusualid - Friday, March 3, 2017 - link

    But we do exist, and have the $$$ to back up our ideals.

    I dropped nearly 1100 GBP on a 6950X, so I both care, and voted with my feet. Not just talk.
  • tamalero - Friday, March 3, 2017 - link

    Not everything is related to gamers, you know?
    Ryzen with its multicore performance would benefit a lot of machines like workstations for rendering, graphic editing, etc.
  • Alexvrb - Saturday, March 4, 2017 - link

    I'm confused. You're comparing Ryzen 7 to the 7700K for gaming purposes, and then you're talking about 2011 + 6850K? Which is it? Because if you need lots of threads and you're looking at a 6850K, I think the 1800X will be the better buy (and not all X370 boards are expensive; look harder). If you DON'T need all those cores, get the i7. Myself, I won't spend that kind of coin on ANY CPU, so I'll be looking at Ryzen 5 vs i5 later this year.
  • nt300 - Saturday, March 11, 2017 - link

    FYI, the 7700K isn't the best gaming CPU, because it suffers from in-game stuttering, based on many reviews of that CPU. At least Ryzen games smoothly regardless of FPS. Again, based on review sites.
  • MongGrel - Sunday, March 12, 2017 - link

    Why is there an apparent 10-year-old moderating the CPU forums on this site, randomly banning people?

    The 7700K is still as valid; AMD is finally catching up a bit, but my old X5680 still performs well for half the money.

    Nothing like having a mod in the forums who cannot even spell the site right when he's banning people.

    The "Abanbant" moderator pretty much said it all to another banned guy there; it's like a mod has a 10-year-old on his account.
  • nathanddrews - Thursday, March 2, 2017 - link

    "Oh my Gawwwwwwwwd!"
    -Rich Evans
  • rarson - Thursday, March 2, 2017 - link

    Thumbs up to you, sir, for a Rich Evans reference.
  • Chaitanya - Thursday, March 2, 2017 - link

    Intel caught sleeping and nude. They really need to stop rehashing that old Nehalem architecture.
  • Gondalf - Thursday, March 2, 2017 - link

    The Nehalem arch was abandoned years ago :)
  • Stuka87 - Thursday, March 2, 2017 - link

    Nehalem? Seriously? The current chips definitely can be traced back as far as Sandy Bridge, but not before that.
  • caqde - Thursday, March 2, 2017 - link

    Actually I would trace it back to the Pentium III. Take a close look at the front end of the PIII and Skylake. There are some large changes here and there, but if you follow it from start to end you will notice that the original design is essentially a vastly improved Pentium III. (Go from PIII -> Pentium M -> Core -> Core 2 -> i7 series)

    Pentium III -> https://image.slidesharecdn.com/pentiumiii-1304200...

    Skylake -> https://en.wikichip.org/w/images/thumb/7/7e/skylak...
  • rarson - Thursday, March 2, 2017 - link

    ...which was based off Pentium-II, which was based off Pentium Pro.
  • jordanclock - Thursday, March 2, 2017 - link

    Which is just a vastly improved 8086! It's old architectures all the way down!
  • EasyListening - Thursday, March 2, 2017 - link

    caqde is correct. There was a point where an Israeli Intel team went back to the P3 and updated it, focusing on the tightness and speed of the P3 vs. the P4, which had become bloated. That was the beginning of what later became the Core and i-series.
  • lmcd - Thursday, March 2, 2017 - link

    Pentium III was the basis for the Core 2 architecture. Pentium 4 actually inspired a lot of things in Sandy Bridge and newer.
  • lmcd - Thursday, March 2, 2017 - link

    See: https://en.wikipedia.org/wiki/NetBurst_(microarchi...

    This is actually a big deal and a huge part of the separation from Sandy Bridge and previous architectures. Obviously the NetBurst version was more complicated, as was everything Intel was doing at the time, but TL;DR P4 has had huge, positive effects on Intel's present product portfolio overall.
  • rarson - Thursday, March 2, 2017 - link

    You're an idiot. They went from the successful P-Pro/PII/PIII architecture to NetBurst, which was a failure in the P4, then to the Core architecture, which was PII-based and wildly successful; they didn't then go BACK to NetBurst after two successful generations for no reason whatsoever. Sandy Bridge was a refinement of the same Core architecture, same as every other generation.
  • Gondalf - Thursday, March 2, 2017 - link

    Ummm, the core IPC looks lower than Haswell's... there is work to be done to equal Intel.
    AMD hides the deficit with a higher clock speed and likely a higher average power consumption.
  • The_Countess - Thursday, March 2, 2017 - link

    Power consumption is in fact lower, and they aren't lower than Haswell. And Skylake is only 7% faster than Haswell, so who cares.
  • lmcd - Thursday, March 2, 2017 - link

    List power is lower.

    7% is insignificant?
  • silverblue - Friday, March 3, 2017 - link

    If you go from 57% slower, sure.
  • Cooe - Sunday, February 28, 2021 - link

    Uhhh... Zen's IPC is HIGHER than Haswell's in the vast majority of workloads, landing just shy of Broadwell in single-thread IPC and just ahead in multi-thread IPC (thanks to AMD's superior SMT implementation).
  • tankNZ - Sunday, March 5, 2017 - link

    Hahaha great
  • Manch - Thursday, March 2, 2017 - link

    "Through the various press release cycles from AMD stemming from that original Zen announcement, the industry is in a whipped frenzy waiting to see if AMD, through rehiring guru Jim Killer and laying the foundations of a wide and deep processor team for the next decade, can hold the incumbent to account." Jim Killer LOL
  • trane - Thursday, March 2, 2017 - link

    Great stuff. But come on, we have been waiting for "Return of the Jedi" subtitle for a decade now! Ryzen more than deserves to keep up AnandTech's long time tradition.

    (For those unaware, Intel's Core 2 launch was "The Empire Strikes Back"; Trinity was "A New Hope" etc. Ryzen's gotta be "Return of the Jedi".)
  • sans - Thursday, March 2, 2017 - link

    There is nothing similar with Star Wars saga.
  • lmcd - Thursday, March 2, 2017 - link

    Yea there is. No 256-bit FMAC means Ryzen's an incomplete Death Star 2.
  • Cooe - Sunday, February 28, 2021 - link

    Zen 2 says hi. Also, the dual 128-bit FMA setup in Zen/Zen+ held up JUST fine for everything but the most absurdly AVX2-FP-heavy workloads.
  • Ian Cutress - Thursday, March 2, 2017 - link

    You know, I didn't know that. Before my time. Perhaps Part 2 :)
  • The Von Matrices - Thursday, March 2, 2017 - link

    Considering how much AMD has adjusted its architecture to be more similar to Intel (compared to Bulldozer), I think "Attack of the Clones" works well too.
  • Brazos - Thursday, March 2, 2017 - link

    It's March 2 but I'm not seeing any 3rd party benchmarks. Shouldn't they be out now?
  • Ian Cutress - Thursday, March 2, 2017 - link

    Did you read beyond page 12?
  • Brazos - Thursday, March 2, 2017 - link

    When I posted there was only 1 page. Updated now.
  • Ro_Ja - Thursday, March 2, 2017 - link

    Tech City already has benchmarks.
  • Gothmoth - Thursday, March 2, 2017 - link

    what are you talking about.... they are everywhere.....
  • Brazos - Thursday, March 2, 2017 - link

    Not when I posted. Guess the NDA didn't expire at midnight but at some other time?
  • fanofanand - Thursday, March 2, 2017 - link

    it expired at 9 AM EST
  • Brazos - Thursday, March 2, 2017 - link

    Thank you fanofanand - that's exactly what I was trying to find out. That's why I couldn't find any.
  • creed3020 - Thursday, March 2, 2017 - link

    Once again Dr. Ian Cutress, and the AnandTech crew right on point. Thanks for having this all ready and so detailed for the NDA liftoff.

    WAY too exciting to do anything else right now but to slowly take this all in!
  • fanofanand - Thursday, March 2, 2017 - link

    I totally agree, it may not be the deepest dive ever, but it sated my needs. Fantastic review by Ian as always!
  • BrokenCrayons - Thursday, March 2, 2017 - link

    Absolutely agreed! Thanks for all of your hard work on this review Ian! It's a really good, in-depth analysis of Ryzen.
  • Meteor2 - Friday, March 3, 2017 - link

    Yes an excellent set of benchmarks.
  • tipoo - Thursday, March 2, 2017 - link

    AMD Ryzen Review: Affordable Core Act
  • Manch - Thursday, March 2, 2017 - link

    AMD Ryzen Review: Grabbed Intel by the transistors.....and they let AMD do it
  • lazarpandar - Thursday, March 2, 2017 - link

    hahahaha
  • mapesdhs - Thursday, March 2, 2017 - link

    Meme spillage. :D
  • Gothmoth - Thursday, March 2, 2017 - link

    AMD... let's go stir up the shit.
  • Crono - Thursday, March 2, 2017 - link

    The encoding benchmark results page is giving me the vapors *swoons*
  • Pork@III - Thursday, March 2, 2017 - link

    Ryzen.
    Plus: Very good core architecture. Very good MT and SMT.
    Minus: Very bad memory controller. Very slow caches at all levels. Too few PCIe 3.0 lanes.
  • fanofanand - Thursday, March 2, 2017 - link

    Other sources are saying AMD's IMC is more efficient than Intel's, so looking solely at the MHz numbers might not be the best way of viewing things. The highest I have seen is ASUS showing 3466 supported, which isn't bad; with the way memory scales, I don't know of many applications that could use more speed than that anyway. I'm not sure the cache is slow either; how did you deduce that? The number of PCIe lanes is the same as Z270, so I assume you also complained about Intel's limited lanes? Especially keeping in mind that SLI is basically losing all Nvidia support, are dozens of PCIe lanes still a big decision-maker? You can still run a 16x8 CF setup without a PLX chip and have lanes left over for M.2, etc., so how many lanes do you really need?
  • wolfemane - Thursday, March 2, 2017 - link

    42
  • mapesdhs - Thursday, March 2, 2017 - link

    My brain read that in a deep voice... #)
  • Cooe - Monday, March 1, 2021 - link

    The memory controller itself was just fine, better than average in fact. It was the incredibly immature microcode (what AMD calls "AGESA") driving it that was the issue here. With properly updated AGESA microcode and better-built boards (ala 400/500 series), Zen 1 will do up to ≈3400MHz DDR4 all day, with Zen+ pushing that up to the ≈3600MHz mark.
  • rarson - Thursday, March 2, 2017 - link

    Ryzen seems to have great performance in multi-threaded scenarios, and competitive performance in single-threaded instances, which will probably start to matter less and less once 8C/16T processors become the norm and software can utilize more of those cores. So while IPC still doesn't seem to match Intel's latest, the extra cores and lower pricing seem to make it an excellent value option. You're getting a lot for the money here! Can't wait to see how the lesser-core SKUs perform once they hit the market.
  • Meteor2 - Friday, March 3, 2017 - link

    Sadly they don't have higher clocks so... Not well.
  • Cooe - Monday, March 1, 2021 - link

    I really wonder if you have figured out that PC's aren't just for playing freaking video games sometime in these past 4 years....
  • Flunk - Thursday, March 2, 2017 - link

    Well, it may not utterly destroy Intel's processors, but it is a much better price/performance deal for multi-threaded applications and pretty competitive for games. I guess this ends the "don't buy AMD processors" era quite solidly.
  • ahtoh - Thursday, March 2, 2017 - link

    I expected more
  • Gothmoth - Thursday, March 2, 2017 - link

    Yeah, well, you are a gamer kiddo who has no job and no ideas.

    Creatives and content creators will love the new AMD chips.
  • Murloc - Thursday, March 2, 2017 - link

    Any nerd running VMs can appreciate low-cost 8 cores.
  • Meteor2 - Friday, March 3, 2017 - link

    Lol
  • Notmyusualid - Saturday, March 4, 2017 - link

    @ Gothmoth

    More gamer-bashing.

    I wonder if I've spent more on PCs in the past 3 years, than the value of your car.

    You can't be a gamer at the high end, and be unemployed too, in my experience.
  • Cooe - Monday, March 1, 2021 - link

    He wouldn't need to "gamer bash" if the "I exclusively use my PC for gaming" folks didn't keep saying freaking retarded things... You can't seem to get it into your thick skull that "I built a high-end PC EXCLUSIVELY for playing games at 1080p" describes a MINORITY of PC owners (and ESPECIALLY of those interested in high-core-count CPUs), not a majority...
  • Drake H. - Thursday, March 2, 2017 - link

    Where is the Dolphin Bench?
  • Ian Cutress - Thursday, March 2, 2017 - link

    Our updated version wasn't working in time; it kept giving the same value no matter what CPU I was using. One of the devs forwarded me a new version, but it came too late, so I need to test it for suitability next week.
  • iranterres - Thursday, March 2, 2017 - link

    Competition is back. Keeping in mind this is an all-new architecture with lots of stuff to be ironed out, Ryzen has a lot of development room, which is awesome, and the performance is very competitive...
  • mdriftmeyer - Thursday, March 2, 2017 - link

    ``Along with the new microarchitecture, Zen is the first CPU from AMD to be launched on GlobalFoundries’ 14nm process, which is semi-licenced from Samsung.''

    It's a cross-licensing agreement between both corporations, not a one-way license of Samsung's process by GF. Each has IP in the game, and they jointly forged a long-term partnership on 14nm FinFET.
  • Marlin1975 - Thursday, March 2, 2017 - link

    So what did Intel ask and/or tell you to do when they contacted Purch/Anandtech about the AMD Zen/Ryzen reviews?
    Did you do as they asked, or tell them no like other sites have reported?
  • Gothmoth - Thursday, March 2, 2017 - link

    So much of the reporting is about stupid games... we all know that's what 6-10 core systems are for:
    playing stupid shooter games...... :-)

    I am very happy with the benchmark results.
    For me it is 50% of the price for 100% of the performance.
  • rudolphna - Thursday, March 2, 2017 - link

    You really need to calm your tits, I don't think it's necessary at all to badmouth gamers because "hurrr durr gamers lelz no jurbz". Why not show a little adult maturity and ignore the Intel trolls?
  • Notmyusualid - Saturday, March 4, 2017 - link

    @ rudolphna

    Sorry pal, you can't fix stupid.

    This guy has it in for 'gamers' in general it would seem. Apparently we are all living with mommy, and drawing welfare, and have no life too. I'm going to make a point to avoid responding to him in future.

    So, I'll 'worry' about him, from the poolside of my private villa, playing Team Deathmatch over my 600Mb/s fiber connection.
  • Meteor2 - Monday, March 6, 2017 - link

    Calm down.
  • Medicos - Thursday, March 2, 2017 - link

    Good question, lol.
  • fanofanand - Thursday, March 2, 2017 - link

    I was hoping the "contact us before releasing your review" thing would be addressed. There are a lot of rumors that Intel is up to its old shenanigans, and I for one would really like Dr. Cutress's response to that.
  • Meteor2 - Friday, March 3, 2017 - link

    Source?
  • JasonMZW20 - Thursday, March 2, 2017 - link

    I'm pleasantly surprised. AMD is back; competition is back. No, it doesn't win everything, but it actually has some clear wins. That it can fight toe-to-toe (and sometimes beat) a $1050 6900k is freakin' awesome.

    I was expecting it to be behind about 10-15% in most things, except Blender and Handbrake (cynical, I know; ignored the leaks). So, bump up the clocks, and it'll knock on 7700k's door too, where clock speed is more of a determining factor.

    Content creators must be thrilled!
  • rarson - Thursday, March 2, 2017 - link

    I think the core count is holding clocks back, so it should be interesting to see what a 4C/8T chip can do once released. I think overclocked, a 4C/8T Ryzen could get very close or maybe even beat an overclocked 7700k in some benchmarks. Not to mention by that time, the AM4 platform should be a bit more mature.

    I ordered a 1700X and I'm extremely happy with the benchmarks I'm seeing reported. Can't wait to get it built! I haven't had a really good reason to build an AMD system in a long time.
  • mapesdhs - Thursday, March 2, 2017 - link

    Well, sort of. One downside of this platform is the RAM limit (people I know would prefer 128GB+ for a bit of headroom) and the smaller number of PCIe lanes (Intel's X99 does have an advantage there, allowing for several GPUs plus add-in cards such as additional NVMe, audio, video, etc.). However, for those who haven't moved on from older systems due to the crazy cost of Intel's newer CPUs, it looks like a good stepping stone to Naples.
  • Manabu - Thursday, March 2, 2017 - link

    I don't understand the X thing in the name ('eXtended Frequency Range'). All CPUs are unlocked and can be overclocked to any multiplier, but the non-X parts have worse power management? Does the X just let you raise the turbo speed while keeping the better power management? Or something different? That part of the article is very unclear to me.
  • Ian Cutress - Thursday, March 2, 2017 - link

    All CPUs have XFR, but XFR still has a limit imposed on the chip (e.g. +100MHz over max turbo). With an 'X' chip, that limit is higher than on a non-X chip (e.g. +200MHz). Or you can just overclock manually.
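    For concrete numbers on the launch parts: the 1800X has a 4.0 GHz max turbo and +100MHz of XFR headroom (so 4.1 GHz with sufficient cooling), while the non-X 1700 gets +50MHz on top of its 3.7 GHz max turbo.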
  • Tunnah - Thursday, March 2, 2017 - link

    The non-X chips won't automatically boost speed when they detect sufficient cooling.
  • Tunnah - Thursday, March 2, 2017 - link

    Just realised who I'm replying to. Gonna guess you know more than me on this haha
  • mapesdhs - Thursday, March 2, 2017 - link

    Must admit, I'd inferred from earlier reports on XFR that there wasn't any particular limit imposed. A max of +100 doesn't really impress, TBH. The PR referred to higher clocks if the cooling system is good enough; surely anyone with a typically decent water AIO as a baseline has a setup that can push well beyond a mere +100, so XFR seems rather limiting given this.
  • entrigant - Thursday, March 2, 2017 - link

    No ECC is a deal breaker. That single feature is what kept me on AMD up until the Phenom II, and it's why I'm on a Xeon now. I had high hopes Ryzen would give me ECC and OC support in one chip.

    I don't understand why consumers don't demand ECC RAM. With tests by the likes of Google showing how common errors are, how can anyone feel confident in a machine that can't even detect when such an error has occurred? (A toy sketch of how that detection works is below.)

    I haven't even gotten to the benchmarks yet, and I already know I won't be building a Ryzen system this year. What a letdown.
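    For anyone unfamiliar with how that detection works: ECC DIMMs store each 64-bit word with 8 extra check bits (a 72,64 SECDED code) whose parity pinpoints a flipped bit. A toy Hamming(7,4) version of the same idea in Python, purely illustrative of the math rather than of how DIMM hardware is actually wired:

        # Toy Hamming(7,4): 4 data bits + 3 parity bits; any single bit
        # flip is both detected and corrected via the parity "syndrome".
        def encode(nibble):
            d = [(nibble >> i) & 1 for i in range(4)]    # d0..d3
            p1 = d[0] ^ d[1] ^ d[3]
            p2 = d[0] ^ d[2] ^ d[3]
            p3 = d[1] ^ d[2] ^ d[3]
            return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7

        def decode(bits):
            p1, p2, d0, p3, d1, d2, d3 = bits
            s1 = p1 ^ d0 ^ d1 ^ d3                       # re-check parity
            s2 = p2 ^ d0 ^ d2 ^ d3
            s3 = p3 ^ d1 ^ d2 ^ d3
            syndrome = s1 | (s2 << 1) | (s3 << 2)        # 0 = clean word,
            if syndrome:                                 # else = position
                bits = bits[:]
                bits[syndrome - 1] ^= 1                  # flip it back
            _, _, d0, _, d1, d2, d3 = bits
            return d0 | (d1 << 1) | (d2 << 2) | (d3 << 3), syndrome

        word = encode(0b1011)
        word[4] ^= 1                         # simulate a cosmic-ray bit flip
        value, pos = decode(word)
        assert value == 0b1011 and pos == 5  # data recovered, flip located

    Without the check bits, that flip would have silently changed the stored value, which is exactly the failure mode a non-ECC machine can't even see.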
  • sans - Thursday, March 2, 2017 - link

    Stick with your Intel Xeon and keep buying Intel's products forever. You will never regret it if you keep your habit of buying Intel-based products.
  • amightywind - Thursday, March 2, 2017 - link

    I have been very happy with my 6 core Vishera for under $100.
  • entrigant - Thursday, March 2, 2017 - link

    The problem is all Xeons are multiplier-locked, and server boards with full ECC support don't let you touch RAM timings. So wanting ECC sticks you with an artificial performance ceiling. It used to be that AMD would let you have the best of both worlds; it's a shame they didn't bring that back.
  • Notmyusualid - Friday, March 3, 2017 - link

    My X99 board allows me to change timings as I please.

    I have two memory kits: 64GB of ECC at 2133MHz (running 2400MHz), and 3200MHz Vengeance LED non-ECC (also running 2400MHz due to the Xeon proc).

    The ECC RAM beats the non-ECC every time. Winsat mem is 48GB/s vs. 43GB/s for the non-ECC. Go figure.
  • A5 - Thursday, March 2, 2017 - link

    I'm sure AMD will be happy to sell you an Opteron with ECC support. Home users don't care about ECC because it makes RAM cost more and they don't notice the errors.
  • rudolphna - Thursday, March 2, 2017 - link

    Because unless you are running sensitive compute workloads, the occasional memory error that would be caught by ECC doesn't matter. I've never used ECC memory, and I can't think of a single time having ECC would have helped anything. Also, ECC memory is much more expensive and typically lower-performing.

    If you are a workstation user running scientific, data-sensitive applications where an error can impact, say, research, then I'd understand. But the average consumer, and even prosumer, has no real need for ECC.
  • Ninhalem - Thursday, March 2, 2017 - link

    You need ECC memory if you're using a FreeNAS server. I don't want to see my critical data get sent to the IT black hole because I decided to skimp on good memory.
  • BrokenCrayons - Thursday, March 2, 2017 - link

    I've never encountered data loss or corruption running FreeNAS on non-ECC RAM. I would think the instances where non-ECC would corrupt stored data are few and far between based on personal experience. That said, were I fielding equipment in a data center, ECC would be a requirement. Anything outside of a server room or equipment sitting around a home simply doesn't need or benefit from ECC enough to justify the added platform costs it entails.
  • thomasg - Thursday, March 2, 2017 - link

    Sorry, but that is just wrong.
    You never _noticed_ data corruption, because ECC RAM is the only reasonable way to actually notice it.
    All the cases of silent corruption that might or might not have happened on your system, you'll never know about. Usually you won't notice a bit flip, and even if you do, you're likely to attribute it to other things (oh, the shitty program crashed; oh, my PSU must be crappy).
  • BrokenCrayons - Thursday, March 2, 2017 - link

    Read the post I responded to and you'll find the discussion is about losing files due to corrupted data in storage due to a lack of ECC (which simply isn't true) and not about system crashes.
  • Notmyusualid - Friday, March 3, 2017 - link

    +1
  • Notmyusualid - Friday, March 3, 2017 - link

    ECC is the bomb. See my comment above.

    Ever had a random blue-screen on an otherwise perfectly-running system?

    That just might be your non-correctable RAM error showing its face.
  • BrokenCrayons - Tuesday, March 7, 2017 - link

    It's probably a mistake to attribute a "random blue screen" to a lack of ECC RAM. Unless you're analyzing the system's BSOD dumps after the fact, with the necessary tools and background knowledge, it's impossible to credit RAM as the culprit. Besides that, BSODs are a bit of an oddity in modern computing regardless of whether ECC is present, so even on the off chance one of those rare blue screens does pop up, they're uncommon enough that it's pretty harmless to simply restart the system.

    The cost is unjustified and the benefits highly questionable/unprovable. Thus, non-ECC is just fine for the vast majority of client computing workloads.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    As I said in an earlier comment, my ECC performs FASTER than my non-ECC RAM.

    And I paid the same for 64GB of ECC as I did for 32GB of non-ECC.

    So the size and performance boxes were 'ticked'.
  • JasonMZW20 - Thursday, March 2, 2017 - link

    Well, AMD responded in the Reddit AMA, and ECC is supported.

    https://www.reddit.com/r/Amd/comments/5x4hxu/we_ar...
  • mapesdhs - Thursday, March 2, 2017 - link

    I'm confused... I've been reading on review sites that ECC definitely is supported. Or do you mean that using ECC with Ryzen means no overclocking?

    Consumers don't demand ECC because they don't know anything about it. I've seen visual-effects tasks on overclocked i7s where render artifacts weren't prevented because the system didn't have ECC.
  • shing3232 - Thursday, March 2, 2017 - link

    I asked on the AMD reddit and got a reply that they do have ECC enabled by default, and it's up to the motherboard to support it. Just like the FX lineup, I guess.
  • Cooe - Monday, March 1, 2021 - link

    It has full ECC support...
  • XabanakFanatik - Thursday, March 2, 2017 - link

    Your Cinebench R10 and R11.5 benchmarks have swapped titles - the R10 scores are titled R11.5 and vice versa.
  • Yojimbo - Thursday, March 2, 2017 - link

    Looks like a huge improvement over what they had before. It's a competitive chip but it's not the home run lots of people seemed to be expecting when AMD released that single benchmark a couple of weeks back.

    It's good with multi-core usage but not so good with single core usage. At the same time it has various weaknesses, "edge cases" this article calls them. Machines that do heavy multi-core lifting are probably the ones most likely to be affected by these edge cases. So it ends up being not the best choice for most consumers because of the poor single core performance and probably not the best for most workstations or servers because of the edge cases as well as the ecosystem Intel can provide.
  • Stuka87 - Thursday, March 2, 2017 - link

    So 100% of the performance for 50% of the cost is not a home run?!
  • negusp - Thursday, March 2, 2017 - link

    Exactly. I loathe the Intel apologists here.

    In some cases it's even more than 100% of the performance, and as games continue to be optimized for multiple threads, the advantages of multi-thread performance will continue to grow.

    For the price I don't understand how you can sit there with a straight face and say that what AMD did was underwhelming.
  • Yojimbo - Thursday, March 2, 2017 - link

    Yeah if the i7 6900K were the only chip Intel sold and it were locked into being sold at $1000 then AMD would have a home run. Unfortunately that's not the case and AMD isn't going to turn around the company by beating the i7 6900K.

    Ryzen may take AMD off life support but it hardly seems to put them in the driver's seat, technologically speaking. They'll probably still be walking around on crutches. (Maybe they can race wheelchairs through the halls.)
  • Lord-Bryan - Thursday, March 2, 2017 - link

    So what! Amd still has 4/8 parts on the way, so all that talk is BS.
  • Yojimbo - Thursday, March 2, 2017 - link

    Yeah, but those 4/8 parts aren't going to be going up against a $1000 part, and they will still have all the weaknesses this chip has.

    My points are not BS. And I'm not saying Ryzen isn't a competitive product. What I am saying is that I'm not convinced Ryzen will result in a financial situation that validates AMD's recent run-up in stock price.
  • Meteor2 - Friday, March 3, 2017 - link

    And indeed AMD's share price is going down today. The top 6C/12T and 4C/8T parts have been announced and they have equally low clocks. Given Zen's uncompetitive IPC... It's not looking good.
  • Cooe - Monday, March 1, 2021 - link

    ... "Uncompetitive IPC". So did you say that about Broadwell-E when it launched too?
  • Rene23 - Thursday, March 2, 2017 - link

    The other benchmarks may still run sub-optimal code paths, not making use of the latest and greatest SSE, AVX, et al. Most developers blindly trusting Intel's compiler may not even be aware: https://en.wikipedia.org/wiki/Intel_C%2B%2B_Compil...
  • Yojimbo - Thursday, March 2, 2017 - link

    Rene23, looks like the intentional sabotage was from 2005 to 2009 and was settled in 2010. I think it's less likely they are doing it now being they were already caught and have already settled for it.

    It's also to be expected that Intel will spend their effort optimizing their compiler for their own hardware. I think there's no reason to expect them to support other hardware at all other than to increase the usefulness and market presence of their compiler. AMD relying on Intel to optimize for AMD's chips is a reflection on AMD's ability to support their chip, not a reflection on Intel. If a benchmark runs faster on AMD hardware with gcc than Intel's compiler, then gcc should be used for the AMD benchmarks. If AMD comes out with a compiler and libraries capable of outperforming Intel's compiler on AMD's hardware, or if they contribute to an open source compiler in order to help optimize it for their hardware then hopefully developers will use those tools to make the best use of AMD's hardware.
  • Rene23 - Friday, March 3, 2017 - link

    I personally would not use Intel's compiler to start with, and would use gcc and clang instead... However, many AAA game titles are optimized with the Intel compiler and libraries, and thus in my opinion it is very likely that Intel's silicon gets a biased advantage... Look at unbiased kernel compile time as an indicator of actual silicon performance: http://www.phoronix.com/scan.php?page=article&...
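    To be clear about the mechanism: the complaint is runtime dispatch on the CPUID vendor string rather than on the actual feature flags. A rough Python illustration of the pattern (it reads Linux's /proc/cpuinfo; the names and structure are mine for illustration, not ICC's actual code):

        # Illustrative only: vendor-string dispatch vs. feature-flag dispatch.
        def cpuinfo():
            with open("/proc/cpuinfo") as f:   # Linux-specific
                return f.read()

        def vendor(info):
            for line in info.splitlines():
                if line.startswith("vendor_id"):
                    return line.split(":", 1)[1].strip()
            return "unknown"

        def has_flag(info, flag):
            for line in info.splitlines():
                if line.startswith("flags"):
                    return flag in line.split()
            return False

        info = cpuinfo()
        # The contested behaviour: gate the optimized path on the vendor...
        fast_path = vendor(info) == "GenuineIntel"
        # ...when a fair dispatch would gate on the capability itself:
        fair_fast_path = has_flag(info, "avx")

    An AMD chip that reports the very same "avx" flag takes the slow path under the first test and the fast path under the second; that gap is the whole dispute.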
  • Yojimbo - Saturday, March 4, 2017 - link

    You're looking at it the wrong way. It's not a "bias". Why do you want to force people to use worse compilers just because it helps AMD? "Actual silicon performance" doesn't mean a thing. Intel puts work into providing tools that take advantage of their chips. That work is part of what one gets when one buys their chips.
  • divertedpanda - Saturday, March 4, 2017 - link

    What makes Intel's compiler better than the others ?
  • Meteor2 - Monday, March 6, 2017 - link

    The best compiler is often PGI, but Intel's usually produces good results. Buying and testing compilers depends on the potential cost savings from better-optimized code, e.g. big 24/7 HPC stuff.
  • Yojimbo - Thursday, March 2, 2017 - link

    I'm looking at Ryzen as a whole, not the market for a niche chip. How much revenue do you think the i7-6900K gives Intel? And how much do you think AMD will derive by beating it in a competition?

    Have some perspective.
  • Akkuma - Thursday, March 2, 2017 - link

    If your comparison forgets about the 7700K, then yes, that statement is correct. However, the 7700K offers more performance in many scenarios for roughly the same price as the 1700, which means you save ~$150. It basically comes down to whether you believe software will scale beyond 4 cores/8 threads before your next planned upgrade, or whether you can already leverage the extra cores in applications that scale well.

    In my opinion, if one upgrades about every 4 years and is counting on superior core scaling in new applications, they're making a poor decision. The lag time will be at least 2 years before it starts becoming normal, and it probably won't be fully realized for at least 5 years. Additionally, not everything can be multithreaded that well.
  • fanofanand - Thursday, March 2, 2017 - link

    I'm unable to follow your logic, how does choosing the 7700k over the 1700 save you $150? Considering the two chips are basically at price parity after Intel's reductions, and Am4 boards are cheaper almost across the board, I'd say you probably save $50-$100 going with Ryzen over Intel.
  • Meteor2 - Friday, March 3, 2017 - link

    Saves $150 over the 1800X, but is faster on anything which isn't thread hungry -- which isn't much now, and might not be for several years.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    You've got some hilariously screwed logic there.
  • Meteor2 - Friday, March 3, 2017 - link

    Makes perfect sense, what don't you understand?
  • Cooe - Monday, March 1, 2021 - link

    Hahahahaha xD. You couldn't have been more wrong if you had tried. The 4-core/4-thread i5-7600K started becoming a major bottleneck for new games in just 2018 & is practically useless for gaming today, whereas the 6c/12t Ryzen 5 1600 still holds up GREAT!
  • Meteor2 - Friday, March 3, 2017 - link

    100% of the performance for 50% of the price... on scientific compute. Less performance for more money elsewhere. Most scientific compute is done on GPUs these days anyway.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    So it ends up being not the best choice for most consumers because of the poor single core performance and probably not the best for most workstations or servers because of the edge cases as well as the ecosystem Intel can provide for 2x the price at the same performance.
    There fixed it for you!
  • Cooe - Monday, March 1, 2021 - link

    Back when idiots would say that i9-6900K tier single-thread performance was so "poor" as to be disqualifying for most users... ROFL. You're freaking delusional.
  • AndrewJacksonZA - Thursday, March 2, 2017 - link

    Typo on page 3: "The base design supports 512MB of private L2 cache." That's a whole whack of cache. I think you meant "KB." :-)
  • lilmoe - Thursday, March 2, 2017 - link

    Did any of you actually read the whole thing before commenting? lol, I'm still on page 6 and there are already 7 pages of comments... Lots of shitty ones at that.
  • fanofanand - Thursday, March 2, 2017 - link

    I logged in to this article at 8:00 CST, the exact moment of the NDA lift, and there were already two comments, so most of these comments were made before even reading the article. It's that stupid "first" mentality that "journalists" and commenters all seem to have.
  • mapesdhs - Thursday, March 2, 2017 - link

    Spotted a youtube channel recently where the channel author always posts "First!" just to annoy such people. :D
  • Meteor2 - Friday, March 3, 2017 - link

    +1!
  • mpbrede - Thursday, March 2, 2017 - link

    In the opening paragraphs you refer to "Jim Killer" - Typo?
  • awehring - Thursday, March 2, 2017 - link

    No.
    British humor.
  • zeeBomb - Thursday, March 2, 2017 - link

    How deep can u go? Excellent analysis by Anandtech
  • PEJUman - Thursday, March 2, 2017 - link

    Ian, nice work. This is why I go to Anandtech for CPU and GPU launches.
  • Carmen00 - Thursday, March 2, 2017 - link

    Typo, "rehiring guru Jim Killer" ==> "rehiring guru Jim Keller", first page.
  • Koenig168 - Thursday, March 2, 2017 - link

    AMD needs to make the RGB Wraith cooler available at retail.
  • hast_do_angst - Thursday, March 2, 2017 - link

    The 1800X at its stock clock rate shows up well behind Intel's Core i7-6900K, but to keep things in perspective, you get 87% of Broadwell-E's performance for less than half its price with the Ryzen 7 1800X. Conversely, you can opt for the $350 Core i7-7700K and enjoy more performance than AMD's 1800X in many popular game titles. On the other hand, the Ryzen 7 1800X is in its element when you throw professional and scientific workloads at it. It isn't the fastest in every high-end benchmark, but any calculation that factors in value almost assuredly goes AMD's way.
  • Medicos - Thursday, March 2, 2017 - link

    For gaming, the bottleneck is the GPU at 4K and the CPU at 1080p (where it won't be a visible bottleneck unless you have a high-refresh screen north of 120 Hz), which means Ryzen is "the" CPU the market needed:
    both in the sense of competition lowering prices against exclusive Intel deals, and for general consumers in AMD's name.
  • Ensate - Thursday, March 2, 2017 - link

    Great article, and informative as always, Anand. If I had to make a suggestion: the benchmarks all seem to be geared towards gaming or encoding. My own use case is more prosumer. I use my PC for gaming, and also for work involving coding, Unity3D, and virtual machine labs running on VMware Workstation. Can I suggest benchmarks involving code compilation and VMware? I know it's a large use case for a huge number of people. Thanks.
  • RickyBaby - Thursday, March 2, 2017 - link

    Agree with the above comment. Yes, we know gamers want to see the game benchmarks, but as a professional developer... I'd like to see some code-compilation numbers. How about a massive Visual Studio build of something? Running a VM that's allocated a large number of cores? Running several VMs in parallel, etc.?
  • ABR - Thursday, March 2, 2017 - link

    +1
  • jimbo2779 - Saturday, March 4, 2017 - link

    They have had VS build time for a while now, and it is the thing I was most interested in. Sadly, for whatever reason, it hasn't been done for Ryzen. Nowhere has benchmarked VS compile time as far as I can tell.

    I hope it is in part 2, as it isn't in Bench either.
  • pogostick - Thursday, March 2, 2017 - link

    The last AMD CPU I bought was a Phenom II X4 965, and it's still in use. I didn't like or buy any of the earth-mover varieties. Zen is worth my money again. Congrats AMD.
  • alpha64 - Thursday, March 2, 2017 - link

    I think the page on the launch details has an error:

    The base design supports 512MB of private L2 cache per core and 2MB of a shared exclusive L3 victim cache.

    I think that should be 512KB of private L2 cache per core...
  • ET - Thursday, March 2, 2017 - link

    So far the 1700 looks like incredible value for content creators. Intel still wins for everyday use and gaming, but altogether not bad.

    Looking forward to the next part. Hopefully it will contain some tests with 2 or 4 cores disabled, to simulate what we can expect from R5.
  • imaheadcase - Thursday, March 2, 2017 - link

    I feel this chip is a little too late, to be honest. Anandtech pretty much said what everyone is thinking: Intel already has the supply and the backing of developers.

    The main issue is not price or performance; it's simply that upgrading is not appealing for most people, considering most high-end chips are just fine for modern gaming or whatever you do.

    If this had been released next to Intel's top-of-the-line chip it would be a whole different story, but the fact remains the market is at a saturation point in high-end chips; versus years ago, the upgrade cycle for these doesn't fit in well.

    Even when buying a new system from a retailer as an "I don't know computers" type of person, the branding is still in Intel's court. Not only that, but Kaby Lake is still priced as well or better, with performance to match.
  • imaheadcase - Thursday, March 2, 2017 - link

    To clarify what I mean by upgrade path: it's no longer the case that people upgrade to a new chip every year; most people keep the same fast chip for a few years now.
  • fanofanand - Thursday, March 2, 2017 - link

    There are a LOT of people still sitting on their 2500Ks. For those people the 7700K isn't that big of a jump, but doubling the cores and threads while maintaining similar single-threaded performance? I'd say that gives a fairly large reason for a lot of people to upgrade. What has been the cry of Intel fans over the years? Give me more cores! Maybe this year Intel will finally give the mainstream 6 cores; there is talk of Intel putting a 6-core CPU in the mainstream line next year. If 4 cores were enough for 95% of people, why would Intel do that? They have stuck at 4 cores to force enthusiasts to pony up for HEDT; AMD just gave those people an option at half price.
  • Meteor2 - Friday, March 3, 2017 - link

    If you've (sensibly) stayed with a 2500K then a Ryzen is no more of an upgrade than a 7700K. Less of one, if anything, given the slower cores. There's little benefit for higher core counts outside of esoteric uses.
  • Cooe - Monday, March 1, 2021 - link

    God. I wish you didn't stop posting on Anandtech in 2020 so I could rub your nose in all your wrongness here.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    How, pls clarify.
  • flexy - Thursday, March 2, 2017 - link

    This is crazy. In SOME tests, the Ryzen 1700 is equal to and sometimes even faster than Intel's flagship, which costs THREE TIMES more. Check the Handbrake etc. tests. $339 CPU vs. $1089 CPU.
  • dullard - Thursday, March 2, 2017 - link

    This is crazy. In SOME tests, Intel 6320 is equal and sometimes even faster than AMD's flagship which costs THREE TIMES more. Check the Web etc. tests. $157 CPU vs $499 CPU.

    My point being that focusing on just one use case in comments is stupid. If that one use case is what you need, get the CPU that best meets your needs. If you want general use for multiple applications, it is a tradeoff where AMD is better in some programs and Intel is better in other programs.
  • Murloc - Thursday, March 2, 2017 - link

    It's not stupid to focus on a restricted set of applications if that's what the 2011 and R7 processors are aimed at. These are not mainstream gaming CPUs.

    The real test will be the R5 vs i5.
  • dullard - Thursday, March 2, 2017 - link

    Like I said, if you have one main use case, then get the best CPU for that use case. That isn't stupid.

    What is stupid, is in a general comment thread posting how great just one use case is--just to make one side or the other side seem so much better.
  • Meteor2 - Friday, March 3, 2017 - link

    Well said.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    Yeah and it will be most interesting.
  • Meteor2 - Friday, March 3, 2017 - link

    It doesn't look good. Lower IPC and lower clocks for the 1600X and 1500X.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    Yeah? Use your Intel 6320 to encode 10-bit HEVC and watch it choke.
  • Notmyusualid - Friday, March 3, 2017 - link

    ^ This is the most important statement here:

    "Get the CPU that meets your needs."
  • dullard - Thursday, March 2, 2017 - link

    Not to mention that the 6900K isn't Intel's flagship either. The 6950X is 15% faster than the 6900K in Handbrake.
  • JasonMZW20 - Thursday, March 2, 2017 - link

    And is a 10c/20t part, so I'd certainly hope so.

    The LGA 2011-3 chips are repurposed server parts for ultra-high end PCs. Naples has quad channel memory controller and at least 40 PCIe lanes, but I wonder if AMD will sell the binned 10/12/14c parts (out of 16c/32t) as ultra-high end consumer CPUs in the $600+ bracket. The lower clocks wouldn't be favorable though. Guess we'll find out later this year.
  • Notmyusualid - Friday, March 3, 2017 - link

    Intel's flagship is not the 6900K; it is the 10C/20T 6950X, at an unbelievable price.

    I sorely missed its inclusion in this review, but I think that was to compare AMD's 8 cores to Intel's 8 cores, and of course there is the price.
  • sorten - Thursday, March 2, 2017 - link

    Thanks for the solid review Ian!

    Looks like Ryzen dominates in multithreaded tasks, but still lacks in single threaded. No reason for me to upgrade, but I'm glad they will make Intel work harder.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    Because of the low clocks, wait for r5 before you conclude on what is and what is not
  • Meteor2 - Friday, March 3, 2017 - link

    R5 has equally low clocks.
  • Nem35 - Thursday, March 2, 2017 - link

    It doesn't even need to be good for gaming. It will be more than enough for it to be awesome for productivity. People who are saying that it's not as good as Intel are wrong. Yes, it is. It is even better, because it's half the price.
    Ryzen is telling us that in the next couple of months we will be able to have business laptops and ultrabooks with similar or even the SAME performance for $100-150 less. Also, this will FORCE Intel to lower their prices OR to make their CPUs even better in order to keep the same price.

    Competition is always good.

    I was with Intel all these years because there was no competition, but from now on, heads up AMD. Make our life easier/cheaper.

    Good work!
  • liquidaim - Thursday, March 2, 2017 - link

    Please perform a power consumption comparison as well to include in your part 2. This was standard practice in Bulldozer/Piledriver reviews. It's only fair to continue the practice.
  • osxandwindows - Thursday, March 2, 2017 - link

    No thunderbolt?
    No buy from me. Sorry amd.
  • fanofanand - Thursday, March 2, 2017 - link

    Thunderbolt means paying Intel; why would AMD do that when HDMI, DP, and USB 3.0/3.1 are used by virtually everyone? Thunderbolt is like FireWire: it may be awesome, but its price means that it will never be mainstream. Besides, Thunderbolt can always be added at the board level by partners, so it's not like there will NEVER be an AM4 board with TB support.
  • Meteor2 - Friday, March 3, 2017 - link

    TB is a godsend for 'content creators', the few who might be able to take advantage of the 1800X.
  • asH98 - Thursday, March 2, 2017 - link

    Cutress, HEDT?? Really?? Ryzen is clearly a chip for the masses - there's no need for AMD to compete with Intel on the highest CPU levels when you have Vega too.
    Both Intel and AMD can succeed!!
  • none12345 - Thursday, March 2, 2017 - link

    What I care about for game benchmarks is 1080p, 1440p, and 4K. I don't give a rat's ass about which processor is faster rendering a game at 1000 fps in 640x480, because I will NEVER play a game at that res.

    If 2 processors perform about the same at 1080p or 4K, that's all I care about. If they are about the same, then I won't make my decision on game benchmarks; I will move on to the next area I care about. Be that heavy multitasking, rendering, compiling, browsing, office, whatever.

    What I want to know about Ryzen and gaming is: is it about the same? If it is, I don't care which has 3 or 5 more fps when you already have 80 fps. (I care about 3 or 5 when you only get 20 fps.)

    To be honest, for gaming you should only be testing a $500 CPU with a $500+ GPU at 4K (and 1440p), not some pleb lower resolution like 1080p or 720p. (Note: I'm a pleb with 1080p.) A lower resolution is just a footnote; it's good to see, but it should have no real weight. What matters is what you are actually going to play at.

    It looks like Ryzen is about what I expected. It's not as fast as Kaby Lake in single thread. It's more or less on par in games with settings people actually use. But it destroys it in multithread.

    So I'm still a sale: I want a good-enough gaming CPU with a lot more cores for heavy multitasking and code compiling. Ryzen looks to be perfect for me. A 7700K vs a 1700 with that usage in mind is a no-brainer win for the 1700. A 1800X vs a 6800K is also a no-brainer win for Ryzen (though value-wise a 1700 or 1700X is even better).

    Games released this last year seem to finally be heading in the more-threads direction, so future-proofing wise, it also looks like an easy win.

    IF all you care about is the absolute highest fps today (or in the games of yesteryear, which run fine on any CPU), the 7700K wins easily, so buy that. But I care about more than that.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    Best comments ever! Kudos
  • Medicos - Thursday, March 2, 2017 - link

    👍 +1
  • dantecolo - Thursday, March 2, 2017 - link

    Question: is Windows already fully patched for Zen? Because Linux is only starting to support it in kernel 4.11, which has not been released yet.
  • highlnder69 - Thursday, March 2, 2017 - link

    Here is a little snippet from the conclusion section from the Ryzen review at Toms Hardware. There may currently be an issue with some games on the performance side.

    "We come away from today's coverage with a number of questions that couldn't be answered in time for the launch. For instance, we discovered Ryzen's tendency to perform better in games with SMT disabled. Could this be a scheduling issue that might be fixed later? AMD did respond to our concerns, reminding us that Ryzen's implementation is unique, meaning most game engines don't use if efficiently yet. Importantly, the company told us that it doesn’t believe the SMT hiccup occurs at the operating system level, so a software fix could fix performance issues in many titles. At least one game developer (Oxide) stepped forward to back those claims. However, you run the risk that other devs don't spend time updating existing titles."
  • lilmoe - Thursday, March 2, 2017 - link

    Compare this article with the Galaxy S7/Note7 and iPhone reviews. What a difference.

    Dear Anandtech, either stick with this type of quality and factual presentation, or don't bother with phone reviews.

    Great article. Thanks for the read. Now back to trolling the trolls.
  • fanofanand - Thursday, March 2, 2017 - link

    Ian is the best of the best, but you can't expect every staff member to be as solidly unbiased as Ian. I don't even bother with Anandtech phone reviews anymore; they are all the same: Android is crap, Apple was sent from God above.
  • BigDragon - Thursday, March 2, 2017 - link

    This is a very good and thorough review.

    It looks like Ryzen is somewhere between Broadwell-E and Kaby Lake. This means AMD is competitive. That's what we really need in the CPU market -- competition. I don't see the blowout that AMD hyped up, but I do see enough performance to buy into the platform.
  • fanofanand - Thursday, March 2, 2017 - link

    The blowout is in multi-threaded applications. The price/performance disparity is enormous there.
  • regis440 - Thursday, March 2, 2017 - link

    "The base design supports 512MB of private L2 cache" - should be 512KB
  • Ryan Smith - Friday, March 3, 2017 - link

    Thanks!
  • adrian_sev - Thursday, March 2, 2017 - link

    It would be nice to have coverage of Linux numbers as well (and if benchmark tools are lacking there, there is OpenBenchmarking)
  • serendip - Thursday, March 2, 2017 - link

    Oh yes, I'd love to see Linux numbers for multicore build times. This chip could be an amazing workstation chip at half the price of the equivalent i7.

    Welcome back AMD!
  • Leyawiin - Thursday, March 2, 2017 - link

    Checking other reviews, Ryzen doesn't fare nearly as well in gaming... save for minimum FPS (which it does very well). An i7-4790K still beats it most of the time (and often by a substantial amount). While it's no turd in gaming like the FX, there's no reason to replace a Haswell-or-later i7 with it. Intel still beats AMD in gaming.
  • fanofanand - Thursday, March 2, 2017 - link

    Assuming all you do with your machine is gaming, yeah, I'd agree the 7700K is better, but if you are running background applications while gaming, then Ryzen starts to show its stuff. You have to remember all of these tests are done in a vacuum where nothing else is running. That's not real life, where you have all sorts of clutter going on in the background.
  • silverblue - Thursday, March 2, 2017 - link

    Considering the clock speeds on those parts, I'd certainly hope so.
  • Cooe - Monday, March 1, 2021 - link

    Only an idiot would have been shopping for / buying an 8-core CPU in 2017 JUST to play games...
  • chrysrobyn - Thursday, March 2, 2017 - link

    Page 3: "The base design supports 512MB of private L2 cache per core" -> should be 512KB per core
  • keveazy - Thursday, March 2, 2017 - link

    Ryzen does show good performance and makes for good competition, but sorry, it did not beat Intel.

    Where it gets tough is the price point. For people who are buying their first PC, Ryzen may be a better choice, but for existing professionals who already have an i3/i5 system and are looking to upgrade, it's so not worth moving to AMD.
  • negusp - Friday, March 3, 2017 - link

    The hell you talking about? These CPUs are arguably even better for people hanging onto their 2500k or Nehalem.

    And it smokes Intel when it comes to encoding and multithread performance, which is only going to get more relevant as DX12/Vulkan becomes prevalent and games/applications use more threads. So for existing professionals and content creators, Ryzen is DEFINITELY the way to go.
  • Meteor2 - Friday, March 3, 2017 - link

    So that's about 1% of the market. For gamers on 2500Ks, Ryzen makes no more sense than a 7700K. Get an SSD and a better GPU.
  • eachus - Thursday, March 2, 2017 - link

    Memory for Ryzen:
    Corsair Vengeance 2x8GB DDR4-3000 C16 running at DDR4-2400 C15
    Memory for Intel chips:
    ?????

    I have to suspect that the reason the memory is running at about 3/4 of full speed is the need for a BIOS update not available while the NDA was still in place. If so, we need to see full (memory) speed benchmarks, if not, an explanation, especially since some memory does run at up to DDR4-3600 with Ryzen chips.
  • Da_Beast - Thursday, March 2, 2017 - link

    A fantastic CPU for poverty stricken people!

    But all the cool kids will still be pumping Intel....
  • RickyBaby - Thursday, March 2, 2017 - link

    Kinda surprised I haven't seen anyone mention this, but someone did mention no Thunderbolt because that's Intel proprietary. Well, Intel might have another ace up its sleeve: Optane, 3D XPoint or whatever they are calling it. I've seen references where it ONLY WORKS with Kaby Lake, or maybe something inside the latest Intel chipset. If so... Zen and the AMD chipsets probably won't support Optane. So AMD definitely made up a lot of ground but still has some catching up to do.

    The Optane/XPoint stuff is still very sketchy and unknown at this time, at least to me; maybe Ian or someone knows more. But in my mind it has the potential to be a game changer... and not video games, lol.
  • Makaveli - Thursday, March 2, 2017 - link

    Last time I checked poor people don't buy $500 cpu's.

    Its getting past your bed time isn't it....?
  • carewolf - Thursday, March 2, 2017 - link

    I don't think it is Intel's R&D alone, or even primarily, that makes Intel's chips perform more optimally. It is every developer out there with an Intel workstation they are developing on, benchmarking on, and thereby always optimizing for.

    Good move making a workstation CPU before the gaming CPU; there is a good chance it will make the gaming CPU faster once it is released, without it even being AMD's direct work.
  • Rene23 - Thursday, March 2, 2017 - link

    I have the feeling that the benchmarks where Ryzen is not winning (Adobe Acrobat PDF opening and such) are simply using the Intel compiler or libraries that still give the Intel CPU a bias by not using the latest and greatest SSExyz and AVX code paths on other vendors' chips...

    When you see how AMD Ryzen wins POV-Ray 3.7, one really wonders why it falls back so much in other tests... ???

    Thank god I compile my own binaries, and thus can trust unbiased GCC to yield better optimized code for me - https://t2-project.org
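
    To make that concrete: GCC selects its SSE/AVX code paths from the -march target at compile time, with no runtime vendor-ID check. A minimal sketch (the file name and compile line are only an illustration):

    ```c
    /* vec_add.c -- build with, e.g.:
     *   gcc -O3 -march=native -c vec_add.c
     * -march=native targets whatever the host CPU supports (AVX on
     * Ryzen included); the generated code never asks who made the CPU,
     * so AMD and Intel chips get the same vectorized path. */
    #include <stddef.h>

    void vec_add(float *restrict a, const float *restrict b, size_t n)
    {
        for (size_t i = 0; i < n; i++)   /* auto-vectorized at -O3 */
            a[i] += b[i];
    }
    ```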
  • Meteor2 - Friday, March 3, 2017 - link

    How does using published documented instructions count as 'bias'? It's up to AMD to provide their own equivalents.
  • Ryedog - Thursday, March 2, 2017 - link

    The big question is, when will AMD or their partners create a NUC competitor that will have 60 fps average 1080p gaming performance in the size of a NUC? Let's go AMD, we know you can do it!
  • BrokenCrayons - Thursday, March 2, 2017 - link

    And laptops too please! Since NUCs use mobile processors, I think we'll both be happy campers if something like that happens.
  • vladx - Thursday, March 2, 2017 - link

    "Jim Killer"

    HA the sides
  • Joseph_Crox - Thursday, March 2, 2017 - link

    When Apple asked Intel for less power-hungry processors, they said nah! When people asked Intel about the 5% increments and the tick-tock strategy, they said hmm. Now that AMD shows Ryzen at $499? I can't hear what they're saying.
  • versesuvius - Thursday, March 2, 2017 - link

    Bravo, AMD! That is more than good enough. Let's hope that Vega is just as impressive! (but darling please, don't make me wait too long :) )
  • Jimster480 - Thursday, March 2, 2017 - link

    Sadly, this Ryzen release has turned into exactly what I said it would be many months ago.
    It doesn't really matter what AMD makes, because Intel has the review sites in their pocket (for the most part) and comparisons will always be crafted to show AMD's weak points while highlighting Intel's strong points.
    The review from PC World on the front page and in emails is a disgusting example of this, because they took the base Ryzen chip running at 3GHz and put it against an OC'd Ivy Bridge @ 4.2GHz in gaming.
    That is something we know mostly only uses 2-4 threads (for most titles) and is very clockspeed dependent, and they basically made the case that Ryzen isn't worth the money. Of course, without noting that the new Kaby Lake chips don't offer any improvements in the same titles either, or showing that Broadwell-E in the same scenario would also produce poor FPS in these titles due to clockspeed.

    I think that so far this is the only site's review I have seen which gives Ryzen a fair shake in showing its overall prowess, while also noting its weaknesses.
  • RickyBaby - Thursday, March 2, 2017 - link

    I haven't read any of the other reviews ... but wouldn't be surprised at blatant Intel bias. Others have already said ... Ian did a great job here ... kept the bias way out of it either way. Ran a good fair shake of benchmarks and let the chips fall where they may. I for one am glad he didn't spend all his time on just gaming benchmarks. Great job by Ian/Anand and company ... Kudos from the community !
  • Alistair - Thursday, March 2, 2017 - link

    That PCWorld article was probably the worst I've seen so far on the internet. OC vs non-OC. They also picked Far Cry Primal and The Division, the most single-threaded games in Digital Foundry's benchmark, as their test. Laughable.
  • nt300 - Thursday, March 2, 2017 - link

    Can you do an explanation with regards to Zen actually being more like an SoC versus what they have done in the past? Thank you.

    By the way, one of the best reviews and explanations online.
  • Breach1337 - Thursday, March 2, 2017 - link

    Well, nice scores in rendering/encoding, etc., but on price/value the gaming benchmarks are quite bad at pretty much any resolution; the more CPU-bound, the worse. Still, we need healthy competition to keep the Intel/nVidia dominion challenged.
  • RotJ2017 - Thursday, March 2, 2017 - link

    Why wasn't the test bed set up with a 960 EVO or Pro in the Turbo M2 slot which has direct lanes to the CPU? The SATA drive goes through the Chipset. This would have given a better look at the CPU rather than variances with the chipset.
  • Aerodrifting - Thursday, March 2, 2017 - link

    Come on, Where is the overclocking review that we actually care about! Everything else is pretty much the same from what we knew before NDA.
  • ThreeDee912 - Friday, March 3, 2017 - link

    Well at least we know AMD wasn't lying. Whatever they claimed before release was basically the same as the actual result.
  • HighTech4US - Friday, March 3, 2017 - link

    AMD wasn't lying - Really?

    gamersnexus outlined the "tricks?" used by AMD in benchmarking Ryzen before the launch:

    AMD inflated their numbers by doing a few things:

    In the Sniper Elite demo, AMD frequently looked at the skybox when reloading, and often kept more of the skybox in the frustum than on the side-by-side Intel processor. A skybox has no geometry, which is what loads a CPU with draw calls, and so it’ll inflate the framerate by nature of testing with chaotically conducted methodology. As for the Battlefield 1 benchmarks, AMD also conducted using chaotic methods wherein the AMD CPU would zoom / look at different intervals than the Intel CPU, making it effectively impossible to compare the two head-to-head.

    And, most importantly, all of these demos were run at 4K resolution. That creates a GPU bottleneck, meaning we are no longer observing true CPU performance. The analog would be to benchmark all GPUs at 720p, then declare they are equal (by way of tester-created CPU bottlenecks). There’s an argument to be made that low-end performance doesn’t matter if you’re stuck on the GPU, but that’s a bad argument: You don’t buy a worse-performing product for more money, especially when GPU upgrades will eventually out those limitations as bottlenecks external to the CPU vanish.

    As for Blender benchmarking, AMD’s demonstrated Blender benchmarks used different settings than what we would recommend. The values were deltas, so the presentation of data is sort of OK, but we prefer a more real-world render. In its Blender testing, AMD executes renders using just 150 samples per pixel, or what we consider to be “preview” quality (GN employs a 3D animator), and AMD runs slightly unoptimized 32x32 tile sizes, rendering out at 800x800. In our benchmark, we render using 400 samples per pixel for release candidate quality, 16x16 tiles, which is much faster for CPU rendering, and a 4K resolution. This means that our benchmarks are not comparable to AMD’s, but they are comparable against all the other CPUs we’ve tested. We also believe firmly that our benchmarks are a better representation of the real world. AMD still holds a lead in price-to-performance in our Blender benchmark, even when considering Intel’s significant overclocking capabilities (which do put the 6900K ahead, but don’t change its price).

    As for Cinebench, AMD ran those tests with the 6900K platform using memory in dual-channel, rather than its full quad-channel capabilities. That’s not to say that the results would drastically change, but it’s also not representative of how anyone would use an X99 platform.

    Conclusion:
    Regardless, Cinebench isn’t everything, and neither is core count. As software developers move to support more threads, if they ever do, perhaps AMD will pick up some steam – but the 1800X is not a good buy for gaming in today’s market, and is arguable in production workloads where the GPU is faster. Our Premiere benchmarks complete approximately 3x faster when pushed to a GPU, even when compared against the $1000 Intel 6900K. If you’re doing something truly software accelerated and cannot push to the GPU, then AMD is better at the price versus its Intel competition. AMD has done well with its 1800X strictly in this regard. You’ll just have to determine if you ever use software rendering, considering the workhorse that a modern GPU is when OpenCL/CUDA are present. If you know specific instances where CPU acceleration is beneficial to your workflow or pipeline, consider the 1800X.

    For gaming, it’s a hard pass. We absolutely do not recommend the 1800X for gaming-focused users or builds, given i5-level performance at two times the price. An R7 1700 might make more sense, and we’ll soon be testing that.

    http://www.gamersnexus.net/hwreview...review-premi...
  • Meteor2 - Saturday, March 4, 2017 - link

    Now *that's* a good post.
  • Cooe - Monday, March 1, 2021 - link

    The i7-6900K WAS running quad-channel in the Cinebench benchmark. It also has 2x as much RAM as the Ryzen machine (32GB vs 16GB). This post is nonsense.
  • webdoctors - Thursday, March 2, 2017 - link

    This looks like a great product. If all the AMD SKUs are 50% of the MSRP of equivalent Intel SKUs, it's going to do amazingly well. The trolls here complaining about a 10% perf difference with a 50% difference in price are just plain nuts.

    The extra $$$ you save here will go into the GPU, directly translating to a greater than 10% improvement in games, and for other workloads it's not interesting at the consumer level.
  • lllllllllllll - Thursday, March 2, 2017 - link

    @IanCutress
    Can you elaborate on the HEVC encoding results?
    Why is the 7700K better than the 1800X specifically in that benchmark?
  • Makaveli - Thursday, March 2, 2017 - link

    AVX?
  • Laststop311 - Thursday, March 2, 2017 - link

    Where is the max OC for each chip? Is the IHS soldered on? I demand more!
  • DigitalFreak - Thursday, March 2, 2017 - link

    IHS is soldered
  • mdocod - Thursday, March 2, 2017 - link

    I think the choice of CPU's to compare the new Ryzen chips to is a bit odd.

    I think this review would be more relevant and interesting if it included the following CPUs instead of those tested: FX-8320E, FX-8350, FX-9590, A10-7890K, Pentium G4560, i3-7100, i5-7500, i5-7600K, i7-7700, i7-7700K, i7-6850K, i7-6900K, i7-6950X, Ryzen 1700, 1700X, 1800X. Nothing more, nothing less.
  • oranos - Thursday, March 2, 2017 - link

    Short review: Ryzen underperforms in the only segment that matters for their customer demographics - gaming. Many benchmarks showing it trails Intel's Skylake/Kabylake. Not good.
  • hsir135 - Thursday, March 2, 2017 - link

    Actually quite good, considering that AMD is playing catch-up after an eternity in the world of semi manufacturing, and doing so on a budget that is merely a rounding error on Intel's spreadsheet. It is quite a testament to a small group of talented individuals with, I'm sure, great stock options. This is also remarkable when you look at the business side of things, with Intel, such a huge bloated corporation, now feeling the squeeze from a company that was treated like a mosquito at a BBQ - more annoyance than anything else until now. I think this mosquito bite is really going to swell up over time. Or perhaps Intel will simply go the antitrust-lawsuit-fee route again, feeling that it is more cost effective than competing. What AMD has accomplished in such a short period of time is remarkable when you consider the company had been buried at sea by most not too long ago. MB refinement and CPU optimization will lessen the distance.
  • Meteor2 - Friday, March 3, 2017 - link

    It really is remarkable... But it's not enough.
  • nucas - Thursday, March 2, 2017 - link

    Not going into the details of price/performance between Ryzen and the Intel chips of the moment, I tell you this:
    I work in video editing, and for the first time I see in Ryzen a true option. Half the price or even less, if you go with the R7 1700, of the equivalent Intel chips. Yes, the X99 platform has more bells and whistles, but for some usage scenarios, Ryzen is really good.
    For the first time in many years, I am now considering an AMD chip for my next build.
    Also, as said here by many, it will force Intel to move from their comfort zone.
  • mapesdhs - Thursday, March 2, 2017 - link

    Just curious, do you use multiple GPUs in your setup? With GPU acceleration? Or is it mostly CPU-based loads? What sw package? I keep finding nuances in these tasks, eg. Vegas is heavily dependent on fast storage, so an SM961 for the main working data area is a wise investment, ditto a cache for AE.
  • nucas - Friday, March 3, 2017 - link

    Hi mapesdhs,
    I mostly edit for h264 Full HD and DVD. Source video is AVCHD Full HD. Main box has a 5930K, 32GB RAM, a Samsung 950 Pro, a couple of Samsung 840 Evos and two GTX 580s. I use Adobe Premiere CC2015 and AME with CUDA on. When encoding for DVD, especially with color correction and some other effects, the two GTX 580s will be at around 100% use, the 5930K around 75-85%. One of these Ryzen 7 chips and a GTX 1060/1070 would make a sweet machine as a second editing station and encoding box (provided the results we have seen so far in reviews are confirmed).
    Take care.
  • mapesdhs - Friday, March 3, 2017 - link

    Freakishly similar to an AE/CUDA research box I built (4x 900MHz 580 3GB, 3930K @ 4.7, H110, 64GB/2133), and a more recent X99/6850K setup (950 Pro, ASUS R5E) for which I've not yet chosen the GPUs (probably hunt for used 780 Ti or 980 Ti). Speaking of which, note that a 780 Ti is exactly twice as fast as a 580 for CUDA (I have a lot of 580s, done loads of tests). Of course a 700-type Titan would have twice the RAM, but Titans usually have much lower base clocks.

    Btw, if you want to step up from the 840 EVOs, can't go wrong with an SM961 on an adapter card. Same tech as the 960 Pro but much cheaper.
  • nucas - Monday, March 6, 2017 - link

    Hi mapesdhs,
    Thanks for the comments on 780 Ti and SM961.
  • iamserious - Thursday, March 2, 2017 - link

    Wow idk y I'm just now realizing amd is not releasing a 4 core zen core until possibly the end of the year! So far for the 8 core range of cpus AMD has definitely delivered on its promises so I hope the 4 cores will follow suit! Idk y so many people keep comparing a 4 core 7700k Intel cpu to an 8 core 1800x amd cpu in gaming benchmarks. Maybe they don't realize how more cores generally means lower clocks and that amd is performing as well as Intel with lower clock speeds in non gaming applications! Amazing!
  • Gigaplex - Thursday, March 2, 2017 - link

    I don't know why you keep abbreviating "I don't know why" to "idk y". Also, it should be pretty obvious that they're comparing the 1800X to the 7700k as they're in a similar price range and are the respective companies most popular gaming products.
  • sushukka - Thursday, March 2, 2017 - link

    Let's just not forget the market point of view: if there is a runner-up product at around the same level and around the same price, you should of course choose the non-monopoly alternative. It's your only possible way to make any difference. I assume no one can disagree that having Intel sitting in their current position has had a negative effect on the market and development. Look at their revenue over the last years and say their prices are fair. They have been caught many times playing dirty tactics, and surely the majority will never be known. Basically Intel is doing the same thing they did before the Athlon, and back then they really had a super extra premium in their prices.

    Luckily, markets tend to generate opposing forces by blooming competitive players, even if it takes some time. The point is that they will turn up even where there is no equal financial power behind them; it's more about the cause/ideology driving them. I think this is what is happening here right now. The market is tired of Intel's grasp, and business angels plus the rare very talented individuals in the scene have been helping AMD become the opposing force to Intel. We can see this happening on the mobile side too; the competition there has been fierce against Intel, and not only because everybody has been afraid that Intel would eat that market too.
    And how about Microsoft's famous dirty play through the decades? Closed standards, messing with IE, Netscape and Web standards, the Office open document format fighting, multiple cases of playing dirty against Linux in big deals, etc. MS has made tremendous profits, which is of course their goal as a business, but for overall market health it has been more than bad. That's the ground where Linux and the open source culture have been nourished, and looking at the situation now, MS is losing ground faster than ever. Server share is declining rapidly; without DirectX or Office their desktop share would be quite different, and their mobile journey was a complete disaster. The point is that when you lose the trust of the consumer, it's extremely hard to gain it back.
    This is why I more than gladly put my sparse money on AMD, and I believe there are lots of people like me doing the same.
  • cryosx - Thursday, March 2, 2017 - link

    yup, the BS Intel pulled to win the market just makes AMD the obvious choice when performance parity is reached.
  • TristanSDX - Thursday, March 2, 2017 - link

    Test it with Radeon, on lowest res and quality, NV drivers are optimized for Intel
  • MarkieGcolor - Thursday, March 2, 2017 - link

    As an r9 nano crossfire user, I am disappointed with Ryzen. Only 16 cpu pcie lanes is not enough for high end crossfire. I went from mainstream intel z97 to x79, both 4 core cpu, and saw drastic improvement. Not many need more cpu cores, but many more need more pcie lanes.
  • Gigaplex - Friday, March 3, 2017 - link

    16? It's got 24 (20 after using 4 to communicate with the chipset). Although you're right, even 24 is quite limiting.
  • Haawser - Friday, March 3, 2017 - link

    You do realize that PCIe3 has nearly double the bandwidth of PCIe2, yeah ? And that there isn't a graphics card made that has a problem with PCIe2 x 16 ?

    As long as your cards are PCIe3 you shouldn't have any problems with Xfire. Because PCIe3 x 10 is actually faster than PCIe2 x 16.
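
    The arithmetic behind that claim, as a quick sanity check (a minimal sketch; the per-lane figures are the published PCIe 2.0/3.0 transfer rates and encoding overheads):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* bytes per second per lane: raw rate x encoding efficiency / 8 */
        double gen2 = 5.0e9 * 8.0 / 10.0 / 8.0;     /* 5 GT/s, 8b/10b    */
        double gen3 = 8.0e9 * 128.0 / 130.0 / 8.0;  /* 8 GT/s, 128b/130b */

        printf("PCIe 2.0 x16: %.2f GB/s\n", gen2 * 16 / 1e9);  /* ~8.00  */
        printf("PCIe 3.0 x10: %.2f GB/s\n", gen3 * 10 / 1e9);  /* ~9.85  */
        printf("PCIe 3.0 x16: %.2f GB/s\n", gen3 * 16 / 1e9);  /* ~15.75 */
        return 0;
    }
    ```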
  • Notmyusualid - Friday, March 3, 2017 - link

    Here:

    https://www.techpowerup.com/reviews/AMD/R9_Fury_X_...

    And I say the small difference(s) are slightly more pronounced with multiple GPUs and PCIe-based storage, as I have. So I'll stick with my 40 lanes, thanks.
  • HerrKaLeun - Thursday, March 2, 2017 - link

    Ian, great review!

    congrats to AMD to become competitive again.
    However, I'm glad I bought my i7 7700K; for my usage scenario (Handbrake encoding is my multi-threaded workload, plus office and browsing) the i7 seems the way better CPU. I realize that for more multi-threaded applications the R7s seem to be as good as or better than Intel. For single-threaded apps Intel is still better.

    But I'm glad there is some competition, and AMD really dug themselves out of a hole.
    Hope the R5/3 will perform well at good price points. I just saw at Newegg that boards are quite pricey, but over time prices will likely drop to Intel board prices.
  • negusp - Friday, March 3, 2017 - link

    How can you say it "seems better" when you haven't touched or so much as paid for a Ryzen CPU?

    Stop talking out of your ass. Ryzen is much better when it comes to encoding/MT.
  • Meteor2 - Friday, March 3, 2017 - link

    Well, apart from the fact it's slower at encoding HEVC, and that's all that matters, as h264 encoding is trivial and everyone's switching to HEVC.
  • DragoBanyan - Friday, March 3, 2017 - link

    I wonder about the encoding results. Even though Intel being better with AVX can explain some of it, maybe the cores aren't saturated and aren't doing a full work load. Also, some other reviews show Ryzen much faster in encoding and more in line where its Cinebench score would expect it to be.
  • Meteor2 - Friday, March 3, 2017 - link

    Funnily enough, the developers of x264 and x265 claim to be experts at multicore optimisation, but those codecs, particularly the latter, don't scale well.
  • ABR - Friday, March 3, 2017 - link

    It would be nice if you could add the number of cores and GHz for the CPUs in the graphs. Keeping track of all these arbitrary model numbers and what they actually mean in terms of hardware is a cognitive load best done without.
  • GeoffreyA - Friday, March 3, 2017 - link

    That's true. It's hard to figure out everything with only model numbers. But thanks, Ian, for the review! I still haven't really gone through the whole of it.
  • Meteor2 - Friday, March 3, 2017 - link

    I would very much like to see this too. Even took the time to write two sentences to say so rather than just a +1; that's how much I agree!
  • DragoBanyan - Friday, March 3, 2017 - link

    How about doing some multitasking benches? Live streamers and other people have multiple things running at the same time. Run Handbrake and a game at the same time and see how fast Handbrake still is and if the game stutters. i3s and i5s can put up good gaming numbers, but they can falter badly when doing anything else at the same time.
  • Icehawk - Saturday, March 4, 2017 - link

    Hear, hear! Not sure why, in this age of multi-tasking multi-cores, they aren't doing this. Even when I'm running an FPS or other game I don't quit every application.
  • Allan_Hundeboll - Wednesday, March 8, 2017 - link

    I think benchmarking with normal background tasks like AV, an email client, Windows 10 housekeeping/personal data harvesting etc. would resemble real world usage better. I realize it would be difficult to ensure such background activities behaved exactly the same way during all the benchmarks, but maybe Anandtech's talented staff could simulate such tasks?
    It would be very interesting to see if Zen performs better or worse with a little background activity...
  • TheSingularity - Friday, March 3, 2017 - link

    AMD stock got hammered because of milquetoast Ryzen reviews, partly based on hardball tactics by Intel with reviewers. Where are the benchmarks using Adobe Premiere Pro with 4K footage... or OTHER real-world programs that content creators actually USE?? Testing which uses software optimized for Intel and not AMD is not fully disclosed here. Independent benchmarking before the Ryzen release showed that the 1800X is MOSTLY superior to the $1,000 6900K. This new CPU is far better than what the Intel-leaning reviewers have reported. It is EASY to cherry-pick tests which will favor Intel. Those who created the massive sellout of all Ryzen CPUs in stock know BETTER the value of this new product and how it will work in REAL programs that they use... especially in content creation.
  • EasyListening - Friday, March 3, 2017 - link

    Since clocks and cores are inversely related I suppose the best matchup for the single threaded crown would be between a 7700K and a Ryzen 3. amirite?
  • Meteor2 - Friday, March 3, 2017 - link

    You would be if the lower-numbered chips had faster clocks, but they don't. Remember that GF's '14 nm' is what the process would be if they were using planar transistors instead of FinFETs. But they're not, and Intel still has a strong process lead. That shows in the IPC and the absolute clock speeds which can be obtained.
  • lilmoe - Friday, March 3, 2017 - link

    Buy the dips bro.
  • Meteor2 - Friday, March 3, 2017 - link

    +1. This is going to be a good data centre core, is going to be very good on 7 nm, and Vega might just beat Pascal. Today is a disappointment, but AMD is not done yet.
  • lilmoe - Saturday, March 4, 2017 - link

    lol why is it a disappointment??? They exceeded all my expectations honestly, their showings were really impressive. Gaming is <5% of the market. Yes it's the only growing market, but in a couple of years, DX12 and multi-threaded workloads WILL be the norm. Single threaded benchmarking (and hype) will be history.
  • Meteor2 - Saturday, March 4, 2017 - link

    I think I'm the 'enthusiast' space, where the likes of the i7 and now R5 play, gaming is the majority of the market. Everyone else is i3, Atom, A-series APUs, mobile i5s... Datacentres and workstations are all Xeon. HPC is a few Xeons and either Xeon Phi or GPGPU.
  • Meteor2 - Saturday, March 4, 2017 - link

    'in' not 'I'm'
  • prisonerX - Saturday, March 4, 2017 - link

    You're delusional. Enthusiasts, gamers and people with more money than sense are easily outnumbered by corporates, small businesses and the self-employed who buy high end machines routinely and in huge numbers.
  • Meteor2 - Monday, March 6, 2017 - link

    We buy the machines we need: i3 and i5 laptops. We haven't upgraded our workstations from Sandy Bridge for the same reasons as everyone else.
  • zodiacfml - Friday, March 3, 2017 - link

    Whew! Going through the introduction was more than I could chew! Good job AT!
    After going through all the leaks and this one, there is one simple conclusion: it has Broadwell levels of single-core performance, but AMD is providing more cores for the money. It is the same old story, except that it doesn't consume more power and is finally on par with Intel.

    This scenario seems all too familiar, like sports cars these days having too much power that a mortal will rarely use except in specific, rare cases. I'll probably be fine with 4 cores from Intel for practical use, while I would go with AMD's 6-core variants for the fun of multi-tasking, like playing a game with Handbrake running in the background.
  • EasyListening - Friday, March 3, 2017 - link

    Ryzen 7 isn't that much more than Ryzen 5 and besides do you really want to wait for Ryzen 5 to come out?
  • zodiacfml - Friday, March 3, 2017 - link

    That is the point, there's little difference but at least there's a difference in cost. AMD might have an upper hand versus Intel's i5's especially the low frequency parts.
  • GeoffreyA - Friday, March 3, 2017 - link

    I noticed that the Ryzen CPUs say "Diffused in USA." Does this mean that they were churned out at AMD's Austin plant, or something along those lines? What about GlobalFoundries?
  • Haawser - Friday, March 3, 2017 - link

    No. It means they were made at GloFo Fab 8 in Malta, New York. AMD doesn't have any fabs.
  • GeoffreyA - Friday, March 3, 2017 - link

    Thanks.
  • Valis - Friday, March 3, 2017 - link

    Didn't AMD have a fab in Germany some years ago? A friend of my dad's was a sales manager at AMD here. He used to give me AMD CPUs.
  • GeoffreyA - Friday, March 3, 2017 - link

    Yes. I am not sure whether it's still online or what; but I suppose it is. This fab in Germany came online in the days of the Athlon. I think that it began producing copper-interconnect CPUs while the Austin plant was still using aluminium interconnect, or something like that.
  • Ryan Smith - Friday, March 3, 2017 - link

    These days the German fab is primarily used for FD-SOI. The New York fab is 14nm FinFET & such.
  • ABR - Friday, March 3, 2017 - link

    One point that could bear a bit more emphasis in the conclusion is that while single-core performance may still be a bit behind overall (with significant exceptions in some areas), Zen has scored a clear win in multicore scaling. The interconnect fabric has always been a strength of AMD's, and it's clear they also learned a lot from the experience with Bulldozer and APU design.
  • Meteor2 - Friday, March 3, 2017 - link

    +1. Naples could beat Xeon.
  • lilmoe - Saturday, March 4, 2017 - link

    it should, easily
  • charliebi - Friday, March 3, 2017 - link

    I don't really get your point here. Who exactly is going to spend $500 on a CPU to game at 1080p? Nobody. If you look at 2560x1440 benchmarks, Ryzen is on par with a Core i7 6900K (or within 10% fps) at half the price.
  • theuglyman0war - Saturday, March 4, 2017 - link

    A person who also creates the assets for games on the same machine. Renders to texture.. lightmaps... etc..
    The army of creative professionals out there that wants a machine that reflects an end user experience and also has enough multithreaded rendering power to kill the amount of time suffering through progress bars.
  • charliebi - Friday, March 3, 2017 - link

    And in any case, Ryzen as well as the 6900K are not CPUs for gamers. Gamers only need a high-clocked CPU and a modern GPU, and a 7700K is cheaper than Ryzen. This is a multicore CPU for serious computing tasks, with the capability to also game at high levels, but it is not a product for gamers.
  • Murloc - Friday, March 3, 2017 - link

    that's right, R5 CPUs will decide whether AMD takes over the gaming market or earns limited marketshare.
  • Meteor2 - Friday, March 3, 2017 - link

    https://www.google.co.uk/amp/wccftech.com/amd-ryze...

    :(
  • Notmyusualid - Sunday, March 5, 2017 - link

    Both wrong, sorry. The numbers don't tie in with your remarks.

    I just searched Time Spy over on 3DMark, selected the GPU as a 1080, and looked through the first FIVE HUNDRED results, and most of the CPUs were 6900K / 6950X / other Ivy / Haswell / Broadwell-E.

    Here:
    http://www.3dmark.com/search#/?mode=basic&url=...

    I didn't see any quad cores, e.g. the 7700K, nor any AMD CPUs either, so I counter that the 6900K is a great gamer's CPU, based on actual results rather than conjecture.

    Like someone else said - numbers don't lie.
  • oneday_22 - Friday, March 3, 2017 - link

    I don't know how Intel fanboys think. They buy Intel at $1000, but when AMD releases a CPU with the same performance at half the price, they say "no thank you, I will stick with Intel or buy an i7." I think they are stupid.
  • euler007 - Friday, March 3, 2017 - link

    Everyone has different usage scenarios. If you spend most of your time encoding media or on other multi-threaded applications, Ryzen is a no-brainer. In my case most of the software that I use gets bottlenecked on single threads, with some features using all cores (about 70% single-thread bottleneck, 30% multithread). The price/performance ratio is heavily skewed towards Intel from what I see in the benchmarks, with $157 Intel parts outperforming $499 AMD parts.
  • Notmyusualid - Sunday, March 5, 2017 - link

    Anyone using the word 'stupids' is, by definition, stupid.
  • oneday_22 - Friday, March 3, 2017 - link

    If you want Intel to reduce prices, you must buy AMD Ryzen.
  • Notmyusualid - Sunday, March 5, 2017 - link

    Possibly true.
  • pkv - Friday, March 3, 2017 - link

    In this French review, the unexpectedly lower performance in gaming is credited to the memory sub-system:
    latency can be quite high (for DDR4-2400 15-15-15-35, 98 ns against 70 ns for the 6900K, whether in quad channel or dual channel), and the bandwidth between CCXes is only 22 GB/s!!! (lower than the memory bandwidth). The figure was given to them by AMD. This is of course very detrimental when threads are moved between different CCXes.
    A solution would be to change the scheduler to limit thread moves out of a CCX.
    There is also an issue of cache access between the CCXes (for details read the review).

    http://www.hardware.fr/articles/956-22/retour-sous...

    Sorry, the review is in French only; I've emailed Ian to point to it.
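
    For the curious, here is a minimal Linux-only sketch of that scheduler workaround: confine a process's threads to one CCX so they never migrate across the slow inter-CCX link. The assumption that logical CPUs 0-7 map to the first CCX and its SMT siblings is hypothetical; the real mapping is platform-specific.

    ```c
    /* Pin the calling process to logical CPUs 0-7 (assumed here to be
     * one CCX plus SMT siblings).  Threads created afterwards inherit
     * the mask, so none of them can be moved to the other CCX. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 0; cpu < 8; cpu++)
            CPU_SET(cpu, &set);

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        puts("confined to one CCX; no cross-CCX thread migration");
        return 0;
    }
    ```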
  • cjb110 - Friday, March 3, 2017 - link

    I do think AMD did themselves a disservice by aping the 3/5/7 classifications. It's almost an unconscious admission that Intel is the lead. Unless it was some marketing muppet that thought consumers know they want an i7, but could be misled into an R7...
  • EasyListening - Friday, March 3, 2017 - link

    You call it aping but I call it AIMING. It's a conscious admission that Intel is the target. They are all up in Intel's grill with this one.
  • Murloc - Friday, March 3, 2017 - link

    It's easier for customers to understand their segmentation if they stick to the same convention, and with odd numbers they avoid the number 4, which is better not to use because of Chinese superstitions.
  • mapesdhs - Friday, March 3, 2017 - link

    Better still, adopt an approach of rationality and reason, and ditch such superstitions. I mean really, a highly technological product which someone might not buy because it has a 4 in its product name? Still, general marketing-wise, people do tend to prefer odd numbers.
  • Chaser - Friday, March 3, 2017 - link

    I'm looking forward to the 1500X 4C/8T CPUs up against the 7700K.
  • HighTech4US - Friday, March 3, 2017 - link

    Why?

    The 1800X loses to the 7700K why would you think the 1500X would outperform the 1800X?
  • EasyListening - Friday, March 3, 2017 - link

    Haha /facepalm. Because, padawan, with fewer cores, you can clock the cores higher, increasing single-threaded performance. The 1500X/1600X might be the sweet spot for Ryzen gaming. Time will tell.
  • Meteor2 - Friday, March 3, 2017 - link

    Do you know of any manufacturer which actually does that? Clocks CPUs with lower core counts higher? I don't.
  • phexac - Friday, March 3, 2017 - link

    "Do you know of any manufacturer which actually does that? Clocks CPUs with lower core counts higher? I don't."

    You mean besides Intel? Where their 4-core chips are clocked higher than 6-core, which is clocked higher than 8-core, which is clocked higher than 10-core?

    Like seriously, you can't think of a CPU manufacturer that does this?
  • Meteor2 - Saturday, March 4, 2017 - link

    Top i7 is the 7700K with a turbo max of 4.5 GHz. Top i5 is the 7600K with a turbo max of 4.2 GHz.

    The Xeon E5 v4s max out at 3.6 GHz regardless of core count.

    Intel's i7s and higher core-count Xeons also have larger caches.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    @Meteor2:

    Except for the single-socket Xeons, aka 'uni-processor' chips, which I'm reliably told are multiplier-unlocked. There are a few 8-core E5-1660 & E5-1680 chips out there at nearly 5GHz, if you can stomach the unbelievable price. And I've NEVER seen one on Flea-bay cheap either, or I'd have grabbed it immediately.

    http://ark.intel.com/products/92992
  • Ananke - Friday, March 3, 2017 - link

    Because the 1500X will be a 4.0GHz+ part, and very likely performance will be higher than the 7700K for less than half the price. The Ryzen architecture is similar to Intel's Sandy Bridge, i.e. performance per cycle is the same, so for gaming, higher-clocked processors will be better. The 1500X will probably be the best choice for gaming.
  • Meteor2 - Friday, March 3, 2017 - link

    It's not. The 1500X boosts to 3.8 GHz.

    https://www.google.co.uk/amp/wccftech.com/amd-ryze...
  • Meteor2 - Friday, March 3, 2017 - link

    Bit disappointed. In terms of performance against power consumption, Intel still wins. Probably because of their process lead.

    Zen keeps AMD in the game though, and I think we might see parity when GF reaches (a real) 7 nm in a couple of years.
  • MongGrel - Thursday, March 9, 2017 - link

    If you even imply that in the forums here you will get smacked to the curb, and even get slapped to the curb more if you ask for an explanation of why you were smacked to the curb.
  • zodiacfml - Friday, March 3, 2017 - link

    If there's one thing to get excited for, it's the upcoming APUs. They will not be held back that much anymore by the CPU, and the graphics will show their true performance. The APU with HBM is drool-worthy.
  • Demigod79 - Friday, March 3, 2017 - link

    The reviews are a bit disappointing gaming-wise, but I'm very excited for Ryzen 7. I'm heavily into distributed computing and I'm looking forward to adding a second system based on Ryzen to accompany my primary Haswell (4790K). I might also game on it on the side, but that's a secondary consideration (sadly, being in your mid-thirties doesn't leave much time for gaming). I'm just hoping there are no hiccups with installation or driver support.

    The fact that AMD offers a 16-thread CPU for half the price of an Intel one (and with comparable performance) is a major win. It seems that some people see the Ryzen 7 as a competitor to the 7700K (probably based on pricing) but I don't - why would anyone buy a 16-thread processor for gaming? (isn't that a massive waste of computing power?) When I first heard of the Ryzen 7 my immediate thought was grid computing, not GTA V. The Ryzen 5 and 3 would be better suited for gaming and general-purpose PC use.
  • Meteor2 - Friday, March 3, 2017 - link

    If you're into DC why aren't you buying GPUs?
  • just4U - Friday, March 3, 2017 - link

    What interests me about this review is what's not there... The mid/lower-end range should be even more competitive once that stuff gets to market: 6-core parts competing with the i5, and 4-core with the i3/Pentium.

    It's the first time in years where we can actually start to really get interested in the CPU segment again, as it should push Intel to be more competitive as well.
  • gnawrot - Friday, March 3, 2017 - link

    This CPU is a high-end workstation CPU. It is tailored for that and is closer to AMD's future server CPUs. AMD has expressed a desire to gain higher server CPU market share. AMD might customize their CPUs for gaming later; it is hard to tackle so many projects on such a budget. Frankly, I am impressed with what AMD has done lately. They have done as much as Nvidia and Intel combined, if not more.
  • charliebi - Friday, March 3, 2017 - link

    I am really sick of hardware constantly being evaluated against gamers' objectives. I understand gamers are a very vocal minority online, but they are still a niche. This particular CPU is not aimed at gamers, and thank god there are still products that are not meant for gamers, even if it seems that everything should be defined in terms of FPS. A big part of this flawed situation is due to reviewers that encourage this habit. Who gives a s*** if an 8-core CPU meant for workstations does not run Doom at 250 FPS.
  • Meteor2 - Friday, March 3, 2017 - link

    I think gamers are the large majority of consumers who buy $150+ CPUs.
  • prisonerX - Saturday, March 4, 2017 - link

    Nah, gamers are just self important twits.
  • Notmyusualid - Sunday, March 5, 2017 - link

    @ prisonerX

    Possibly, but with dollars in our pockets, that others want. Hence this product release.
  • prisonerX - Sunday, March 5, 2017 - link

    Yes, obviously this product release was just for gamers. Thank you for proving my point.
  • Notmyusualid - Monday, March 6, 2017 - link

    You are most welcome!
  • divertedpanda - Saturday, March 4, 2017 - link

    I doubt gamers make up the majority of the people who can afford $150+ CPUs... Content creators/prosumers probably account for the bulk of these kinds of purchases...
  • Notmyusualid - Monday, March 6, 2017 - link

    @ Meteor2

    I do too.
  • nobodyblog - Friday, March 3, 2017 - link

    AMD claimed that every core is really one core, but now we know it is at least two cores, because everything is more...
    It won't be able to do well in different scenarios, especially with fewer threads and even in gaming. I doubt their patch works... It is very bad in IPC, and performance-wise it is garbage on 16 nm...

    Thanks!
  • nobodyblog - Friday, March 3, 2017 - link

    I mean 14 nm FinFET.
  • charliebi - Friday, March 3, 2017 - link

    Want to set up a gaming rig? Buy a 7700K, or better, save something and buy a 7500 or 7600 and put the savings toward a GTX 1080 Ti. That's all gamers need to know.
  • 007ELmO - Friday, March 3, 2017 - link

    What if I want to build 4 gaming rigs for a LAN? Do a 1080 Ti and an AMD chip run under a 500W power requirement?
  • Outlander_04 - Saturday, March 4, 2017 - link

    Gamers, like everyone else, need to buy using their brains and not prejudices.

    First up, decide if you are going to use a 1080p/60 Hz monitor. If you are, then you do not need either an i7 7700 or a 1080 Ti.
    If you want that resolution and you have a 144 Hz monitor, then there is a case for using an Intel quad.
    If you are gaming at 4K, 1440p or with a high-resolution ultrawide, then Ryzen will also do the job very well and be a far better encoder. For those users the AMD chip looks to be very, very good value.
  • mikeZZZ - Friday, March 3, 2017 - link

    Anandtech, can we please run closer-to-real-life scenarios, such as a gaming benchmark with a file compression benchmark running at the same time? Even gaming enthusiasts run more than one program at a time. For example, file decompression in the background while playing a game, or a baseball game streaming in a small window while playing. You already have many individual benchmarks, so why not take the extra but significant step of running two at once? We know this favors the higher-core CPUs (maybe even the Ryzen 7 1700 over all other lower-core CPUs), but that is closer to real life and should be very meaningful to someone wanting to make an informed purchase.
  • ValiumMm - Saturday, March 4, 2017 - link

    Would also like to see this
  • UrQuan3 - Friday, March 3, 2017 - link

    Just want to put out a quick comment about benchmarking with Handbrake. In dealing with Broadwell-E, and especially ThunderX, I've found that Handbrake often doesn't scale well past about 10 cores, and really doesn't scale well past 16 or so. What seems to happen is that the single-threaded parts of Handbrake tend to dominate the encode time. In extreme cases, ultra-fast and placebo will take almost the same amount of time as x264 is consuming input faster than the rest of Handbrake can generate it. On ThunderX, I've found I can complete four 1080p placebo encodes in the same amount of time that I can complete one. I would expect a similar result on a 48 core Intel, though I do not have access to one beyond 24 cores. Turbo boost would hide this effect a bit.

    I am not knocking using Handbrake for benchmarking. The Handbrake and ray-trace results are the two that I care about most. I just thought I'd give a heads up about this limitation. You can check CPU usage statistics to get an indication of when you are running up against this limit.

    Oh, and I am very excited to see multiple ray-tracers in your runs. Please continue.
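
    That flattening is classic Amdahl's law: once the single-threaded portions dominate, extra cores stop helping. A minimal sketch, assuming a 95% parallel fraction (the real fraction for any given encode is unknown):

    ```c
    #include <stdio.h>

    /* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
     * where p is the fraction of the work that parallelizes. */
    static double amdahl(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        double p = 0.95;                    /* assumed parallel fraction */
        for (int n = 4; n <= 48; n *= 2)
            printf("%2d cores: %5.2fx\n", n, amdahl(p, n));
        /* prints roughly: 4: 3.48x, 8: 5.93x, 16: 9.14x, 32: 12.55x --
         * past ~16 cores the serial 5% dominates and scaling flattens */
        return 0;
    }
    ```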
  • Meteor2 - Saturday, March 4, 2017 - link

    Presumably though you can have several x264 jobs running simultaneously on that hardware? So while your time to encode a certain piece doesn't decrease, you have more total-throughput (e.g. encoding several different bitrates for adaptive streaming). Should give good efficiency too on a larger Broadwell-E or a ThunderX.
  • UrQuan3 - Tuesday, March 7, 2017 - link

    Exactly. It's the first time I've thought about installing a queue manager for a single computer.
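
    Something like this works as a poor man's queue on a POSIX box (a minimal sketch; the input file names are hypothetical, and only HandBrakeCLI's basic -i/-o options are assumed):

    ```c
    /* Launch four encodes concurrently, then wait for all of them.
     * Total throughput scales even though each single encode doesn't. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char *inputs[] = { "a.mkv", "b.mkv", "c.mkv", "d.mkv" };

        for (int i = 0; i < 4; i++) {
            if (fork() == 0) {              /* child: run one encode */
                char out[32];
                snprintf(out, sizeof out, "out%d.mp4", i);
                execlp("HandBrakeCLI", "HandBrakeCLI",
                       "-i", inputs[i], "-o", out, (char *)NULL);
                _exit(1);                   /* only reached if exec failed */
            }
        }
        while (wait(NULL) > 0)              /* reap all four children */
            ;
        return 0;
    }
    ```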
  • jade5419 - Saturday, March 4, 2017 - link

    I agree with this. In my experience Handbrake has a core / thread limit.

    I have a Z600 system with dual Xeon 5570 @ 2.93GHz, 6 core / 12 threads (total 24 threads), 48GB of RAM and a Z620 system with dual Xeon E5-2690 @ 2.9GHz 8 core / 16 threads (total 32 threads), 64GB RAM.

    The two systems transcode video at the same speed using Handbrake 1.0.3. Monitoring CPU usage shows all threads of the Z600 at 100% utilization whereas the CPU utilization on the Z620 is approximately 80%.
  • Notmyusualid - Sunday, March 5, 2017 - link

    Ever tried running GTA5 on 28 cores?

    It doesn't work. You have to adjust the game 'launchers' core affinity to < 26 cores or it won't even load.

    Given this discovery, I expect there are many more applications out there, that may crap-out as we see more and more cores come into the mainstream.

    Just a thought.
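
    For anyone else hitting this, a minimal Windows sketch of the workaround: start the launcher suspended, clamp its affinity mask, then let it run. The executable name is hypothetical, and 24 CPUs is just an example below the reported limit.

    ```c
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        char cmd[] = "GTAVLauncher.exe";      /* hypothetical launcher */
        DWORD_PTR mask = (1ULL << 24) - 1;    /* logical CPUs 0-23 only */

        if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                            CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }
        SetProcessAffinityMask(pi.hProcess, mask); /* clamp before it runs */
        ResumeThread(pi.hThread);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }
    ```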
  • mapesdhs - Sunday, March 5, 2017 - link

    I'd love to know why this happens. I'm guessing something dumb within Windows.
  • Outlander_04 - Friday, March 3, 2017 - link

    There is more than enough good news to make me want to buy a 6 core Ryzen when they become available .
    Likely that will be the sweet spot for gamers
  • 0ldman79 - Saturday, March 4, 2017 - link

    I'm looking forward to seeing Ryzen updated in the bench.

    There aren't any apps or benchmarks that cross over between the FX series and the Ryzen series, so we can't do any side by side comparison.

    Great review guys. Looking forward to the six core Ryzen. I think just like the FX series the six core will be the sweet spot.
  • theuglyman0war - Saturday, March 4, 2017 - link

    I'd like to see a lot more older i7 Extreme Editions covered, all the way back to Westmere, so I can sell clients on new builds with such a comparison.
  • mapesdhs - Sunday, March 5, 2017 - link

    Which older i7s interest you specifically?
  • theuglyman0war - Saturday, March 4, 2017 - link

    Checking what I paid last month for an i7-7700K at Microcenter...
    Although I did get the motherboard combo price sale they "usually" offer...
    The supposed $60 off for $319 is the cheapest price I found with a quick survey of Newegg, Amazon, etc... and only $20 less than what I paid! Hardly a slashed-price answering shot across the bow by Intel! Not by a long shot!
    I thought I was going to recommend the new cheap price for all my customers' new builds, but I am pushing Ryzen and AM4 for a real combined price that makes a difference. (The cheap price for an enthusiast AM4 build is enticing, but the loss of PCIe lanes is a concern in an extreme-CPU comparison anyway. Not so much compared to the i7-7700K, though, which brings the comparison back to 16-lane parity!)
  • theuglyman0war - Saturday, March 4, 2017 - link

    Could anyone actually point me to the amazing slashed deals that "beat" what I couldn't get last month, by a long shot?

    (Which was $349 BEFORE rebate. In other words, it's not like there were no sales last month as well. And I see nothing now that really amounts to AMAZING compared to last month.)

    Pretty damn insulting from somewhere in the pipe? Not sure if it's Intel, or resellers clinging to greedy margins and not passing along the savings, to save their own asses and bottom lines due to stock considerations. Which is no excuse, considering the writing was on the wall. Someone needs to do a lot better. A heck of a lot better. Particularly considering I was thinking I could just laugh off AMD with an Intel savings, and now have egg on my face! :)
  • rpns - Saturday, March 4, 2017 - link

    The 'Test Bed Setup' section could do with some more details. E.g. which BIOS version? Which Windows 10 build? Any notable driver versions?

    These details aren't just useful now; they also matter when looking back at the review a few months down the line.
  • jorkevyn - Saturday, March 4, 2017 - link

    Why didn't they go with quad-channel DDR4? I think if they had, this might have been a real i7-6950X killer.
  • sedra - Saturday, March 4, 2017 - link

    have a look at this:
    "Many software programmers consider Intel's compiler the best optimizing compiler on the market, and it is often the preferred compiler for the most critical applications. Likewise, Intel is supplying a lot of highly optimized function libraries for many different technical and scientific applications. In many cases, there are no good alternatives to Intel's function libraries.

    Unfortunately, software compiled with the Intel compiler or the Intel function libraries has inferior performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."

    http://www.agner.org/optimize/blog/read.php?i=49&a...
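
    For readers new to this, the gist of the dispatcher can be shown in a few lines. An illustrative Python sketch of the behaviour Agner describes, not Intel's actual code; the vendor check is stubbed out, whereas a real dispatcher reads the vendor string via CPUID leaf 0:

        def get_vendor():
            return "AuthenticAMD"  # placeholder; real code executes CPUID

        def pick_code_path(supports_sse2, supports_avx):
            # Vendor gate first: non-Intel CPUs fall through to the slowest
            # path regardless of which instruction sets they support.
            if get_vendor() != "GenuineIntel":
                return "generic_path"
            if supports_avx:
                return "avx_path"
            if supports_sse2:
                return "sse2_path"
            return "generic_path"

        # Prints "generic_path" even though SSE2/AVX are fully supported --
        # which is exactly the complaint in the quote above.
        print(pick_code_path(supports_sse2=True, supports_avx=True))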
  • HomeworldFound - Saturday, March 4, 2017 - link

    Everyone here already knew that ten years ago.
  • Notmyusualid - Sunday, March 5, 2017 - link

    Indeed it was.
  • sedra - Sunday, March 5, 2017 - link

    It is worth bringing it up now.
  • mapesdhs - Sunday, March 5, 2017 - link

    Yet another example of manipulation that wouldn't be tolerated in other areas of commercial products. I keep coming across examples in the tech world where products are deliberately crippled, prices get hiked, etc., but because it's tech stuff, nobody cares. The media never mention it.

    Last week I asked a seller site about why a particular 32GB 3200MHz DDR4 kit they had listed (awaiting an ETA) was so much cheaper than the official kits for Ryzen (same brand of RAM please note). Overnight, the seller site changed the ETA to next week but also increased the price by a whopping 80%, making it completely irrelevant. I've seen this happen three times with different products in the last 2 weeks.

    Ian.
  • HomeworldFound - Sunday, March 5, 2017 - link

    If they were pretty cheap, then apply some logic: placeholder prices happen. If they had no ETA, chances are they had no final price either. I don't see a shortage of decent DDR4, so it definitely isn't a supply-and-demand problem. Perhaps you need to talk to the manufacturer to get their guideline prices.
  • HomeworldFound - Sunday, March 5, 2017 - link

    Not really. If developers wanted to enhance AMD platforms, or it was actually worth it, they'd have done it by now. It's now just an excuse for either underperformance or an inability to work with the industry.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    @ sedra

    It certainly should not be forgotten, that is for sure.
  • Rene23 - Monday, March 6, 2017 - link

    Yet people here have said multiple times "settled in 2009", pretending it is not happening anymore. Sick :-/
  • GeoffreyA - Monday, March 6, 2017 - link

    I kind of vaguely knew that benchmarks were often unfairly optimised for Intel CPUs; but I never knew this detailed information before, and from such a reputable source: Agner Fog. I know that he's an authority on CPU microarchitectures and things like that. Intel is evil. Even now with Ryzen, it seems the whole software ecosystem is somewhat suboptimal on it, because of software being tuned over the last decade for the Core microarchitecture. Yet, despite all that, Ryzen is still smashing Intel in many of the benchmarks.
  • Outlander_04 - Monday, March 6, 2017 - link

    Settled in 2009.
    Not relevant to optimisation for Ryzen in any way.
  • Rene23 - Monday, March 6, 2017 - link

    Settled in 2009 does not mean their current compiler and libraries are not doing it anymore; e.g. they could simply decline to run the best SSE/AVX code path, disguised as not matching new AMD CPUs properly.
  • cocochanel - Saturday, March 4, 2017 - link

    One thing that is not mentioned by many is the savings when you buy a CPU + mobo. Intel knows how to milk the consumer. For their 6-8 core flagships, a mobo with a top chipset will set you back $300-400 or even more. That's a lot for a mobo. Add the overpriced CPU. I expect AMD mobos to offer better value; historically, they always did.
    On top of that, a Vega GPU will probably be a better match for Ryzen than an Nvidia card, but I say probably, not certainly.
    If I were to replace my aging gaming rig for Christmas, this would be my first choice.
  • mapesdhs - Sunday, March 5, 2017 - link

    Bang goes the saving when one asks about a RAM kit awaiting an ETA and the seller hikes the price by 80% overnight (see my comment above).
  • bobsta22 - Saturday, March 4, 2017 - link

    Office with 20 PCs - all developers - loads of VMs and containers.

    All the PCs are due a CPU/Gfx refresh, but ITX mobos required.

    Can't wait, tbh. This is a game changer.
  • prisonerX - Saturday, March 4, 2017 - link

    What if they come out with a 16-core line next year!
  • bobsta22 - Saturday, March 4, 2017 - link

    What?
  • lilmoe - Tuesday, March 7, 2017 - link

    It really is. As a freelance developer, I can't wait.
  • ericgl21 - Saturday, March 4, 2017 - link

    For me, the more important thing to see from AMD is whether they can come up with a chip that beats the mobile Core i7-7820HQ (4c/8t, no ECC) and the Xeon E3-1575M v5 (4c/8t, with ECC), for less money.
    And the number of PCIe Gen3 lanes is very important, especially with the rise of M.2 NVMe storage sticks.
  • cmagic - Sunday, March 5, 2017 - link

    Will AnandTech review Ryzen in gaming? I would really like AnandTech's view, since I don't really trust other sites, especially those "entertainment" sites. I want to see AnandTech dig into the root cause.
  • Tchamber - Sunday, March 5, 2017 - link

    @cmagic
    Page 15, "2017 GPU":
    "The bad news for our Ryzen review is that our new 2017 GPU testing stack is not yet complete. We received our Ryzen CPU samples on February 21st, and tested in the hotel at the event for six hours before flying back to Europe."

    I just ordered my 1700X, and I plan to keep it for at least 5 years, as my needs don't change much. My current Intel 6-core is coming up on 7 years old now. I like to buy high end and use it a long time.
  • Lazlo Panaflex - Monday, March 6, 2017 - link

    Same here... probably gonna grab a 1700 at some point and put this here i5-2500 non-K in the kids' computer.
  • asH98 - Sunday, March 5, 2017 - link

    The BIG QUESTION is WHY the HEDT benchmarks (professional, i.e. Blender) are fairer than the gaming benchmarks.

    Bottom line: cutting-edge coding is happening NOW in AI, HPC, data, and AV/AR. Game coders, because of the $$$, are the last to change or learn unless forced (great for Nvidia and Intel), so most game coding is stuck in yesteryear. Bethesda will be the test bed for game coders to move forward.
    Hence the difference between game benchmarks and 'professional' (HEDT) benchmarks. Game coders can stay stuck on yesterday's code without repercussions or consequences as long as old hardware dominates and there are no incentives to change or learn new skills. The same CAN'T happen in the professional area, where speed is paramount to performance and $$$.
  • TheJian - Sunday, March 5, 2017 - link

    I hope you're going to test a dozen games at 1080p, where most of us run, for article #2, and the GAMING article should come in a week, not half a year later like the GTX 1080/1070 reviews... LOL. As it is, this article just seems like AMD told you "guys, please don't run any games so we can sell some chips to suckers before they figure out games suck", and you listened. No point in testing 1440p or 4K for a CPU, and 95% of us run 1920x1200 and BELOW, so you should be testing your games there for a CPU test.

    The fact they are talking Zen 2 instead of fixing Zen 1 kind of makes me think most of the gaming is NOT going to be fixable.
    http://www.legitreviews.com/amd-ryzen-7-1800x-1700...
    149 fps for the 7700K in Thief vs. 108 for the 1800X? JEEZ. GTA5 again, 163 to 138. Deus Ex: MD, 127 to 103. These are massive losses to Intel's chip, and Deus was clearly GPU-bound, as many of Intel's chips hit the same 127 fps, including my old 4790K :( OUCH, AMD.

    https://www.guru3d.com/articles_pages/amd_ryzen_7_...
    Tomb Raider, same story: 7700K vs. 1800X, 132 fps to 114 (never mind the 6850K scoring 140 fps). This will probably get worse as we move to the 1080 Ti, Vega, Nvidia's refresh for Xmas, Volta, 10nm, etc. With a faster GPU the CPUs will separate even more, especially if people are mostly gaming at 1080p. Even if many move to 1440p, that maybe fixes some games (Tomb Raider is one where the regular 1080 hits a wall at 90 fps), but again, that goes back to major losses as we move to 10nm etc. We get 10nm chips for mobile now, and GPUs probably next year at 12nm (real? fake 12nm? either way), which might squeak into 2017 (Volta, TSMC). 10nm GPUs will likely come in 2018 at the latest. Those GPUs will surely make 1440p look like 1080p does today, and CPUs will again spread out (and no, we won't all be running 4K by then... LOL). You could see CPUs smaller than 10nm BEFORE you upgrade your CPU again if you buy this year. That could get pretty ugly if the gaming benchmarks around the web are not going to improve. One more point: you'll likely be looking at GDDR6 (16Gbps, probably) for video cards, allowing them to stretch their legs even more if needed. Again, all not good for a gamer here, IMHO.

    “But Senior Engineer Mike Clark says he knows where the easy gains are for Zen 2, and they're already working through the list”

    So maybe no fix in sight for Zen 1? Just excuses like "run higher res, and code right, guys"... I hope that isn't the best they've got. I could go on about games, but most should get the point. I was going to buy Ryzen purely for Handbrake, but I'll need to see motherboard improvements and at least some movement on gaming VERY soon.

    One more ouch statement from pcper.
    https://www.pcper.com/news/Processors/AMD-responds...
    "For buyers today that are gaming at 1080p, the situation is likely to remain as we have presented it going forward."
    So they don't think a fix is coming based on AMD info and as noted as gpus get much faster (along with their memory speeds) expect 1440p to look like today's 1080p benchmarks at least to some extent.

    The board part is of major interest to me, so I can wait a bit and also see Intel's response. So AMD has me hanging for a bit here, but not for too long. I do like the pro side of these chips, though (Handbrake especially; just not quite enough).
  • ABR - Sunday, March 5, 2017 - link

    Are there any examples of games at 1080p where this actually matters? (I.e., not a drop from 132 to 108 fps, but from 65 to 53 or 42 to 34?)
  • ABR - Monday, March 6, 2017 - link

    I mean at 1080p. (Edit, edit...)
  • 0ldman79 - Monday, March 6, 2017 - link

    That's my thought as well.

    Seriously, it isn't like we're talking unplayable; these are still ridiculous gaming frame rates. Judging by the performance deficit relative to other applications, it is almost guaranteed to be a scheduler problem in Windows. Even if it isn't, it is still running very, very well.

    Hell, I can play practically anything I can think of on my FX-6300; I don't really *need* a better CPU right now. I'm just really, really tempted and looking for excuses (I can't encode in software at the same speed as my Nvidia encoder... damn, I need to upgrade...).
  • Outlander_04 - Monday, March 6, 2017 - link

    Do you think anyone building a computer with a $500 US chip is going to spend just $120 on a 1080p monitor?
    More likely they will be building it for higher resolutions.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    I've seen it happen...
  • mdriftmeyer - Tuesday, March 7, 2017 - link

    Who gives a crap if you've seen it happen. Your experience is an anomaly relative to the totality of statistical data.
  • Notmyusualid - Wednesday, March 8, 2017 - link

    Or somebody was just happy with their existing screen?

    I can actually point to two friends with 1080p screens, both with lovely water-cooled rigs; one is determined to keep his high-refresh 1080p screen, and the other just doesn't care. So facts is facts, son.

    I guess it is YOU that gives that crap after all.
  • Zaggulor - Thursday, March 9, 2017 - link

    Statistical data suggests that people don't actually get a new display very often when they change a GPU, and quite often the same display will be moved to a new rig too.

    Average upgrade times for components are:

    CPU: ~4.5 years
    GPU: ~2.5 years
    Display: ~7 years

    These days you can also use any spare GPU resources for downsampling, even if your CPU can't push any more frames. Both GPU vendors have built-in support for it (VSR/DSR).
  • hyno111 - Wednesday, March 8, 2017 - link

    Or a $200 1080p/144Hz/Freesync monitor.
  • Marburg U - Sunday, March 5, 2017 - link

    I guess it's time to retire my Core 2 Quad.
  • mapesdhs - Sunday, March 5, 2017 - link

    If you have a Q6600, I can understand that, but the QX9650 ain't too bad. ;)
  • Marburg U - Monday, March 6, 2017 - link

    I'm on a Q9550 running at 3.8 GHz for the past 6 years. I can still run modern games at 1050p with an R9 270X, but that's the best I can squeeze out of it. Mind that I'm still on DDR2 (my motherboard turns 10 in a few months). I really want to embrace an ultrawide monitor.
  • mapesdhs - Monday, March 6, 2017 - link

    Moving up to 2560x1440 may indeed benefit from faster RAM, but it probably depends on the game. Likewise, CPU dependencies vary, and they can lessen at higher resolutions, though this isn't always the case. Still, good point about DDR2 there. To what kind of GPU were you thinking of upgrading? High-end like a 1080 Ti? Mid-range? Used GTX 980s are a good deal these days, and a bunch of used 980 Tis will likely hit the market shortly. I've tested 980 SLI with older platforms; actually not too bad, though I've not done tests with my QX9650 yet, as I started off at the low end to get through the pain. :D (A P4/3.4 on an ASUS Striker II Extreme; it's almost embarrassing.)

    Ian.
  • Meditari - Monday, March 6, 2017 - link

    I'm actually using a Q9550 that's running at 3.8 as well. I have a 980 Ti and it can do 4K, albeit at 25-30 fps in newer games like The Witcher 3. I'm fairly certain a 1080 Ti would work great with a Q9550, but I feel like the time for these chips is coming to an end. Still incredible that an 8-year-old chip can hold its own by just upgrading the GPU.
  • mapesdhs - Tuesday, March 7, 2017 - link

    Intriguing! Many people don't even try to use such a card on an older mbd; they just assume from site reviews that it's not worth doing. Can you run 3DMark 11/13? What results do you get? You won't be able to cite the URLs here directly, but you can mention the submission numbers and I can compare them to my 980 Ti running on newer CPUs (the first tests I do with every GPU I obtain are with a 5GHz 2700K, at which speed it has the same multithreaded performance as a stock 6700K).

    What do you get for CB 11.5 and CB R15 single/multi?

    What mbd are you using? I ask because some later S775 mbds did use DDR3, albeit not at quite the speeds possible with Z68, etc. In other words, you could move the parts onto a better mbd as an intermediate step, though finding such a board could be difficult. Hmm, given the value often placed on such boards, it'd probably be easier to pick up a used 3930K and a board to go with it; that would be fairly low cost.

    Or of course just splash for a 1700X. 8)

    Ian.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    Welcome to the 21:9 fan club brother.

    But be careful of the 1920x1080 screens; my brother's 21:9 doesn't look half as good as my 3440x1440 screen. It just needs that little bit more vertical resolution.

    My pal's 4K screen is lovely, and it brings his 4GB GTX 980 to its knees. Worse aspect ratio (in my opinion), and too many pixels (for now) to draw.

    Be careful of second-hand purchases too; there are many panels with backlight-bleed issues out there, and they are returns for that reason, again in my opinion.
  • AnnonymousCoward - Monday, March 6, 2017 - link

    Long story short:
    20% lower single-thread than Intel
    70% higher multi-thread due to 8 cores
    $330-$500
  • Mugur - Tuesday, March 7, 2017 - link

    Actually, on average it's -6.8% IPC versus Kaby Lake (at the same frequency); I believe this came directly from AMD. Add to this a lower-grade 14nm process (GF again) that is biting AMD again and again (see last year's RX 480), motherboard issues (memory, HPET), and OS/application issues (SMT, lack of optimizations).

    All in all, I'm really impressed by what they achieved despite such obstacles.
  • AnnonymousCoward - Tuesday, March 7, 2017 - link

    Just looking at CineBench at a given TDP and price, AMD is 20% lower. That's the high level answer, regardless of IPC * clock frequency. I agree it's a huge win for AMD, and for users who need multicore performance.
  • Cooe - Monday, March 1, 2021 - link

    Maybe compare to Intel's Broadwell-E chips with actually similar core counts.... -_-
  • nt300 - Saturday, March 11, 2017 - link

    If AMD hadn't gone with GF's 14nm process, Zen would probably have been delayed. I think as soon as Ryzen optimizations come out, these chips will perform even better.
  • MongGrel - Thursday, March 9, 2017 - link


    For some reason, making a casual comment about anything bad about the chip will get you banned at the drop of a hat on the tech forums, and if you call him out they will ban you more.

    https://arstechnica.com/gadgets/2017/03/amds-momen...

  • MongGrel - Thursday, March 9, 2017 - link

    For some reason, MarkFW seems to think he is the reincarnation of Kyle Bennett, and whines a lot before retreating to his safe space.
  • nt300 - Saturday, March 11, 2017 - link

    I've noticed in the past that AMD has had issues with L3 cache speeds and/or latencies. Hopefully they start tightening the L3 as much as possible. Can AnandTech do a comparison of Ryzen before and after the optimizations? Ty.
  • alpha754293 - Friday, March 17, 2017 - link

    It looks like, for a lot of the compute-intensive benchmarks, the new Ryzen isn't that much better than, say, a Core i7-7700K.

    That's quite disappointing.

    AMD needs to up their FLOPS/cycle game in order to be able to compete in that space.

    Such a pity, because the original Opterons were a great value proposition versus the Intels. Now it doesn't even come close.
  • deltaFx2 - Saturday, March 25, 2017 - link

    @Ian Cutress: When you do test gaming, if you can, I'd love to have the hypothesis behind the 'generally accepted methodology' tested out. The methodology being: test at the lowest resolution. The hypothesis is that this stresses the CPU, and that a future, higher-performance GPU will be bottlenecked by the slower CPU. Sounds logical, but is it?

    Here's the thing: typically, when given more computing resources, people scale up their problem to utilize those resources. In other words, if you give gamers a more powerful GPU, games will scale up their performance requirements to match, by doing things that were not possible or practical on earlier GPUs. Today's games are far more 'realistic' and are played at much higher resolutions than, say, 5 years ago. In which case, the GPU is always the limiting factor no matter what (unless one insists on playing 5-year-old games on the biggest, baddest GPU). And I fully expect that the games of today are built to max out current GPUs, so hardware lags software.

    This has parallels with what happens in HPC: when you get more compute nodes for HPC problems, people scale up the complexity of their simulations rather than running the old, simplified simulations. Amdahl's law is still not a limiting factor for HPC, and we seem to be talking about Exascale machines now. Clearly, there's life in HPC beyond what a myopic view through the Amdahl law lens would indicate.

    Just a thought :) Clearly, core-count requirements have gone up over the last decade, but is it true that a 4c/8t Sandy Bridge paired with Nvidia's latest and greatest is CPU-bottlenecked at likely resolutions?
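
    To make the Amdahl's law point concrete, here is the formula in LaTeX with purely hypothetical numbers (the 30% CPU-bound fraction is an assumption for illustration, not a measured figure):

        % Speedup S from accelerating a fraction p of the work by a factor s:
        %   S = 1 / ((1 - p) + p/s)
        % If p = 0.3 of frame time is CPU-bound and a faster CPU doubles
        % that part's speed (s = 2):
        %   S = 1 / (0.7 + 0.3/2) = 1 / 0.85 ~= 1.18, i.e. only ~18% more FPS.
        \[
          S = \frac{1}{(1 - p) + \frac{p}{s}}, \qquad
          S = \frac{1}{0.7 + 0.15} \approx 1.18
        \]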
  • wavelength - Friday, March 31, 2017 - link

    I would love to see Anand test against AdoredTV's most recent findings on Ryzen https://www.youtube.com/watch?v=0tfTZjugDeg
  • LawJikal - Friday, April 21, 2017 - link

    What I'm surprised to see missing, in virtually all reviews across the web, is any discussion (by a publication or its readers) of the AM4 platform's longevity and upgradability (in addition to its cost, which is readily discussed).

    Any Intel platform is almost guaranteed not to accommodate a new or significantly revised microarchitecture beyond the mere "tick". In order to enjoy a "tock", one MUST purchase a new motherboard (if historical precedent is maintained).

    The AMD AM4 platform is almost guaranteed to accommodate AT LEAST Ryzen "II", and quite possibly Ryzen "III" processors. And in such cases, only a new processor and a BIOS update will be necessary.

    This is not an insignificant point of differentiation.
  • PeterCordes - Monday, June 5, 2017 - link

    The uArch comparison table has some errors in the Intel columns. Dispatch/cycle: Skylake can read 6 uops per clock from the uop cache into the issue queue, but the issue stage itself is still only 4 uops wide. Even running from the loop buffer (LSD), it can only sustain a throughput of 4 uops per clock, the same 4-wide pipeline width it has had since Core 2. (Pre-Haswell, that has to be a mix of ALU and some store or load uops to sustain that throughput without bottlenecking on the execution ports.) Skylake's improved decode and uop-cache bandwidth lets it refill the uop queue (IDQ) after bubbles in earlier stages, keeping the issue stage fed (since the back-end is often able to actually keep up).

    Ryzen is 6-wide, but I think I've read that it can only issue 6 uops per clock if some of them come from "double instructions", e.g. 256-bit AVX like VADDPS ymm0, ymm1, ymm2 that decodes into two separate 128-bit uops. Running code with only single-uop instructions, Ryzen's front-end throughput is 5 uops per clock.

    In Intel terminology, "dispatch" is when the scheduler (aka the Reservation Station) sends uops to the execution units. The row you've labelled "dispatch / cycle" is clearly the throughput for issuing uops from the front-end into the out-of-order core, though (putting them into the ROB and Reservation Station). Some computer-architecture people call that "dispatch", but it's probably not a good idea in an x86 context (unless AMD uses that terminology; I'm mostly familiar with Intel's).

    ----

    You list the uop queue size as 128 for Skylake. This is bogus. It's always 64 per thread, with or without hyperthreading. Intel has alternated in SnB/IvB/HSW/SKL between this and letting one thread use both queues as a single big queue. HSW/BDW statically partition their 56-entry queue into two 28-entry halves when two threads are active, otherwise it's a 56-entry queue (not 64). Agner Fog's microarch PDF and Intel's optimization manual both confirm this (in Section 2.1.1, about Skylake's front-end improvements over previous generations).

    Also, the 4-uops-per-clock issue width is 4 fused-domain uops, so I was able to construct a loop that runs 7 unfused-domain uops per clock (http://www.agner.org/optimize/blog/read.php?i=415#...), with 2 micro-fused ALU+load uops, one micro-fused store, and a dec/branch. AMD doesn't talk about "unfused" uops because it doesn't use a unified scheduler, IIRC, so memory source operands always stay with the ALU uop.
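
    Spelling that count out (same numbers as the linked example, restated as arithmetic):

        % Fused domain (issue): 2 (ALU+load) + 1 (store) + 1 (dec/branch) = 4 uops/clock
        % Unfused domain (execute): each ALU+load splits into ALU + load, the
        % store splits into store-address + store-data, the branch stays one:
        \[
          \underbrace{2 \times 2}_{\text{ALU, load}}
          + \underbrace{2}_{\text{store-addr, store-data}}
          + \underbrace{1}_{\text{dec/branch}}
          = 7 \ \text{uops/clock}
        \]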

    Also, you mentioned it in the text, but the L1d change from write-through to write-back is worth a table row. IIRC, Bulldozer's write-through L1d has a small buffer to absorb repeated writes of the same lines, so it's not quite as bad as a classic write-through cache would be for L2 speed/power requirements, but Ryzen is still a big improvement.
