
  • Ryan Smith - Friday, October 19, 2018 - link

    For once, we're going to do the first comment!

    (What does everyone think of the article, and Intel's new CPU?)
  • DanNeely - Friday, October 19, 2018 - link

    [thoughts]
  • Ryan Smith - Friday, October 19, 2018 - link

    Okay. That's well-played...
  • nathanddrews - Friday, October 19, 2018 - link

    My take on your data: worth it if you have something a lot faster than a GTX 1080 since it looks GPU-bound for most of the gaming benchmarks at med-high resolutions. 2080Ti users and SLI users will probably get the most out of it from a gaming perspective.

    Skylake-X with that AVX512 perf, though...
  • Ian Cutress - Friday, October 19, 2018 - link

    One issue we always have every generation is sourcing GPUs. Going up to a vendor and asking for 3-4 cards is typically a no-go. This is why I've done a range of resolutions/settings for each game, to cover everyone who wants to see CPU-limited scenarios, as well as others who might be more real-world oriented.
  • 3dGfx - Friday, October 19, 2018 - link

    Ian, how can you claim the 9900K is the best when you never tested the HEDT parts in gaming? Making such claims really makes AnandTech look bad because it sounds like a sales pitch, and you omitted the entire HEDT platform from the results. I hope you fix this oversight so Skylake-X can be compared properly to the 9900K and the upcoming Skylake-X refresh parts! And of course, the AMD HEDT parts.

    There was supposed to be a part 2 to the i9-7980XE review and it never happened, so gaming benchmarks were never done, and the i9-7940X and i9-7920X weren't tested either. HEDT is a gaming platform since it has no ECC support and isn't marketed as a workstation platform.

    If Intel says the 8-core part is now "the best", you ought to be testing their flagship HEDT parts, which were also claimed to be the best.
  • 3dGfx - Friday, October 19, 2018 - link

    P.S. It would be nice if you could also do ZBrush benchmarking for all the CPU reviews. It runs entirely on the CPU with no GPU acceleration and it comes with a benchmark test/score built into the app. ZBrush is a very common 3D app these days. It would also be useful to mention in a review how many polygons or subdivision levels the CPU can display in ZBrush before you see a slowdown. Thanks.
  • Ryan Smith - Friday, October 19, 2018 - link

    "Ian, how can you claim 9900k is the best when you never tested the HEDT parts in gaming?"

    Beg your pardon? We have the 7900X, 7820X, and a couple of Threadrippers for good measure. Past that, the farther up the ladder you go in Intel HEDT, the lower the turbo clockspeeds go, which diminishes gaming performance.
  • 3dGfx - Friday, October 19, 2018 - link

    Sorry, I was mainly just looking for the flagship products, and they have no gaming benches at all: the 2990WX, 2950X, and 7980XE, the top-end "best" parts, have no gaming benchmarks. I wanted to see how they compare to the 9900K or to the refreshed Skylake-X parts which will come out. If, for example, someone wants to buy a chip that is good for both raytraced rendering and games (game developers, etc.), they will want to see all of these benches.
  • Makaveli - Friday, October 19, 2018 - link

    Why would you buy a 2990WX, 2950X, or 7980XE to play games on?
  • 3dGfx - Friday, October 19, 2018 - link

    game developers like to build and test on the same machine
  • mr_tawan - Saturday, October 20, 2018 - link

    > game developers like to build and test on the same machine

    Oh I thought they use remote debugging.
  • 12345 - Wednesday, March 27, 2019 - link

    The only gaming use I can think of for those would be to pass a GPU through to each of several VMs.
  • close - Saturday, October 20, 2018 - link

    @Ryan, "There’s no way around it, in almost every scenario it was either top or within variance of being the best processor in every test (except Ashes at 4K). Intel has built the world’s best gaming processor (again)."

    Am I reading the iGPU page wrong? The occasional 100+% handicap does not seem to be "within variance".
  • daxpax - Saturday, October 20, 2018 - link

    If you noticed, the 2700X is faster in half of the gaming benchmarks, but they didn't include it.
  • nathanddrews - Friday, October 19, 2018 - link

    That wasn't a negative critique of the review, just the opposite in fact: from the selection of benchmarks you provided, it is EASY to see that given more GPU power, the new Intel chips will clearly outperform AMD most of the time - generally in average frame rates, but especially in minimums. From where I'm sitting - 3570K + 1080 Ti - I think I could save a lot of money by getting a 2600X/2700X OC setup and not miss out on too many fps.
  • philehidiot - Friday, October 19, 2018 - link

    I think anyone with any sense (and the constraints of a budget / missus) would be stupid to buy this CPU for gaming. The sensible thing to do is to buy the AMD chip that provides 99% of the gaming performance for half the price (even better value when you factor in the mobo) and then to plough that money into a better GPU, more RAM and/or a better SSD. The savings from the CPU alone will allow you to invest a useful amount more into ALL of those areas. There are people who do need a chip like this, but they are not gamers. Intel is pushing hard both against the limitations of their tech (see: stupid temperatures) and with their marketing BS (see: outright lies) because they know they're currently being held by the short and curlies. My 4-year-old i5 may well score within 90% of these gaming benchmarks because the limitation in gaming these days is the GPU. Sorry, Intel, wrong market to aim at.
  • imaheadcase - Saturday, October 20, 2018 - link

    I like how you said limitations in tech and point to temps, as if any gamer cares about that. Every gamer wants raw performance, and the fact remains that Intel systems are still the easier way to get it. The reason is simple: most gamers will upgrade from another Intel system and reuse lots of parts from it that work with current-generation stuff.

    It's like the whole G-Sync vs. non-G-Sync thing. It's a stupid argument; it's not a tax on G-Sync when you are buying the best monitor anyway.
  • philehidiot - Saturday, October 20, 2018 - link

    Those limitations affect overclocking and therefore available performance, which ends up hardly different from much cheaper chips. You're right about upgrading though.
  • emn13 - Saturday, October 20, 2018 - link

    The AVX-512 numbers look suspicious. Both common sense and other examples online suggest that AVX-512 should improve performance by much less than a factor of 2. Additionally, AVX-512 causes varying amounts of frequency throttling, so you're not going to get the full factor of 2.

    This suggests to me that your baseline is somehow misleading. Are you comparing AVX512 to ancient SSE? To no vectorization at all? Something's not right there.
  • Ian Cutress - Monday, October 22, 2018 - link

    emn13: Base code with compiler optimizations only, such as those a non-CompSci scientist would use (as was the original intention of the 3DPM test), vs. hand-tuned AVX/AVX2/AVX512 code.
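
    For a sense of scale, here is a minimal sketch of what that gap can look like, assuming a simple single-precision position-update kernel; this is purely illustrative and is not the actual 3DPM source. If the compiler leaves the baseline scalar, the hand-tuned path handles 16 floats per instruction, which is how the gap can end up far larger than the ~2x you'd expect from AVX2 vs. AVX-512 alone.

        /* Hypothetical illustration only -- not the 3DPM code.
           Build with e.g.: gcc -O2 -mavx512f kernel.c */
        #include <immintrin.h>
        #include <stddef.h>

        /* Baseline: plain C, left to the compiler's own optimizations. */
        void step_scalar(float *x, const float *v, float dt, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                x[i] += v[i] * dt;              /* one element per iteration */
        }

        /* Hand-tuned AVX-512: 16 single-precision lanes per fused multiply-add.
           Assumes n is a multiple of 16 for brevity. */
        void step_avx512(float *x, const float *v, float dt, size_t n)
        {
            __m512 vdt = _mm512_set1_ps(dt);
            for (size_t i = 0; i < n; i += 16) {
                __m512 xv = _mm512_loadu_ps(x + i);
                __m512 vv = _mm512_loadu_ps(v + i);
                _mm512_storeu_ps(x + i, _mm512_fmadd_ps(vv, vdt, xv));
            }
        }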
  • just4U - Saturday, October 20, 2018 - link

    The only problem I really have with the product is that, for the price, it should have come with a nice fancy cooler like the 2700X, which is in its own right a stellar product at close to 60% of the cost. Not sure what Intel's game plan is with this, but it's priced close to an entry-level second-gen Threadripper, and at this cost you might as well just make the leap for a little more.
  • khanikun - Monday, October 22, 2018 - link

    I'm the other way. I'd much rather they lower the cost and have no cooler. Although, Intel doesn't decrease the cost without the cooler, which sucks.

    I'm either getting a new waterblock or drilling holes in the waterblock bracket to make it fit. Well I just upgraded, so I'm not in the market for any of these procs.
  • brunis.dk - Saturday, October 20, 2018 - link

    no prayers for AMD?
  • ingwe - Friday, October 19, 2018 - link

    I don't see the value in it though I understand that this isn't sold as a value proposition--it is sold for performance. Seems to do the job it sets out to do but isn't spectacularly exciting to me.
  • jospoortvliet - Saturday, October 20, 2018 - link

    Given that the quoted prices ignore the fact that right now Intel CPU prices are 30-50% higher than MSRP, yes, nobody thinking about value for money buys these...
  • DanNeely - Friday, October 19, 2018 - link

    Seriously though, I'm wondering about the handful of benchmarks that showed the i7 beating the i9 by significant amounts. 1-2% I assume is sampling noise in cases where the two are tied, but flipping through the article I saw a few where the i7 won by significant margins.
  • Ian Cutress - Friday, October 19, 2018 - link

    Certain benchmarks seem to be core-resource bound. In HT mode, certain elements of the core are statically partitioned, giving each thread half, and if only one thread is there, you still only get half. With no HT, a thread gets the full core to work with.
  • 0ldman79 - Friday, October 19, 2018 - link

    I'd love to see some low level data on the i5 vs i7 on that topic.

    If the i5 is only missing HT, then the i7 without HT should score (more or less) identically to it, with the i5 winning on occasion vs the HT-enabled i7. I always figured there was a significant amount of idle resources (ALU pipelines) in the i5 vs the i7, and that HT allowed 100% (or as close as possible) usage of all of the pipelines.

    I wish Intel would release detailed info on that.
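
    A rough sketch of how one could probe this (purely hypothetical code, Linux-specific; the CPU numbers below are assumptions and the sibling pairing must be checked in /sys/devices/system/cpu/cpu0/topology/thread_siblings_list): pin two busy threads either to two separate physical cores or to the two SMT siblings of one core, and compare run times. If the shared pipelines are mostly idle, the sibling case should finish nearly as fast.

        /* Build with: gcc -O2 -pthread smt_probe.c */
        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>
        #include <stdint.h>
        #include <time.h>

        #define ITERS 200000000ULL

        /* Integer-heavy busy loop pinned to one logical CPU. */
        static void *worker(void *arg)
        {
            int cpu = *(int *)arg;
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

            volatile uint64_t acc = 1;
            for (uint64_t i = 0; i < ITERS; i++)
                acc = acc * 6364136223846793005ULL + 1442695040888963407ULL;
            return NULL;
        }

        static void run_pair(int cpu_a, int cpu_b, const char *label)
        {
            struct timespec t0, t1;
            pthread_t a, b;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            pthread_create(&a, NULL, worker, &cpu_a);
            pthread_create(&b, NULL, worker, &cpu_b);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            printf("%s: %.2f s\n", label,
                   (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        }

        int main(void)
        {
            /* Assumed topology: CPUs 0 and 1 on different cores,
               CPUs 0 and 4 as SMT siblings -- verify before trusting results. */
            run_pair(0, 1, "two physical cores");
            run_pair(0, 4, "SMT siblings (one core)");
            return 0;
        }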
  • abufrejoval - Friday, October 19, 2018 - link

    Well, I guess you should be able to measure it if you have the chips. My understanding has always been that i7/i5 differentiation is all about voltage levels, with i5 parts needing too much voltage/power to pass the TDP restrictions, rather than defective logic precluding the use of 'one hyperthread'. I find it hard to imagine managing defects via partitions in the register file or by disabling certain ALUs: if core CPU logic is hit with a defect it's dead, because you can't isolate and route around the defective part at that granularity. It's the voltage levels on the long wires that determine a CPU's fate, AFAIK.

    It's a free choice between a lower clock with HT or a higher clock without HT at the binning point, and Intel will determine the fate of a chip based on sales opportunities rather than hardware. And it's somewhat similar with the fully enabled lower-power -T parts and the high-frequency -K parts, which are most likely the same (or very similar) top-tier bins, sold at two distinct voltage levels yet rather similar premium prices, because you trade power and clocks and pay a premium for efficiency.

    Real chip defects can only be 'compensated' for by cutting off cache blocks or whole cores, but again I'd tend to think that even that will be driven more by voltage considerations than 'hairs in the soup': with all this multi-patterning and multi-masking going on, and the 3D structures they are lovingly creating for every FinFET, their control over the basic structures is so great that it's mainly layer alignment/conductivity that's challenging the yields.
  • GreenReaper - Friday, October 19, 2018 - link

    The answer is "yes, with a but". Certain things scale really well with hyperthreading. Other things can see a severe regression, as it thrashes between one workload and another and/or overheats the CPU, reducing its ability to boost.

    Cache contention can be an issue: the i9-9900K has only 33% more cache than the i7-9700K, not 100% (and even if it did, it wouldn't have the same behaviour unless it was strictly partitioned). Memory bandwidth contention is a thing, too. And within the CPU, some parts cannot be partitioned - it just relies on them running fast enough to supply the parts which can.

    And clearly hyperthreading has an impact on overclocking ability. It might be interesting to see the gaming graphs with the i7-9700K @ 5.3GHz vs. the i9-9900K @ 5.0GHz (or, if you want to save 50W, the i7-9700K @ 5.0GHz vs. the i9-9900K @ 4.7GHz - basically the i9-9900K's default all-core boost, but 400MHz above the i7-9700K's 4.6GHz all-core default, both for the same power).
  • NaterGator - Friday, October 19, 2018 - link

    Any chance y'all would be willing to run those HT-bound tests with the 9900K's HT disabled in the BIOS?
  • ekidhardt - Friday, October 19, 2018 - link

    Thanks for the review!

    I think far too much emphasis has been placed on 'value'. I simply want the fastest, most powerful CPU that isn't priced absurdly high.

    While the 9900K's MSRP is high, it's not in the realm of irrational spending; it's a few hundred dollars more. For a person that upgrades once every 5-6 years, a few hundred extra is not that important to me.

    I'd also like to argue against those protesting pre-order logic. I pre-ordered. And my logic is this: Intel has a CLEAR track record of great CPUs. There haven't been any surprisingly terrible CPUs released. They're consistently reliable.

    Anyway! I'm happy I pre-ordered and don't care that it costs a little bit extra; I've got a fast 8 core 16 thread CPU that should last quite a while.
  • Schmich - Friday, October 19, 2018 - link

    You have the numbers anyway. Not everyone buys the highest end and then waits many years to upgrade. That isn't the smartest choice, because you spend so much money and then after 2-3 years you're just a mid-ranger.

    For those who want high-end they can still get a 2700X today, and then the 3700X next year, with most likely better performance than your 9900K due to 7nm, PLUS have money left over, PLUS a spare 2700X they can sell.

    Same thing for GPUs, except for this gen. I never understood those who buy the xx80 Ti version and then upgrade after 5 years. Your overall experience would be better only getting the xx70 but upgrading more often.
  • Spunjji - Monday, October 22, 2018 - link

    This is what actual logic looks like!
  • Gastec - Sunday, November 4, 2018 - link

    Basically "The more you buy, the more you save" :-\
  • shaolin95 - Friday, October 19, 2018 - link

    Exactly. I think the ones beating the value dead horse are mainly AMD fanboys defending their 2700X purchase.
  • eva02langley - Friday, October 19, 2018 - link

    Sorry, value is a huge aspect; it's the reason why RTX is such an issue. Also, at this price point, I would go HEDT if compute was really that important to me.

    A 10-15% performance increase over a 2700X at 1080p with a damn 1080 Ti is not going to justify the purchase for me.
  • Arbie - Friday, October 19, 2018 - link

    Gratuitous trolling, drags down thread quality. Do you really still need to be told what AMD has done for this market? Do you even think this product would exist without them - except at maybe twice the already high price? Go pick on someone that deserves your scorn, such as ... Intel.
  • Great_Scott - Friday, October 19, 2018 - link

    What a mess. I guess gaming really doesn't depend on the CPU any more. Those Ryzen machines were running at a 1GHz+ speed deficit and still do decently.

    Intel needs a new core design and AMD needs a new fab.
  • Targon - Friday, October 19, 2018 - link

    TSMC will do the job for AMD, and in March/April, we should be seeing AMD release the 3700X and/or 3800X that will be hitting the same clock speeds as the 9900k, but with a better IPC.
  • BurntMyBacon - Friday, October 19, 2018 - link

    I am certainly happy that AMD regained competitiveness. I grabbed an R7 1700X early on for thread heavy tasks while retaining use of my i7-6700K in a gaming PC. That said, I can't credit them with everything good that comes out of Intel. To say that Intel would not have released an 8 core processor without AMD is probably inaccurate. They haven't released a new architecture since Skylake and they are still on a 14nm class process. They had to come up with some reason for customers to buy new processors rather than sit on older models. Clock speeds kinda worked for Kaby Lake, but they need more for Coffee Lake. Small, fixed function add-ons that only affect a small portion of the market probably weren't enough. A six core chip on the mainstream platform may have been inevitable. Going yet another round without a major architecture update or new process node, it is entirely possible that the 8-core processor on the mainstream platform was also inevitable. I give AMD credit for speeding up the release schedule, though.

    As to claims that GF's manufacturing is responsible for the entire 1GHz+ frequency deficit, that is only partially true. It is very likely that some inferior characteristics of the node are reducing the maximum achievable frequency. However, much of the limitation on frequency also depends on how AMD laid out the logic. More capacitance on a node makes switching slower. More logic between flip-flops requires more switches to resolve before the final result is presented to the flip-flops. There is a trade-off in the number of buffers you can put on a transmission line, as reducing input-to-output capacitance ratios will speed up individual switching speeds but also increase the number of switches that need to occur. Adding more flip-flops increases the depth of the pipeline (think Pentium 4) and increases the penalty for branch misses, as well as making clock distribution more complicated. These are just a few of the most basic design considerations affecting maximum attainable frequency that AMD can control.

    Consequently, there is no guarantee that AMD will be able to match Intel's clock speeds even on TSMC's 7nm process. Also, given that AMD's current IPC is more similar to Haswell's and still behind Skylake's, it is not certain that their next processors will have better IPC than Intel's either. I very much hope one or the other ends up true, but unrealistic expectations won't help the situation; I'd rather be pleasantly surprised than disappointed. As such, I expect that AMD will remain competitive and that they will close the gaming performance gap until Intel releases a new architecture. Regardless of how AMD's 7nm processors stack up against Intel's best performance-wise, I expect that AMD will likely bring better value at least until Intel gets their 10nm node fully online.
  • Spunjji - Monday, October 22, 2018 - link

    "To say that Intel would not have released an 8 core processor without AMD is probably inaccurate."
    It's technically inaccurate to say they would have never made any kind of 8-core processor, sure, but nobody's saying that. That's a straw man. What they are saying is that Intel showed no signs whatsoever of being willing to do it until Ryzen landed at their doorstep.

    To be clear, the evidence is years of Intel making physically smaller and smaller quad-core chips for the mainstream market and pocketing the profit margins, followed by a sudden and hastily-rescheduled grab for the "HEDT" desktop market the second Ryzen came out, followed by a rapid succession of "new" CPU lines with ever-increasing core counts.

    You're also wrong about AMD's IPC, which is very clearly ahead of Haswell. The evidence is here in this very article where you can see the difference in performance between AMD and Intel is mostly a function of the clock speeds they attain. Ryzen was already above Haswell for the 1000 series (more like Broadwell) and the 2000 series brought surprisingly significant steps.
  • khanikun - Tuesday, October 23, 2018 - link

    " What they are saying is that Intel showed no signs whatsoever of being willing to do it until Ryzen landed at their doorstep."

    Intel released an 8-core what, 3 years before Ryzen? Sure, it was one of their super-expensive Extreme procs, but they still did it. They were slowly ramping up cores in the HEDT market while slowly bringing them down to more normal consumer prices. Three years before Ryzen, you could get a 6-core i7 for $400 or less. A year before that it was like $550-600. One to two years before that, a 6-core would be $1000+. 8 cores were slowly coming.

    What Ryzen did was speed up Intel's timeframe. They would have come, and come at a price point at which normal consumers would be purchasing them. If I had to guess, we're probably 2-3 years ahead of what Intel had wanted to do.

    Now would Ryzen exist, if not for Intel? Core for core, AMD has nothing that can compete with Intel. So...ramp up the core count. We really don't see Intel going away from a unified die design, so that's the best way AMD has to fight Intel. I'm personally surprised AMD didn't push their MCM design years ago. Maybe they didn't want to cannibalize Opteron sales, bad yields, I don't know. Must have been some reason.
  • Cooe - Friday, October 19, 2018 - link

    Rofl, delusional poster is delusional. And anyone who bought a 2700X sure as shit doesn't need to do anything to "defend their purchase" to themselves hahaha.
  • evernessince - Saturday, October 20, 2018 - link

    Get on my level, newb. The 9900K is a pittance compared to my Xeon 8176. I hope you realize that was sarcasm, and how stupid it is to put people down for wanting value.
  • JoeyJoJo123 - Friday, October 19, 2018 - link

    >I think far too much emphasis has been placed on 'value'.

    Then buy the most expensive thing. There's no real need to read reviews at that point either. You just want the best, money is no object to you, and you don't care, cool. Just go down the line and put the most expensive part for each part of the PC build as you browse through Newegg/Amazon/whatever, and you'll have the best of the best.

    For everyone else, where money is a fixed and limited resource, reading reviews MATTERS because we can't afford to buy into something that doesn't perform adequately for the cost investment.

    So yes, Anandtech, keep making reviews to be value-oriented. The fools will be departed with their money either way, value-oriented review or not.
  • Arbie - Friday, October 19, 2018 - link

    They'll be parted, yes - and we can hope for departed.
  • GreenReaper - Saturday, October 20, 2018 - link

    Don't be *too* harsh. They're paying the premium to cover lower-level chips which may be barely making back the cost of manufacturing, thus making them a good deal. (Of course, that also helps preserve the monopoly/duopoly by making it harder for others to break in...)
  • Spunjji - Monday, October 22, 2018 - link

    Yeah, to be honest the negatives of idiots buying overpriced "prestige" products tend to outweigh the "trickle down" positives for everyone else. See the product history of nVidia for the past 5 years for reference :/
  • evernessince - Saturday, October 20, 2018 - link

    I'm sure for him money is a fixed resource, he is just really bad at managing it. You'd have to be crazy to blow money on the 9900K when the 8700K is $200 cheaper and the 2700X is half the price.
  • Dug - Monday, October 22, 2018 - link

    It's relative to how much you make or have. $200 isn't some life-threatening amount that makes them crazy because they spent it on a product that they will enjoy. We spend more than that going out for a weekend (and usually don't have anything to show for it). If an extra $200 is threatening to your livelihood, you shouldn't be shopping for new CPUs anyway.
  • close - Saturday, October 20, 2018 - link

    @ekidhardt: "I think far too much emphasis has been placed on 'value'. I simply want the fastest, most powerful CPU that isn't priced absurdly high."

    That, my good man, is the very definition of value. It happens automatically when you decide to take the price into consideration. I also don't care about value, I just want a CPU with a good performance-to-price ratio. See what I did there? :)
  • evernessince - Saturday, October 20, 2018 - link

    A little bit extra? It's $200 more than the 8700K; that's not a little.
  • mapesdhs - Sunday, October 21, 2018 - link


    The key point being, for gaming, use the difference to buy a better GPU, whether one gets an 8700K or 2700X (or indeed any one of a plethora of options really, right back to an old 4930K). It's only at 1080p and high refresh rates where strong CPU performance stands out, something DX12 should help more with as time goes by (the obsession with high refresh rates is amusing given NVIDIA's focus shift back to sub-60Hz being touted once more as ok). For gaming at 1440p or higher, one can get a faster system by choosing a cheaper CPU and better GPU.

    There are two exceptions: those for whom money is literally no object, and certain production workloads that still favour frequency/IPC and are not yet well optimised for more than 6 cores (Premiere is probably the best example). Someone mentioned pro tasks being irrelevant because ECC is not supported, but many solo pros can't afford XEON class hw (I mean the proper dual socket setups) even if the initial higher outlay would eventually pay for itself.

    What we're going to see with the 9900K for gaming is a small minority of people taking Intel's mantra of "the best" and running with it. Technically, they're correct, but most normal people have budgets and other expenses to consider, including wives/gfs with their own cost tolerance limits. :D

    If someone can genuinely afford it then who cares, in the end it's their money, but as a choice for gaming it really only makes sense via the same rationale if they've also then bought a 2080 Ti to go with it, though even there one could retort that two used 1080 TIs would be cheaper & faster (at least for those titles where SLI is functional).

    If anything good has come from this and the RTX launch, it's the move away from the supposed social benefit of having "the best"; the street cred is gone, now it just makes one look like a fool who was easily parted from his money.
  • Spunjji - Monday, October 22, 2018 - link

    Word.
  • Total Meltdowner - Sunday, October 21, 2018 - link

    This comment reads like shilling so hard. So hard. Please try harder to not be so obvious.
  • Spunjji - Monday, October 22, 2018 - link

    I think they placed just the right amount of emphasis on "value". Your post basically explains why it's not relevant for you in terms of you being an Intel fanboy with cash to burn. I'll elaborate.

    The MSRP is in the realm of irrational spending for a huge number of people. "Rational" here meaning "do I get out anything like what I put in", wherein the answer in all metrics is an obvious no.

    Following that, there are a HUGE number of reasons not to pre-order a high-end CPU, especially before proper results are out. Pre-ordering *anything* computer related is a dubious prospect, doubly so when the company selling it paid good money to paint a deceptive picture of their product's performance.

    Your assertion that Intel has never launched a bad CPU is false, and either ignorance or wilful bias on your part. They have launched a whole bunch of terrible CPUs, from the P3 1.2GHz that never worked, through the P4 Emergency Edition and the early "dual-core" P4 processors, all the way through to this i9-9900K, which is the thirstiest "95W" CPU I've ever seen. Their notebook CPUs are now segregated in such a way that you have to read a review to find out how they will perform, because so much is left on the table in terms of achievable turbo boost limits.

    Sorry, I know I replied just to disagree, which may seem argumentative, but you posted a bunch of nonsense and half-truths passed off as common sense and/or logic. It's just bias; none of it does any harm, but you could at least be up-front that you prefer Intel. That in itself (I like Intel and am happy to spend top dollar) is a perfectly legitimate reason for everything you did. Just be open and don't actively mislead people who know less than you do.
  • chris.london - Friday, October 19, 2018 - link

    Hey Ryan. Thanks for the review.

    Would it be possible to check power consumption in a test in which the 2700x and 9900k perform similarly (maybe in a game)? POV-Ray seems like a good way to test for maximum power draw but it makes the 9900k look extremely inefficient (especially compared to the 9600k). It would be lovely to have another reference point.
  • 0ldman79 - Friday, October 19, 2018 - link

    I'm legitimately surprised.

    The 9900K is starving for bandwidth; it needs more cache or something. I never expected it to *not* win the CPU benchmarks vs the 9700K. I honestly expected the 9700K to be the odd one out, more expensive than the i5 and slower than the 9900K. This isn't the case. Apparently SMT isn't enabling 100% usage of the CPU's resources; it is creating a bottleneck due to fighting over resources. I'd love to see the 9900K run against its brethren with HT disabled.
  • deil - Friday, October 19, 2018 - link

    Very nice. All I wanted to know. 220 W on a 95 W TDP, LOL.
  • AutomaticTaco - Friday, October 19, 2018 - link

    Note the revised table. The revision was needed because the first motherboard they tested with was severely over-volting the chip. Not only is the power draw lower, the better news, to me at least, is that the overclocks at 4.7GHz and 4.8GHz actually reduce the power consumption and operating temperature.
  • Spunjji - Monday, October 22, 2018 - link

    Good to see it's "only" 166W on a 95W TDP instead :D
  • Cellar Door - Friday, October 19, 2018 - link

    Ryan what is your opinion on the 9700k vs 8700k?
  • Icehawk - Saturday, October 20, 2018 - link

    I was “worried” I would be bummed that I bought an 8700, but the price:perf and the delta between them leave me feeling just fine.

    I too would like to see the 9900K run w/o HT - it *should* perform like a 9700K, but it would be interesting to see if there are any oddities.
  • SaturnusDK - Friday, October 19, 2018 - link

    Good review. Too bad the subject is pretty lackluster.

    To sum up the 9900K in one word: Meh!
  • eva02langley - Friday, October 19, 2018 - link

    The CPU is going for $540+ and the motherboard Tom's used is a $600 motherboard.

    Performance is awesome, hands down, but this is not a 2700X competitor. The only things these two have in common are the number of cores/threads and their platform.

    At this price, I would get a 2950X, hands down.
  • eva02langley - Friday, October 19, 2018 - link

    Performance is awesome, hands down, but this is not a 2700X competitor. The only things these two have in common are the number of cores/threads and their mainstream platform... however, the Z390 is more expensive than the X399, which offers way better specs.
  • mapesdhs - Sunday, October 21, 2018 - link

    In the UK the 9900K is closer to the equivalent of $800. Outside the US the absolute cost levels are often a lot worse. Where I am, the 2700X is half the price of a 9900K, the saving being more than enough to afford a much better GPU.
  • Total Meltdowner - Sunday, October 21, 2018 - link

    Good, F U foreigners wanted out superior tech.
  • Total Meltdowner - Sunday, October 21, 2018 - link

    Those typoes..

    "Good, F U foreigners who want our superior tech."
  • muziqaz - Monday, October 22, 2018 - link

    Same to you, who still thinks that Intel CPUs are made purely in USA :D
  • Hifihedgehog - Friday, October 19, 2018 - link

    What do I think? That it is a deliberate act of desperation. It looks like it may draw more power than a 32-Core ThreadRipper per your own charts.

    https://i.redd.it/iq1mz5bfi5t11.jpg
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised
    https://www.anandtech.com/show/13400/intel-9th-gen...

    The motherboard in question was using an insane 1.47v
    https://twitter.com/IanCutress/status/105342741705...
    https://twitter.com/IanCutress/status/105339755111...
  • edzieba - Friday, October 19, 2018 - link

    For the last decade, you've had the choice between "I want really fast cores!" and "I want lots of cores!". This is the 'now you can have both' CPU, and it's surprisingly not in the HEDT realm.
  • evernessince - Saturday, October 20, 2018 - link

    It's priced like HEDT though. It's priced well into HEDT. FYI, you could have had both of those when the 1800X dropped.
  • mapesdhs - Sunday, October 21, 2018 - link

    I noticed initially in the UK the pricing of the 9900K was very close to the 7820X, but now pricing for the latter has often been replaced on retail sites with CALL. Coincidence? It's almost as if Intel is trying to hide that even Intel has better options at this price level.
  • iwod - Friday, October 19, 2018 - link

    Nothing unexpected really. 5GHz with a "better" node that is tuned for higher frequency. The TDP was the real surprise though; I knew the TDP figures were fake, but 95W > 220W? I am pretty sure in some countries (um... the EU) people can start suing Intel for misleading customers.

    For the AVX test, did the program really use AMD's AVX units? Or was it not optimised for AMD's AVX, given AMD has a slightly different (I'd say saner) implementation? If it did, the difference shouldn't be that big.

    I continue to believe there is a huge market for iGPUs, and I think AMD has the biggest chance to capture it, just looking at those totally playable 1080p frame rates. If they could double the iGPU die size budget with 7nm Ryzen, it would be all good.

    Now we are just waiting for Zen 2.
  • GreenReaper - Friday, October 19, 2018 - link

    It's using it; you can see the points increased in both cases. But AMD implemented AVX on the cheap: it takes twice the cycles to execute AVX operations involving 256-bit data, because (AFAIK) it's implemented using 128-bit registers, with pairs of units that can only do multiplies or adds, not both.

    That may be the smart choice; it probably saves significant space and power. It might also work faster with SSE[2/3/4] code, still heavily used (in part because Intel has disabled AVX support on its lower-end chips). But some workloads just won't perform as well vs. Intel's flexible, wider units. The same is true for AVX-512, where the workstation chips run away with it.

    It's like the difference between using a short bus, a full-sized school bus, and a double decker - or a train. If you can actually fill the train on a regular basis, are going to go a long way on it, and are willing to pay for the track, it works best. Oh, and if developers are optimizing AVX code for *any* CPU, it's almost certainly Intel, at least first. This might change in the future, but don't count on it.
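
    To make the width difference concrete, here is a tiny illustrative example (not from the review's test code): the single 256-bit FMA intrinsic below issues as one wide micro-op on Skylake-class cores, whereas Zen/Zen+ (as far as I know) cracks it into two 128-bit micro-ops, so its peak 256-bit throughput per core is roughly half.

        /* Illustrative only. Build with: gcc -O2 -mavx2 -mfma saxpy.c */
        #include <immintrin.h>
        #include <stddef.h>

        /* y[i] = a * x[i] + y[i], eight floats per 256-bit FMA. */
        void saxpy_avx2(float *y, const float *x, float a, size_t n)
        {
            __m256 va = _mm256_set1_ps(a);
            for (size_t i = 0; i + 8 <= n; i += 8) {
                __m256 vx = _mm256_loadu_ps(x + i);
                __m256 vy = _mm256_loadu_ps(y + i);
                _mm256_storeu_ps(y + i, _mm256_fmadd_ps(va, vx, vy));
            }
        }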
  • emn13 - Saturday, October 20, 2018 - link

    Those AVX numbers look like they're measuring something else, not just AVX-512. You'd expect performance to increase (compared to AVX2/256-bit) by around 50%, give or take quite a margin of error. It should *never* be more than a factor of 2 faster. So ignore AMD; their AVX implementation is wonky, sure - but those Intel numbers almost have to be wrong. I think the baseline isn't vectorized at all, or something like that - that would explain the huge jump.

    Of course, AVX-512 is fairly complicated, and it's more than just wider - but these results seem extraordinary, and there's just not enough evidence the effect is real and not just some quirk of how the variations were compiled.
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised. TDP is still some generic average, not a true max. Regardless, it's not 220W.
    https://www.anandtech.com/show/13400/intel-9th-gen...

    The motherboard in question was using an insane 1.47v
    https://twitter.com/IanCutress/status/105342741705...
    https://twitter.com/IanCutress/status/105339755111...
  • dezonio2 - Friday, October 19, 2018 - link

    I would love to see overclocking performance of the 9600k. It would show exactly how much of a difference the upgraded TIM makes if compared to 8600k.
  • emn13 - Friday, October 19, 2018 - link

    That power consumption seems pretty crazy. Going from 4.5 to 5GHz for +56% power draw? Or worse, from 5.0 to 5.3GHz for a 6% clock boost and +40% power draw?

    This proc looks like it makes sense at 4.5GHz; beyond that, not so much. I mean, going from 4.5 to 5.3 isn't nothing - 18% more clocks! But that's going to translate into a less-than-that performance gain, and even 18%, while admirable and all, is often not actually all that noticeable - unlike that power draw, which you'll likely notice in terms of noise and the effort needed to get the system cooled at all.

    I don't know; this proc looks... cool... but borderline. I'm not sure I'd buy it, even if money were no object (and since I'd consider this for work - it basically isn't).
  • Tkan2155 - Saturday, October 20, 2018 - link

    Yes, the bills add up with this; prepare a big wallet. AMD can overclock higher, but it's better at stock. Intel is going over the limit because they want to show the world they are the best.
  • mapesdhs - Sunday, October 21, 2018 - link

    But then, the candle that burns twice as brightly burns half as long. :)
  • MonkeyPaw - Friday, October 19, 2018 - link

    In regards to TDP, I say use your own methodology and ratings if Intel and AMD can’t arrive at a standard measure. Based on how the i9 truly performs in this regard, the 95W rating is just shy of disingenuous. When real world values are applied it does change where this CPU sits in regard to its overall value. Lots of performance? Yes, but it comes at a significant cost. These CPUs aren’t like GPUs, where the cooling solution is designed to match the limits of the GPU. No, Intel doesn’t even bundle a cooler, because they know they have nothing to offer to hit boost speeds, and let’s be real—it’s the boost speeds that help sell this product and yield bragging rights.
  • pavag - Friday, October 19, 2018 - link

    It doesn't have a price/performance chart, so it is hard to tell how justifiable it is to spend on this processor compared to alternatives, and that's the main purpose of reading this kind of article.

    Here is one from TomsHardware, for reference:
    https://img.purch.com/r/711x457/aHR0cDovL21lZGlhLm...

    It makes clear that there is little to gain going from a cheap i5-8400 to an i9-9900K, and it also tells you which processors perform better at a given price, or are cheaper at a given performance level. At least from an average-FPS gaming viewpoint.
  • WinterCharm - Friday, October 19, 2018 - link

    Well written. Great article, and I enjoyed it thoroughly :)
  • Machinus - Friday, October 19, 2018 - link

    Can you test the power draw and temperatures of the 9900 with HT disabled, and compare that to the 9700 under the same conditions?
  • Felice - Saturday, October 20, 2018 - link

    Ryan--

    Any chance of you doing the same run with the 9900K's hyperthreading disabled? A lot of gamers find they get better performance without hyperthreading.
  • leexgx - Saturday, October 20, 2018 - link

    Can you please stop your website playing silent audio, very annoying as it stops playback on my other phone (dual connection headset)
  • moozooh - Sunday, October 21, 2018 - link

    To be fair, the 9900K seems like a suboptimal choice for a gaming rig despite the claims—the extra performance is marginal and comes at a very heavy price. Consider that in all the CPU-bound 95th percentile graphs (which are the only important ones in this context)—even in the more CPU-intensive games—the 9700K was within 5% of the 9900K, sometimes noticeably faster (e.g. Civ6 Low). And its overclocking potential is just *so* much better—all of this at ~3/4 the price and power consumption (and hence more relaxed cooling requirements and lower noise). I cannot possibly envision a scenario where a rational choice, all this considered, would point to 9900K for a gaming machine. The at most 5% extra performance just isn't worth the downsides.

    On a sidenote, I'd actually like to see how an overclocked 9700K fares against overclocked 8700K/8086K (delidded for fair comparison—you seem to have had at least one of those, no?) with regards to frame times/worst performance. For my current home PC I chose a delidded 8350K running at 4.9 GHz on 1–2 cores and at 4.7 GHz on 3–4, which I considered the optimal choice for my typical usage, where the emphasis lies on non-RTS games, general/web/office performance, emulation, demoscene, some Avisynth—basically all of the tasks that heavily favor per-thread performance and don't scale well with HT. In most of the gaming tests the OC 8350K showed frame times about on par with the twice more expensive 8700K at stock settings, so it made perfect sense as a mid-tier gaming CPU. It appears that 9700K would be an optimal and safe drop-in replacement for it as it would double the number of cores while enabling even better per-thread performance without putting too much strain on the cooler. But then again I'd be probably better off waiting for its Ice Lake counterpart with full (?) hardware Spectre mitigation, which should result in a "free" minor performance bump if nothing else. At least assuming it will still use the same socket, which you never can tell with Intel...
  • R0H1T - Sunday, October 21, 2018 - link

    Ryan & Ian, I see that the last few pages have included a note about the Z390 being used because the Z370 board was over-volting the chip? Yet on the overclocking page we see the Z370 listed with a max CPU package power of 168 Watts. Could you list the (default) auto voltage applied by the ASRock Z370 and, if appropriate, update the charts on the OCing page with the Z390 as well?
  • Total Meltdowner - Sunday, October 21, 2018 - link

    Ryan, you do great work. Please don't let all these haters in the comments who constantly berate you over grammar and typos get you down.
  • Icehawk - Saturday, October 27, 2018 - link

    Ryan, I still haven't been able to find an answer to this - what are your actual HEVC settings? I've got an 8700 @ 4.5GHz, no offset, and at "1080p60 HEVC at 3500 kbps variable bit rate, fast setting, main profile" with passthrough audio I get ~40fps, not the 175 you achieved - how on earth are you getting over 4x the performance??? The only way I can get remotely close would be to use NVENC or QuickSync, neither of which is acceptable to me.
  • phinnvr6@gmail.com - Wednesday, October 31, 2018 - link

    My thoughts are: why would anyone recommend the 9900K over the 9700K? It's absurdly priced, draws an insane amount of power, and performs roughly identically.
  • DanNeely - Friday, October 19, 2018 - link

    Have any mobo makers published block diagrams for their Z390 boards? I'm wondering if the 10Gbps USB 3.1 ports are using 2 HSIO lanes as speculated in the mobo preview article, or if Intel has 6 lanes that can run at 10Gbps instead of the normal 8Gbps so that they only need one lane each.
  • repoman27 - Friday, October 19, 2018 - link

    They absolutely do not use 2 HSIO lanes. That was a total brain fart in the other article. The datasheet for the other 300 series chipsets is available on ARK, and the HSIO configuration of the Z390 can easily be extrapolated from that.

    HSIO lanes are just external connections to differential signaling pairs that are connected internally to either various controllers or a PCIe switch via muxes. They’re analog interfaces connected to PHYs. They operate at whatever signaling rate and encoding scheme the selected PHY operates at. There is no logic to perform any type of channel bonding between the PCH and any connected ports or devices.
  • TEAMSWITCHER - Friday, October 19, 2018 - link

    My big question ... Could there be an 8 core Mobile part on the way?
  • Ryan Smith - Friday, October 19, 2018 - link

    We don't have it plotted since we haven't taken enough samples for a good graph, but CFL-R is showing a pretty steep power/frequency curve towards the tail-end. That means power consumption drops by a lot just by backing off of the frequency a little.

    So while it's still more power-hungry than the 6-cores at the same frequencies, it's not out of the realm of possibility. Though base clocks (which are TDP guaranteed) will almost certainly have to drop to compensate.
  • 0ldman79 - Friday, October 19, 2018 - link

    There are certainly occasions where more cores are better than clock speed.

    Just look at certain mining apps. You can drop the power usage by half and only lose a little processing speed, but drop them to 2 cores at full power instead of 4 and it is a *huge* drop. Been playing with the CPU max speed in Windows power management on my various laptops. The Skylake i5 6300HQ can go down to some seriously low power levels if you play with it a bit. The recent Windows updates have lost a lot of the Intel Dynamic Thermal control though. That's a shame.
  • Makaveli - Friday, October 19, 2018 - link

    Power consumption rules on mobile parts, so why would they release an 8-core model?
  • notashill - Friday, October 19, 2018 - link

    Because you get more performance at the same power level using more cores at lower clocks. The additional cores are power gated when not in use.
  • evernessince - Saturday, October 20, 2018 - link

    Not judging by the power consumption and heat output displayed here.
  • mkaibear - Friday, October 19, 2018 - link

    9700K is definitely the way to go on the non-HEDT. 9900K is technically impressive but the heat? Gosh.

    It's definitely made me consider waiting for the 9800X though - if the 7820X full load power is 145W ("TDP" 140W) at 3.6/4.3, then the 9800X isn't likely to be too much higher than that at 3.8/4.5.

    Hrm.
  • Cooe - Friday, October 19, 2018 - link

    "9700K is definitely the way to go on the non-HEDT."

    I think you meant to say "Ryzen 5 2600, unless your GPU's so fast it'll HEAVILY CPU-bind you in gaming" but spelt it wrong ;). The 9700K is a very good CPU, no doubt, but to claim it the undisputed mainstream champ at its currently mediocre bang/$ value (so important for the mainstream market) doesn't make any sense, or accurately represent what people in the mainstream are ACTUALLY buying (lots of Ryzen 5 2600s & i5-8400s; both with a MUCH saner claim to the "best overall mainstream CPU" title).
  • mkaibear - Saturday, October 20, 2018 - link

    No, I meant to say "9700K is definitely the way to go on the non-HEDT".

    Don't put words in people's mouths. I don't just game. The video encoding tests in particular are telling - I can get almost a third better performance with the 9700K than I can with the R5 2600X.

    >"best overall mainstream CPU" title

    Please don't straw man either. Nowhere did I say that it was the best overall mainstream CPU (that's the R7 2700X in my opinion), but for my particular use case the 9700K or the 9800X are better suited at present.
  • koaschten - Friday, October 19, 2018 - link

    Uhm yeah... so where are the 9900k overclocking results the article claims are currently being uploaded? :)
  • watzupken - Friday, October 19, 2018 - link

    The i9 processor is expected to be quite impressive in performance. However, this review also reveals that Intel is struggling to pull more tricks out of their current 14nm process and Skylake architecture. The lack of IPC improvement over the last few generations is forcing them to keep pushing up the clock speed to cling on to their edge. Considering that they are launching the new series this late in the year, they are at risk of AMD springing a surprise with their 7nm Zen 2, slated to launch next year.
  • SquarePeg - Friday, October 19, 2018 - link

    If the rumored 13% IPC and minimum 500MHz uplift are for real with Zen 2, then AMD would take the performance crown. I'm not expecting very high clocks from Intel's relaxed 10nm process, so it remains to be seen what kind of IPC gain they can pull with Ice Lake. It wouldn't surprise me if they had a mild performance regression because of how long they have had to optimize 14nm for clock speed. Either way, I'm all in on a new Ryzen 3 build next year.
  • 0ldman79 - Friday, October 19, 2018 - link

    AMD needs to improve their AVX processing as well, but they've got Intel in a bit of a predicament.
  • Hifihedgehog - Friday, October 19, 2018 - link

    Ladies and gentlemen, I present to you...

    Intel’s FX 9000 series.

    Now even hotter and more power hungry than ever!
  • mapesdhs - Sunday, October 21, 2018 - link

    It reminds me a lot of the P4 days, when Intel just had to shove clocks through the roof to remain relevant. And I don't know why tech sites are salivating so much over OC levels that are barely any better than a chip's max turbo; it's a far cry from the days of SB, when one could run a 2700K at 5GHz with sensible voltage and good temps using a simple air cooler (an ordinary TRUE works fine) and one fan, without high noise (I know, I've built seven of them so far). To me, the OCing potential of the 9K series is just boring, especially since the cost is so high that for gaming one is far better off buying a 2700X, 8700K (or many other options) and using the savings to get a better GPU.
  • sgeocla - Friday, October 19, 2018 - link

    Why compare to the TR 1920X ($799) and not to the TR 2950X ($899)?
    The TR 2950X kills it in almost every productivity benchmark, even against the i9-9900K.
    Not even mentioning the 9th gen power consumption.
  • Yorgos - Friday, October 19, 2018 - link

    Don't bother with the review.
    They show you the results that make Intel seem good.
    Intel/Purch Media have failed to show people that they exceed even Threadripper's TDP in order to fight Zen.
    Desperate moves for desperate times.
    Better to look somewhere else for an unbiased review.
  • mkaibear - Friday, October 19, 2018 - link

    What, you mean apart from page 21, where it shows that it almost doubles the TDP of the Threadripper with the same core count, and is 50% higher than the one which has 50% more cores than it does?

    Some reading comprehension lessons needed I think.
  • yeeeeman - Friday, October 19, 2018 - link

    The 9900K looks like a nice CPU, but damn that power consumption is stupidly high. It is almost twice what the 2700X consumes.
  • Hifihedgehog - Friday, October 19, 2018 - link

    *High-end AIO required.
  • AGS3 - Friday, October 19, 2018 - link

    Twice the CPU - 8 cores over 5GHz :)
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised down. The first motherboard they used had extremely high voltage settings.
    https://www.anandtech.com/show/13400/intel-9th-gen...
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised. TDP is still some generic average, not a true max. Regardless, it's not 220W.
    https://www.anandtech.com/show/13400/intel-9th-gen...

    The motherboard in question was using an insane 1.47v
    https://twitter.com/IanCutress/status/105342741705...
    https://twitter.com/IanCutress/status/105339755111...
  • AGS3 - Friday, October 19, 2018 - link

    Can't wait to get one - thanks. I may have missed it but what cooler did you use for overclock?
  • 5080 - Friday, October 19, 2018 - link

    To summarize Intel's new i9-9900K = incredibly high power usage and heat generation at a high price with no real advantage = market fail.
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised power consumption. The first motherboard was over-volting.
    https://www.anandtech.com/show/13400/intel-9th-gen...
  • SaturnusDK - Sunday, October 21, 2018 - link

    That doesn't change the above conclusion about the Intel "FX" 9000-series.
  • hnlog - Friday, October 19, 2018 - link

    Why was the TRUE Copper chosen for Intel's new CPUs and HEDT?
  • shabby - Friday, October 19, 2018 - link

    Ya, I wonder if the 9900K will hit 4.7GHz on all cores with a stock-ish heatsink. It almost seems like Intel is cheating here, 200+ watts on a 95W CPU. I have a feeling AMD will follow suit in the next round by letting turbo mode suck as much juice as possible.
  • Spoelie - Friday, October 19, 2018 - link

    Seconded, can we see what the processors will do with the same cooling capacity provided?
  • ThaSpacePope - Friday, October 19, 2018 - link

    Looks like I'm keeping my i7-9700K pre-order for $399. In 2012 I paid $190ish (in 2012 dollars) for my i5-3570K, which has been 4 cores running at 4.5GHz for 6 years now. In 2018 I'll pay $399 for 8 cores running at 5.3GHz (it appears) and a roughly 40% IPC improvement. Feels like a good value; thank you, AMD. Can anyone recommend a good Z390 board to go along with it?
  • mapesdhs - Sunday, October 21, 2018 - link

    Thank you AMD? This is why AMD has given up on high end GPUs.
  • ChefJoe - Friday, October 19, 2018 - link

    I have two wants.

    1 - I really want to see the overclocked 9600K vs the overclocked 8600K, as the chart differences between them in this early draft of your 9900K-focused review are likely down to the wildly different clock speeds of the 8600K and 9600K parts.

    2 - I still want to hear what happens when you drop one of these refresh parts in an older Z370 board with an older BIOS. Do boards that were OK with an 8600K refuse to boot a 9600K?
  • ChefJoe - Friday, October 19, 2018 - link

    ack, 9700k-focused at this point. The 9900k overclock part of the review (and presumably 9600k eventually) is still pending.
  • Ghan - Friday, October 19, 2018 - link

    My plan was to upgrade from my current i7 6700k to the i7 9700k, and this article seems to confirm that my plan is a decent one. Doubling the core count from 4 to 8 is a decent value. I don't really see the point in paying an extra $100+ just for HT and slightly more cache.

    This release seems a bit tarnished by the fact that it is still the same process node we've had for years now. Addition of cores is great, but it's not without some cost. Still, perhaps we wouldn't even have this improvement if it weren't for AMD's strong return to the enthusiast CPU market. Hopefully the next year will be even more interesting.
  • Arbie - Friday, October 19, 2018 - link

    "Addition of cores is great, but it's not without some cost. Still, perhaps we wouldn't even have this improvement if it weren't for AMD's strong return to the enthusiast CPU market."

    It's actually with a LOT of cost. And you should consider whom you're going to reward with your business: the big fat company that milked us for ten years and did everything legal and illegal to crush their competition, or the struggling firm that miraculously came from behind and reignited the market. Make your own choice, but if you buy Intel merely to have the fastest today, you're voting for sad tomorrows.
  • Lazlo Panaflex - Friday, October 19, 2018 - link

    Well said, Arbie. Ryzen 2600 (non X) with decent stock cooler for $160 at Newegg = epic win.
  • mapesdhs - Sunday, October 21, 2018 - link

    My next new build will definitely be AMD. Looking forward to it.
  • billin30 - Friday, October 19, 2018 - link

    Maybe I am just slow in my upgrading, but my 4770K is still going strong. I am in the market for an upgrade, but I would like to see what sort of difference in performance I can expect. It's nice to see all the latest CPUs on this list, but you don't get a ton of deviation when you have CPUs that are so close in performance. It would be interesting to see some benchmarks of previous generations' top-performing CPUs, so we can see what sort of performance improvement we would get when moving up from past generations. I feel like a lot of people hang onto their core system components for many generations, and it would be beneficial for those people to see these numbers.
  • DanNeely - Friday, October 19, 2018 - link

    This is a new set of CPU benchmarks and Ian hasn't had time to retest his other 50+ CPUs yet. From prior history that should happen as he has time and will show up as additional data points in bench.

    I don't think you're particularly slow about upgrading. For gaming purposes a high end CPU is reasonable to keep for 6-10 years now; possibly even a bit longer if you're only using a midrange GPU and are willing to accept the higher risk of having to build a new system with zero notice because something dies unexpectedly. I'm in a similar spot with my 4790k; and unless games needing more than 4/8 cores start becoming common am planning to keep it for at least 2 or 3 more years.

    That should hopefully be long enough that Spectre stops generating frequent new exploits and mitigation is fully in hardware, and that PCIe 4 (or 5), DDR5, and significant numbers of USB-C ports are available. Also possibly out by then: widespread TB3, or DMI being less of a potential bottleneck on Intel CPUs (either a major speedup or additional PCIe lanes for SSDs on the CPU). Also, by then either Intel should finally have its manufacturing unfubarred or, if not, AMD will likely have captured the single-threaded performance crown while holding onto the multi-threaded one, meaning I can have both the ST perf that many games still benefit from and the MT perf for my non-gaming uses that can go really wide.
  • wintermute000 - Saturday, October 20, 2018 - link

    I'm on Haswell at 1440p too, and the charts have confirmed that I'm holding on for another generation. No sense paying 1500 (32 GB RAM) for a platform upgrade to get a few % more frames (and it's fine for my productivity tasks, still faster than new laptops lol).
  • Icehawk - Saturday, October 20, 2018 - link

    I only upgraded from my 4770 to an 8700 because my wife’s i5 4xxx rig died and it gave me an excuse to upgrade my encoding power. I see no difference gaming with a 970. Also I don’t notice increased performance really anywhere except encoding and decompressing during my daily use.
  • mapesdhs - Sunday, October 21, 2018 - link

    The funny part is that, for productivity, one can pick up used top-end older hardware for a pittance and have the best of both worlds. I was building an OC'd 3930K setup for someone (back when RAM prices were still sensible, a 32GB DDR3/2400 kit only cost me 115 UKP), replaced the chip with a 10-core Xeon E5-2680 v2 which was cheap, works great and is way better for productivity. Lower single-threaded speed of course, but still respectable, and in most cases it doesn't matter. Also far better heat, noise and power consumption behaviour.

    Intel is already competing with both itself (7820X) and AMD with the 9K series; add in used options and Intel's new stuff (like NVIDIA) is even less appealing. I bagged a used 1080 Ti for 450 UKP, very happy. :)
  • vanilla_gorilla - Friday, October 19, 2018 - link

    So the "Best Gaming CPU" really only has an advantage when gaming at 1080p or less? Who spends this much money on a CPU to game at 1080p? What is the point of this thing?
  • TEAMSWITCHER - Friday, October 19, 2018 - link

    Many benchmarks show the 9900k coming "oh so close" to the 10-core 7900X. I'm thinking that the "Best Gaming CPU" is Intel's wishful thinking for Enthusiasts to spend hundreds more for their X299 platform.
  • HStewart - Friday, October 19, 2018 - link

    Of course, at higher resolutions it depends on the GPU - but from the list of games, Ashes is the only one stated as not being top of its class at 4K.

    If you look at the conclusion in the article, you will notice that most games got "Best CPU or near top in all". 4K Civ 6 was interesting with "Best CPU at IGP, a bit behind at 4K, top class at 8K/16K", which tells me that even though 4K was not so great, it was even better at 8K/16K.
  • vanilla_gorilla - Friday, October 19, 2018 - link

    At 4K every CPU performs at almost the exact same frame rate. Within 1fps. Why would anyone pay this much for a "gaming CPU" that has no advantage compared to CPUs half the price over 1080p? This is insanity.

    If you are a gamer, save your money, buy a two year old intel or Ryzen CPU and spend the rest on a 4K monitor!
  • CPUGuy - Friday, October 19, 2018 - link

    This CPU is going to be amazing at 10nm.
  • eastcoast_pete - Friday, October 19, 2018 - link

    Yes, a fast chip, but those thermals?! This is the silicon equivalent of boosting an engine's performance with nitrous: you'll get the power, but at what cost? I agree with Ian and others here that this is the chip to get if a. bragging rights (fastest gaming CPU) are really important and b. money is no object. In its intended use, I'd strongly suggest budgeting at least $2500-3000, including a custom liquid-cooling solution for both the 9900K and the graphics card, presumably a 2080.
    In the meantime, the rest of us can hope that AMD will keep Intel's prices for the i7 9700 in check.
  • Arbie - Friday, October 19, 2018 - link

    In the meantime, the rest of us can buy AMD, as anyone should do who doesn't require a chip like this for some professional need.
  • eastcoast_pete - Friday, October 19, 2018 - link

    @Arbie: I agree. If I were putting together a system right now, I would give first consideration to a Ryzen Threadripper 1920X. The MoBos are a bit pricey, but Amazon, Newegg and others have the 1920X on sale at around $470 or so, and its 12 cores/24 threads are enough for even very demanding applications. To me, the only reason to still look at Intel (i7 8700) is the superior AVX performance that Intel still offers vs. AMD. For some video editing programs, it can make a sizable difference. For general productivity though, a 1920X system at current discounts is the ruling mid/high-end desktop value king.
  • mapesdhs - Sunday, October 21, 2018 - link

    The exception is Premiere which is still horribly optimised.
  • eastcoast_pete - Sunday, October 21, 2018 - link

    Yes; unfortunately, that's a major exception, and annoying to somebody like me who'd otherwise recommend AMD. I really hope that AMD improves its AVX/AVX2 implementation and makes it truly 256 bits wide. If I remember correctly, the lag of Ryzen chips in 256-bit AVX vs. Intel is due to AMD using a 2 x 128-bit implementation (a workaround, really), which is just nowhere near as fast as real 256-bit AVX. So, I hope that AMD gives their next Ryzen generation full 256-bit AVX, not the 2 x 128-bit workaround.
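    To make the gap concrete, here is a minimal sketch in plain C with AVX2/FMA intrinsics (illustrative only; the function name and the -mavx2 -mfma build flags are my own assumptions). A core with native 256-bit datapaths retires one of these fused multiply-adds per vector instruction, while a 2 x 128-bit design has to crack each one into two micro-ops, roughly halving peak throughput:

        #include <immintrin.h>

        /* c[i] += a[i] * b[i], 8 floats per 256-bit FMA. On a 2 x 128-bit
         * implementation each _mm256_* op is split into two 128-bit micro-ops
         * internally, so throughput is roughly halved. Assumes n is a
         * multiple of 8 and unaligned loads are acceptable. */
        void fma256(const float *a, const float *b, float *c, int n) {
            for (int i = 0; i < n; i += 8) {
                __m256 va = _mm256_loadu_ps(a + i);
                __m256 vb = _mm256_loadu_ps(b + i);
                __m256 vc = _mm256_loadu_ps(c + i);
                _mm256_storeu_ps(c + i, _mm256_fmadd_ps(va, vb, vc));
            }
        }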
  • mapesdhs - Sunday, October 21, 2018 - link

    It's actually worse than that with pro apps. Even if AMD hugely improved their AVX, it won't help as much as it could so long as apps like Premiere remain so poorly coded. AE even has plugins that are still single-threaded from more than a decade ago. There are also several CAD apps that only use a single core. I once sold a 5GHz 2700K system to an engineering company for use with Majix; it absolutely blew the socks off their far more expensive Xeon system (another largely single-threaded app, though not entirely IIRC).

    Makes me wonder what they're teaching software engineering students these days; parallel coding and design concepts (hw and sw) were a large part of the comp sci stuff I did 25 years ago. Has it fallen out of favour because there aren't skilled lecturers to teach it? Or do students not like tackling the hard stuff? Bit of both? Some of it was certainly difficult to grasp at first, but even back then there was a lot of emphasis on multi-threaded systems, or systems that consisted of multiple separate functional units governed by some kind of management engine (not unlike a modern game I suppose), with the coding emphasis at the time being on derivatives of C++. It's bizarre that after so long, Premiere in particular is still so inefficient, ditto AE. One wonders if companies like Adobe simply rely on improving hw trends to provide customers with performance gains, instead of improving the code, though this would fly in the face of their claim a couple of years ago that they would spend a whole year focusing on improving performance since that's what users wanted more than anything else (I remember the survey results being discussed on Creative Cow).
  • eastcoast_pete - Sunday, October 21, 2018 - link

    Fully agree! Part of the problem is that re-coding single-thread routines that could really benefit from parallel/multi-threaded execution costs the Adobes of this world money, especially if one wants it done right. However, I believe that the biggest reason why so many programs, in full or in part, are solidly stuck in the last century is that their customers simply don't know what they are missing out on. Once volume licensees start asking their software suppliers' sales engineers (i.e. sales people) "Yes, nice new interface. But does this version now fully support multithreaded execution, and, if not, why not?", Adobe and others will give this the priority it should have had all along.
  • repoman27 - Friday, October 19, 2018 - link

    USB Type-C ports don't necessarily require a re-timer or re-driver (especially if they’re only using Gen 1 5 Gbit/s signaling), but they do require a USB Type-C Port Controller.

    The function of that chip is rather different though. Its job is to utilize the CC pins to perform device attach / detach detection, plug orientation detection, establish the initial power and data roles, and advertise available USB Type-C current levels. The port controller also generally includes a high-speed mux to steer the SuperSpeed signals to whichever pins are being used depending on the plug orientation. Referring to a USB Type-C Port Controller as a re-driver is both inaccurate and confusing to readers.
  • willis936 - Friday, October 19, 2018 - link

    Holy damn that's a lot of juice. 220W? That's 60 watts more than a 14x3GHz core IVB E5.

    They had better top the charts with that kind of power draw. I have serious reservations about believing two DDR4 memory channels are enough to feed 8x 5GHz cores. I would be interested in a study of memory scaling on this chip specifically, since it's the corner case for the question "Are two memory channels enough in 2018?".
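    As a rough back-of-the-envelope illustration (assuming the stock DDR4-2666 spec, not an overclocked kit): 2 channels x 8 bytes x 2666 MT/s ≈ 42.7 GB/s of theoretical peak bandwidth, or only about 5.3 GB/s per core with all eight loaded. Whether that actually starves the cores depends on how much of a workload fits in the 16 MB L3, which is exactly why a memory-scaling study would be interesting.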
  • DominionSeraph - Friday, October 19, 2018 - link

    This chip would be faster in everything than a 14 core IVB E5, while being over 50% faster in single-threaded tasks.
    Also, Intel is VERY generous with voltage in turbo. Note the 9700K at stock takes 156W in Blender for a time of 305, but when they dialed it in at 1.025V at 4.6GHz it took 87W for an improved time of 301, and they don't hit the stock wattage until they've hit 5.2GHz. When they get the 9900K scores up I expect that 220W number to be cut nearly in half by a proper voltage setting.
  • 3dGfx - Friday, October 19, 2018 - link

    How can you claim the 9900K is the best when you never tested the HEDT parts in gaming? Making such claims really makes AnandTech look bad. I hope you fix this oversight so SkyX can be compared properly to the 9900K and the SkyX refresh parts! -- There was supposed to be a part 2 to the i9-7980XE review and it never happened, so gaming benchmarks were never done, and the i9-7940X and i9-7920X weren't tested either. HEDT is a gaming platform since it has no ECC support and isn't marketed as a workstation platform. Curious that Intel says the 8-core part is now "the best" and you just go along with that without testing their flagship HEDT in games.
  • DannyH246 - Friday, October 19, 2018 - link

    If you want an unbiased review go here...

    https://www.extremetech.com/computing/279165-intel...

    Anandtech is a joke. Has been for years. Everyone knows it.
  • TEAMSWITCHER - Friday, October 19, 2018 - link

    Thanks... but no thanks. Why did you even come here? Just to post this? WEAK!
  • Arbie - Friday, October 19, 2018 - link

    What a stupid remark. And BTW Extremetech's conclusion is practically the same as AT's. The bias here is yours.
  • Hxx - Friday, October 19, 2018 - link

    I came in here to see how my now mid-range 8700K OC'd at 5GHz stacks up against the 9900K. It holds its own, but my, the 9900K is impressive. The 9700K hits 5.4GHz; I'm hoping the 9900K/9700K can hit 5.5GHz on a custom loop without too much fuss.
  • Alistair - Saturday, October 20, 2018 - link

    You taking that on faith? Way too much heat to cross 5.2GHz.
  • mapesdhs - Sunday, October 21, 2018 - link

    5.5 without too much fuss? I guess he hasn't watched der8auer's updates on the 9900K. :D
  • edwpang - Friday, October 19, 2018 - link

    This i9-9900K reminds me of Prescott, which was very hot and power-hungry.
  • GNUminex_l_cowsay - Friday, October 19, 2018 - link

    What does "IGP" mean? If it means low enough to run on the integrated graphics then include the actual integrated graphics performance.

    Why are AMD CPUs missing from half the gaming benchmarks? If you don't have enough time to test all the CPUs, then cut out the redundant components like the 6700K or 7700K, or get rid of the stuff no one bought, like the 8086K.
  • IndianaKrom - Friday, October 19, 2018 - link

    Well, now we know why they soldered the chip to the heat spreader: passing 220W through ~177 mm² of thermal paste is an exercise in futility. Without a doubt, those all-core turbo frequencies would be impossible to sustain for more than a fraction of a second without either liquid metal or solder as the transfer compound. It's probably even worse than that because the power and heat aren't spread evenly over the entire die, so it may be exceeding 2-3 W/mm² in places. It would be a whole different type of "Meltdown" flaw if they were still using paste with that kind of power density.
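    For scale, the average alone works out to roughly 220 W / 177 mm² ≈ 1.2 W/mm²; since the heat is concentrated in the cores rather than spread evenly across the die, the 2-3 W/mm² figure for hotspots is a plausible estimate rather than a measured number.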
  • mapesdhs - Sunday, October 21, 2018 - link

    See:

    https://www.youtube.com/watch?v=r5Doo-zgyQs
  • nowayout99 - Friday, October 19, 2018 - link

    On a scale of 1-10, I give the 9900K a 14+++++.
  • ToTTenTranz - Friday, October 19, 2018 - link

    There's some dedication to "value" in the article, yet all the graphs only show the launch MSRP for each CPU. The TR 1920X costs around $400 right now, about half of the price that appears in the graph.

    Also, those 2700X scores on Ashes look strange. Slower than the 2600X in a game that tends to like more cores? Something's missing..
  • isabirov - Friday, October 19, 2018 - link

    Why does the power consumption of the 8700K and 8086K differ so much? Shouldn't the 8086K be higher than the 8700K?
  • BloodyBunnySlippers - Friday, October 19, 2018 - link

    The big take away for me: As resolution rises above 1080P, the performance differences narrows to almost nothing. And there is the Ryzen 5 2600x beating the 2700x (in gaming). I can get that at Micro Center for $190. That looks like a great performance/price ratio there.
  • Achaios - Friday, October 19, 2018 - link

    QUOTE So we are on Skylake Refresh Refresh Refresh UNQUOTE

    Wake me up when they release new tech.

    /thread
  • dan_ger - Friday, October 19, 2018 - link

    It is just stupid to pay an extra $200 to have the best 1080p frame rates. What idiot buys a 9900K to game at 1080p? Anyone with half a brain puts the extra $200 toward a better video card and plays at higher resolutions. The bottleneck here is the graphics card, not the CPU. A 2700X and a better graphics card is the best value.
  • GreenReaper - Friday, October 19, 2018 - link

    The idea with those tests is that they are trying to identify how the CPU performs in games when the video card is not a bottleneck, on the grounds that this may reflect performance after a future GPU upgrade.
  • FlanK3r - Friday, October 19, 2018 - link

    Ian, you forgot the Cinebench graphs! :)
    Good review, as always. Glad you found the same power consumption results as me (and not like many web magazines with bullshit in their graphs :-) )
  • lefty2 - Friday, October 19, 2018 - link

    Does anyone know why the Chrome compile results bounce around so much? In the original 2700X review the 2700X trounced the 8700K, but then those results were "corrected" and the 8700K beat the 2700X (23.7 to 21.68). Now they are both at 21.9.

    So these new results show the 2700X and 8700K with the same performance.
  • zangheiv - Friday, October 19, 2018 - link

    You have to either be a total fanboy or be fooled into buying this CPU. The i9-9900K is made for the Intel fanboys that are willing to pay a $300 premium for a soon-to-be-obsolete part. It's definitely not a winner in the 'productivity' department, since it's far better to go with a lower-cost Threadripper 1920X and have a tremendous upgrade path. Unless you have a high-end CrossFire or SLI setup for your GPUs and are playing older titles that have no interest in multi-core support, the i9-9900K is the choice, albeit a very idiotic one still. It's a desperate attempt by Intel catering to the desperate fan club.
  • eva02langley - Friday, October 19, 2018 - link

    Reviews should never use the MSRP; they should use street prices. MSRP gives a bogus sense of value.

    The street price is $580.
  • mapesdhs - Sunday, October 21, 2018 - link

    Almost $800 equivalent in the UK. 2700X costs 50% less even from the same seller.
  • BOBOSTRUMF - Friday, October 19, 2018 - link

    I would love for you to retest the CPU with a 95 W cooler, as advertised by Intel :)
  • AshlayW - Friday, October 19, 2018 - link

    Huh. It's definitely the 'fastest gaming processor', but not even by that much. I will use Far Cry 5 as a comparison as it's a game I play a lot these days, and it's even quite an Intel-leaning game as it prefers clock speeds in most cases. The i9-9900K is only 19% faster than the Ryzen 5 2600X according to your data. Take a modest OC on a Ryzen 5 2600 and you are at that level of performance. In the UK right now, the i9-9900K is 300% more expensive than the Ryzen 5 2600, but only 19% faster in that game.

    Even if we drop it to MSRP (prices here for Intel CPUs are insanely high), it is still 200% more expensive than the Ryzen 5. I know it's a halo product for the 'simply the best' crowd, and yes it does that, I get that. But this 'Intel tax' for this product is getting silly now. I made an investment with my 2600 vs the 8600K or waiting for the 9600K (which are both nearly £100 more expensive!) and I got twice the threads and comparable gaming performance. -shrug-

    9th gen core parts are not even slightly appealing to me.
  • mapesdhs - Sunday, October 21, 2018 - link

    Wise investment, you should be good for a while, especially if you move up resolutions where the GPU becomes the limiting factor, and even then you're going to have good future CPU options.
  • VirpZ - Friday, October 19, 2018 - link

    Why are there no temperature charts or any mention of the cooling solution used?
  • odellus - Friday, October 19, 2018 - link

    All of that work on the gaming benchmarks and you still somehow don't understand that benchmarking a CPU by actually benchmarking the GPU is probably not a good indicator of CPU performance. 8K in a CPU benchmark? Is this a joke? Christ.
  • odellus - Friday, October 19, 2018 - link

    And why are the actual-low settings benches labeled "IGP" if you're using a 1080? And why a 1080 and not a 1080 Ti or 2080 Ti? Why limit the CPUs?
  • svan1971 - Saturday, October 20, 2018 - link

    why not allah or buddha ? why do they always pick my lord and savior to curse with?
  • mapesdhs - Sunday, October 21, 2018 - link

    Well because he's the best of course. 8)
  • whatever223 - Friday, October 19, 2018 - link

    You have "smart sound" twice in the "Chipset Comparison" table.
  • GreenReaper - Friday, October 19, 2018 - link

    It's in stereo!
  • Holliday75 - Friday, October 19, 2018 - link

    My review of the comments section.

    *Crying*

    The end.
  • vext - Friday, October 19, 2018 - link

    Very good article, but here are my beefs.

    Why is there no mention of temperatures?

    According to Techspot the 9900K runs ridiculously hot under heavy loads. At stock clocks under a heavy Blender load it reaches 85C with a Corsair H100i Pro or Noctua NH-D15. Pushed to 5GHz, it hits 100C. At 5.1GHz it FAILS. I suggest that Anandtech has failed by not discussing this.

    Techspot says:

    "There’s simply no way you’re going to avoid thermal throttling without spending around $100 on the cooler, at least without your PC sounding like a jet about to take off. Throw in the Corsair H100i Pro and the 9900K now costs $700 and you still can’t overclock, at least not without running at dangerously high temperatures."

    Why the focus on single threaded benchmarks? For the most part they are irrelevant. Yet they are posted in their own graph, at the front of each testing section, as though they were the most important data point. Just include them as a separate bar with the multi-thread benchmarks. Good Grief!

    Why post MSRP prices in every single benchmark? You can't even buy them for MSRP. There should be a single chart at the front of the article with a rough retail estimate for each processor, and links to the retailers. If the MSRP is necessary, then just add a column to the chart. Sheesh.

    Why no in depth cost/benefit comparison? A Ryzen 2600 with included cooler at $160 costs only one quarter of a 9900k with an aio cooler at $700. The $540 difference would buy a new RTX 2070 video card. Or three more Ryzen 2600's. For crying out loud.

    I like the 9900k, it's a good processor. It's intended for hobbyists that can play with custom loop cooling. But it's not realistic for most people.
  • mapesdhs - Sunday, October 21, 2018 - link

    All good questions... the silence is deafening. Thankfully, there's plenty of commentary on the value equation to be found. A small channel atm, but I like this guy's vids:

    https://www.youtube.com/watch?v=EWO5A9VMcyY
  • abufrejoval - Friday, October 19, 2018 - link

    I needed something a little bigger for my lab two or three years ago and came across an E5-2696v3 on eBay from China, a Haswell generation 18-core at $700.

    That chip didn't officially exist, but after digging a little deeper I found it's basically an E5-2699v3 which clocks a little higher (3.8 instead of 3.6GHz) with 1-2 cores active. So it's basically a better chip for a fraction of the going price of the lesser one (the E5-2699v3 is still listed at €4649 by my favorite e-tailer). And yes, it's a perfect chip: Prime95'd it for hours, POV-Ray'd and Blender'd it for days until I was absolutely sure it was a prime quality chip.

    Officially it has a 145 Watt TDP, but I've only ever seen it go to 110 Watts in HWiNFO with Prime95 at its meanest settings: it must be a perfect bin. With the particle pusher it's never more than 93 Watts, while no part of the CPU exceeds 54°C with a Noctua 140mm fan, practically inaudible at 1000rpm, cooling it. That's because the 18 cores and 36 threads never run faster than 2.8GHz fully loaded. They also don't drop below it (except for idle, 1.855 Watts minimum btw.), so you can pretty much forget about the 2.3GHz 'nominal' speed.

    It gets 2968.245803 on that benchmark, slightly above the i9-9900k, somewhat below the ThreadRipper. That's 22nm Haswell against 14++/12nm current and 18 vs 8/12 cores.

    This is rather typical for highly-threaded workloads: It's either cores or clocks and when the power ceiling is fixed you get higher throughput and energy efficiency when you can throw cores instead of clocks at the problem.

    I think it's a data point worth highlighting in this crazy clock race somewhat reminiscent of Pentium 4 days, heat vs. efficiency, a four year old chip beating the newcomer in performance and almost 3:1 in efficiency at far too similar prices.

    Yet, this specific chip will clock pretty high for a server chip, easily doing 3.6 GHz with eight cores seeing action from your game engine, while the remaining ten are often ignored: Perhaps that's a Ryzen effect, it used to be 4:14 earlier.

    I've done a BCLK overclock of 1.08 to have it reach the magic 4GHz at maximum turbo, but it's not noticeable in real life running neck and neck against an E3-1276v3, which also turbos to 4GHz on three cores out of the four available, 3.9 at 4/4 with HT.
  • abufrejoval - Friday, October 19, 2018 - link

    2968.245803 on the particle pusher benchmark... need edit
  • icoreaudience - Friday, October 19, 2018 - link

    Move away from rar/lzma: the new darling of data compression is called Zstandard:
    https://www.zstd.net

    It comes with a nice integrated benchmark, which can easily ramp up with multithreading:
    zstd -b -1 -T8 fileToTest # benchmark level one on fileToTest using 8 threads

    Windows users can even download a pre-compiled binary directly from the release notes:
    https://github.com/facebook/zstd/releases/latest

    It would be great to see some numbers using this compressor on the latest Intel cores!
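    For anyone who would rather script it than use the CLI, a minimal single-threaded timing sketch against libzstd's one-shot API looks roughly like this (the file path, level and error handling are placeholders of mine; link with -lzstd, and use the CLI's -T flag above if you want multithreaded numbers):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include <zstd.h>

        /* Read a file into memory, compress it once at the requested level,
         * and report size and throughput. Keeping the data in RAM keeps disk
         * I/O out of the measurement. */
        int main(int argc, char **argv) {
            if (argc < 3) { fprintf(stderr, "usage: %s file level\n", argv[0]); return 1; }
            FILE *f = fopen(argv[1], "rb");
            if (!f) { perror("fopen"); return 1; }
            fseek(f, 0, SEEK_END);
            long srcSize = ftell(f);
            fseek(f, 0, SEEK_SET);
            void *src = malloc(srcSize);
            fread(src, 1, srcSize, f);
            fclose(f);

            size_t dstCap = ZSTD_compressBound(srcSize);
            void *dst = malloc(dstCap);

            clock_t t0 = clock();
            size_t cSize = ZSTD_compress(dst, dstCap, src, srcSize, atoi(argv[2]));
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

            if (ZSTD_isError(cSize)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(cSize)); return 1; }
            printf("level %s: %ld -> %zu bytes, %.1f MB/s\n",
                   argv[2], srcSize, cSize, srcSize / 1e6 / secs);
            free(src); free(dst);
            return 0;
        }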
  • Kaihekoa - Friday, October 19, 2018 - link

    Looks like all your gaming benchmarks are GPU bound and there pointless. Why not use a 2080 Ti to eliminate/reduce GPU bottleneck?
  • Kaihekoa - Friday, October 19, 2018 - link

    therefore*
  • palladium - Friday, October 19, 2018 - link

    Can you please run some SPEC2006 benchmarks and see if Apple's SoC really has caught up to Intel's performance (per core), as mentioned by Andrei in his iPhone XS review? Thanks
  • VirpZ - Friday, October 19, 2018 - link

    Apart from Blender, your review is full of Intel-biased software for rendering.
  • Hifihedgehog - Friday, October 19, 2018 - link

    Hey Ian. I see your updated full load power consumptions results. Question: Why is it that the six-core i7-8086K is drawing so little power in comparison to everything else including the quad-cores? Is this due to its better binning or is this simply an error that crept in?
  • 29a - Saturday, October 20, 2018 - link

    "The $374 suggested retail price is a bit easier to digest for sure, with the user safe in the knowledge that no two threads are sharing resources on a single core."

    If that statement isn't putting lipstick on a pig then I don't know what is. That is some major spin right there; you should think about being a politician. I generally feel safe that the scheduler will take care of which threads go to which core.
  • Nikorasu95 - Saturday, October 20, 2018 - link

    Did I just fu*king downgrade by purchasing the i9 9900K when I have the i7 8700K? Like WTF? Some gaming results show the i7 is beating the i9. Like what is going on here? The i9 should be ahead of both the i7 8700K, and 8086K in all gaming tests considering it has 2 extra cores. Once again WTF is going on here with these results? They are inconsistent and make no sense!
  • mapesdhs - Sunday, October 21, 2018 - link

    Honestly, this is why one should never preorder; wait for reviews. You could also just do a return, go back to the 8700K, and save the money for a future GPU upgrade, which would be better for gaming anyway.
  • dustwalker13 - Saturday, October 20, 2018 - link

    The i9-9900K is a strange animal.

    If I want workloads, I can get a Threadripper for basically the same price with better performance in that area.
    If I want gaming, I can get a 2700X for much less (plus savings on motherboard and cooler) and get a better GPU for that money, netting me higher fps in total.

    This part only makes sense if I want to check one single box: get all the parts that net me the absolute highest fps in gaming exclusively, without any compromise, no regard for cost/performance ratios, and no other usage scenario like productivity in mind.

    The potential customer group seems very limited in that respect. The i9-9900K just does not make sense for anyone but a statistics-crazy gamer with too much money on their hands. For everything else - and especially for anyone who does a basic value comparison, even on the high-end side of gaming - the 9700K and especially the 2700X are just hands down the better picks.
  • jabber - Saturday, October 20, 2018 - link

    Yeah, to be honest Intel is just redundant price-wise. As you say, I'd rather save $200-$250 and put the money into, say, an extra $50/$60 each on RAM/GPU/motherboard and SSD.
  • jabber - Saturday, October 20, 2018 - link

    So looking at those graphs, AMD at around $360 is the sweetspot.
  • daxpax - Saturday, October 20, 2018 - link

    It's funny how there is no 2700X included in the benchmarks where it tops Intel. This is as deceptive as the previous Principled Technologies benchmarks. Haha, I thought you were a transparent reviewer.
  • daxpax - Saturday, October 20, 2018 - link

    This is clearly an Intel-paid article, and you at AnandTech should have told us it is a paid article. Shame on you.
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Seems like a balanced article to me.
  • mapesdhs - Sunday, October 21, 2018 - link

    Do you think it's balanced to refer to MSRP rather than typical retail pricing?
  • mkaibear - Sunday, October 28, 2018 - link

    Yes. Because MSRP is set by the manufacturer and the retail price is set by the retailer. And otherwise they'd have to update the article every single time a price changes.
  • Outlander_04 - Saturday, October 20, 2018 - link

    So not value for money, definitely not value for money for gamers, and TWO HUNDRED AND TEN INSANE WATTS OF POWER DRAW.
    Funniest thing I have heard for a while now
  • Tkan2155 - Saturday, October 20, 2018 - link

    No way am I getting this at 180 watts. 7nm will help to save energy. AMD needs to take down Intel. Let's do it together. I cannot stand Intel anymore.
  • AutomaticTaco - Saturday, October 20, 2018 - link

    Revised power consumption. The first motherboard was over-volting the chip.
    https://www.anandtech.com/show/13400/intel-9th-gen...

    Also, when overclocked and set to 1.075 V for the CPU, consumption actually dropped to 127 W max.
    https://www.anandtech.com/show/13400/intel-9th-gen...
  • WannaBeOCer - Saturday, October 20, 2018 - link

    Can you post power consumption with MCE off? The 9900K is a 4.3GHz processor, not a 4.7GHz one. MCE on Auto on Asus boards boosts the CPU to 4.7GHz on all 8 cores.
  • mapesdhs - Sunday, October 21, 2018 - link

    I remember there was much debate a year or so ago about that, but the whole issue seems to have faded away.
  • Synomenon - Saturday, October 20, 2018 - link

    Will the "Thermalright TRUE Spirit 120M BW Rev.A" with push / pull fans be enough to cool the 9900K?

    http://thermalright.com/product/true-spirit-120m-b...
  • mapesdhs - Sunday, October 21, 2018 - link

    At stock, maybe. Oc'd, almost certainly not.
  • daxpax - Saturday, October 20, 2018 - link

    Funny there's no 2700X included in the benchmarks where AMD has an advantage. Clearly an Intel-sponsored article.
  • kaosou - Saturday, October 20, 2018 - link

    I have a bit of a problem with the price you put on all the charts for the Threadripper 1920X: at the time the article was posted, you could find the TR 1920X at around USD 400, and the USD 799 price you put in the chart is very misleading.
  • mapesdhs - Sunday, October 21, 2018 - link

    Tell that to AutomaticTaco, his posts read like a shill mission atm.
  • PG - Saturday, October 20, 2018 - link

    How is the 2600X beating the 2700X in Ashes?
    How is the 1800X beating the 2700X in AES?
    2700X results are too low in some areas.
  • eastcoast_pete - Saturday, October 20, 2018 - link

    @Ian / Anandtech: With the high premium over the MSRP for a 9900K, the difference vs. an 8700K is easily $ 200 as of now. So, here a suggested comparison that even stays in the Intel family: A comparison of a system with the 9900K with the (obligatory) high-end air cooler (so, another $ 100) vs. an 8700K based system at the same price point. Both with the identical graphics card (1080 GTX or 2070), but with the money saved with the 8700K then spent on delidding, a nice liquid cooler AND really fast DDR4? I believe that latter could really make a difference: While Intel's memory controller specifies rather slow DDR4 RAM, it's well known that one can effectively make use of much faster DDR4 RAM, and that has been shown repeatedly at least for the 8700/8700K. So, in a dollar-for-dollar matched comparison, would the 9900K then still be the king of the hill? I, for one, doubt it.
  • eastcoast_pete - Sunday, October 21, 2018 - link

    I have to retract my own comment after checking prices at Newegg and Amazon. The current Intel 14nm shortage has now also driven 8700/8700K prices far above their MSRP. This invalidates the performance/price = value equation my comment was based on, although the 8700K is still notably less than the even more overpriced (and out of stock) 9900K. Right now, building an Intel i7 rig is really questionable, unless one really, really wants (thinks one needs) those last few fps in some games and has plenty of money to burn. Assuming one uses the same video card, a Ryzen 2700 (or 2700X) setup with 16 GB of fast DDR4 RAM is cheaper, and if overclocking is on your mind, spend the difference versus an 8700 (K or not) on a good liquid cooling setup.
  • mapesdhs - Sunday, October 21, 2018 - link

    For gaming, what it effectively does is push the "on the same budget" equation firmly into the camp of buying a 2700X and using the saving to get a better GPU. Only time this wouldn't apply is if someone does not have any kind of budget limit, but that has to be a tiny and largely irrelevant minority.
  • SaturnusDK - Tuesday, October 23, 2018 - link

    If you're planning to have a decent GPU and game at 1440p or higher, then absolutely no Intel CPU, at any price point, makes sense to buy at the moment. The 2700X is less than $300 at the moment, about half the price of a 9900K, and the 2600 is $160, about half the price of an 8700K. Both AMD CPUs match or are only marginally behind their respective core/thread Intel equivalents at double the price.
  • coburn_c - Saturday, October 20, 2018 - link

    Under the Mozilla Kraken label you have a power consumption graph.
  • Rumpelstiltstein - Saturday, October 20, 2018 - link

    "Intel Core i9 9900K: The fastest gaming CPU"

    Uh, really Intel? Looks like that's the 9700K.
  • The Original Ralph - Saturday, October 20, 2018 - link

    Looks like all this might be a moot point for a while: Amazon hasn't started shipping, Newegg is not only stating "out of stock" but "NOT AVAILABLE", and B&H Photo is showing the availability date as "JAN 1, 2010" - I kid you not. I suspect there's an issue with Intel deliveries.
  • The Original Ralph - Saturday, October 20, 2018 - link

    sorry, B&H's availability date should be JAN 1, 2100
  • eastcoast_pete - Saturday, October 20, 2018 - link

    JAN 1, 2100? Intel's manufacturing problems must be at lot more serious than we knew (:
    I wonder if the 9900K will be supported by "Windows 21" when they finally ship?
  • cubebomb - Saturday, October 20, 2018 - link

    You guys need to stop posting 1080p benchmarks for games already. Come on now.
  • gammaray - Sunday, October 21, 2018 - link

    I agree, 1440p and higher, especially with the top CPUs
  • mapesdhs - Sunday, October 21, 2018 - link

    They would of course respond that they have to show 1080p in order to reveal CPU differences, even if the frame rates are so high that most people wouldn't care anyway. I suppose those who do game at 1080p on high refresh monitors would say they care about the data, but then the foundation of the RTX launch is a new pressure to move away from high refresh rates, something the aforementioned group of gamers physically cannot do.
  • piroroadkill - Monday, October 22, 2018 - link

    They need to show a meaningful difference between CPUs. Setting a higher resolution makes the tests worthless, as you'll just be GPU bottlenecked.
  • eva02langley - Monday, October 22, 2018 - link

    They are important since they put the CPU bottleneck in perspective; however, it is widely over-preached.

    1080p, 1440p and 2160p at max settings... enough said. Without multiple resolutions benchmarks, it is impossible to get a clear picture of the real performances to expect from a potential system.

    Basically, though, a value rating system is now MANDATORY. It doesn't make any sense that the 9900K received 90%+ scores on Tom's and WCCF. It offers abysmal value for gamers, so it is not "The Best Gaming CPU"; it is, however, the "strongest".
  • DominionSeraph - Monday, October 22, 2018 - link

    It's $110 over the i7. If you're looking at a $2500 i7 rig, going to $2610 with an i9 is a 4% increase in price. Looks to me like it generally wins by over 4%. That's a really good value for a content creator since it stomps the i7 by over 20%.
  • Chestertonian - Wednesday, February 27, 2019 - link

    No kidding. Why are there barely any 1440p benchmarks, but there are tons of 8k benchmarks? I don't get it.
  • avatar-ds - Sunday, October 21, 2018 - link

    Something's fishy with the 8086K consistently underperforming the 8700K in many (most?) gaming tests by more than a margin of error, where the differences are significant enough. It undermines the credibility of the whole thing.
  • DominionSeraph - Sunday, October 21, 2018 - link

    The 8700k is also pulling 150W while the 8086k is 95W. Something's not right there.
  • _mat - Wednesday, November 7, 2018 - link

    There can be two reasons why that is the case:

    1) The mainboard settings for Power Limits were different.
    2) The 8086K ran into Power Limit 1 while the 8700K did not.

    Whatever the case here, there is no doubt that the 8086K ran into Power Limit 1 after the "Time Above PL1" (= power budget) was depleted. The 95 Watts are exactly the specified TDP of the CPU, and Intel recommends this as the Power Limit 1 value.

    So the problem here is that the Power Limits and Current Limits of the motherboard are not properly documented and seem to differ between the test candidates. While the 8086K obviously had Power Limits in place, the 9th gen CPUs were benched with no limits at all (only a temperature limit of 100 °C on a core).

    Also, the whole page on power consumption needs rework. The TDP does matter depending on the board and its default settings.
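    As a rough illustration of that budget behaviour (my own toy model, not Intel's actual firmware algorithm, and every number below is a placeholder): the sustained limit acts roughly like a moving average of package power over the Tau window - the chip may draw up to PL2 until the average reaches PL1, after which it is clamped back to PL1, which is the "Time Above PL1 depleted" drop described above.

        #include <stdio.h>

        /* Toy model of PL1/PL2/Tau: boost at PL2 while the moving average of
         * power is under PL1, then clamp to PL1 once the budget is spent. */
        int main(void) {
            const double pl1 = 95.0;   /* sustained limit, W (example value) */
            const double pl2 = 210.0;  /* short-term turbo limit, W (example value) */
            const double tau = 28.0;   /* averaging window, s (example value) */
            double avg = 65.0;         /* assumed starting average draw, W */
            for (int t = 0; t <= 120; t++) {            /* one sample per second */
                double draw = (avg < pl1) ? pl2 : pl1;  /* boost while budget remains */
                avg += (draw - avg) / tau;              /* simple EWMA update */
                if (t % 20 == 0)
                    printf("t=%3ds  draw=%6.1f W  avg=%6.1f W\n", t, draw, avg);
            }
            return 0;
        }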
  • ballsystemlord - Sunday, October 21, 2018 - link

    Ian! Many of your tests (Y-Cruncher multithreaded, AppTimer, FCAT - ROTR, WinRAR) are taking too short a time. You need some differentiation here! Please make them harder.
  • R0H1T - Sunday, October 21, 2018 - link

    >In case the previous comment was missed.

    I see that the last few pages have included a note about the Z390 board being used because the Z370 board was over-volting the chip? Yet on the Overclocking page we see the Z370 listed with a max CPU package power of 168 Watts. Could you list the (default) auto voltage applied by the ASRock Z370 and, if appropriate, update the charts on the OCing page with the Z390 as well?
  • mapesdhs - Sunday, October 21, 2018 - link

    "Intel has promised that its 10nm manufacturing process will ramp through 2019, ..."

    Ian, what promises did Intel make 2 years ago about what they would be supplying now?
  • eastcoast_pete - Sunday, October 21, 2018 - link

    My guess is that Intel is now printing those promises in 10 nm font size (easily readable with a standard electron microscope). See, they moved to 10 nm by 2018!
  • ballsystemlord - Sunday, October 21, 2018 - link

    Actually, fonts are measured in points. So, it's 10pt, and it's rather legible.
    But, as for products, I don't see any either.
  • darkos - Sunday, October 21, 2018 - link

    Nice review, but please add a flight simulator such as X-Plane, Prepar3D, or FSX. This is an area that is sadly missing from your reviews.
  • kasboh - Monday, October 22, 2018 - link

    Do I see it correctly that there is little benefit from Hyper-Threading with 8-core CPUs?
  • eXterminuss - Monday, October 22, 2018 - link

    I am quite shocked to see that AnandTech is using a vastly outdated and, in parts, plainly wrong description for World of Tanks:
    1. The enCore engine has been used in World of Tanks for quite a while now (10 months).
    2. World of Tanks is a free-to-play game, with no elements hidden behind a paywall, i.e. no more features for a paying customer than for a free player.
    3. Since the outdated enCore benchmark was used, I would have at least expected to see the results of that benchmark posted as well.
    Sincerely yours,
    eXterminuss, a World of Tanks player
  • muziqaz - Monday, October 22, 2018 - link

    I love the price of $488 stamped all over each of the test results, while over here in the UK I see a price of £599 and Newegg quotes $580. Even your linked Amazon has it at $580. And the conclusion is awesome with: "At $488 SEP, plus a bit more for 'on-shelf price'..." Since when is an extra 100 bucks a bit more? :D
  • compudaze - Monday, October 22, 2018 - link

    What was the actual vcore for your overclocks?
  • HardwareDufus - Monday, October 22, 2018 - link

    i7-9700K... an i7 that isn't hyperthreaded... let's totally muddy the waters now, Intel... Guess they had to save some feature for the i9's $100+ surcharge... Good grief.
  • bogda - Tuesday, October 23, 2018 - link

    How pointless is the reviewer's comment: "... World of Tanks gives the 9900K some room to stretch its legs..."?
    The difference between the two chips in discussion is 712 fps versus 681 fps! Not even Neo from The Matrix could notice the difference.

    How pointless is discussing top of the line CPU gaming performance in 720p in any game??

    How pointless is marketing 8C/16T CPU for gamers???
  • sseyler - Tuesday, October 23, 2018 - link

    Not sure whether this has been pointed out yet, but the Threadripper prices need to be updated. For example, the 1920X is now well under $500 as advertised even on AMD's website and the 1900X goes for $350 on Newegg.
  • dlum - Tuesday, October 23, 2018 - link

    For me, listing long-obsolete prices for AMD processors (still the initial, long-outdated MSRP of $799 for the 1920X - whereas a simple Amazon search confirms it now goes for just over half of that, $431) is a clearly disrespectful and shameful practice for a reviewer.

    It's very sad such dishonest practices found their way to Anandtech and they are so prominent here.

    That's probably also why no one answers or fixes those clearly misleading figures.

    (Maybe that's the cost of being able to read such nonetheless valuable reviews for free :)
  • sseyler - Thursday, October 25, 2018 - link

    Well, to be fair, I'm sure the editors didn't dig this deeply through the comments. They're busy people.

    Also, I think I heard something mentioned before about their graphs having some semi-automatic mechanism for listing prices and the like. I don't remember exactly, but it probably has something to do with pulling MSRP data and it's difficult to change given the way the templated graphs are generated from the benchmarks.

    I imagine it was done something like this for consistency across the site as well as not biasing prices according to specific vendors. Given the first reason, I don't know why it'd be difficult for individual editors to customize/tweak certain aspects, but maybe that needs to be revised to be more flexible. As for the second reason, there are clearly reasonable solutions, like finding the *current* MSRP (rather than the release MSRP), or selecting the lowest/median/average price among a pool of selected retailers.

    Anyway, it doesn't make much sense to me to characterize this as an instance of dishonesty, but rather a technical detail that's important enough to invest the time in it's improvement.
  • sseyler - Thursday, October 25, 2018 - link

    its*
  • zodiacfml - Wednesday, October 24, 2018 - link

    Meh. Intel owners could simply delid and approach this kind of performance.
    At resolutions above 1080p, AMD's parts have better value.
  • zodiacfml - Wednesday, October 24, 2018 - link

    I made the comment without reading the review. The difference is a lot smaller than I expected; the only meaningful difference is in Ashes, where AMD usually dominates due to sheer core count.
    I'd be fine with that 6-core CPU from AMD.
  • SanX - Thursday, October 25, 2018 - link

    How come the i7-7800X outperforms the i9-9900K by a killing factor of 3-4x in particle movement? Is it not as "hand tunable" as older gen chips?
  • davidk3501 - Thursday, October 25, 2018 - link

    This is an overclockable processor, allowing users to push the frequency if the cooling is sufficient, and despite the memory controller still being rated at DDR4-2666, higher speed memory should work in almost every chip. The Core i9-9900K also gets a fully-enabled cache, with 2 MB available per core for a chip-wide total of 16 MB.
  • ashlord - Thursday, October 25, 2018 - link

    My son's 4690K just blew up at such a shitty time. 8th gen 8400 is a decent replacement but 9th gen is out, so I don't really want to buy a previous gen item. I am guessing the '9400' will be out in a month or two. Going the AMD route has its issues too. It seems that AMD processors still have some issues with virtual appliances built using an older kernel. And in the past 30 years of computer ownership, I have never upgraded the processor. Components like motherboard or ram usually fail way before the CPU goes poof.

    In my country, R5 2600 w/Gigabyte Aorus B450M, 16GB of TridentZ RGB and a Cryorig M9+ goes for S$751. 8400 with MSI H310M Pro-M2, G.Skill Ripjaws V2400 and the same cooler goes for S$710.

    Argh!!! I don't know what to choose! Or maybe I should just give him my 6700K and get myself a new shiny toy.
  • nukunukoo - Friday, October 26, 2018 - link

    I'm glad competition from AMD is back. Just a little over three years ago, an 8-core Intel would be a Xeon costing an arm and a leg!
  • Dragonrider - Monday, October 29, 2018 - link

    Just a note re the IGP: if you are going to try to watch 4K Blu-ray on your computer, you NEED that Intel IGP. I don't think there is any other solution to the DRM. For some, that alone would be a reason to get the Intel processor, all else being in the same ballpark.
  • y2k1 - Wednesday, October 31, 2018 - link

    What about performance per watt? Is it basically the same as last gen?
  • hanselltc - Thursday, November 1, 2018 - link

    What about the 9700K vs the 9900K in gaming, though?
  • Always_winter - Wednesday, November 28, 2018 - link

    What CPU cooler did you use?
  • poohbear - Monday, December 10, 2018 - link

    Wow, that 10nm CPU is taking forever, eh? AMD is set to release 7nm CPUs next month, and Intel can't produce 10nm in 2019? What happened exactly?
  • ROGnation7 - Saturday, February 23, 2019 - link

    Watching all these benchmarks nowadays, and taking into account how well optimised games are these days, at least the AAA titles, makes you wonder if it is even worth spending more than 300-350 bucks on a CPU for gaming. Just look at the i5-9600K and R5 2600X going toe to toe with high-end CPUs with a decent graphics card.
  • hikaru00007 - Thursday, February 28, 2019 - link

    If only someone would hire me as a PC product tester... :| I really love tech evolution, but I can't touch it; I can't afford it. The dollar exchange rate in Indonesia is way toooooo high.
  • sys - Friday, October 18, 2019 - link

    It is very funny to talk about power consumption without referring to the temperature.
  • ferit - Sunday, December 8, 2019 - link

    You may think nothing matters after the 6th gen, but for me, the support for VP9 decoding makes the 7th gen the most important one.
