snowranger13 - Monday, October 29, 2018 - link
On the AMD SKUs slide you show Ryzen 7 2700X has 16 PCI-E lanes. It actually has 20 (16 to PCI-E slots + 4 to 1x M.2).
Ian Cutress - Monday, October 29, 2018 - link
Only 16 for graphics use. We've had this discussion many times before. Technically the silicon has 32.
Nioktefe - Monday, October 29, 2018 - link
Many motherboards can use those 4 additional lanes as a classic PCIe slot:
https://www.asrock.com/mb/AMD/B450%20Pro4/index.as...
mapesdhs - Monday, October 29, 2018 - link
Sure, but not for SLI. It's best for clarity's sake to exclude chipset PCIe in the lane count, otherwise we'll have no end of PR spin madness.
Ratman6161 - Monday, October 29, 2018 - link
Ummm... there are lots of uses for more PCIe besides SLI! Remember that while people do play games on these platforms, it would not make any sense to buy one of these for the purpose of playing games. You buy it for work, and if it happens to game OK then great.
TheinsanegamerN - Tuesday, October 30, 2018 - link
Is it guaranteed to be wired up to a physical slot? No?
Then it is optional, and advertising it as being guaranteed available for expansion would be false advertising.
TechnicallyLogic - Thursday, February 28, 2019 - link
By that logic, Intel CPUs have no PCIe lanes, as there are LGA 1151 Mini-STX motherboards with no x16 slot at all. I think a good compromise would be to list the CPU as having 16+4 PCIe lanes.
Yorgos - Friday, November 2, 2018 - link
For clarity's sake they should report the 9900K at a 250 W TDP. Selective clarity is Purch Media's approach, though.
The 2700X has 20 PCIe lanes, period. If some motherboard manufacturers use them for NVMe or as an extra x4 PCIe slot, it's not up for debate for a "journalist" to include or exclude them; they're fucking there.
Unless the money is good, ofc... everyone has their price.
TheGiantRat - Monday, October 29, 2018 - link
Technically the silicon of each die has a total of 128 PCI-E lanes. Each die on Ryzen Threadripper and Epyc has 64 lanes for external buses and 64 lanes for IF. Therefore, the total is 128 lanes. They just have it limited to 20 lanes for consumer-grade CPUs.
atragorn - Monday, October 29, 2018 - link
Why are the Epyc scores so low across the board? I don't expect it to game well, but it was at the bottom or close to it for everything, it seemed.
Ian Cutress - Monday, October 29, 2018 - link
EPYC 7601 is 2.2 GHz base, 3.2 GHz Turbo, at 180W, fighting against 4.2+ GHz Turbo parts at 250W. Also the memory we have to use is server ECC memory, which has worse latencies than consumer memory. I've got a few EPYC chips in, and will be testing them in due course.
mapesdhs - Monday, October 29, 2018 - link
Does the server memory for EPYC run at lower clocks as well?
GreenReaper - Wednesday, October 31, 2018 - link
ECC RAM typically runs slower, yes. It's correctness that you're looking for first and foremost, and high speeds are harder to guarantee against glitches, particularly if you're trying to calculate or transfer or compare parity at the same time.
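For a sense of the extra bookkeeping involved, here is a toy sketch of a single-bit parity check, the simplest ancestor of what ECC memory does on every access. It's a sketch only: real DIMMs use SECDED Hamming codes that can also correct single-bit errors, and __builtin_popcountll assumes GCC/Clang.

```cpp
// Toy model of the per-word parity bookkeeping ECC memory performs.
#include <cstdint>
#include <cstdio>

static int parity(std::uint64_t word) {
    return __builtin_popcountll(word) & 1;    // 1 if an odd number of bits are set
}

int main() {
    std::uint64_t word = 0xDEADBEEFCAFEF00D;
    int stored = parity(word);                // computed when the word is written

    word ^= 1ull << 17;                       // a stray bit flip in "DRAM"

    // On read, the controller recomputes parity and compares with the stored bit.
    std::printf(parity(word) != stored ? "bit error detected\n"
                                       : "word looks clean\n");
}
```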
iwod - Monday, October 29, 2018 - link
Waiting for Zen 2.
Boxie - Monday, October 29, 2018 - link
Only Zen 2? Psshh - it was announced ages ago... /me is waiting for Zen 5 :P
wolfemane - Monday, October 29, 2018 - link
*nods in agreement* Me too, I hear good things about Zen 5. Going to be epyc!
5080 - Monday, October 29, 2018 - link
Why are there so many game tests with Threadripper? It should be clear by now that this CPU is not for gamers. I would rather see more tests with other professional software such as Autoform, Catia and other demanding apps.
DanNeely - Monday, October 29, 2018 - link
The CPU Suite is a standard set of tests for all chips Ian tests, from a lowly Atom all the way up to top-end Xeon/Epyc chips; it is not something bespoke for each article, which would limit the ability to compare results from one to the next. The limited number of "pro level" applications tested is addressed in the article at the bottom of page 4:

"A side note on software packages: we have had requests for tests on software such as ANSYS, or other professional grade software. The downside of testing this software is licensing and scale. Most of these companies do not particularly care about us running tests, and state it's not part of their goals. Others, like Agisoft, are more than willing to help. If you are involved in these software packages, the best way to see us benchmark them is to reach out. We have special versions of software for some of our tests, and if we can get something that works, and relevant to the audience, then we shouldn't have too much difficulty adding it to the suite."
TL;DR: The vendors of the software aren't interested in helping people use their stuff for benchmarks.
Ninhalem - Monday, October 29, 2018 - link
ANSYS is terrible from a licensing standpoint even though their software is very nice for FEA. COMSOL could be a much better alternative for high-end computational software. I have found the COMSOL representatives to be much more agreeable to product testing and the support lines are much better, both in responsiveness and content help.
mapesdhs - Monday, October 29, 2018 - link
Indeed, ANSYS is expensive, and it's also rather unique in that it cares far more about memory capacity (and hence, I expect, bandwidth) than cores/frequency. Before x86 found its legs, an SGI/ANSYS user told me his ideal machine would be one good CPU and 1TB RAM, and that was almost 20 years ago.
lilmoe - Monday, October 29, 2018 - link
Instead of all the 10+ pages of gaming benchmarks and client-side JavaScript for a platform that most probably won't be used solely for gaming or casual content work, wouldn't it be better to have a suite of more server-oriented benchmarks? These platforms are becoming very attractive for development and testing of server-side applications:
- gzip
- pdf conversion
- database transactions
- modern web services
- node.js
etc, etc...
I really see no value in gaming benchmarks. Not for this platform.
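As a rough illustration of the kind of server-side test being suggested, here is a minimal sketch of a gzip-style compression throughput benchmark written against zlib. The thread count, 16 MiB buffer, all-'A' contents, and compression level 6 are placeholder assumptions, not anything AnandTech actually runs:

```cpp
// Minimal gzip-style throughput sketch using zlib.
// Build: g++ -O2 -pthread gzip_bench.cpp -lz
#include <zlib.h>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const size_t kChunk = 16 * 1024 * 1024;           // 16 MiB of test data per thread
    std::vector<unsigned char> input(kChunk, 'A');    // trivially compressible placeholder
    unsigned threads = std::thread::hardware_concurrency();
    if (threads == 0) threads = 4;                    // hardware_concurrency may report 0

    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t) {
        pool.emplace_back([&input] {
            uLongf outLen = compressBound(input.size());
            std::vector<unsigned char> output(outLen);
            // Level 6 is zlib's default speed/ratio trade-off.
            compress2(output.data(), &outLen, input.data(), input.size(), 6);
        });
    }
    for (auto& th : pool) th.join();
    double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();

    double mib = double(kChunk) * threads / (1024.0 * 1024.0);
    std::printf("%u threads: %.1f MiB in %.2f s (%.1f MiB/s)\n",
                threads, mib, secs, mib / secs);
}
```

Running the same binary at different thread counts would give exactly the kind of scaling curve a many-core platform like this is bought for.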
mapesdhs - Monday, October 29, 2018 - link
You might not see the value, but your desire does not reflect that of others, and there's no harm in the data points. You're right though that server-side testing would be good, but is this site really the right place for that kind of testing? And from what I've read in the past, it can be rather more complicated to run those kinds of tests. AT has a wide audience; they have to think more broadly about to whom they can or should appeal.

However, you're wrong in one regard: the cost of the 12-core in particular looks to me like a rather nice alternative for those wanting decent gaming performance at 1440p or higher, but also good productivity potential. Given its cost, it seems like an ideal streaming/gaming/productivity all-rounder to me.
DominionSeraph - Monday, October 29, 2018 - link
The i9-9900K would be a better choice. It splits the heavily multithreaded benchmarks with the 12-core, is $160 cheaper for the CPU, and doesn't require a $400 motherboard.
eva02langley - Tuesday, October 30, 2018 - link
TechSpot's take:

"We didn't have time to retest the Core i9-7900X, but I can assure you with the data we have on hand the 2920X also dominates that part as well, mostly because the 10-core Intel CPU costs over 40% more. That just leaves the 9900K, and honestly, if productivity tasks are the focus then we believe the 2920X is the smarter buy. It will end up costing a little more overall but for applications that utilize the 12-core Threadripper CPU well, a heavily overclocked 9900K will melt trying to keep up."
TheinsanegamerN - Tuesday, October 30, 2018 - link
The i9-9900K would spend its time melting down under water cooling attempting to keep up, while costing more after the cooling solution than the Threadripper costs.
Icehawk - Monday, October 29, 2018 - link
Please provide your full Handbrake settings (IMO they should be linked in the article); you get about 3x faster encoding than I do at "Fast, Main, 3500 kbps". I'd love to triple my throughput.
mapesdhs - Monday, October 29, 2018 - link
It's amazing how some options in Handbrake can cut performance in half. I've been meddling with it a lot today; certain filters can really slow things down.
rony_ph - Monday, October 29, 2018 - link
Hello,

With all these Threadripper tests, how come we never see any reference or use-case scenarios for virtualization? CPUs with this many cores can easily be used to host multiple VMs, yet all the testing is mainly on office apps, gaming and 3D, never on virtualization and the advantages such a CPU would bring to those scenarios. I'm certain that there are tons of people using these chips to run VMware, Hyper-V, etc.
schujj07 - Monday, October 29, 2018 - link
You wouldn't use these for VMware or Hyper-V to run mission-critical VMs. You might use VMware Workstation with them to run sandbox systems.
rony_ph - Monday, October 29, 2018 - link
I never mentioned mission-critical systems. As a home or power user, a CPU like the 2990WX or 2970WX will easily let you have 60+ VMs running in parallel for your own testing and lab environment, while buying an equivalent from Intel in the same price range (not talking about Xeon) won't let you make half as many VMs. You can probably even run an Azure Stack on it for testing purposes. So the use of such a CPU is huge for an IT pro, for instance.
schujj07 - Monday, October 29, 2018 - link
You would be far too limited by RAM to run 60 VMs on that system. I've got 80 on dual Dell 7425s with dual 24-core Epycs and 512GB RAM, and I'm already getting RAM-limited.

Again, I wouldn't install ESXi on these. Use Win 10 and Workstation for your test/dev and you will have a more agile system. If you don't need it for testing that day, you still have Windows. FYI, I'm a VMware admin.
Ratman6161 - Monday, October 29, 2018 - link
All depends... in my home lab environment (which lets me test things at will and do whatever I want, as opposed to at work where even the lab is more locked down), the Threadrippers would be great for me... but extreme overkill. I actually use old FX-8320s which I bought when they were dirt cheap, and DDR3 RAM was cheap too. The free version of ESXi works fine for me as well. For my purposes the Threadrippers would be really cool but more expensive than they would be worth.
Icehawk - Monday, October 29, 2018 - link
I would love one of these high-core-count boxes for our test lab; using W10 and VMs on my desktop is very limiting for me (work rig is a 7700 & 32GB) - one of these would let me put plenty of resources onboard. Currently my lab runs off a G6 Dell server, which is totally fine, but if I could get myself a new, personal lab I'd want a TR rig, since it can host a lot more RAM than Intel's option.
odrade - Tuesday, October 30, 2018 - link
Hi, I completely agree with you.

With security enhancements moving to sandboxes/VMs (Application Guard, sandboxed Defender in 19H1), virtualization scenarios will become more prevalent beyond developer or test scenarios.
One major disappointment is that, 12+ months after general availability, there is still no support for nested virtualization on TR/TR2, Ryzen, or Epyc.
This issue seems to be general and not limited to Hyper-V (KVM, etc. as well).
This is strange, since EPYC has made its way into the Azure and Oracle Cloud catalogs.
During Ignite 2018 there was a demo with an EPYC box (VM or server).
Regards G.
GreenReaper - Wednesday, October 31, 2018 - link
You could ask for Hyper-V support over here:
https://windowsserver.uservoice.com/forums/295047-...
But such features are often buggy in their initial implementations:
http://www.os2museum.com/wp/vme-broken-on-amd-ryze...
https://www.reddit.com/r/Amd/comments/8ljgph/has_t...
It wouldn't surprise me if they ran into too many problems to want to push out a solution. And Intel has had issues here too - most recently L1 Terminal Fault relating to EPT:
https://www.redhat.com/en/blog/understanding-l1-te...
If people buy enough of them, and there is a performance benefit or it otherwise becomes a feature differentiator, support will doubtless be developed. Chicken and egg, I know.
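For anyone wanting to check their own setup, here is a minimal sketch (assuming GCC/Clang on x86) that reads CPUID to see whether AMD-V (SVM) is exposed in the current environment. Run inside a guest, a cleared bit means the hypervisor isn't passing the extension through, which is the prerequisite for nested virtualization:

```cpp
// Check whether AMD-V (SVM) is visible via CPUID.
// CPUID leaf 0x80000001: ECX bit 2 reports SVM support on AMD CPUs.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        std::puts("extended CPUID leaf not available");
        return 1;
    }
    std::puts((ecx & (1u << 2)) ? "SVM exposed: nested virtualization is possible"
                                : "SVM not exposed in this environment");
}
```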
odrade - Monday, November 5, 2018 - link
Hi,

Thanks for your inputs. This feature is handy if you want to build advanced lab scenarios while preserving your work environment, or to avoid the hassle of dual booting. Maybe this feature will be enabled with the 2019 Epyc/TR iteration.
And if the socket and compatibility promises are kept by AMD, refreshing my setup will do it and put those extra PCIe lanes to use (upgrading storage as well). At least the 7nm process will help keep the power in line.
Regards G.
Blindsay - Monday, October 29, 2018 - link
For the chart on the last page, the "12-core Battle", it would be interesting to see a "similar price battle" of, say, the 9900K vs 7820X vs 2920X. I suspect the 9900K would hold up rather well, especially once it returns to its SRP.
mapesdhs - Monday, October 29, 2018 - link
A battle for what? If it's gaming, get the far cheaper 2700X and use the difference to buy a better GPU, giving better gaming results by default (there are some niche cases at 1080p, but in general the 9900K is a poor-value option for gaming, except for those who've gone the NPC route into high-refresh displays from which there's no way back - ironic now that NVIDIA has decided to move backwards to sub-60Hz 1080p with RTX).
Blindsay - Monday, October 29, 2018 - link
Definitely not for gaming, lol. It is for a home server (Unraid).
PeachNCream - Tuesday, October 30, 2018 - link
That's a lot of compute for a home server. Home servers (outside of those used for the development of professional skills, or to test software outside of a setting where there are office usage policies) serve very limited useful purposes. They're mainly a solution looking for a problem, or just fun to mess around with. I have an old C2D E8400-powered desktop PC with 8GB of RAM that I just recently put online as a local file, media, and internal web server, connected via a cheap TP-Link PCI (non-e) wifi card. There's nothing that the kids and I have done to it yet that brings it anywhere close to its knees. Even streaming videos from it to three other systems at once is a non-issue, and all of those files are stored on a single 1TB 5400 RPM 2.5-inch mechanical HDD. TR is extreme overkill for a toy server at home. Literally any old scavenged desktop or laptop can act as a home server.
euler007 - Tuesday, October 30, 2018 - link
For gaming, an 8600K will beat the 2700X and is priced 16% lower (just checked Newegg prices).
Stuka87 - Monday, October 29, 2018 - link
Hey guys, just a quick correction: World of Tanks has been using the Encore engine for six months now, so it's not an unreleased engine. But it is a great engine - incredible performance for the graphics that it offers.

Great article otherwise.
br83taylor - Monday, October 29, 2018 - link
Can you clarify if your benchmarks are with PBO enabled or disabled?
hoohoo - Monday, October 29, 2018 - link
Civ 6 - the slowest-paced strategy game ever, now rendered at high frame rates!
hansmuff - Monday, October 29, 2018 - link
Just my personal wish list: can you make 1080p the new 720p, drop 720p altogether, but add in 1440p? I feel that's a pretty common resolution these days, and affordable high-refresh-rate screens with FreeSync and G-SYNC are available. I think it would mean more to people than the mostly artificial 720p.
nevcairiel - Monday, October 29, 2018 - link
720p may not have many real-world use cases anymore; however, it does clearly show CPU performance scaling in games while removing most GPU bottlenecks entirely. It's definitely an interesting metric on that basis alone.
mapesdhs - Monday, October 29, 2018 - link
People argue this one a lot. Some will say 720p is so unrealistic that what's the point? It shows differences at a resolution that virtually nobody uses, so who cares? Anyone buying this class of hardware is far more likely to be gaming at 1080p at least, and more likely 1440p or higher. Others say that using a low resolution allows the test to serve as a pseudo CPU test, but it's hard to escape the criticism that such testing is still not real-world in any useful sense. Interesting from a technical perspective perhaps, but not *useful* when it comes to making a purchasing decision.
GreenReaper - Wednesday, October 31, 2018 - link
It helps if you plan to keep the CPU around for when you buy your next video card, which might *then* be CPU-limited when running 1440p. You're basically finding out what happens when the CPU is the bottleneck, which it might be in the future. For example, consider people who upgraded i7-3770K systems with modern video cards; AMD chips of that era (e.g. FX-8370) haven't held up so well.

At the same time, if you plan to hand down the system to someone else and get a new one in three years' time, or repurpose it as a home server, the future potential may not matter to you at all.
SLVR - Monday, October 29, 2018 - link
Why no 9900K power consumption figures?
mapesdhs - Monday, October 29, 2018 - link
The IR radiation off the chip melted the power meter. ;)
The Hardcard - Monday, October 29, 2018 - link
I am not clear on this: can I get a 4-active-die TR for rendering and then turn off the 2 parasite dies when they are a disadvantage? Say, make the 2990WX operate as a 2950X with the same performance and power?

I am not clear if that is what Dynamic Local Mode is offering. I'd like to be able to do that, whether there is an official AMD path or the community finds another way.
BikeDude - Monday, October 29, 2018 - link
"Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you're only presenting half a picture."

Many moons ago I made a request to internal IT to adopt 7-zip so that I could save on bandwidth whenever I needed to pull a largish database (this was several years before GDPR, obviously).
No go. It turned out that compressing the backups every night ate a lot of time (decompressing those files was very fast regardless of setup). Well, actually, they did use 7z.exe, but only as a normal zipper.
So sometimes the only relevant part of the equation is the compression time. (I do plan on purchasing AMD regardless for my next upgrade)
GreenReaper - Wednesday, October 31, 2018 - link
Use a threading-capable version of xz with the -T parameter so it uses all available threads, and you'll find it flies on the default compression settings. It has a Windows version, too: https://tukaani.org/xz/
GreenReaper - Wednesday, October 31, 2018 - link
Incidentally, you can probably pipe straight into it, something like dump_command | xz -T0 > output.xz, which should mean you don't actually have to write the dumps out, just the compressed version.
PaoDeTech - Monday, October 29, 2018 - link
I need 13 cores and 26 threads. Now what? I returned the 32-core, 64-thread one since it could not run FAR CRY at 60fps. But boy, could it blend! Sarcasm aside, I write multi-threaded server software, and unless I code an infinite loop by mistake (I'm NOT admitting to it) I can never max out 8 threads before hitting I/O limitations (on an NVMe PCIe disk). But I can see how some number-crunching parallel software would go to town with it.
peevee - Wednesday, October 31, 2018 - link
"I can never max out 8 threads before hitting I/O limitations (on NVMe PCIe disk)"Do you know these are IO limitations or do you assume this? Because lack of scaling after 8 threads does not mean IO limit at all. For example, if you write in Java/C#/Python/JS etc (heap-mandatory languages), or even use heap alloc/dealloc in critical thread sections in fast languages like C++, this is what you are going to get (heap mutex = no scalability). And this is just 1 of a thousand pitfalls of massive threading.
PaoDeTech - Thursday, November 1, 2018 - link
No locks; every client call gets its own thread (REST - IIS - WebAPI - .NET "stateless" server - Entity Framework - SQL Server with read committed snapshot isolation). Async all the way down. Under load I can see the disk active >50%, and write speed maxes out at 7 MB/s (Toshiba NVMe PCIe 1TB M.2 SSD). All processes run on the same PC (i7-6700K, 32GB RAM): server, test clients, SQL Server. Plenty of free RAM.

Of course, performance optimization is in the details, and I was referring to a specific write-intensive test case. My point is that parallel scaling is not easy and may stop sooner than expected (for many reasons). On the other hand, I can always use faster single-thread performance...
29a - Monday, October 29, 2018 - link
Please replace EgoMark (3DPM) with something else, anything else.
danjw - Monday, October 29, 2018 - link
Are there any motherboards out there that support the security features of the Threadripper platform?
SLVR - Monday, October 29, 2018 - link
This review is a bit more useful: https://www.techspot.com/review/1737-amd-threadrip...
peevee - Monday, October 29, 2018 - link
You don't have real workstation tests except for the Chromium compile, and even that is apparently broken (for example, no /Gm on the projects, or something like that).
Schmich - Monday, October 29, 2018 - link
Your ads are some of the worst among tech blogs. Distracting ads with moving items. Dynamic resizing of the slow-loading header ads, so by the time you want to click on something you've clicked on something else. Autoplaying videos that follow you down as you read the article. No wonder people install ad blockers, yet strangely blogs call the readers the problem.
peevee - Wednesday, October 31, 2018 - link
Somebody reading AnandTech who does not use ad blocking? I am genuinely shocked.
And it's not a blog.
firestream - Monday, October 29, 2018 - link
Can someone test those in-memory business applications like QlikView? It should be very interesting whether TR2 can replace the developer machines that crunch large datasets to build dashboards or analytics.
crotach - Tuesday, October 30, 2018 - link
Damn, this i9-9900K is a beast! It even looks like good value for money when compared like this.
SanX - Wednesday, October 31, 2018 - link
What's the problem with AMD's AVX, or the test's AVX?
GreenReaper - Wednesday, October 31, 2018 - link
AMD's AVX units are limited due to the Zen architecture. Basically, it cuts 256-bit operations down into 128-bit chunks, and only certain units can do certain things, so a full-width AVX2 instruction requires work over two 128-bit micro-ops. And it can't do AVX-512 yet. This might well have been the appropriate decision - after all, wider units mean more to go wrong, and more power - but it limits performance on AVX workloads.
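For illustration, here is what one of those full-width operations looks like from the programmer's side: a single 256-bit FMA across eight floats, which Zen 1 executes internally as two 128-bit micro-ops. A hedged sketch; the compiler flags shown assume GCC/Clang:

```cpp
// One 256-bit AVX2/FMA operation. Build: g++ -O2 -mavx2 -mfma avx_demo.cpp
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float c[8] = {0};

    __m256 va = _mm256_load_ps(a);
    __m256 vb = _mm256_load_ps(b);
    // Fused multiply-add over eight floats in one instruction;
    // Zen 1 cracks it into two 128-bit micro-ops internally.
    __m256 vc = _mm256_fmadd_ps(va, vb, _mm256_load_ps(c));
    _mm256_store_ps(c, vc);

    for (float f : c) std::printf("%.0f ", f);   // expect: 8 14 18 20 20 18 14 8
    std::printf("\n");
}
```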
Henk Poley - Saturday, November 10, 2018 - link
I wonder when they'll include a few high-performance cores for single-core-heavy tasks. It's kinda ridiculous that an iPad Pro / iPhone XR can get +33% to +50% better performance on Speedometer 2.0.
Henk Poley - Saturday, November 10, 2018 - link
It could be cool to throw in the 4-core Intel Core i7-7740X, which appears to be fairly efficient in multicore performance. I wouldn't be surprised if it held up decently at the bottom spot, while using far fewer cores.