Yeah, the 90€, 4-Thread-APU from AMD is now as fast in this benchmark as the 120€, 4-Thread-APU from Intel. Which definition of pathetic are you using exactly? Now that the software has caught up, the AMD design shows similar performance with the same number of threads, a slightly higher (turbo) frequency and a slightly lower count of transistors.
As far as comparing designs goes, that doesn't sound like a loss for AMD at all. If they had access to a 22nm processing node, we would be right back to a head-to-head competition between the two companies.
I sympathize with AMD for being the underdog and loathe the means Intel used to cripple it into a perpetual runner-up, but AMD's CPU performance is simply abysmal. They should really focus more on CPU design and less on trying to downplay the importance of raw CPU perf.
Price is not a point in AMD's favor either; they are pushed into offering chips at low prices because their performance lags, so they have no other way to offer "bang for the buck" than to settle for "barely keep afloat" margins.
Fact is, in most workloads, AMD's "utmost high end" is completely annihilated by Intel's midrange, which is my definition of "pathetic" - when your best is beaten by another's "meh".
AMD's problem is trying to beat Intel at their own game - x86. Sure, there were the "Athlon" days, but that happened only because Intel got sloppy enough to bet on the wrong horse; Athlon shined only because the P4 sucked. I am astounded that AMD has still not come up with its own ISA. After all, this is the 21st century, and "Intel compatible" is no longer a requirement for commercial success, just look at ARM. AMD totally has the resources to provision the software for its own ISA, and its own ISA could easily diminish the handicap of having to conform to Intel without having the manufacturing edge. But I guess for AMD it suffices to be a "pseudo competitor" to Intel, just so that people don't get to point at the latter and cry "monopoly".
Oh, and let's not forget that awesome fella HECKtor RuiNz, who stepped in just in time to ruin the rarity of a "successful AMD", an act made that much more amusing by all the ANALysts praising him for the ridiculous price he paid for ATi, just to have AMD+ATi be worth half of what they paid for the latter alone within a couple of years. It's like there is an industry plot against AMD, now fabless and settling for even lower margins, which is somehow also pimped as a "good thing".
Actually, AMD is working on a new x86 core (http://www.anandtech.com/show/7991/amd-is-also-wor...). It just takes quite a while to get that out the door. I'm sincerely hoping it brings some competition to Intel sometime soon. As it is, Intel is stuck in neutral while we continue to pay for new chips with barely any advantage in performance.
Yes, Intel is in sleep mode again, but hey, I kind of like that. I haven't felt the need to upgrade my farm since the 4770. If you buy new it doesn't really matter, since product tiers come in at the same prices; if you already have a recent system, there is no need to upgrade.
The downside is that, due to the lack of competition, Intel started investing in a waste of silicon real estate in the form of iGPUs that barely any prosumer takes advantage of, and most would really prefer 2 extra cores instead. In other words, the lack of competition helps Intel bundle uselessness and gain GPU market share, plus the privilege of pushing products which do not waste die area on an iGPU at a hefty price premium. High margins - gotta love 'em.
Those integrated chips are used on a massive scale, especially in ultrabooks and hybrid devices. It also drives down the cost of desktops that no longer need dedicated video cards. For those who are gamers/enthusiasts or plan to buy a dedicated card, Intel ships the -E series of CPUs that do literally exactly what you ask, and replace the integrated GPU with two/four extra CPU cores.
I'm not seeing your point. If you're talking about i7, then that's exactly what they offer, and exactly what I stated. Go buy a Haswell-E i7 for only slightly more and get extra cores and no GPU. I'm not sure what your complaint is - that ALL i7's don't come without a GPU? Why would you request such a thing when it's already an optional feature? There are actually people who want high end processors with integrated graphics.
@inighthawki I absolutely agree with you. It's not even up for debate in the corporate world. I work for a very large employer and we all get laptops with i7 processors but have no need for anything beyond the integrated GPU solution.
Companies buy boatloads of devices that require powerful CPUs but only OK graphics. That's the business laptop/desktop. And the market there is way bigger than the normal consumer one. It was done since forever, they just moved the entire NB into the CPU. If you don't need it just... ignore it or go bigger (E series) :).
I know hundreds of people who game on HD3000 and related Intel iGPUs. Whilst you have a farm, most people have a city home, and Intel KNOWS that. They will completely ignore your opinion, as do I; it reads as mere bragging.
You haven't felt the need to upgrade your farm since way back when the 4770 launched? When was the last time you "felt the need" to upgrade the last generation of hardware to the current one? Or to upgrade one year after you bought the latest and greatest?
I can understand wanting to upgrade the very lowest end every year, since some benefits must come out of it. Upgrading the high end can't be qualified as "a need" for years now, because the benefits are barely visible there; certainly not enough to justify the price.
Yeah I agree, but your previous comment was talking about graphics. About making a new ISA, that could be tough for an underdog. Intel wasn't really able to pull it off with Itanium IA-64.
No, my previous comment was about AMD trying to downplay how much their CPUs suck by exploiting the reduction of CPU overhead in graphics performance. I somehow feel like this is the wrong approach to address lacking CPU performance. "Hey, if you play games, and are willing to wait for next-gen games, our CPUs won't suck that bad" is not a hugely inspiring message, especially keeping in mind that CPU cost is not everything, and power consumption is a factor that can easily dominate over time. We have a saying, "not rich enough to afford cheap", you know, because most of the time "cheap" comes at a price too.
The failure of IA-64 did not lie in it being "non x86", but in compiler complexity and in Intel not pouring enough resources into its dev toolchain; plus it was a long time ago, previous century really. Intel could easily push a new ISA today, but there has to be demand for it. And even today, who would want Itanium when you have Power? There is simply no demand for Itanium, nor much room for it to shine; it is too "over-engineered". What the market demanded was an efficient, affordable ISA, and so enter ARM, a tiny holding really, something AMD totally has the resources to pull off, targeting the space between x86 and ARM.
It's not a "wrong approach" since they can do both. They can and are attacking the problem from both sides. It's just that one approach (Mantle, and pushing MS to produce a low-level API as a result) could be done quickly and cheaply. The other side, the development of a new architecture? That takes time. They needed to buy time. Plus the benefits of low-level APIs can still be reaped to various degrees regardless. It's not a wasted effort.
You know, the sad part is that AMD also sold the mobile GPU IP rights to Qualcomm, which by the way now builds phone GPUs with it. Radeon mobility was good IP to have, and they messed it up and sold it for merely $60-odd million.
Might as well say that AMD messed up when they took "only" 2 billion as a settlement from Intel (and the right to spin off the fabs to GloFo), too.
These were all actions forced by the ATI purchase, which put AMD so deep in the red that they had to sell anything not nailed down to try and make up for the costs. Selling the fabs mattered to AMD mostly because they needed cashflow immediately, to stay open long enough for some miracle to happen, some magical moment that would justify the whole endeavor and bring them back from the brink.
Except that moment can't come while AMD coasts on fumes. It's funny. They keep playing musical chairs with management, cull staff like terminators marching across fields of humans, stretch product generations for all their products across far longer time periods than they were EVER intended to be...
At this point, they're so far behind the schedule they should have had that Zen better be a HUGE leapfrog over the last generation of performance or it's going to be the final bit of crap that stubbornly clings to your anus before the end.
I agree they have smelled of death for years, but now they really need another K6-2 or we will hear the overnight announcement, and then the endless fan whining will reach a shrill level never before seen nor heard. It's torture watching the crybaby fail for so long; they couldn't just be a success, so now they are a victim, and a failure, and are to be hated for it.
Selling their old IP doesn't preclude them from entering that market again. They have some fairly competent low-power tablet chips, so they're not miles off in that regard. But those chips are x86. Perhaps after their new ARM-ISA chips for servers come out they would consider building a mobile ARM chip for phones.
Software caught up? How many games use DX12 again? Ahh, there's the kicker. How long did it take for DX11 to catch on? This is a band-aid for a severed limb while AMD continues to bleed people and money. Can a company do a decade-long re-org? We may find out, sadly. AMD should have avoided consoles like NV did. NV was correct in saying it would detract from their CORE products. We see it has for AMD, and if you're going to do that, the new product you spent your R&D wad on for a few years needs to replace that revenue handily. The consoles clearly aren't doing that. They cost AMD 30%+ of its employees; AMD now has less R&D spending than NV and less revenue, etc. Meanwhile NV is exploding upward with more than a few record quarters recently. CORE products count.
If AMD's next GPU isn't a HUGE hit... The APUs will just be squeezed by the ARM/Intel race. That is, ARM racing up to notebook/desktop and Intel racing down to mobile. Maybe AMD has an easy lawsuit after NV proves IP theft all over in mobile, but other than that I don't see many surprises ahead that are massive money makers for AMD. Their debt is now more than the company's market value. Ouch. Well, after the stock's pop over the last few weeks it's just under, I guess, but you get the point. Management has ruined AMD in the last decade. Too bad for the employees/consumers; we're watching a great company go down.
@TheJian "NV was correct in saying it would detract from their CORE products."
Nvidia couldn't offer the integrated solution that Microsoft and Sony were both looking for. After getting design wins for the original Xbox and PS3, do you really think Nvidia didn't try to get another win this generation? They made that statement after all three major console players awarded AMD with their business.
And by the way, the GPUs in XB1 and PS4 are indeed using AMD's CORE gpu product. I think it's fascinating that you interpret those design wins and resulting revenue stream in a negative light.
That's not true at all. nVidia didn't want to use up all of its limited wafer production at TSMC on very low-profit chips, especially when it might cut into the number of proper graphics chips they could have manufactured. Sony/Microsoft wanted inexpensive consoles that didn't incur a loss on each unit sold like they did last generation, which is why both the PS4 and Xbone are basically 3-year-old laptop hardware that doesn't have the power to run modern graphics (PC Ultra settings) at 1080p/30. The only way either console has been hitting 1080p is by dumbing down the quality, as seen in every single multi-platform game.
Sorry Andrew I disagree. AMD is not licensing those chips, they are selling them. There is higher margin to be had because of that. I do not know what the exact margins are, if you do please share, but there is no way you can say with such certainty that it was a bad move on AMD's part to produce those chips.
Well, industry commenters, including articles here at Anand, said what Andrew said, and noted the console wins for AMD are very low-margin items. What is AMD going to "make its money on", according to you, for the 3 or 4 years the consoles' lifespans run? They get one year of oomph, then the rest is tiny residual sales; is that "a good profit plan"? The pro-AMD'er above linked Forbes and it's talking about 2013 sales... ROFL, proving my point above, which is of course easily surmised common sense. One year of minor profit, then 3-4 years of next to nothing = "a good plan"?
At least AMD kept the fanboy set going with "Mantle and consoles!!!!" for the win... I admit that had many foaming at the mouth with glee for quite some time.
I don't really see what your point is. The designs used in the consoles are primarily using technology they already built. Existing CPU cores, existing GCN cores. The R&D investment was minimal... they made money on consoles. Perhaps not much money, but it's still a positive flow of income and the consoles continue to sell in reasonable numbers. Some money is better than no money, and they need all the help they can get.
The problem is that once software (games) catches up, actually makes use of the better efficiency, and gets more complex, the lower IPC of the APUs will again become apparent.
Back in the real world, Intel's 32nm Sandy Bridge from early 2011, built without the tri-gate transistors that only arrived at 22nm, destroys 28nm Steamroller in IPC and performance per watt, and has far stronger memory controllers and floating-point performance. Intel is so far ahead that even if Intel were on 32nm and AMD were on 20nm, Intel would still lead in performance per watt in CPU design.
AMD CPUs have slower single-thread performance, but typically more cores. Therefore, they certainly stand to benefit significantly from software optimized for multi-threading. And removing any bottleneck is always a good thing. After all, even the i3 saw a significant performance improvement.
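As a rough illustration of that point (nothing from the article's data; the per-call costs and call counts below are invented), here is a toy C++ sketch of why an API that lets every core submit work in parallel narrows the gap between a few fast cores and more, slower ones:

```cpp
// Illustrative sketch only (not the article's benchmark): simulates why an API that
// lets every core record work in parallel helps a CPU with more, slower cores catch
// up to one with fewer, faster cores. All per-call costs are made-up numbers.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Pretend each draw call burns a fixed amount of CPU time in the driver/runtime.
static void submit_calls(int calls, std::chrono::nanoseconds cost_per_call) {
    volatile long long sink = 0;
    for (int i = 0; i < calls; ++i) {
        auto end = std::chrono::steady_clock::now() + cost_per_call;
        while (std::chrono::steady_clock::now() < end) ++sink;  // busy-wait "work"
    }
}

static double run(int threads, int total_calls, std::chrono::nanoseconds cost_per_call) {
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back(submit_calls, total_calls / threads, cost_per_call);
    for (auto& th : pool) th.join();
    return std::chrono::duration<double, std::milli>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    using namespace std::chrono_literals;
    constexpr int kCalls = 100000;  // "tens of thousands of draw calls"
    // DX11-style: one thread does all submission; assume the faster core costs 2us/call.
    std::printf("1 fast thread : %.1f ms\n", run(1, kCalls, 2000ns));
    // DX12-style: four threads submit in parallel; assume each slower core costs 3us/call.
    std::printf("4 slow threads: %.1f ms\n", run(4, kCalls, 3000ns));
}
```

With these invented numbers the four slower threads finish the same workload in well under half the time of the single faster thread, which is the basic mechanism behind the batch-submission results in the article.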
Are you blind? The i3 was paired with a dedicated graphics card, you numbnuts. Look at the test setup. An AMD APU beats the shit out of Intel's i3 integrated graphics any day. lol. That's why this test is biased. Why compare an AMD APU with an i3 + dedicated GPU? I commented about it and it got deleted lol. Thanks. Please do this test again, AMD APU vs Intel integrated graphics. Now we're talking.
Right, the problem is, this wasn't really a head-to-head comparison between AMD and Intel CPUs. The title is: "Star Swarm, DirectX 12 AMD APU Performance Preview". The purpose of putting an Intel CPU with an Nvidia dGPU was not to make AMD look bad, but to test the new API with a completely different setup to see how its benefits scale across various hardware platforms.
The mid-range AMD card isn't necessary. That is the whole point of Mantle and DX12: GCN and HSA are working as designed. Once AMD puts high-bandwidth memory in the APU cache, the APUs will really fly. nVidia is at least a year away from HBM, and in Intel silicon it's not even a glimmer on the horizon.
The i3-4330 can ONLY be competitive with a GeForce 770 behind it. Take away the 770 and the i3-4330 becomes a joke.
Actually the AMD A10-7800 APU is LIGHT YEARS faster than the i3-4330. Intel's i3 is only competitive when the GeForce 770 is used. Take away the nVidia GeForce 770 and the Intel i3-4330 alone would be a joke.
DX12 is basically Mantle, and Intel's HD IGP can't compete.
It's not about reducing CPU overhead; it's about allowing the IGP to run at full efficiency, especially when running gaming software with tens of thousands of draw calls.
DirectX 11 was crippled intentionally to keep Intel's HD IGP competitive. Mantle changed that game.
Now gaming studios can develop games of truly epic proportions and inexpensive hardware can run it!
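For anyone wondering what "feeding tens of thousands of draw calls efficiently" looks like in practice, below is a bare-bones sketch of the multithreaded recording pattern D3D12 enables: each worker thread records its own command list and everything is submitted together. Root signature, pipeline state, resource binding and error handling are all omitted, so treat it as an outline of the threading pattern, not a working renderer, and not as anything taken from the Star Swarm code.

```cpp
// Sketch of DX12-style multithreaded command recording (error handling, pipeline
// state, root signature and resource setup omitted for brevity).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Each worker records its own command list -- nothing funnels through a single
// submission thread the way a DX11-era driver typically forced.
void RecordWorker(ID3D12Device* device,
                  ComPtr<ID3D12GraphicsCommandList>& outList,
                  unsigned drawsPerThread) {
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), /*initial PSO*/ nullptr,
                              IID_PPV_ARGS(&outList));
    for (unsigned i = 0; i < drawsPerThread; ++i) {
        // Real code would set pipeline state, root arguments, vertex buffers, etc.
        outList->DrawInstanced(3, 1, 0, 0);
    }
    outList->Close();
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                 unsigned threads, unsigned totalDraws) {
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threads);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threads; ++t)
        workers.emplace_back(RecordWorker, device, std::ref(lists[t]),
                             totalDraws / threads);
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

The only serial step left here is the final ExecuteCommandLists call, which is why the API scales with core count in a way DX11 generally could not.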
If Anand ran the same DX12 benchmarks using RX480 the results would be quite different.
What Anand did not tell you was that they disabled Async Compute. This of course results in favorable Intel bench scores, as no GTX board runs well with it enabled.
Why would Anand omit such critical data and run a bench test without using the best AMD dGPU AIB available?
In fact DX12 Explicit Multi-Adapter would show that 2 RX 480 boards crush a GTX 1080 for far less money.
Another fact the online media will twist themselves into knots trying to discredit.
Anand disabled Asynchronous Compute, which is NOT supported by NVidia.
AMD GPUs can accept non-serial data streams from ALL CPU cores, as AMD GPUs manage that data. This is a massive performance multiplier for DX12 3D game engines and does make a huge difference while running Star Swarm.
DX12 also supports Explicit Multi-Adapter, which would scale 2x RX 480 and allow performance exceeding a GTX 1080.
The consoles both run 8-core Jaguar APUs. And they fire on "all 8 cylinders".
Both the XBOX DX11.X and PS4 GNM and GNMX APIs use extensions that support Asynchronous Compute.
While this is a GPU feature, it allows the CPU to process and send data to the shader pipelines at a far faster rate. This reduces the CPU bottleneck.
NVidia canNOT do this, and in fact unless it is disabled, ALL GTX boards run DX12 3D gaming engines SLOWER.
Did Anand tell the readers that they were disabling this MAJOR PERFORMANCE ENHANCING FEATURE?
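Leaving the accusations aside, "asynchronous compute" in D3D12 terms simply means submitting work to a separate compute queue alongside the graphics queue so the two can overlap on hardware that schedules them concurrently. A minimal sketch of that setup is below; it assumes a valid ID3D12Device, and whether the work actually overlaps depends entirely on the GPU and its driver.

```cpp
// Minimal sketch: create a graphics (direct) queue and a separate compute queue.
// Command lists executed on computeQueue can overlap with graphics work on GPUs and
// drivers that schedule the queues concurrently; on hardware that serializes them,
// the same code still runs, just without the overlap.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue) {
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only

    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue))))
        return false;
    if (FAILED(device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue))))
        return false;
    // Synchronization between the two queues is done with ID3D12Fence objects
    // (queue->Signal(fence, value) on one queue, queue->Wait(fence, value) on the other).
    return true;
}
```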
I'm guessing one of AMD's main new high-end cards doesn't work properly with DX12 yet (testing hassles), so that's why we have only an nVidia 770 in the comparison.
It says it's an i3-4330. Anyway, finally, after 5 or 6 years of Intel CPUs owning in DX11 games, AMD CPUs will, with the DX12 release, finally be able to run games the way their GPUs intended instead of with massive bottlenecks. I've seen a lot of people on forums upgrade GPUs costing £200-£300 and get worse performance than people with an i3 and a £100 GPU, just because of a CPU bottleneck, and these people don't realise the AMD CPU has been bottlenecking them, because their CPU isn't showing 100% utilisation even while their GPU is pegged at like 20-50%. Good news all round, I say, as it means no more wasted GPU performance.
Why buy an nVidia GPU AIB? Just get an AMD APU. It's cheaper than the Intel choice AND you do not need to spend money on a GPU!!!
If you insist on an Intel IGP then you need the nVidia GPU AIB. That increases your system cost by about $300 over what the same system would cost with the AMD APU.
Wait until Carrizo hits the streets. Intel graphics is an embarrassment.
With Carrizo 'apparently' boasting up to 2x performance from its image compression, and DX12 reducing the CPU problem, do you think the next-gen APUs will be capable of 1080p 60 fps gaming?
Possibly. The combined memory bandwidth savings of DX12 (or Mantle) and texture compression are certainly intriguing. Not to mention the boost that hUMA provides by not needing to copy memory blocks from CPU to GPU regions of RAM.
They were strictly talking about APUs. So it doesn't matter whether they used DX12 or Mantle, both will benefit Carrizo greatly and yeah it may very well achieve decent 1080p gaming at least with the higher-end chips.
For discrete cards DX12 should be fairly neutral if game developers don't heavily optimize for one architecture (use Nvidia prebuilt middleware, for example). So it really depends on what each company deploys in the coming months and years.
Faster RAM would probably be necessary. But who knows; a fully GCN- and HSA-compatible part could make a huge difference. I would really like to see AMD build dual-socket APUs!
I guess we now know why the current-gen consoles all use a lot of low-power CPU cores. With less API overhead, the less-powerful cores make more sense. This is good for AMD in the short term at least, because they have poor single-thread performance in Bulldozer-derived cores. But it probably means there is even less reason to upgrade your CPU for gaming. It's rare that a software change can yield such big results, so let's hope that DirectX 12 becomes popular for all the big blockbuster games; the less taxing ones don't need it anyway.
Just because DX12 seems to level the playing field for single-threaded performance doesn't negate the possibility of better MHz or core scaling. In fact, I read that lower-level APIs will likely take greater advantage of mainstream hex- and octa-core CPUs.
Where are we going to get the GPUs that need those hex- and octa-core CPUs (which Intel's roadmap lacks in the mainstream even for the next two generations)? It seems like everything will be GPU-limited, unless game developers come up with something to do with the excess CPU power, like better AI, physics, or something I can't even think of right now. It would be great if they did, but I suspect that the short-term result will be that the CPU won't factor into game performance very much.
Also, I didn't say that DX12 levels the playing field for single-threaded performance; it doesn't. It does, however, validate AMD's more-but-slower-cores approach.
Based on the batch submission times, it looks like DX12 reduces AMD submission times to the same time as Intel. I thought that speaks to the individual thread performance.
Frame rates of only 10 or 11 on Mid Quality settings for AMD's A8/A10 APUs seem to point out the need for an external discrete GPU and to not use the IGP part of the APU.
Even on low settings you only get 20 or 22 FPS from the IGP.
It's not a game, you know? This is a benchmark purposefully built to stress test any CPU+GPU combination. The FPS you see here is just a number, you have to read it as a performance index just like 3DMarks.
See how a GTX 770 only gets 30 FPS at low quality in DX11? Has there ever been a real game where running it at low quality yields such performance on a 770? There's your answer.
I would gently point out that it's quite clear in the graphs that Star Swarm does not stress a 770 significantly, but is crazily CPU limited. Hence the drop between low/medium settings of *only* 4 fps with DX12, and the 70+ fps average with an i3. That said, it appears the bottleneck shifts once you reach medium quality - or star swarm's extreme preset doesn't really up the load significantly.
Star Swarm is a demo/benchmark tool that was made by Oxide for AMD and is highly tuned specifically for AMD architecture, specifically their APUs. It focuses far too much on specific memory operations and not on the wide variety of GPU workloads that would be comparable to the kind of work a GPU does while running games. To be honest, I don't think this can be called a benchmark tool; it should instead be called an AMD marketing tool.
Oh really? Explain why it's specifically tuned for AMD architectures, what numbers support your claim, and which specific memory operations; and why does a CPU test, meant mainly to stress API draw calls, need a wide variety of GPU workloads?
Quote: It should be noted that while Star Swarm itself is a synthetic benchmark, the underlying Nitrous engine is relevant and is being used in multiple upcoming games. Stardock is using the Nitrous engine for their forthcoming Star Control game, and Oxide is using the engine for their own game, set to be announced at GDC 2015. So although Star Swarm is still a best case scenario, many of its lessons will be applicable to these future games.
But with AMD IGPs having frame rates of 11 for Mid Quality settings we can see that AMD IGPs will be utterly useless for any of those games.
Apologies, there was a miscommunication on my part. I did specifically run the APUs (and Intel) at their maximum supported frequency defined by the manufacturer, so in this case it means the AMDs were at 2133 and the i3 was at 1600. I've updated the configuration table.
With the drivers required for DX12, for whatever reason the Mantle version of Star Swarm did not seem to initiate. This may be a driver flag or something similar not enabling the APU integrated graphics to engage Mantle. Of course it's all a work in progress still. No GTX 770 + Mantle numbers, obviously.
To be perfectly honest, I would've preferred the discrete card to be AMD-powered for this DX12 comparison. I can't imagine Nvidia optimising their drivers as much for AMD CPUs when the two companies compete on the GPU front.
It's actually a lose-lose situation for AMD CPUs here: their own GPU drivers are bad at CPU scaling (with both Intel and AMD), so we are not really losing much even if Nvidia purposefully harms their performance on AMD CPUs.
nVidia's drivers do not purposefully harm performance while coupled with AMD CPU's. It's AMD's own CPU's that simply suck. If nVidia intentionally had code in their drivers that targeted AMD processors so they'd look bad, that would be a violation of the law, and AMD would sue their pants off.
As Ryan doesn't have APUs in house, I ran these numbers on the APUs I have, but the best AMD GPUs I have are HD 7970s or an R7 240, both of which are not DX12 capable for this testing. We're not all testing in one big office to easily get hardware from each other - it's a transatlantic shipping process between Ryan and myself.
The preview was of DX12 AMD APU performance, yet it goes on to highlight that AMD's beta drivers suck in comparison, using a discrete Nvidia card. Is it too much to ask for AMD hardware in an AMD review? We already knew AMD's DX12 driver needs work from the previous review!
All of the new-gen games that are having issues with high system requirements need this stat. I have a friend who has an old Phenom II X6 1055T + HD 7950 that plays *basically* everything out there.
He wanted to get Dying Light but is afraid to buy the game because there's a decent chance it'll be unplayable. Without optimizations he'd probably have to shell out hundreds of dollars upgrading his rig... or go for the path of least resistance and buy an Xbone or PS4.
So Microsoft picks up the optimization slack that all the ports dropped. Probably just make the studios even lazier and we end up right back here in no time. How can a game run on a 1.8ghz jaguar core but not a Phenom II?
For this to have more meaning for me, I'd want to know which Core i3 CPU was used, so that I could know both its clock speed and which version of HD Graphics (4400? 4600?) is being used in the comparison.
I don't get this review: so the Core i3 is using a dedicated GPU versus an AMD APU without a dedicated graphics card? Is this right.....? Why don't you do a review where AMD and Intel can really go head to head? AMD APU vs Intel CPU with integrated graphics; then I would be happy with this test. This is a pretty stupid test, I'd say. In all fairness, I think the AMD APU outshined the i3 easily in this test since there was no help from a dedicated GPU. Please redo the test.
Unfortunately, using an Nvidia dGPU skews things: 1) Nvidia won't optimise drivers for AMD CPUs (they prioritise Intel) whilst AMD and Nvidia compete for market share, and neither will adopt features created by the opposition unless forced to do so. It would also have been nice to see whether all those extra AMD cores were being fully utilised.
The problem is that AMD GPUs like Intel CPUs more than their own brand. So ironically, with an AMD GPU, the AMD CPUs would have looked slightly worse compared to their Intel counterparts, than they do with Nvidia GPUs. Also, Nvidia drivers often tend to be more consistent in performance across multiple platforms, which is why nearly 9 out of 10 times, in non-GPU tests, professional reviewers/benchmarkers go with an Nvidia GPU.
I bet for the Physx support, nVidia told AMD they needed to send a couple of engineers over so they could hook more than the 1st active core, and AMD sent back a few emails, some paranoia, and no engineers, including maybe a 2 minute rude phone call demanding nVidia do all the work or else...
lol. The review only really tested AMD's driver using the APU's iGPU when they could easily have added an AMD dedicated GPU. A review claiming to be about AMD's DX12 driver should at least have had a dedicated GPU from AMD somewhere in it.
Lol, I wondered why you referred me to the original review when discussing AMD APUs. Those discrete results, whilst valid, only speak for Intel. I wanted to see the APUs using discrete AMD hardware, not just Nvidia, who previously have even disabled features when detecting non-Nvidia cards on board! Restricting the test to Nvidia stops us from seeing how Mantle or Dual Graphics scales on APUs using DX12 in comparison to an AMD dGPU.
HD3000 plays games well too. You have to remember that here and online generally at enthusiast sites, only the greatest framerates at the highest resolutions with the best screen colors and minimum delays get a half nod in between the intense criticism and fan fighting.
Can you please set up a test without the i3 using a dedicated GPU? Intel integrated graphics vs AMD APU, that's what we need to see. It's biased to partner an i3 with a dedicated GPU against an AMD APU; a dedicated GPU will kick the crap out of AMD's APUs. If anything, this test shows how great AMD's APUs are, considering they didn't even need a dedicated GPU and still managed to keep up close to an i3 with a dedicated GPU.
Development of DX12 is older than Mantle, and also, AMD has been a development partner for DX12 from the start. Remember, search engines (like Google or Bing), are very useful for looking up this sort of stuff.
"I think it's entirely possible AMD has known about D3D12 from the beginning, that it pushed ahead with Mantle anyhow in order to secure a temporary advantage over the competition"
:) That right there is the AMD marketing milkshake that brings all the fanboys to their yard, to put it crudely.
Good for AMD but it still doesn't solve its problems in application performance or raw cpu performance where the Intel equivalent is faster and lower power at the same time.
It could be useful though for improving the AMD based gaming consoles.
What just happened is that Intel and nVidia "got Mantle". Now AMD's high end is further threatened, while AMD carries a huge driver-rewrite burden; that's dangerous for them and not dangerous for nVidia or Intel with all their spare $. In effect, AMD just took another kill-the-zombie chop to the forehead, with the +12 giant two-handed axe.
This review/benchmark is not good. Why does the i3 use a dedicated GPU while it's being run against an AMD APU? Can't you do a test where Intel HD Graphics goes head to head with the AMD APU? By the looks of it, the AMD APU is the winner here, to be able to produce that much performance from integrated graphics. Any chance you can do just Intel vs AMD APU graphics....?
It would be interesting to see how Intel's iGPUs fare with DX12 compared to DX11. I mean there has to be something substantial in it for Intel to be a part of DX12 development from the start, otherwise they just spent time and resources to benefit their competition.
Ok. We got some AMD APUs in the mix now :) I still want to know whether the scaling is limited to 4 threads or not. Please test it again using the data of the A10 7850 and compare it to an FX-6xxx and FX-8xxx.
I wouldn't hold out much hope for a significant performance boost as regards Bulldozer-based CPUs. We should've seen it from Mantle before now, though admittedly AMD does stand to benefit a little more here.
I'd be interested in seeing if the G3258 benefits less than its APU/Athlon competition the more we look at Mantle/DirectX 12.
It's not really a hope, but simply a question of whether it makes sense to have more than 4 threads. Mantle has been shown to reduce CPU overhead, but I haven't seen it actually divide the CPU tasks across multiple threads as well as DX12 does. I might be wrong, but it would still be good to see it. Theoretically they could've tested this with the Intel i7 CPUs, but their single cores are too strong, so the bottleneck is already removed before more than 4 threads can be utilized. With the weaker FX cores, we can see whether DX12 actually scales or not.
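Short of the site re-running the benchmark on FX-6xxx/FX-8xxx parts, the shape of that scaling question can be probed with a trivial harness: sweep the worker-thread count and see where throughput stops growing. The sketch below uses a stand-in busy loop rather than real API submission, so only the shape of the curve is meaningful; all the constants are invented.

```cpp
// Rough scaling probe: how does total "submission" throughput change as worker
// threads are added? The work item is a stand-in for per-batch CPU cost, not a real
// DX12 call, so only the shape of the resulting curve matters.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static std::atomic<long long> g_batches{0};

static void worker(std::chrono::steady_clock::time_point stop) {
    volatile double x = 1.0;
    while (std::chrono::steady_clock::now() < stop) {
        for (int i = 0; i < 10000; ++i) x = x * 1.0000001 + 0.5;  // fake batch cost
        g_batches.fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    for (unsigned n = 1; n <= std::thread::hardware_concurrency(); ++n) {
        g_batches = 0;
        auto stop = std::chrono::steady_clock::now() + std::chrono::seconds(1);
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < n; ++t) pool.emplace_back(worker, stop);
        for (auto& th : pool) th.join();
        std::printf("%u thread(s): %lld batches/s\n", n, g_batches.load());
        // If the count stops growing past 4 threads, extra FX cores would not help here.
    }
}
```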
Good article. Please allow me to offer my unsolicited critique anyway.
Is there any reason the Core i3 is missing from the last two charts? Its exclusion makes the article incomplete in a sense. There is a reason I'm asking this. A better way to lay out the charts in this article would have been to keep the items in the same consistent order in all of them (e.g. i3, A8, A10).
E.g. the first 3 charts have i3, a10 and a8 in that order. The 4th chart has i3, a8 and a10. The 5th and the 6th charts have the most damaging (meaning most misrepresentative) change in my opinion. These two charts have GTX 770, A10 and A8 in that order. And it looks exactly the same (black and red bars) as the 4th chart that has i3, a8, a10 in that order !!!
Now I guess most readers would see the first few charts and assume that in the 5th and 6th charts too, the i3 is listed first and the two AMD APUs are listed after it. That is, this will cause them to assume that the first item in the 5th and 6th charts is the i3, whereas it is actually a GTX 770. They will think incorrectly that the i3 takes a lead over the APUs, when actually it is the GTX 770 that is taking the lead.
Maybe I'm a doofus, and this is what I did exactly at my first read and thought "whoa, core-i3 is equivalent to the apus in all other tests but so much faster in the last two charts?". Only later did I read the last two charts again and realised that it is gtx770 that is taking the naturally massive lead over the others.
Sorry for the long text, but it is the only way I know to convey what I had in mind. Otherwise, a good article and looking forward to more consistent graphs in the future !
Anand omitted the exquisite little fact that they DISABLED Asynchronous Compute, which quite effectively crippled the AMD CPU and gave Anand the results they wanted: show Intel in the best light possible and AMD in the worst.
The test I want to see is the Intel i3-4330 WITHOUT the GeForce AIB. Let's see just how much AMD silicon really outclasses Intel's HD IGP.
Windows 10 and DX12 appear to be the best friends AMD has right now. Mantle has done its job: it got Microsoft to evolve DX12 light-years beyond where DX11 was.
One wonders if the intent was to deliberately cripple AMD's APUs. Mantle really is the tail wagging the dog!
Agreed. I ended up going over the graphs again and again trying to determine if I had missed something, since we were in the GPU subsection talking about APU performance, and the critical part, the actual iGPU performance, was missing from this article! What a waste of time this was for me.
"But with that in mind, our results from bottlenecking AMD’s APUs point to a clear conclusion. Thanks to DirectX 12’s greatly improved threading capabilities, the new API can greatly close the gap between Intel and AMD CPUs. At least so long as you’re bottlenecking at batch submission."
When you decide to run an RX 480 instead of a GTX card, and do not DISABLE MAJOR PERFORMANCE enhancing features, you will find that AMD does quite a bit better.
How come you are no longer running the 3DMark API Overhead (draw call) feature test? Because it shows Intel and NVidia as woefully poor performers?
eanazag - Friday, February 13, 2015 - link
I'm guessing that the Core i3 is simulated from a i5 or i7 since it doesn't list a processor specifically.ddriver - Friday, February 13, 2015 - link
Wow, is this a new low (or high) in pathetic? Reducing CPU overhead finally helps AMD be nearly as fast as an i3 graphics wise?ShieTar - Friday, February 13, 2015 - link
Yeah, the 90€, 4-Thread-APU from AMD is now as fast in this benchmark as the 120€, 4-Thread-APU from Intel. Which definition of pathetic are you using exactly?Now that the software has caught up, the AMD design shows similar performance with the same number of threads, a slightly higher (turbo) frequency and a slightly lower count of transistors.
As far as comparing designs goes, that doesn't sound like a loss for AMD at all. If they had access to a 22nm processing node, we would be right back to a head-to-head competition between the two companies.
ddriver - Friday, February 13, 2015 - link
I sympathize with AMD for being the underdog and loathe the means Intel used to cripple it into a perpetual runner up, but CPU performance for AMD is way too abysmal. They should really focus more on CPU design and less on trying to downplay the importance of raw CPU perf.Price is not a factor, AMD are pushed into having to offer chips at low prices, because their performance is raw, so they have no other way to offer "bang for the buck" than settle for "barely keep afloat" margins.
Fact is, in most workloads, AMD's "utmost high end" is completely annihilated by Intel's midrange, which is my definition of "pathetic" - when your best is beaten by another's "meh".
AMD's problem is trying to beat Intel at their own game - x86. Sure, there are the "Athlon" days, but that happened only because Intel got sloppy enough to bet on the wrong horse. Athlon shined only because P4 sucked lama ballz. I am astounded that so far AMD has not come up with their own ISA, after all, this is the 21 century, "Intel compatible" is no longer a requirement for commercialism, just look at ARM. AMD totally has the resources to provision for their own ISA software wise, and their own ISA could easily diminish the handicap of having to conform to Intel without having the manufacturing edge. But I guess for AMD it suffices to be a "pseudo competitor" to Intel, just so that people don't get to point at the latter and cry "monopoly".
Oh, and not forget that awesum fella HECKtor RuiNz, who stepped just in time to ruin the rarity "successful AMD", an act made that much more amusing by all the ANALysts, praising him for the ridicuous price he paid for ATi, just to have AMD+ATi be worth half of what they paid for the latter in just a couple of years. It's like there is an industry plot against AMD, now fabless and settling for even lower margins, which is somehow also pimped as a "good thing".
dgingeri - Friday, February 13, 2015 - link
Actually, AMD is working on a new x86 core. (http://www.anandtech.com/show/7991/amd-is-also-wor... It just takes quite a while to get that out the door. I'm sincerely hoping it bring some competition to Intel, sometime soon. As it is, Intel is stuck in neutral while we continue to pay for new chips with barely any advantage in performance.ddriver - Friday, February 13, 2015 - link
Yes, Intel is in sleep mode again, but hey, I kind of like that. Haven't felt the need to upgrade my farm since 4770. If you buy new, it doesn't really matter, since product tears come at the same price, if you already have a recent system, no need to upgrade.The downside is that due to lack of competition, Intel started investing in the waste of silicon real estate in the form of iGPUs barely any prosumer takes advantage of, and would really prefer 2 extra cores. In other words, the lack of competition helps Intel bundle uselessness and gain GPU market share. And the privilege of pushing products which do not waste die area on iGPU at a hefty price premium. High margins - gotta lovem.
inighthawki - Friday, February 13, 2015 - link
Those integrated chips are used on a massive scale, especially in ultrabooks and hybrid devices. It also drives down the cost of desktops that no longer need dedicated video cards. For those who are gamers/enthusiasts or plan to buy a dedicated card, Intel ships the -E series of CPUs that do literally exactly what you ask, and replace the integrated GPU with two/four extra CPU cores.ddriver - Friday, February 13, 2015 - link
I am talking about i7, integrated graphics in entry level and midrange makes sense.inighthawki - Friday, February 13, 2015 - link
I'm not seeing your point. If you're talking about i7, then that's exactly what they offer, and exactly what I stated. Go buy a Haswell-E i7 for only slightly more and get extra cores and no GPU. I'm not sure what your complaint is - that ALL i7's don't come without a GPU? Why would you request such a thing when it's already an optional feature? There are actually people who want high end processors with integrated graphics.anandreader106 - Friday, February 13, 2015 - link
@inighthawki I absolutely agree with you. It's not even up for debate in the corporate world. I work for a very large employer and we all get laptops with i7 processors but have no need for anything beyond the integrated GPU solution.FlushedBubblyJock - Sunday, February 15, 2015 - link
It's a "prosumer" so it's a special thang only the internal mind legends know about....close - Monday, February 16, 2015 - link
Companies buy boatloads of devices that require powerful CPUs but only OK graphics. That's the business laptop/desktop. And the market there is way bigger than the normal consumer one. It was done since forever, they just moved the entire NB into the CPU. If you don't need it just... ignore it or go bigger (E series) :).Pissedoffyouth - Saturday, February 14, 2015 - link
Oh not this againFlushedBubblyJock - Sunday, February 15, 2015 - link
I know hundreds of people who game on HD3000 and related intel iGPU's.Whilst you have a farm, most people have a city home, and Intel KNOWS that.
They will completely ignore your opinion, as do I, interpreted as mere bragging.
close - Monday, February 16, 2015 - link
You haven't felt the need to upgrade your farm since way back when 4770 launched? When was the last time you "felt the need" to upgrade the last generation of hardware to the current one? Or to upgrade 1 year after you bought the latest and the greatest?I can understand wanting to upgrade the very lowest end every year since some benefits must come out of it. Upgrading the high end can't be qualified as "a need" for year now. Because the benefits are barely visible there. Anyway not enough to justify the price.
FlushedBubblyJock - Sunday, February 15, 2015 - link
Vaporware bro. Wasted resources, AMD PR, totaling nothing, ever, in any future imagined, like so many other "AMD technologies".mikato - Friday, February 13, 2015 - link
Yeah I agree, but your previous comment was talking about graphics. About making a new ISA, that could be tough for an underdog. Intel wasn't really able to pull it off with Itanium IA-64.ddriver - Friday, February 13, 2015 - link
No, my previous comment was about AMD trying to downplay how much their CPUs suck by exploiting the reducement of CPU overhead in graphics performance. I somehow feel like this is the wrong approach to address lacking CPU performance. "Hey, if you play games, and are willing to wait for next-gen games, our CPUs won't suck that bad" - not hugely inspiring message. Especially keeping in mind that CPU cost is not all, and power consumption is a factor that can easily overwhelm in time. We have a saying "not rich enough to afford cheap", you know, because most of the time "cheap" comes at a price too.The failure of IA-64 did not lie in it being "non x86", but in compile complexity and in Intel not pouring enough resources in its dev toolchain, plus it was a long time ago, previous century really. Intel could easily push a new ISA today, but it has to be mandated. And even today, who would want Itanum when you have Power, there is simply no demand for Itanium, nor much room to shine. Itanium is not mandated, too "over-engineered", efficient affordable ISA was mandated, enter ARM, a tiny holding really, something AMD totally has the resources to pull off targeting between x86 and ARM.
Alexvrb - Saturday, February 14, 2015 - link
It's not a "wrong approach" since they can do both. They can and are attacking the problem from both sides. It's just that one approach (Mantle, and pushing MS to produce a low-level API as a result) could be done quickly and cheaply. The other side, the development of a new architecture? That takes time. They needed to buy time. Plus the benefits of low-level APIs can still be reaped to various degrees regardless. It's not a wasted effort.Michael Bay - Monday, February 16, 2015 - link
Wake up, Mantle is not just dead, it never even lived.Gigaplex - Saturday, February 14, 2015 - link
"They should really focus more on CPU design and less on trying to downplay the importance of raw CPU perf."Are you implying that the engineers stop working while the marketing folk make announcements?
amilayajr - Saturday, February 14, 2015 - link
You know the sad part about this too was when AMD sold the GPU IP rights to Qualcomm which by the way runs GPU for phones now.......Radeon mobility was a good IP to have and they messed it up and sold for merely 60+million dollars.HisDivineOrder - Saturday, February 14, 2015 - link
Might as well say that AMD messed up when they took "only" 2 billion as a settlement from Intel (and the right to spin off the fabs to GloFo), too.These were all actions forced by the ATI purchase that put AMD so deep in the red that they had to sell anything not nailed down to try and make up for the costs. The fabs were important to AMD mostly because they needed cashflow immediately to stay open long enough for some miracle to happen and magical moment come that would justify the whole endeavor and bring them back from the brink.
Except that moment can't come while AMD coasts on fumes. It's funny. They keep playing musical chairs with management, cull staff like terminators marching across fields of humans, stretch product generations for all their products across far longer time periods than they were EVER intended to be...
At this point, they're so far behind the schedule they should have had that Zen better be a HUGE leapfrog over the last generation of performance or it's going to be the final bit of crap that stubbornly clings to your anus before the end.
FlushedBubblyJock - Sunday, February 15, 2015 - link
I agree they have smelled of death for years but now they really need another K6-2 or we will hear the overnight announcement and then the endless fan whining will reach a shrill level never before seen nor heard.It's torture seeing the crybaby fail for so long, they couldn't just be a success, they are a victim, and a failure, and are to be hated for it.
akamateau - Monday, February 23, 2015 - link
Meyers was an idiot. They also gave Qualcomm the means to compete with them for next to nothing. What does $65 million buy these days?Alexvrb - Saturday, February 14, 2015 - link
Selling their old IP doesn't preclude them from entering that market again. They have some fairly competent low-power tablet chips, so they're not miles off in that regard. But those chips are x86. Perhaps after their new ARM-ISA chips for servers come out they would consider building a mobile ARM chip for phones.TheJian - Friday, February 13, 2015 - link
Software caught up? How many games us DX12 again? Ahh, there's the kicker. How long did it take for DX11 to catch on? This is a band-aid for a severed limb, while AMD continues to bleed people and money. Can a company do a decade re-org? We may find out, sadly. AMD should have avoided consoles like NV did. NV was correct in saying it would detract from their CORE products. We see it has from AMD and if you're going to do that, that new product you spent your R&D wad on for a few years needs to replace it handily. Consoles clearly aren't doing that. They cost AMD 30%+ of their employees, less R&D than NV now, less revenue now etc. Meanwhile NV exploding up with more than a few record quarters recently. CORE products count.If AMD's next gpu isn't a HUGE hit...The apus will just be squeezed by the ARM/Intel race. That is, ARM racing up to notebook/desktop and Intel racing down to mobile. Maybe AMD has an easy lawsuit after NV proves IP theft all over in mobile, but other than that I don't see many surprises ahead that are massive money makers for AMD. There debt is now more than the company market value. ouch. Well, after checking the pop the last few weeks just under I guess, but you get the point. Management has ruined AMD in the last decade. Too bad for the employees/consumers, we're watching a great company go down.
anandreader106 - Friday, February 13, 2015 - link
@TheJian "NV was correct in saying it would detract from their CORE products."Nvidia couldn't offer the integrated solution that Microsoft and Sony were both looking for. After getting design wins for the original Xbox and PS3, do you really think Nvidia didn't try to get another win this generation? They made that statement after all three major console players awarded AMD with their business.
And by the way, the GPUs in XB1 and PS4 are indeed using AMD's CORE gpu product. I think it's fascinating that you interpret those design wins and resulting revenue stream in a negative light.
http://www.forbes.com/sites/greatspeculations/2014...
Andrew LB - Friday, February 13, 2015 - link
That's not true at all. nVidia didn't want to use up all of it's limited wafer production at TSMC with very low profit chips, especially when it might cut into the number of proper graphics chips they can have manufactured. Sony/Microsoft wanted inexpensive consoles that didn't incur a loss on each unit sold like they did last generation which is why both the PS4 and Xbone are basically 3 year old laptop hardware that doesn't have the power to run modern graphics (PC Ultra Settings) at 1080p/30. The only way either console has been hitting 1080p is by dumbing down the quality as seen in every single multi-platform game.anandreader106 - Saturday, February 14, 2015 - link
Sorry Andrew I disagree. AMD is not licensing those chips, they are selling them. There is higher margin to be had because of that. I do not know what the exact margins are, if you do please share, but there is no way you can say with such certainty that it was a bad move on AMD's part to produce those chips.FlushedBubblyJock - Sunday, February 15, 2015 - link
Well industry commenters including articles here at Anand said what Andrew said, and noted the console wins for AMD are very low margin items.What is AMD going to "make it's money on" according to you for the 3 or 4 years the consoles lifespans run ? They get one year of oomph, then the rest is tiny residual sales - is that "a good profit plan" ?
The pro AMD'er above linked Forbes and it's talking about 2013 sales...ROFL - proving my above point, which is of course easily surmised common sense.
One year, minor profit - 3-4 years next to nothing = "a good plan ?"
At least AMD tied in the fan boy set groping with "Mantle and consoles!!!!" for the win... I admit that had many foaming at the mouth with glee for quite some time.
Alexvrb - Sunday, February 15, 2015 - link
I don't really see what your point is. The designs used in the consoles are primarily using technology they already built. Existing CPU cores, existing GCN cores. The R&D investment was minimal... they made money on consoles. Perhaps not much money, but it's still a positive flow of income and the consoles continue to sell in reasonable numbers. Some money is better than no money, and they need all the help they can get.akamateau - Monday, February 23, 2015 - link
Sony and Microsoft didn't invite nVidia OR Intel to the table.akamateau - Monday, February 23, 2015 - link
Dx12 = Mantlegame studios LOVE Mantle as they can write EPIC games.
Direct x12 is several orders of magnitude BETTER than Direct x11.
beginner99 - Saturday, February 14, 2015 - link
Problem is once software (games) catch up and actually make use of the better efficiency and get more complex the lower IPC of APUs will again become apparent.Samus - Sunday, February 15, 2015 - link
Further proof that AMD CPU's just aren't properly optimized in mainstream applications and API's.nissangtr786 - Wednesday, February 18, 2015 - link
Back in real world intel 32nm sandy bridge from early 2011 non trigate transistors which only happened on 22nm destroy 28nm steamroller in ipc and performance per watt and have far stronger memory controllers and floating point performance. Intel so far ahead if intel were on 32nm and amd were on 20nm intel would still lead performance per watt in cpu design.ppi - Friday, February 13, 2015 - link
AMD CPUs have slower single-thread performance, but typically more cores. Therefore, they certainly significantly benefit from software optimized for multi-threading. And removing any bottleneck is always a good thing. After all, even i3 saw significant performance improvement.amilayajr - Saturday, February 14, 2015 - link
Are you blind? i3 was paired with a dedicated graphics you numb nuts. Look at the test set up. AMD APU beats the shit out of i3 intel integrated graphics anytime. lol.That's why this test is biased. Why compare AMD APU with an i3 + dedicated GPU. I commented about it and it got deleted lol. Thanks. Please do this test again, AMD APU vs Intel integrated graphics. Now we're talking.
Ryan Smith - Saturday, February 14, 2015 - link
"I commented about it and it got deleted lol."Just to be clear, no comments have been deleted from this article.
D. Lister - Saturday, February 14, 2015 - link
Right, the problem is, this wasn't really a head-to-head comparison between AMD and Intel CPUs. The title is: "Star Swarm, DirectX 12 AMD APU Performance Preview". The purpose of putting an Intel CPU with an Nvidia dGPU was not to make AMD look bad, but to test the new API with a completely different setup to see how its benefits scale across various hardware platforms.FlushedBubblyJock - Sunday, February 15, 2015 - link
Yes, where is the midrange AMD card ? It's not working is my answer. That we can expect, so the reviewers are excused.akamateau - Monday, February 23, 2015 - link
The mid range AMD card isn't necessary. That is the whole point of Mantle and Dx12. GCN and HSA are working as designed. Now when AMD releases high bandwidth memory in APU cache the APU's will really fly. nVidia is at least a year away from HBM and in Intel silicon it's not even a glimmer on the horizon.i3-4330 can ONLY be competitive with GeForce 770. Take away 770 and i3-4330 becomes a joke.
akamateau - Monday, February 23, 2015 - link
Actually AMD A10-7800 APU is LIGHT YEARS faster than i3-4330. Intel's i3 is only competitve when the Geforce 770 is used. Take away the nVidia GeForce 770 and the Intel i3-4330 alone would be a joke.Dx12 is basically Mantle and Intel HD IGP can't compete.
akamateau - Monday, February 23, 2015 - link
It's not about reducing CPU overhead, it's about allowing the IGP to run at full efficiency, especially running gaming software with tens of thousands of draw calls.Direct x11 was crippled intentionally to keep Intel HD IGP competitive. Mantle changed that game.
Now gaming studios can develop games of truly epic proportions and inexpensive hardware can run it!
akamateau - Monday, March 6, 2017 - link
If Anand ran the same DX12 benchmarks using RX480 the results would be quite different.What Anand did not tell you was they disabled Asynch Compute. This of course results in favorable Intel bench scores as no GTX board runs well with it enabled.
Why would Anand omit such critical data and run a bench test without using the best AMD dGPU AIB available?
In fact DX12 Explicit Multi-adaptor would show that 2 RX 480 boards crushes GTX 1080 for far less money.
Another fact the online media will twist themselves into knots trying to discredit.
EMA is NOT Crossfire or SLI. It is BETTER.
akamateau - Monday, March 6, 2017 - link
It's too bad that Anand LIED.Anand disabled Asychronous Compute which is NOT supported by NVidia.
AMD GPU's can accept non-serial data streams from ALL cpu cores as AMD GPU;s manage that data. This is a massive performance multiplier for DX12 3d game engines and does make a huge difference while running Star Swarm.
DX12 also support Explicit Multi-adaptor which would scale 2x RX480 and allow performance exceeding GTX 1080.
"Async Compute Praised by Several Devs; Was Key to Hitting Performance Target in DOOM on Consoles"
http://wccftech.com/async-compute-praised-by-sever...
The consoles both run 8 core Jaguar APU's. And they fire on "all 8 cylinders".
Both XBOX DX11.X and PS4 GNM and GNMX use extensions that support Asynchronous Compute.
While this is a GPU feature, this feature allows the CPU to process and send data to the shader pipelines in a far faster rate. This reduces CPU bottleneck.
NVidia can do this and in fact unless disabled, ALL GTX boards run DX12 3d gaming engines SLOWER.
Did Anand tell the readers that they were disabling this MAJOR PERFROMANCE ENHANCING FEATURE?
Ian Cutress - Friday, February 13, 2015 - link
It's an i3-4330, forgot to put it in. :)
FlushedBubblyJock - Sunday, February 15, 2015 - link
I'm guessing one of AMD's main new high-end cards doesn't work properly with DX12 yet (testing hassles), so that's why we only have an nVidia 770 in the comparison.
akamateau - Tuesday, February 24, 2015 - link
You missed the entire point of the benchtest. The Intel i3-4330 with Intel HD IGP is ONLY competitive with an AMD A10-7800 or A8-6800 APU if a good midrange nVidia AIB GPU is added.
AMD A10-7800 = Intel i3-4330 + nVidia GeForce 770.
That is staggering.
DirectX 12 is the best friend that AMD has ever had.
I would really like to see ALL Intel HD IGPs benched against the A10-7800. I bet even the i5 and i7 get trashed by AMD.
nissangtr786 - Wednesday, February 18, 2015 - link
It says it's an i3-4330. Anyway, finally, after 5 or 6 years of Intel CPUs owning AMD CPUs in DX11 games, the DX12 release will let AMD CPUs finally play games the way their GPUs intended, instead of with massive bottlenecks. I've seen a lot of people on forums upgrade to GPUs costing £200-£300 and get worse performance than people with an i3 and a £100 GPU, just because of a CPU bottleneck, and they don't realise their AMD CPU has been bottlenecking them: the CPU isn't utilised 100% while the GPU is pegged at something like 20-50%. Good news all round, I say, as it means no more wasted GPU performance.
akamateau - Tuesday, February 24, 2015 - link
Why buy an nVidia GPU AIB? Just get an AMD APU. It's cheaper than the Intel choice AND you do not need to spend money on a GPU!!! If you insist on an Intel IGP then you need the nVidia GPU AIB. That increases your system cost by about $300 over what the same system would cost with the AMD APU.
Wait until Carrizo hits the bricks. Intel graphics is an embarrassment.
akamateau - Monday, February 23, 2015 - link
I wonder if AnandTech has the journalistic guts to show the results of the i3-4330 WITHOUT the benefit of the GeForce 770 AIB? I for one want to see just how bad Intel HD IGP really is.
Even better would be a bench test that shows at what point an Intel i3, i5 or i7 compares to the AMD A10-7800 running Star Swarm.
DirectX 12 is the best friend that AMD has ever had. All thanks to Mantle.
Xailter - Friday, February 13, 2015 - link
Do you think, with Carrizo (image compression) 'apparently' boasting up to 2x performance and DX12 reducing the CPU problem, that the next-gen APUs will be capable of 1080p 60 fps gaming? Maybe not for brand new games, but...
SaberKOG91 - Friday, February 13, 2015 - link
Possibly. The combined memory bandwidth savings of DX12 (or Mantle) and texture compression are certainly intriguing. Not to mention the boost that hUMA provides by not needing to copy memory blocks from CPU to GPU regions of RAM.
FlushedBubblyJock - Sunday, February 15, 2015 - link
This puts AMD at further risk, since DX12 is pumping up nVidia cards to Mantle performance, so they will surpass AMD.
Alexvrb - Sunday, February 15, 2015 - link
They were strictly talking about APUs. So it doesn't matter whether they used DX12 or Mantle; both will benefit Carrizo greatly, and yeah, it may very well achieve decent 1080p gaming, at least with the higher-end chips. For discrete cards DX12 should be fairly neutral if game developers don't heavily optimize for one architecture (use Nvidia prebuilt middleware, for example). So it really depends on what each company deploys in the coming months and years.
akamateau - Tuesday, February 24, 2015 - link
You don't understand the benchtest. A10-7800 = Intel i3-4330 + nVidia GeForce 770.
The Intel i3-4330 just cannot compete with AMD WITHOUT the GeForce 770.
Embarrassing for Intel.
This was the whole point of Mantle: to force Microsoft into evolving DirectX 11 several generations to be competitive with Mantle.
akamateau - Tuesday, February 24, 2015 - link
Notice how Anand doesn't embarrass Intel by testing the i3-4330 without the nVidia 770 on board.
HisDivineOrder - Saturday, February 14, 2015 - link
No.
akamateau - Tuesday, February 24, 2015 - link
Probably faster RAM would be necessary. But who knows. A fully GCN- and HSA-compatible design could make a huge difference. I would really like to see AMD build dual-socket APUs!
Flunk - Friday, February 13, 2015 - link
I guess we now know why the current-gen consoles all use a lot of low-power CPU cores. With less API overhead, the less-powerful cores make more sense. This is good for AMD in the short term at least, because they have poor single-thread performance in Bulldozer-derived cores. But it probably means there is even less reason to upgrade your CPU for gaming. It's rare that a software change can yield such big results, so let's hope that DirectX 12 becomes popular for all the big blockbuster games; the less taxing ones don't need it anyway.
mojobear - Friday, February 13, 2015 - link
Just because DX12 seems to level the playing field for single-threaded performance doesn't negate the possibility of better MHz or core scaling. In fact, I read that lower-level APIs will likely take greater advantage of mainstream hex- and octa-core CPUs.
Flunk - Friday, February 13, 2015 - link
Where are we going to get the GPUs that need those hex- and octa-core CPUs (which Intel's roadmap lacks in the mainstream even for the next 2 generations)? It seems like everything will be GPU-limited, unless game developers come up with something to do with the excess CPU power, like better AI, physics or something I can't even think of right now. It would be great if they did that, but I suspect that the short-term result will be that the CPU won't factor into game performance very much.
Flunk - Friday, February 13, 2015 - link
Also, I didn't say that DX12 levels the playing field for single-threaded performance; it doesn't. It does, however, validate AMD's "more, slower cores" policy.
mojobear - Friday, February 13, 2015 - link
Based on the batch submission times, it looks like DX12 reduces AMD submission times to the same level as Intel's. I thought that speaks to individual thread performance.
FlushedBubblyJock - Sunday, February 15, 2015 - link
If AMD has a "more, slower cores" policy, that right there tells us why they will disappear. I certainly hope you made that up in your head.
HighTech4US - Friday, February 13, 2015 - link
Frame rates of only 10 or 11 on Mid Quality settings for AMD's A8/A10 APUs seem to point out the need for an external discrete GPU and to not use the IGP part of the APU. Even on low settings you only get 20 or 22 FPS from the IGP.
gonchuki - Friday, February 13, 2015 - link
It's not a game, you know? This is a benchmark purposefully built to stress test any CPU+GPU combination. The FPS you see here is just a number; you have to read it as a performance index, just like 3DMark scores. See how a GTX 770 only gets 30 FPS at low quality in DX11? Has there ever been a real game where running it at low quality yields such performance on a 770? There's your answer.
takeship - Friday, February 13, 2015 - link
I would gently point out that it's quite clear in the graphs that Star Swarm does not stress a 770 significantly, but is crazily CPU limited. Hence the drop between low/medium settings of *only* 4 fps with DX12, and the 70+ fps average with an i3. That said, it appears the bottleneck shifts once you reach medium quality - or Star Swarm's extreme preset doesn't really up the load significantly.
Andrew LB - Friday, February 13, 2015 - link
Star Swarm is a demo/benchmark tool that was made by Oxide for AMD and is highly tuned specifically to AMD architecture, specifically their APUs. It focuses far too much on specific memory operations and not a wide variety of GPU workloads that would be comparable to the kind of work a GPU does while running games. To be honest, I don't think this can be called a benchmark tool; it should instead be called an AMD Marketing Tool.
HighTech4US - Saturday, February 14, 2015 - link
Not a very good marketing tool if it makes AMD's APU IGPs look like crap on DX12.
Th-z - Saturday, February 14, 2015 - link
Oh really? Explain why it's specifically tuned for AMD architecture, what numbers support your claim, which specific memory operations you mean, and why a CPU test meant mainly to stress API draw calls needs a wide variety of GPU workloads.
HighTech4US - Thursday, February 26, 2015 - link
Quote: It's not a game, you know?
It is being used in games, you know?
http://www.anandtech.com/show/8962/the-directx-12-...
Quote: It should be noted that while Star Swarm itself is a synthetic benchmark, the underlying Nitrous engine is relevant and is being used in multiple upcoming games. Stardock is using the Nitrous engine for their forthcoming Star Control game, and Oxide is using the engine for their own game, set to be announced at GDC 2015. So although Star Swarm is still a best case scenario, many of its lessons will be applicable to these future games.
But with AMD IGPs having frame rates of 11 for Mid Quality settings we can see that AMD IGPs will be utterly useless for any of those games.
okp247 - Friday, February 13, 2015 - link
Cheers for the follow-up, guys. Much appreciated! One little thing: maybe next time pair the APUs up with at least DDR3-2133 when doing iGPU tests?
K_Space - Friday, February 13, 2015 - link
+1
Ian Cutress - Friday, February 13, 2015 - link
Apologies, there was a miscommunication on my part. I did specifically run the APUs (and Intel) at their maximum supported frequency defined by the manufacturer, so in this case it means the AMDs were at 2133 and the i3 was at 1600. I've updated the configuration table.
okp247 - Friday, February 13, 2015 - link
Ah, good job. Thanks.
SaberKOG91 - Friday, February 13, 2015 - link
What, no Mantle comparison? I'm disappointed.
Ian Cutress - Friday, February 13, 2015 - link
With the drivers required for DX12, for whatever reason the Mantle version of Star Swarm did not seem to initiate. This may be a driver flag or something similar not enabling the APU integrated graphics to engage Mantle. Of course it's all a work in progress still. No GTX 770 + Mantle numbers, obviously.
0VERL0RD - Friday, February 13, 2015 - link
To be perfectly honest, I would've preferred the discrete card to be AMD-powered for this DX12 comparison. I can't imagine Nvidia optimising their drivers as much for AMD CPUs when both companies compete on the GPU front.
gonchuki - Friday, February 13, 2015 - link
It's actually a lose-lose situation for AMD CPUs here: their own GPU drivers are bad at CPU scaling (with both Intel and AMD), so we are not really losing much even if Nvidia purposefully harms their performance on AMD CPUs.
Andrew LB - Friday, February 13, 2015 - link
nVidia's drivers do not purposefully harm performance while coupled with AMD CPUs. It's AMD's own CPUs that simply suck. If nVidia intentionally had code in their drivers that targeted AMD processors so they'd look bad, that would be a violation of the law, and AMD would sue their pants off.
0VERL0RD - Saturday, February 14, 2015 - link
It doesn't take much to cripple supported hardware features using unoptimised data paths, or to just plain ignore them altogether.
Illegal practices like this have been used before!
Remember these!
http://www.agner.org/optimize/blog/read.php?i=49
http://techreport.com/news/8547/does-intel-compile...
http://www.osnews.com/story/22683/Intel_Forced_to_...
http://www.extremetech.com/computing/193480-intel-...
http://wccftech.com/nvidia-disables-physx-support-...
http://hardforum.com/showthread.php?t=1799010
Ian Cutress - Friday, February 13, 2015 - link
As Ryan doesn't have APUs in house, I ran these numbers on the APUs I have, but the best AMD GPUs I have are HD 7970s or an R7 240, neither of which is DX12-capable for this testing. We're not all testing in one big office where we can easily get hardware from each other - it's a transatlantic shipping process between Ryan and myself.
MikeMurphy - Friday, February 13, 2015 - link
For the purpose of a performance preview, you made the right choice.
FlushedBubblyJock - Sunday, February 15, 2015 - link
He just said he had no choice, since his AMD cards are not working.
0VERL0RD - Friday, February 13, 2015 - link
The preview was about DX12 AMD APU performance, but then it goes on to highlight that AMD's beta drivers suck in comparison, using a discrete Nvidia card. Is it too much to ask for AMD hardware in an AMD review? We already knew AMD's DX12 driver needs work from the previous review!
Andrew LB - Friday, February 13, 2015 - link
He also used an older nVidia driver in the review, even though the Catalyst driver is the latest.
HighTech4US - Saturday, February 14, 2015 - link
So you want to hide the fact that AMD makes crappy drivers.
FlushedBubblyJock - Sunday, February 15, 2015 - link
AMD is still working on the drivers for the cards Ian has, and may be "working on them" for some time to come.
coburn_c - Friday, February 13, 2015 - link
Watch AMD screw this up by not including DX12 support for the next three iterations.
silverblue - Saturday, February 14, 2015 - link
I'm not sure what you're getting at. Their GCN parts support it now.
FlushedBubblyJock - Sunday, February 15, 2015 - link
He's paranoid (or realistic) since AMD has given him more than ample reason to be so over many years.
MrCommunistGen - Friday, February 13, 2015 - link
All of the new-gen games that are having issues with high system requirements need this, stat. I have a friend who has an old Phenom II X6 1055T + HD 7950 that plays *basically* everything out there. He wanted to get Dying Light but is afraid to buy the game because there's a decent chance it'll be unplayable. Without optimizations he'd probably have to shell out hundreds of dollars upgrading his rig... or go for the path of least resistance and buy an Xbone or PS4.
coburn_c - Friday, February 13, 2015 - link
So Microsoft picks up the optimization slack that all the ports dropped. It'll probably just make the studios even lazier and we'll end up right back here in no time. How can a game run on a 1.8GHz Jaguar core but not a Phenom II?
mikato - Friday, February 13, 2015 - link
AMEN! Call of Duty: Ghosts! Don't just haphazardly port the games, making your game run like dog poo.
HotBBQ - Friday, February 13, 2015 - link
"With this relatively low batch count the benefits of DirectX 11 are still present but diminished." I think you mean 12 here, right?
Ryan Smith - Friday, February 13, 2015 - link
Yes. Thank you for pointing that out.
yannigr2 - Friday, February 13, 2015 - link
Very nice follow-up. Thank you.
Marthisdil - Friday, February 13, 2015 - link
I'd still buy the Intel processor. AMD sucks.
Andrew LB - Friday, February 13, 2015 - link
Agreed. I spent $189 on my i5-4670K at Micro Center and it clocks to 4.4GHz, completely annihilating anything AMD makes.
LoneWolf15 - Friday, February 13, 2015 - link
For this to have more meaning for me, I'd want to know which Core i3 CPU was used, so that I could know both its clock speed and which version of HD Graphics (4400? 4600?) is being used in the comparison.
Ryan Smith - Friday, February 13, 2015 - link
Intel i3-4330, but we're not using the iGPU. We only have DX12 drivers for AMD and NVIDIA GPUs at this time.
ppi - Friday, February 13, 2015 - link
And this... is THE reason why AMD pushed Mantle.
amilayajr - Friday, February 13, 2015 - link
I don't get this review: the Core i3 is using a dedicated GPU versus an AMD APU without a dedicated graphics card? Is this right...? Why don't you do a review where AMD and Intel can really go head to head? AMD APU vs Intel CPU with integrated graphics; then I would be happy with this test. This is a pretty stupid test, I'd say. In all fairness, I think the AMD APU outshined the i3 easily in this test, since there's no help from a dedicated GPU. Please redo the test.
Ryan Smith - Friday, February 13, 2015 - link
The point was to look at two things: 1) AMD vs. Intel, both using the same dGPU. 2) To see if DX12 benefits AMD's iGPUs. Intel DX12 drivers are not currently available, so an Intel vs. AMD iGPU comparison is not currently possible.
0VERL0RD - Friday, February 13, 2015 - link
Unfortunately, using an Nvidia dGPU voids 1): Nvidia won't optimise drivers for AMD (they prioritise Intel) whilst AMD & Nvidia compete for market share. Neither will adopt features created by the opposition unless forced to do so. It would also have been nice to see if all those extra AMD cores were being fully utilised.
D. Lister - Friday, February 13, 2015 - link
The problem is that AMD GPUs like Intel CPUs more than their own brand. So ironically, with an AMD GPU, the AMD CPUs would have looked slightly worse compared to their Intel counterparts than they do with Nvidia GPUs. Also, Nvidia drivers often tend to be more consistent in performance across multiple platforms, which is why nearly 9 out of 10 times, in non-GPU tests, professional reviewers/benchmarkers go with an Nvidia GPU.
0VERL0RD - Saturday, February 14, 2015 - link
After all the recent GameWorks fracas between Red & Green, I'd rather a so-called AMD review used AMD. History repeating itself: http://www.xbitlabs.com/news/multimedia/display/20...
FlushedBubblyJock - Sunday, February 15, 2015 - link
I bet for the PhysX support, nVidia told AMD they needed to send a couple of engineers over so they could hook more than the 1st active core, and AMD sent back a few emails, some paranoia, and no engineers, including maybe a 2-minute rude phone call demanding nVidia do all the work or else...
silverblue - Saturday, February 14, 2015 - link
Well, if AMD drivers use more CPU cycles, it's never going to look good for a current AMD CPU, even an FX-8... unless you clock them to buggery.
0VERL0RD - Saturday, February 14, 2015 - link
lol. The review only really tested AMD's driver using the APUs' iGPUs, when they could easily have added an AMD dedicated GPU. A review claiming to be about AMD's DX12 driver should at least have had a dedicated GPU from AMD somewhere in it.
silverblue - Saturday, February 14, 2015 - link
...but they did. Did you not see the original article?
http://www.anandtech.com/show/8962/the-directx-12-...
0VERL0RD - Saturday, February 14, 2015 - link
All GPUs tested in the original article were using an i7-4960X: http://www.anandtech.com/show/8962/the-directx-12-...
silverblue - Sunday, February 15, 2015 - link
"A review claiming to be about AMD's DX12 driver should at least have had a dedicated GPU from AMD somewhere in it." You didn't mention CPUs.
0VERL0RD - Sunday, February 15, 2015 - link
Lol, I wondered why you referred me to the original review when discussing AMD APUs. Those discrete results, whilst valid, only speak for Intel. I wanted to see APUs using discrete AMD hardware, not just Nvidia, who previously have even disabled features when detecting non-Nvidia cards on board! Restricting the test to Nvidia stops us seeing how Mantle or Dual Graphics scales on APUs using DX12 in comparison to an AMD dGPU.
Pissedoffyouth - Friday, February 13, 2015 - link
Interesting; I have a mini-ITX A10-7800 and it plays games well at 1280x1024.
FlushedBubblyJock - Sunday, February 15, 2015 - link
HD 3000 plays games well too. You have to remember that here, and generally at online enthusiast sites, only the greatest framerates at the highest resolutions with the best screen colors and minimum delays get a half nod in between the intense criticism and fan fighting. So yes, enjoy your gaming rig.
Mat3 - Friday, February 13, 2015 - link
Nice article. Would really like to see some comparisons with something like an FX-8350.
amilayajr - Saturday, February 14, 2015 - link
Can you please set up a test without the i3 using a dedicated GPU? Intel integrated graphics vs AMD APU, that's what we need to see. It's biased to partner an i3 with a dedicated GPU against an AMD APU; a dedicated GPU will kick the crap out of AMD APUs. This test totally shows how great AMD APUs are, considering they didn't even need a dedicated GPU and still managed to keep up close to an i3 with a dedicated GPU.
Ryan Smith - Saturday, February 14, 2015 - link
Unfortunately we do not currently have WDDM 2.0 drivers for Intel, so we can only test NVIDIA and AMD GPUs.
beffin - Saturday, February 14, 2015 - link
It took them this long to reverse engineer Mantle? I'm quite disappointed in you.
D. Lister - Saturday, February 14, 2015 - link
Development of DX12 is older than Mantle, and also, AMD has been a development partner for DX12 from the start. Remember, search engines (like Google or Bing) are very useful for looking up this sort of stuff.
FlushedBubblyJock - Sunday, February 15, 2015 - link
So AMD stole DX12 ideas and flaked out their Mantle, good to know.
FlushedBubblyJock - Sunday, February 15, 2015 - link
PS - I'm so glad our greatest reviewer here continues to give AMD credit for forcing Intel with Mantle to do a DX12... Yes, what wondrous minds do appear with a mini-AMD mantle reindeer.
D. Lister - Sunday, February 15, 2015 - link
https://scalibq.wordpress.com/2014/03/27/who-was-f...
"AMD was a part of all DX12 development, and was intimately familiar with the API details and requirements"
http://www.extremetech.com/gaming/177407-microsoft...
"In fact, as some of you may recall, an AMD executive publicly stated a year ago that there was no “DirectX 12″ on the Microsoft roadmap."
http://techreport.com/review/26239/a-closer-look-a...
"I think it's entirely possible AMD has known about D3D12 from the beginning, that it pushed ahead with Mantle anyhow in order to secure a temporary advantage over the competition"
:) That right there is the AMD marketing milkshake that brings all the fanboys to their yard, to put it crudely.
zodiacfml - Saturday, February 14, 2015 - link
Good for AMD, but it still doesn't solve its problems in application performance or raw CPU performance, where the Intel equivalent is faster and lower power at the same time. It could be useful, though, for improving the AMD-based gaming consoles.
FlushedBubblyJock - Sunday, February 15, 2015 - link
What just happened is Intel and nVidia "got Mantle". Now AMD's high end is further threatened, whilst they carry a huge driver-rewrite burden - dangerous for them and not dangerous for nVidia or Intel with all their spare $. So what just happened is AMD just got another kill-the-zombie chop to the forehead - with the +12 giant two-handed axe.
amilayajr - Saturday, February 14, 2015 - link
This review/benchmark is not good. Why does the i3 use a dedicated GPU while it's being rammed against an AMD APU? Can't you do a test where Intel HD Graphics goes head to head with an AMD APU? By the looks of it, the AMD APU is the winner here, being able to produce that much performance from integrated graphics. Any chance you can do just Intel vs AMD APU graphics?
D. Lister - Saturday, February 14, 2015 - link
"Any chance you can do just Intel vs AMD APU graphics?" No, because it has already been done:
http://www.anandtech.com/show/8291/amd-a10-7800-re...
silverblue - Sunday, February 15, 2015 - link
...but not in this test.
D. Lister - Saturday, February 14, 2015 - link
It would be interesting to see how Intel's iGPUs fare with DX12 compared to DX11. I mean, there has to be something substantial in it for Intel to be a part of DX12 development from the start; otherwise they just spent time and resources to benefit their competition.
NightAntilli - Sunday, February 15, 2015 - link
Ok. We got some AMD APUs in the mix now :) I still want to know whether the scaling is limited to 4 threads or not. Please test it again using the data of the A10-7850 and compare it to an FX-6xxx and FX-8xxx.
silverblue - Monday, February 16, 2015 - link
I wouldn't hold out much hope for a significant performance boost as regards Bulldozer-based CPUs. We should've seen it from Mantle before now, though admittedly AMD does stand to benefit a little more here. I'd be interested in seeing if the G3258 benefits less than its APU/Athlon competition the more we look at Mantle/DirectX 12.
NightAntilli - Monday, February 16, 2015 - link
It's not really a hope, but simply a question of whether it makes sense to have more than 4 threads. Mantle has been shown to reduce CPU overhead, but I haven't seen it actually divide the CPU tasks across multiple threads as well as DX12 does. I might be wrong, but it would still be good to see. Theoretically they could've tested this with the Intel i7 CPUs, but their single cores are too strong, so the bottleneck is already removed before more than 4 threads can be utilized. With the weak cores of the FX, we can see whether DX12 actually scales or not.
akamateau - Monday, March 6, 2017 - link
Anand disabled Async Compute, which natively supports multi-core processing and non-serial data streams to the GPU. They went to a horse race and broke legs!
abianand - Monday, February 16, 2015 - link
Good article. Please allow me to offer my unsolicited critique anyway. Is there any reason the Core i3 is missing from the last two charts? Its exclusion makes the article incomplete in a sense. There is a reason I'm asking this. A better way to lay all the charts out in this article would have been to have the items in the same consistent order in all the charts (e.g. i3, A8, A10).
E.g. the first 3 charts have the i3, A10 and A8 in that order.
The 4th chart has the i3, A8 and A10.
The 5th and 6th charts have the most damaging (meaning most misrepresentative) change, in my opinion. These two charts have the GTX 770, A10 and A8 in that order. And they look exactly the same (black and red bars) as the 4th chart, which has the i3, A8 and A10 in that order!
Now I guess most readers would see the first few charts and assume that in the 5th and 6th charts too, the i3 is listed first and the two AMD APUs are listed after it. That is, they will assume that the first item in the 5th and 6th charts is the i3, whereas it is actually a GTX 770. They will think, incorrectly, that the i3 takes a lead over the APUs, when actually it is the GTX 770 that is taking the lead.
Maybe I'm a doofus, but this is exactly what I did at my first read and thought "whoa, the Core i3 is equivalent to the APUs in all other tests but so much faster in the last two charts?". Only later did I read the last two charts again and realise that it is the GTX 770 that is taking the naturally massive lead over the others.
Sorry for the long text, but it is the only way I know to convey what I had in mind. Otherwise, a good article, and I'm looking forward to more consistent graphs in the future!
Ryan Smith - Monday, February 16, 2015 - link
"Is there any reason the core-i3 is missing out in the last two charts??"Intel WDDM 2.0 drivers are not available at this time. So we can test for the CPU but not the GPU.
abianand - Tuesday, February 17, 2015 - link
Thanks, Ryan! The other things still stand, though ;-)
c4t - Monday, February 16, 2015 - link
Did you use dual-rank memory in the APU iGPU test? Kaveri needs it, and the 7850K is fine with DDR3-2400 and above, gaining more and more graphically.
c4t - Monday, February 16, 2015 - link
http://www.computerbase.de/2014-01/amd-kaveri-arbe...
Shadowmaster625 - Monday, February 16, 2015 - link
How about some memory scaling tests for APUs on DX12?
thunderising - Monday, February 16, 2015 - link
And how would the results look if an AMD GPU were used?
akamateau - Monday, March 6, 2017 - link
EXACTLY. Anand omitted the exquisite little fact that they DISABLED Asynchronous Compute, which quite effectively crippled the AMD CPU and gave Anand the results they wanted: show Intel in the best light possible and AMD in the worst.
Essentially Anand lied to the readers.
beepboy - Tuesday, February 17, 2015 - link
All thanks to Apple for pushing Intel in the right direction.
akamateau - Monday, February 23, 2015 - link
The test I want to see is the Intel i3-4330 WITHOUT the GeForce AIB. Let's see just how much AMD silicon really outclasses Intel HD IGP. Windows 10 and DX12 appear to be the best friends that AMD has right now. Mantle has done its job: it got Microsoft to evolve DX12 light years from where DX11 was.
One wonders if the intent was to deliberately cripple AMD APUs. Mantle really is the tail wagging the dog!
Revdarian - Thursday, February 26, 2015 - link
Agreed. I ended up looking at the graphs over and over trying to determine if I had missed something, since we were in the GPU subsection talking about APU performance and the critical part, the actual iGPU performance, was missing from this article! What a waste of time this was for me.
akamateau - Monday, March 6, 2017 - link
Why did you run Star Swarm with ANY NVidia GTX AIB? Why did you neglect to tell the readers that Async Compute was disabled?
It is an UNDENIABLE FACT that AMD GPUs and dGPUs ALL support DX12 fully and NVidia's do not.
Using an Nvidia GPU AIB in effect cripples AMD performance.
This was a completely misleading piece.
akamateau - Monday, March 6, 2017 - link
"But with that in mind, our results from bottlenecking AMD’s APUs point to a clear conclusion. Thanks to DirectX 12’s greatly improved threading capabilities, the new API can greatly close the gap between Intel and AMD CPUs. At least so long as you’re bottlenecking at batch submission."When you decide to run RX 480 instead of GXT and do not DISABLE MAJOR PERFORMANCE enhancing features you will find that AMD does quite a bit better.
How come you are no longer running 3d Mark API Drawcall Overhead Feature test? Because it shows Intel and NVidia as woefully poor performers?
akamateau - Monday, March 6, 2017 - link
DID YOU TELL YOUR READERS THAT YOU DISABLED ASYNCHRONOUS COMPUTE? WHY DIDN'T YOU RUN AN RX 480? BETTER YET, 2X RX 480, AS EXPLICIT MULTI-ADAPTER DOES PROVIDE A VIRTUALLY 1-FOR-1 GAIN IN PERFORMANCE.
THIS IS THE BIG SECRET THAT THE INTEL-BIASED ONLINE MEDIA DOES NOT WANT CONSUMERS TO UNDERSTAND!!!
EMA IS NOT CROSSFIRE. IT IS BETTER!!!