The GPU

AMD's move from VLIW4 to the newer GCN architecture makes a lot of sense. Rather than being behind the curve, Kaveri now shares the same GCN 1.1 architecture as AMD's latest discrete parts; specifically the Hawaii based R9 290X and the Bonaire based R7 260X from the discrete GPU lineup. By synchronizing the architecture of their APUs and discrete GPUs, AMD is finally in a position where any performance gains or optimizations made for their discrete GPUs will feed back into their APUs, meaning Kaveri benefits as well. We have already discussed TrueAudio and the UVD/VCE enhancements, and the other major feature to come to the front is Mantle.

The difference between the Kaveri implementation of GCN and Hawaii, aside from the GPU sharing silicon with the CPU, is the addition of coherent shared unified memory, as Rahul discussed on the previous page.

AMD makes some rather interesting claims when it comes to gaming GPU performance – as shown in the slide above, ‘approximately 1/3 of all Steam gamers use slower graphics than the A10-7850K’. Given that this SKU has 512 SPs, it makes me wonder just how many gamers are actually using laptops or netbook/notebook graphics. A quick look at the Steam survey shows that the top choices for graphics are mainly integrated solutions from Intel, followed by midrange discrete cards from NVIDIA. There are a fair number of integrated graphics solutions, coming either from CPUs with integrated graphics or from laptop gaming, e.g. the ‘Mobility Radeon HD 4200’. With the Kaveri APU, AMD is clearly trying to jump over all of those, and with the unification of architectures, the updates from here on out will benefit both sides of the equation.

A small bit more about the GPU architecture:

Ryan covered the GCN 1.1 architecture in his R9 290X review, including the IEEE 754-2008 compliance, texture fetch units, registers and precision improvements, so I will not dwell on them here. The GCN 1.1 implementations on discrete graphics cards will still rule the roost in terms of sheer compute power – the TDP scaling of APUs will never reach the lofty heights of full blown discrete graphics unless there is a significant shift in the way these APUs are developed, meaning that features such as HSA, hUMA and hQ still have a way to go before they become the dominant force. The low copying overhead on the APU should nevertheless be a big boon for GPU computing, especially gaming and texture manipulation that requires CPU callbacks.

An added benefit for gamers is that each GCN 1.1 compute unit is asynchronous and can independently schedule different work. Essentially the high end A10-7850K SKU, with its eight compute units, acts as eight mini-GPU blocks for work to be carried out on.
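As a quick sanity check on those numbers (64 stream processors per compute unit is a fixed property of the GCN architecture), the A10-7850K's shader count falls straight out of its CU count:

```python
# Each GCN compute unit (CU) contains 64 stream processors (SPs),
# organized as four 16-wide SIMD units.
SPS_PER_CU = 64
a10_7850k_cus = 8

total_sps = a10_7850k_cus * SPS_PER_CU
print(total_sps)  # 512, matching the SP count quoted for the A10-7850K
```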

Despite AMD's improvements to their GPU compute frontend, they are still ultimately bound by the limited amount of memory bandwidth offered by dual-channel DDR3. Consequently there is still scope to increase performance by increasing memory bandwidth – I would not be surprised if AMD started looking at some sort of intermediary L3 or eDRAM to increase the capabilities here.
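To put a number on that bandwidth ceiling, here is a back-of-the-envelope sketch (the helper function is my own illustration, not anything AMD provides) of peak theoretical bandwidth for a dual-channel DDR3 interface with a 64-bit bus per channel:

```python
def peak_bandwidth_gb_s(transfers_mt_s, bus_bytes=8, channels=2):
    """Peak theoretical bandwidth in GB/s: mega-transfers/sec x 8 bytes
    per 64-bit transfer x number of channels."""
    return transfers_mt_s * 1e6 * bus_bytes * channels / 1e9

print(peak_bandwidth_gb_s(2133))  # 34.128 GB/s for dual-channel DDR3-2133
print(peak_bandwidth_gb_s(1866))  # 29.856 GB/s for dual-channel DDR3-1866
```

Even at DDR3-2133 this is roughly a tenth of what Hawaii's 512-bit GDDR5 interface delivers, which is why an intermediary L3 or eDRAM cache is such an obvious avenue for APUs.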

Details on Mantle are Few and Far Between

AMD’s big thing with GCN is meant to be Mantle – AMD's low level API for game engine designers intended to improve GPU performance and reduce the at-times heavy CPU overhead in submitting GPU draw calls. We're effectively talking about scenarios bound by single threaded performance, an area where AMD can definitely use the help. Although I fully expect AMD to eventually address its single threaded performance deficit vs. Intel, Mantle adoption could help Kaveri tremendously. The downside obviously being that Mantle's adoption at this point is limited at best.

Although Mantle's debut has been held back by the delayed Mantle patch for Battlefield 4 (Frostbite 3 engine), AMD was happy to claim a 2x boost in an API call limited benchmark and 45% better frame rates with pre-release builds of Battlefield 4. We were told these numbers may rise by the time the patch reaches a public release.

Unfortunately we still don't have any further details on when Mantle will be deployed for end users, or what effect it will have. Since Battlefield 4 is intended to be the launch vehicle for Mantle - being by far the highest profile game of the initial titles that will support it - AMD is essentially in a holding pattern waiting on EA/DICE to hammer out Battlefield 4's issues and then get the Mantle patch out. AMD's best estimate is currently this month, but that's something that clearly can't be set in stone. Hopefully we'll be taking an in-depth look at real-world Mantle performance on Kaveri and other GCN based products in the near future.

Dual Graphics

AMD has been coy regarding Dual Graphics, especially when frame pacing gets thrown into the mix. I am struggling to recall whether dual graphics, the pairing of the APU with a small discrete GPU for better performance, made an appearance at any point during their media presentations. During the UK presentations I asked about this specifically, with little response beyond ‘AMD is working to provide these solutions’. I pointed out that an explicit list of validated graphics pairings would help users when building systems, and that is what I would like to see.

AMD did address the concept of Dual Graphics in their press deck. In their limited testing scenario, they paired the A10-7850K (which has R7 graphics) with the R7 240 2GB DDR3. In fact their suggestion is that any R7 based APU can be paired with any G/DDR3 based R7 GPU. One further disclaimer: AMD recommends testing dual graphics solutions with their 13.350 driver build, which is due out in February, whereas for today's review we were sent their 13.300 beta 14 and RC2 builds (which at this time have yet to be assigned an official Catalyst version number).

The following image shows the results as presented in AMD’s slide deck. We have not verified these results in any way; they are included only as a reference from AMD.

It's worth noting that while AMD's performance with dual graphics thus far has been inconsistent, we do have some hope that it will improve with Kaveri if AMD is serious about continuing to support it. With Trinity/Richland AMD's iGPU was in an odd place, being based on an architecture (VLIW4) that wasn't used in the cards it was paired with (VLIW5). Never mind the fact that both were a generation behind GCN, where the bulk of AMD's focus was. But with Kaveri and AMD's discrete GPUs now both based on GCN, and with AMD having significantly improved their frame pacing situation in the last year, dual graphics is in a better place as an entry level solution to improving gaming performance. Though like Crossfire on the high-end, there are inevitably going to be limits to what AMD can do in a multi-GPU setup versus a single, more powerful GPU.

AMD Fluid Motion Video

Another aspect that AMD did not expand on much is their Fluid Motion Video technology on the A10-7850K. This essentially uses frame interpolation (from 24 Hz to 50 Hz / 60 Hz) to ensure a smoother experience when watching video. AMD’s explanation of the feature, especially in terms of presenting the concept to our reader base, is minimal at best, amounting to a single page.
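For readers unfamiliar with the idea, the cadence problem that motion interpolation tackles can be sketched in a few lines (this is my own illustrative snippet, not AMD's algorithm): each 60 Hz output frame samples the 24 fps timeline at a fractional position, and any non-zero fraction is a frame the interpolator must synthesize rather than simply repeat, as traditional 3:2 pulldown would.

```python
def source_positions(src_fps=24, dst_fps=60, n_out=5):
    """For each output frame, return (source frame index, fractional offset).
    A fraction of 0.0 means an original frame can be shown as-is; anything
    else is a frame the interpolator must synthesize from its neighbours."""
    out = []
    for i in range(n_out):
        t = i * src_fps / dst_fps  # position on the 24 fps timeline
        out.append((int(t), round(t - int(t), 2)))
    return out

print(source_positions())
# [(0, 0.0), (0, 0.4), (0, 0.8), (1, 0.2), (1, 0.6)] - only the first of
# every five 60 Hz frames lines up exactly with a 24 fps source frame
```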

380 Comments

View All Comments

  • fteoath64 - Sunday, January 19, 2014 - link

    "Now we need a new one, a fully HSA compliant HyperTransport." Yes! The dedicated people working on new supercomputers are doing exotic interconnects close to or exceeding 1 TByte/sec speeds, but limited by distance naturally. I see that for HyperTransport 3.0 one can implement 10 channels for high aggregated bandwidth, but that will use more transistors. In a budget conscious die size, using eSRAM seems to be a good trick to boost the bandwidth without overt complexity or transistor budget. The downside is eSRAM sucks constant power so it becomes a fixture in the TDP numbers. Iris Pro uses 128MB of eDRAM while Xbox One uses 32MB eSRAM. I think the least amount would be somewhere around 24MB for the x86 to be effective in getting effective RAM bandwidth high enough!
    The cascading effect is that the memory controller becomes complex and eats into the transistor budget considerably. Seems like a series of moving compromises to get the required performance numbers vs power budget for TDP.
    I am actually very excited to see an ARM chip implementing HSA!
  • Samus - Wednesday, January 15, 2014 - link

    I don't get why AMD can't compete with Intel's compute performance like they were absolutely able to do a decade ago. Have they lost all their engineering talent? This isn't just a matter of the Intel manufacturing/fab advantage.
  • zodiacfml - Wednesday, January 15, 2014 - link

    oh no, after all that, I just came away impressed with the Iris Pro. I believe memory bandwidth is needed for Kaveri to stretch its legs.
  • duploxxx - Wednesday, January 15, 2014 - link

    impressed with iris pro? for that price difference i would buy a mediocre CPU and dedicated GPU and run circles around it with any game....
  • oaf_king - Wednesday, January 15, 2014 - link

    I can point out some carpola here: "I am not sure if this is an effect of the platform or the motherboard, but it will be something to inspect in our motherboard reviews going forward." This sure discounts the major performance benefits you can achieve without faulty hardware. Search the real benchmarks on WCCF tech for A-10 7850 and be amazed. I can STRONGLY DOUBT the CPU has any issue running at 4ghz on a stock cooler/900mhz GPU. Yes the GPU overclock seems skipped over in this Anand review also, but should really pull it into the "useful" category for gaming!
  • oaf_king - Wednesday, January 15, 2014 - link

    recall AMD had some leaks suggesting 4ghz CPU / 900Mhz GPU. Is that possible after all? Apparently not all motherboards are faulty. If the TDP tops out at 148 at 4ghz, given the conservative power envelopes already placed on the chip, I'm sure it gets very good performance for between zero and ten extra dollars, and a couple seconds in the BIOS.
  • Fox McCloud - Wednesday, January 15, 2014 - link

    Maybe I was skim reading and missed it, but what are the idle power consumption figures for the A8-7600? I need a new home server and I have an ITX system, and motherboards with 6x SATA are slim pickings. It seems the manufacturers only put them on AMD ITX boards, as Intel seems to max out at about 4. I wonder what the power figures would be like if underclocked also. I might re-read the review!

    Excellent review as always guys. So in-depth, informative, technical and unbiased. This is why I love this site and trust your expert opinion :)
  • Zingam - Wednesday, January 15, 2014 - link

    AMDs PR: "The processor that your grandparents dream of!" FYEAHA!
  • keveazy - Wednesday, January 15, 2014 - link

    My i5 4440 costs the same as the a10-7850k. I don't think amd will ever compete. By the time they release something that would declare a significant jump, Intel would already have something new to destroy it by then.
  • duploxxx - Wednesday, January 15, 2014 - link

    compete to do what? general tasks in a day, just buy an SSD... cost? did check your motherboard price? GPU, did you check the 4600 performance vs a10? it runs circles around it unless you want to be stuck on low resolution with your gorgeous fast cpu.

    you see customers fool themselves not knowing what to buy for what. hey i have the best benchmarking cpu, but on daily tasks i can't even count the microseconds difference.
