
CES has wrapped up now and we’re all back home, but we’ve still got a few items to cover. While Anand was meeting with AMD on Thursday to go over some of their other tech, I got a chance to head into a separate area for a briefing on their mobile technology. There are two items I want to quickly discuss: the Radeon 7000M lineup, and Trinity.

We’ve already covered some of the 7000M parts that will ship in the very near future—quite a few laptops at CES were running these “new” GPUs. The 7400M, 7500M, and 7600M are all VLIW5 parts, which we’ve called “rebadged” GPUs. AMD pointed out (and we agree, at least in part) that calling these rebadged GPUs is a bit too harsh—reused or recycled might be a better term, particularly if you’re into the whole “green” thing. But seriously, the latest 7000M GPUs aren’t just a straight rerelease of the same silicon under a new name; we’ve pointed out in the past that as time passes, companies become more familiar with a process technology, both on the fab side and on the chip designer side. This is why, for example, initial 40nm GPUs don’t offer all the performance and features of later 40nm GPUs. AMD didn’t specifically state that the 7400M/7500M/7600M will use a new revision/spin of the existing Northern Islands cores, but it was at least implied, and it’s likely that we’ll see slightly better performance and power characteristics out of the latest batch. You can see the released specs and expected performance in the gallery below.

Okay, that’s part one of the 7000M strategy: reuse the existing Northern Islands family to occupy the value and mainstream price segments. For the second part of the strategy we don’t have any specs to reveal, but AMD did let us know one important piece of information. They have in essence drawn a line in the sand (i.e. in their product portfolio): everything 7600M and below will reuse their existing 40nm VLIW5 architecture, while all of the yet-to-be-announced parts above the 7600M (7700M/7800M/7900M) will switch to 28nm GCN (Graphics Core Next). It sounds like the mobile GPUs will use lower power variants of “Pitcairn” and “Cape Verde”, leaving “Tahiti” as a desktop-only GPU for the time being, but features like DX11.1, VCE, and ZeroCore Power Technology will be present in the higher performance 7000M parts when they launch. And just when will that be? AMD wouldn’t give us a date, but all indications are we’ll see the 7000M Southern Islands GPUs in the April/May timeframe.

AMD had a couple of laptops (okay, monstrous DTR beasts really) running in their booth to show that they had working silicon for both classes of 7000M hardware. Both notebooks are Clevo X7200 units using desktop CPUs, so they’re more proof of concept than something that most people are going to buy, but they were both happily running 3D applications. The notebook on the left has a single HD 7690M running a custom AMD demo, while the notebook on the right has CrossFire high-end 7000M hardware (presumably something in the HD 7900M class) running Aliens Vs. Predator 2.

Finally, a few people continue to ask questions about Trinity hardware. Obviously Trinity was running in a couple of demonstrations, but AMD is not yet disclosing the full hardware specs. Some have speculated that Trinity will have a GCN-based graphics core, but if you stop to think about it for a minute, that’s obviously not going to happen. GCN is coming out on TSMC’s 28nm process technology while Trinity will use GLOBALFOUNDRIES’ 32nm process; as AMD is busy working on the improved Bulldozer cores in Trinity along with upgrading the GPU, trying to bring GCN into the mix would seriously delay the whole process. The short story is that AMD (again) confirmed that Trinity is using a VLIW4 core for graphics, and it offers enhanced performance relative to the core in Llano. We’ll hopefully have final hardware in hand in the next few months to provide the full performance analysis.


16 Comments


  • djc208 - Saturday, January 14, 2012 - link

    The thing I hate about this kind of mobile strategy is that without doing your homework, the average person isn't going to know that a one number difference in their laptop could make a big difference in graphics performance and longevity.
  • GenSozo - Saturday, January 14, 2012 - link

    In addition to that, I'm a mainstream buyer who does my homework, and I'm not going to pay the big bucks (or medium bucks, as the case may be) for last gen, rehashed hardware, no matter how they spitshine it. On principle, at the very least.
  • Roland00Address - Saturday, January 14, 2012 - link

    So is AMD going to release any VLIW4 discrete graphics to CrossFire with the Trinity GPU? Or is a VLIW4 going to CrossFire with a VLIW5? Or is AMD just going to drop APU+dGPU CrossFire with Trinity?
  • JarredWalton - Sunday, January 15, 2012 - link

    Asymmetrical CrossFire is exactly that: different GPUs that still manage to (sort of, in theory) work together. That's actually the big hurdle to overcome, and I'd imagine it's why my initial testing of Llano's CrossFire with 6630M didn't pan out so well. Llano is 400 cores and 6630M is 480 cores, so I imagine ACF with VLIW4 and VLIW5 wouldn't be all that different. Here's to hoping ACF works (a lot) better when Trinity launches.
  • Wolfpup - Wednesday, January 18, 2012 - link

    Does that even WORK though to begin with? Last I paid attention to it, it sounded like CrossFire with the current 'A' CPUs and a separate GPU was basically non-functional.

    Even if drivers have improved, it still annoys me...instead of using a GPU on the CPU + a separate GPU, SOOOOOO much better all around to dump the integrated GPU, leaving either a cheaper to build CPU, or tons of transistors for more cores and cache, and then put all those extra transistors on the GPU instead—an 800 core or 960 core part instead of 480, for example.

    Granted AMD's 'A' series doesn't make me furious like Intel's worthless video does, but even still it doesn't actually make sense.
  • AlB80 - Sunday, January 15, 2012 - link

    1. VLIW5 32-bit is a very effective architecture.
    2. I think Trinity will be VLIW4 32-bit (Cayman has 64-bit support). And AMD will find a solution to glue VLIW4 and VLIW5 parts in Xfire.
  • bennyg - Sunday, January 15, 2012 - link

    Smart-bummed wordsmithery doesn't change the fact that deliberately associating a minor respin with the next gen of cards is flat out dishonest.

    And surely nvidia would get the lawyers onto them super quick if they ever said the word "green" in relation to graphics cards :-)
  • XZerg - Sunday, January 15, 2012 - link

    I don't care what kinda shitty options they provide as long as the laptop manufacturers include a TB port and there are solutions out there to exploit the PCIe bus and use an external, switchable desktop GPU. Sure, it's not the perfect solution performance-wise, but the other benefits more than make up for the slower performance.
  • bennyg - Sunday, January 15, 2012 - link

    Yep, I agree completely. But both sides would be doing themselves out of expensive mobile GPU sales... I think they'd rather sell a GF114 as a 580M rather than a 560 Ti, and a Barts as a 6970M rather than a 6870. The difference is many hundreds in RRP alone.
  • JarredWalton - Monday, January 16, 2012 - link

    External GPUs via Thunderbolt may not have a fast enough interface bandwidth. Remember that a single x16 PCIe 2.0 connection can push 8GB/s (80Gbps raw with 8b/10b encoding); PCIe 3.0 will be double that with ~16GB/s in each direction (128Gbps raw with 128b/130b encoding). Thunderbolt is up to 10Gbps in each direction, which is only 1.25GB/s; even with two TB connections you're still getting less than a third of the bandwidth of an x16 PCIe 2.0 connection.
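    For what it's worth, the arithmetic in that comment checks out. Here's a quick back-of-the-envelope sketch (all figures come from the comment above; the only assumption is treating encoding overhead as a simple efficiency multiplier):

    ```python
    # Back-of-the-envelope interface bandwidth comparison.
    # Line codes like 8b/10b trade raw line rate for signal integrity,
    # so payload bandwidth = raw rate * encoding efficiency.

    def effective_gbps(raw_gbps, encoding_efficiency):
        """Payload bandwidth after line-code overhead, in Gbps."""
        return raw_gbps * encoding_efficiency

    # PCIe 2.0 x16: 5 GT/s/lane * 16 lanes = 80 Gbps raw, 8b/10b encoding
    pcie2_x16 = effective_gbps(80, 8 / 10)        # 64 Gbps = 8 GB/s
    # PCIe 3.0 x16: 8 GT/s/lane * 16 lanes = 128 Gbps raw, 128b/130b encoding
    pcie3_x16 = effective_gbps(128, 128 / 130)    # ~126 Gbps, ~15.75 GB/s
    # First-gen Thunderbolt: 10 Gbps per link, per direction (payload)
    thunderbolt = 10.0                            # 1.25 GB/s

    print(f"PCIe 2.0 x16:  {pcie2_x16 / 8:.2f} GB/s")   # 8.00 GB/s
    print(f"PCIe 3.0 x16:  {pcie3_x16 / 8:.2f} GB/s")   # 15.75 GB/s
    print(f"Two TB links vs PCIe 2.0 x16: {2 * thunderbolt / pcie2_x16:.0%}")
    ```

    Two Thunderbolt links come out to about 31% of an x16 PCIe 2.0 slot, which is why the "less than a third" framing holds.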
