Back in December, NVIDIA contacted me to let me know that something "really big" was coming out in the near future. It's January 24 as I write this, and tomorrow is the "Optimus Deep Dive," an exclusive event with only 14 or so of the top technology websites and magazines in attendance. When Sean Pelletier from NVIDIA contacted me, he was extra excited this time and said something to the effect of, "This new technology is pretty much targeted at you, Jarred… when I saw it, I said, 'We should just call this the Jarred Edition of our mobile platform.' We can't go into details yet, but basically it's going to do all of the stuff you've been talking about for the past couple of years." A statement like that got the gears in my head churning: what exactly have I been pining for in terms of mobile GPUs of late? So, in advance of the unveiling of NVIDIA's latest technologies and products, I thought I'd put down what I really want to see; then we'll find out how well they've matched my expectations.

I've put together my thoughts before getting any actual details from NVIDIA; I'll start with those, but of course NDAs mean that you won't get to read any of this until after the parts are officially announced. Page two will begin the coverage of NVIDIA's Optimus announcement, but my hopes and expectations will serve as a nice springboard into the meat of this article. They set my expectations pretty high back in December, which might come back to haunt them….

First off, if we're talking about a mobile product, we need to consider battery life. Sure, there are some users who want the fastest notebook money can buy—battery life be damned! I'm not that type of user. The way I figure it, the technology has existed for at least 18 months to build a laptop that provides good performance when you need it while also powering down unnecessary devices to deliver upwards of six hours of battery life (eight would be better). Take one of the big, beefy gaming laptops with an 85Wh (or larger) battery: if you shut down the discrete GPU and limit the CPU to moderate performance levels, you ought to end up with a good mobile solution as well as something that can power through tasks when necessary. Why should a 9-pound notebook be limited to just two hours (often less) of battery life?

What's more, not all IGPs are created equal, and it would be nice if only certain features of a discrete GPU could power up when needed. Take video decoding as an example. The Intel Atom N270/N280/N450 processors are all extremely low-power CPUs, but they can't provide enough performance to decode 1080p H.264 video. Pine Trail disappointed us in that respect, though Broadcom's Crystal HD chips are supposed to provide the missing functionality. Well, why can't we get something similar from NVIDIA (and ATI, for that matter)? We really expect any Core i3/i5 laptop shipped with a discrete GPU to properly support hybrid graphics, and the faster a system can switch between the two ("instantly" being the holy grail), the better. What we'd really like to see is a discrete GPU that can power up just the video processing engine while leaving the rest of the GPU off (i.e. via power gate transistors or something similar). If the video engine on a GPU can do a better job than the IGP while using only a couple of watts, that would be much better than software decoding on the CPU. Then again, Intel's latest HD Graphics may make this a moot point, provided it can handle 1080p H.264 content properly (including Flash video).

Obviously, the GPU is only part of the equation, and quad-core CPUs aren't an ideal fit for such a product unless you can fully shut down several of the cores and prevent the OS from waking them up all the time. Core i3/i5/i7 CPUs have power gate transistors that can at least partially accomplish this, but the OS side of things certainly appears to be lagging behind right now. If I unplug, and I know all I'm going to be doing for the next couple of hours is typing in Word, why not let me configure the OS to temporarily disable all but one CPU core? What we'd really like to see is a Core i7 class processor that can reach idle power figures similar to Core 2 Duo ULV parts. Incidentally, I'm on a plane writing this in Word on a CULV laptop right now; my estimated battery life remaining is a whopping nine hours on a 55Wh battery, and I have yet to feel the laptop is "too slow" for the task. We haven't reached that state of technology yet, and NVIDIA isn't going to announce anything that would affect this aspect of laptops, but since they said this announcement was tailored to meet my wish list, I thought I'd mention it.
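As an aside for readers on Linux: manually parking cores is already possible today through the kernel's CPU hotplug interface in sysfs. The sketch below is a minimal, hypothetical illustration of the idea—the function name and the base-path parameter are my own inventions (the parameter exists so the logic can be exercised against a fake directory tree), and on real hardware this needs root. Windows currently offers nothing comparable beyond per-process affinity masks.

```python
# Hedged sketch: "park" all cores except cpu0 on Linux by writing to the
# sysfs CPU hotplug 'online' knobs. Needs root on a real system; the
# 'base' parameter is an assumption added here for testability.
from pathlib import Path

def park_cores(keep: int = 1, base: str = "/sys/devices/system/cpu") -> list[int]:
    """Offline every cpuN (N >= keep) that exposes an 'online' file.

    Returns the list of core indices taken offline. cpu0 typically has
    no 'online' file (it can't be unplugged), so it is skipped naturally.
    """
    parked = []
    for cpu_dir in sorted(Path(base).glob("cpu[0-9]*")):
        idx = int(cpu_dir.name[3:])          # "cpu3" -> 3
        knob = cpu_dir / "online"
        if idx >= keep and knob.exists():
            knob.write_text("0")             # 0 = offline, 1 = online
            parked.append(idx)
    return parked
```

Writing "1" back to the same files brings the cores online again, which is essentially the instant on-demand switching the article is asking for, just applied to CPU cores instead of GPUs.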

Another area totally unrelated to power use but equally important for mobile GPUs is the ability to get regular driver updates. NVIDIA first discussed plans for their Verde Notebook Driver Program back at the 8800M launch in late 2007. We first covered it in early 2008, but it wasn't until December 2008 that we received the first official Verde driver. At that time, the reference driver only supported certain products and operating systems, and it was several releases behind the desktop drivers. By the time Windows 7 launched last fall, NVIDIA managed to release updated mobile drivers for all Windows OSes, supporting 8000M series and newer hardware, at the same time and with the same version number as the desktop driver release. That pattern hasn't held in the months following the Win7 launch, but our wish list for mobile GPUs definitely includes drivers released at the same time as the desktop drivers. With NVIDIA's push on PhysX, CUDA, and other GPGPU technologies, linking the driver releases for mobile and desktop solutions would be ideal. We can't discuss AMD's plans for their updated ATI Catalyst Mobility just yet, but suffice it to say ATI is well aware of the need for regular mobile driver updates, and they're looking to dramatically improve product support in this area. We'll have more to say about that next week.

Finally, the last thing we'd like to see from NVIDIA is less of a gap between mobile and desktop performance. We understand that the power constraints of laptops inherently limit what you can do, and we're certainly not suggesting anyone try to put a 300W (or even 150W) GPU into a laptop. However, right now the gap between desktop and mobile products has grown incredibly wide—not so much for ATI, but certainly for NVIDIA. The current top-performing mobile solution is the GTX 280M, but despite the name this part has nothing to do with the desktop GTX 280. Where the desktop GTX 285 now offers 240 shader cores (SPs) clocked at 1476MHz, the mobile part is essentially a tweaked version of the old 8800 GTS 512. Mobile parts currently top out at 128 SPs running at up to 1500MHz (1463MHz for the GTX 280M), which works out to a bit more than half the theoretical shader performance of the desktop part bearing the same name. The bandwidth side of things isn't any better: around 159GB/s for the desktop versus only 61GB/s for notebooks.
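To put a rough number on that gap, multiply shader count by shader clock as a crude proxy for theoretical throughput. This is deliberately simplistic (it ignores ROPs, texturing, and memory bandwidth), and the figures plugged in are simply the ones quoted above:

```python
# Back-of-the-envelope comparison: SP count x shader clock as a crude
# proxy for theoretical shader throughput. Not a benchmark.

def relative_shader_throughput(sps: int, clock_mhz: int,
                               ref_sps: int, ref_clock_mhz: int) -> float:
    """Ratio of (SPs x clock) versus a reference part."""
    return (sps * clock_mhz) / (ref_sps * ref_clock_mhz)

# GTX 280M (128 SPs @ 1463MHz) vs. desktop GTX 285 (240 SPs @ 1476MHz)
ratio = relative_shader_throughput(128, 1463, 240, 1476)
print(f"Mobile/desktop shader throughput: {ratio:.2f}")  # ~0.53
# Memory bandwidth is worse still: 61GB/s vs. 159GB/s, or about 0.38.
```

So "a bit more than half" is exactly where the shader math lands, and the bandwidth deficit means real-world gaming results often fall even further behind.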

As we discussed recently, NVIDIA is set to release Fermi/GF100 for desktop platforms in the next month or two. Obviously it's time for a new mobile architecture, but what we really want is a mobile version of GF100 rather than a mobile version of GT200. One of the key differences is GF100's support for DirectX 11, and with ATI's Mobility Radeon 5000 series already starting to show up in retail products, NVIDIA is behind the 8-ball in this area. We don't have a ton of released or upcoming DX11 games just yet, but all things being equal we'd rather have DX11 support than not. Considering Fermi looks to be a beast in terms of power consumption, performance sacrifices will obviously be necessary in order to keep power in check. GF100 looks to spawn several parts with varying numbers of SPs, so it may be as simple as cutting the SP count in half and toning down the clocks. Another option is for NVIDIA to take a hybrid approach and tack DX11 features onto the G92 or GT200 architecture rather than reworking GF100 into a mobile product. Whatever route they take, NVIDIA really needs to maintain feature parity with ATI's mobile products, and right now that means DX11 support.

So, that's my wish list right now. I don't ask for much, really: give me mobile performance that has feature parity with desktop parts, with a moderate performance hit in order to keep maximum power requirements in check, and do all that with a chip that's able to switch between 0W power draw and normal power requirements in a fraction of a second as needed. Simple! Now it's time to begin coverage of the actual presentation and find out exactly what NVIDIA is announcing. So turn the page and let's delve into the latest and greatest mobile news from NVIDIA.

A Brief History of Switchable Graphics
49 Comments

  • MasterTactician - Wednesday, February 10, 2010 - link

    But doesn't this solve the problem of an external graphics solution not being able to use the laptop's display? If the external GPU can pass the rendered frames back to the IGP's buffer via PCI-E, then it's problem solved, isn't it? So the real question is: will nVidia capitalize on this?
  • Pessimism - Tuesday, February 09, 2010 - link

    Did Nvidia dip into the clearance bin again for chip packaging materials? Will the laptop survive its warranty period unscathed? What of the day after that?
  • HighTech4US - Tuesday, February 09, 2010 - link

    You char-lie droid ditto-heads are really something.

    You live in the past and swallow everything char-lie spews.

    Now go back to your church's web site semi-inaccurate and wait for your next gospel from char-lie.
  • Pessimism - Tuesday, February 09, 2010 - link

    So I suppose the $300+ million in charges Nvidia took was merely a good-faith gesture for the tech community, with no basis in fact regarding an actual defect.
  • HighTech4US - Tuesday, February 09, 2010 - link

    The $300+ million was in fact a good faith measure to the vendors who bought the products that had the defect. nVidia backed extended warranties and picked up the total cost of repairs.

    The defect has been fixed long ago.

    So your char-lie comment as if it still exists deserves to be called what it is. A char-lie.
  • Pessimism - Tuesday, February 09, 2010 - link

    So you admit that a defect existed. That's more than can be said for several large OEMs.
  • Visual - Tuesday, February 09, 2010 - link

    Don't get me wrong - I really like the ability to have a long battery life when not doing anything and also have great performance when desired. And if switchable graphics is the way to achieve this, I'm all for it.

    But it seems counterproductive in some ways. If the external GPU had been properly designed in the first place, able to shut down power to unused parts of the chip and support low-power profiles, then we'd never have needed to switch between two distinct GPUs. Why did that never happen?

    Now that Intel, and eventually AMD too, are integrating a low-power GPU inside the CPU itself, I guess there is no escaping from switchable graphics any more. But I just fail to see why NVidia or ATI couldn't have done it the proper way before.
  • AmdInside - Tuesday, February 09, 2010 - link

    Because it's hard to compete with a company that is giving away integrated graphics for free (Intel) in order to move higher-priced CPUs. In a way, AMD is doing the same with ATI: giving us great motherboards with ATI graphics at cheap, cheap prices (which in many ways are much better than Intel's far pricier offerings) in order to get you to buy AMD CPUs.
  • JarredWalton - Tuesday, February 09, 2010 - link

    Don't forget that no matter how many pieces of a GPU go into a deep sleep state (i.e. via power gate transistors), you'd still have some extra circuitry receiving power: VRAM, for example, plus assorted transistors/resistors. At idle the CULV laptops are down around 8 to 10W; even the WiFi card can suck up 0.5 to 1W and make a pretty substantial difference in battery life. I'd say the best you're likely to see from a discrete GPU is idle power draw that's around 3W over and above what an IGP might need, so a savings of 3W could be a 30% power use reduction.
  • maler23 - Tuesday, February 09, 2010 - link

    I've been waiting for this article ever since it was hinted at in the last CULV roundup. The ASUS laptop is a little disappointing, especially the graphics card situation (the Alienware M11X kind of sucked up a lot of excitement there). Frankly, I'd just take a discounted UL30VT and deal with manual graphics switching.

    Couple of questions:

    -Any chance for a review of the aforementioned Alienware M11X soon?

    -I've seen a couple of reviews with display quality comparisons, including this one. How do the MacBook and MacBook Pro fit into the rankings?

    cheers!

    -J
