
49 Comments


  • jkr06 - Saturday, February 27, 2010 - link

    From all the articles I read, one thing is still not clear to me. I have a laptop with a Core i5 (which has an IGP) and an NVIDIA 330M. Can I utilize the Optimus solution with just software, or do laptop manufacturers specifically need to add something to truly make it work?
  • JonnyDough - Friday, February 19, 2010 - link

    was a wasted effort. Seems sort of silly to switch between two GPUs when you can just use one powerful one and shut off parts of it.
  • iwodo - Thursday, February 18, 2010 - link

    First, the updates: they should definitely set up something like the Symantec or Panda cloud database, where user inputs are stored, shared, and validated worldwide. The number of games that need to be profiled is HUGE. Unless there is some simple and damn clever way of catching games, inputting the exe name of every single game and needed app sounds insane to me. There has to be a much better way to handle this.
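For illustration, the exe-name-based profile matching being criticized here might look something like the sketch below. All names and entries are invented; NVIDIA's actual profile format is not public, and this is only a toy model of the idea:

```python
# Hypothetical sketch of exe-name-based GPU profile matching -- the scheme
# the comment above calls unwieldy. All entries here are invented.
GPU_PROFILES = {
    "crysis.exe": "dGPU",    # known 3D-heavy titles get the discrete GPU
    "notepad.exe": "IGP",    # lightweight apps stay on integrated graphics
}

def pick_gpu(exe_name: str, default: str = "IGP") -> str:
    """Return which GPU a process should run on, falling back to the IGP."""
    return GPU_PROFILES.get(exe_name.lower(), default)
```

The obvious weakness, as the comment notes, is that every new game needs a new entry, which is why a shared, crowd-validated database is attractive.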

    I now hope Intel plays nice and puts a VERY power-efficient IGP inside Sandy Bridge to work with Optimus, instead of burning even more transistors on GPU performance.
  • secretanchitman - Saturday, February 13, 2010 - link

    Any chance of this coming to the MacBook Pros? I'm hoping when they get updated (soon, I hope) they will have some form of Optimus inside. If not, something Radeon-based.
  • strikeback03 - Thursday, February 11, 2010 - link

    As I don't need a dGPU, I would like to see a CULV laptop with the Turbo33 feature but without any dGPU in order to save money. Maybe with Arrandale.
  • jasperjones - Wednesday, February 10, 2010 - link

    Jarred,

    As you imply, graphics drivers are complex beasts. It doesn't make me happy at all that now Optimus makes them even more complex.

    Optimus will likely require more software updates (I don't think it matters whether they are called "driver" or "profile" updates).
    That puts you even more at the mercy of the vendor. Even prior to Optimus, it bothered me that NVIDIA's driver support for my 3.5-year-old Quadro NVS 110M is miserable on Win 7. But with Optimus, it is even more critical to have up-to-date software/driver support for a good user experience! Furthermore, software solutions are prone to bugs. For example, did you try to see if Optimus works when you run virtual machines?

    Also, I don't like wasting time installing updates. Why can't GPUs just work out of the box like CPUs?

    Lastly, these developments are completely contrary to what I believe are necessary steps towards more platform independence. Will NVIDIA ever support Optimus on Linux? While I suspect the answer is yes, I imagine it will take a year and a half at the very least.
  • obiwantoby - Wednesday, February 10, 2010 - link

    I think it is important to note that the demo video, even though it is a .mov, works in Windows 7's Windows Media Player. It works quite well, even with hardware acceleration.

    Keep encoding videos in H.264; it works on both platforms in their native players.

    No need for Quicktime on Windows, thank goodness.
  • dubyadubya - Wednesday, February 10, 2010 - link

    "Please note that QuickTime is required" FYI: Windows 7 will play the .mov file just fine, so no need for Blowtime. Why anyone would use a codec that will not run on XP or Vista without Blowtime is beyond me. For anyone wanting to play .mov files on XP or Vista, go get QuickTime Alternative.
  • beginner99 - Wednesday, February 10, 2010 - link

    Did I read that right: HD video is always decoded on the dGPU, even if the (Intel) IGP could deal with it?

    I mean, it sounds nice, but is there also an option to prevent certain apps from using the dGPU?

    Or to prevent usage of the dGPU completely, like when one really needs the longest battery time possible? Some users like to have control themselves.
    Intel's IGP might offer worse quality with its video decode feature (but who really sees that on a laptop LCD?), but when travelling the whole day and watching movies, I would like to use as little power as possible.
  • JarredWalton - Wednesday, February 10, 2010 - link

    It sounds like this is really just a case of the application needing a "real profile". Since I test x264 playback using MPC-HC, I had to create a custom profile, but I think that MPC-HC detected the GMA 4500MHD and decided that was perfectly acceptable. I couldn't find a way to force decoding of an .mkv x264 video within MPC-HC, but other video playback applications may fare better. I'll also check tomorrow to see what happens with WMP11 (once I install the appropriate VFW codec).
  • Hrel - Tuesday, February 09, 2010 - link

    Now that I've calmed down a little, I should add that I'm not buying ANY GPU that doesn't support DX11 EVER again. We've moved past that; DX11 is necessary; no exceptions.
  • JarredWalton - Tuesday, February 09, 2010 - link

    I'm hoping NVIDIA calls me in for a sooper seekrit meeting some time in the next month or two, but right now they're not talking. They're definitely due for a new architecture, but the real question is going to be what they put together. Will the next gen be DX11? (It really has to be at this point.) Will it be a tweaked version of Fermi (i.e. cut Fermi down to a reasonable number of SPs), or will they tack DX11 functionality onto current designs?

    On a different note, I still wish we could get upgradeable notebook graphics, but that's probably a pipe dream. Consider: NVIDIA makes a new mGPU that they can sell to an OEM for $150 or something. OEM can turn that into an MXM module, do some testing and validation on "old laptops", and sell it to a customer for $300 (maybe even more--I swear the markup on mobile GPUs is HUGE!). Or, the OEM could just tell the customer, "Time for an upgrade" and sell them a new $1500 gaming laptop. Do we even need to guess which route they choose? Grrr....
  • Hrel - Tuesday, February 09, 2010 - link

    It STILL doesn't have a screen with a resolution of AT LEAST 1600x900!!! Seriously!? What do I need to do? Get up on rooftops and scream from the top of my lungs? 'Cause I'm almost to that point. GIVE ME USABLE SCREENS!!!!!!!
  • MrSpadge - Wednesday, February 10, 2010 - link

    Not everyone's eyes are as good as yours. When I asked some 40+ people if I got the location right and showed it to them via Google Maps on my HTC Touch Diamond, they refused to even think about it without their glasses.
  • strikeback03 - Thursday, February 11, 2010 - link

    I've never had people complain about using Google Maps on my Diamond. Reading text messages and such, yes; and for a lot of people, forget about using the internet, since they have to zoom the browser so far in. But the maps work fine.
  • GoodRevrnd - Tuesday, February 09, 2010 - link

    Any chance you could add the MacBook / Pro to the LCD quality graphs when you do these comparisons?
  • JarredWalton - Tuesday, February 09, 2010 - link

    Tell Anand to send me a MacBook for testing. :-) (I think he may have the necessary tools now to run the tests, but so far I haven't seen any results from his end.)
  • MrSpadge - Tuesday, February 09, 2010 - link

    Consider this: Fermi and the following high-end chips are going to be beasts, but they might accelerate scientific/engineering apps tremendously. But if I put one into my workstation, it's going to suck power even when not in use. It's generating noise, it's heating the room, and it's making the air stuffy. This could easily be avoided with Optimus! It's just that someone has to ditch the old concept of "desktops don't need power saving" once and for all. 20 W for an idle GPU is not OK.

    And there's more: if I run GP-GPU the screen refresh often becomes sluggish (see BOINC etc.) or the app doesn't run at full potential. With Optimus I could have a high performance card crunch along, either at work or BOINC or whatever, and still get a responsive desktop from an IGP!
  • Drizzt321 - Tuesday, February 09, 2010 - link

    Is there a way to set this to specifically only use the IGP? So, turn off the discrete graphics entirely? Like if I'm willing to suffer lower performance but need the extra battery life. I imagine if I could, the UL50Vf could equal the UL80Vt pretty easily in terms of battery life. I'm definitely all for the default being Optimus turned on... but let's say the IGP is more efficient at decoding that 720p or 1080p, yet NVIDIA's profile says it's got to fire up the discrete GPU. There goes quite a bit of battery life!
  • kpxgq - Wednesday, February 10, 2010 - link

    Depending on the scenario, the discrete GPU may use less power than the IGP; i.e., say a discrete GPU working at 10% vs. an IGP working at 90%.

    Kind of like how using a lower gear on inclines uses less fuel than a higher gear going the same speed, since the engine works less hard. The software should automatically do the math.
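To put rough numbers on that idea: with a crude linear power model, the comparison is simple arithmetic. All wattages below are invented for illustration, and real GPU power does not actually scale linearly with load:

```python
def power_draw(idle_w: float, max_w: float, utilization: float) -> float:
    """Crude linear model: power at a given utilization (0.0 to 1.0)."""
    return idle_w + utilization * (max_w - idle_w)

# Invented figures: an IGP near full load vs. a discrete GPU barely working.
igp_at_90 = power_draw(idle_w=2.0, max_w=12.0, utilization=0.9)   # 11.0 W
dgpu_at_10 = power_draw(idle_w=3.0, max_w=25.0, utilization=0.1)  # 5.2 W
print(dgpu_at_10 < igp_at_90)  # True under these made-up numbers
```

Whether the crossover ever happens in practice depends on the real idle floor of the dGPU, which is exactly what the switching logic would need to know.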
  • JarredWalton - Tuesday, February 09, 2010 - link

    You can manually set applications to only use the IGP instead of turning on the dGPU, but to my knowledge there's no way to completely turn off the dGPU and keep it off. Of course, when the GPU isn't needed it is totally powered off, so you don't lose battery life unless you start running apps that want to run on the GPU.
  • macroecon - Tuesday, February 09, 2010 - link

    Well, I was getting ready to pull the trigger over the weekend to buy a UL30Vt, but I'm glad that I waited. While this is not a revolutionary feature, it does make laptops that lack it less valuable in my opinion. The video that Jarred posted toward the end of the article really demonstrates the value of on-the-fly GPU switching. I think I'll wait a bit longer for Optimus, and also a DirectX 11 NVIDIA GPU, to hit the market. Thanks for the coverage, Jarred!
  • lopri - Tuesday, February 09, 2010 - link

    Not to rain on NV's parade, but I'd much prefer it if Optimus did its thing 100% in hardware. In an ideal world a software solution can do the same job as a hardware solution, but I've seen some caveats with software solutions - on desktops, admittedly. Instead of trying to 'detect' the apps, detect 'loads' and take care of it in hardware.

    Some might know what I'm talking about.
  • JarredWalton - Tuesday, February 09, 2010 - link

    The only problem with this is that the software is needed to work between Intel and NVIDIA hardware. There's also the concern of wanting something to NOT run on the dGPU (for testing purposes, or to save battery life). With IGPs reaching the point where they can handle most video tasks, you wouldn't want to power up the dGPU for H.264 decoding, as power requirements would jump several watts.

    Of course, if you could have NVIDIA IGP and dGPU it might be possible to do more on the hardware side, but Arrandale, Pineview, Sandy Bridge, etc. make it very unlikely that we would see another NVIDIA IGP any time soon.
  • acooke - Tuesday, February 09, 2010 - link

    OK, so this is awesome (particularly with Lenovo and CUDA mentioned). But how is the encrypted profile update driver yadda yadda stuff going to work with Linux?

    I'm a software developer, I work with CUDA (OpenCL actually), and I use Linux. NVIDIA should worry about people like me, because we're the motor behind the take-up of Fermi, which is going to be a significant source of cash for them. Currently I can do very basic OpenCL development while on the road with my laptop using the AMD CPU driver (despite having Intel/Lenovo hardware), but being able to use a GPU would be a huge improvement (it's not that much fun running GPU code on a CPU!).
  • darckhart - Wednesday, February 10, 2010 - link

    Yes, I'm curious about this also.
  • room1oh1 - Tuesday, February 09, 2010 - link

    I hope they don't fit any brakes into a laptop!
  • MonkeyPaw - Tuesday, February 09, 2010 - link

    Yeah, it's rather unfortunate that they said it should work like a hybrid, and they have the picture of a 2010 Prius in the slide. Just goes to show that car analogies don't work! They could have just drawn the parallel to your laptop battery--when you unplug the laptop, it starts using the battery with no user intervention.
  • horseeater - Tuesday, February 09, 2010 - link

    Switchable graphics are nice, but I want external gfx cards (or enclosures for desktop gfx cards) for laptops. Just plug it in when you're home, kill precious time playing useless junk, and use the IGP when on the road.

    That being said, the UL80Vt is reportedly awesome, and improvements are surely welcome, if they don't up the price.
  • synaesthetic - Wednesday, February 10, 2010 - link

    I want external GPUs also, but I want one that can use the laptop's LCD display rather than forcing me to plug in an external display. After all, external displays aren't portable, but a ViDock isn't terribly large.
  • MasterTactician - Wednesday, February 10, 2010 - link

    But doesn't this solve the problem of an external graphics solution not being able to use the laptop's display? If the external GPU can pass the rendered frames back to the IGP's buffer via PCIe, then it's problem solved, isn't it? So the real question is: will NVIDIA capitalize on this?
  • Pessimism - Tuesday, February 09, 2010 - link

    Did Nvidia dip into the clearance bin again for chip packaging materials? Will the laptop survive its warranty period unscathed? What of the day after that?
  • HighTech4US - Tuesday, February 09, 2010 - link

    You char-lie droid ditto-heads are really something.

    You live in the past and swallow everything char-lie spews.

    Now go back to your church's web site, SemiAccurate, and wait for your next gospel from char-lie.
  • Pessimism - Tuesday, February 09, 2010 - link

    So I suppose the $300+ million in charges Nvidia took was merely a good-faith gesture for the tech community, with no basis in fact regarding an actual defect.
  • HighTech4US - Tuesday, February 09, 2010 - link

    The $300+ million was in fact a good-faith measure to the vendors who bought the products that had the defect. NVIDIA backed extended warranties and picked up the total cost of repairs.

    The defect has been fixed long ago.

    So your char-lie comment, written as if the defect still exists, deserves to be called what it is: a char-lie.
  • Pessimism - Tuesday, February 09, 2010 - link

    So you admit that a defect existed. That's more than can be said for several large OEM manufacturers.


  • Visual - Tuesday, February 09, 2010 - link

    Don't get me wrong - I really like the ability to have a long battery life when not doing anything and also have great performance when desired. And if switchable graphics is the way to achieve this, I'm all for it.

    But it seems counter-productive in some ways. If the external GPU had been properly designed in the first place (able to shut down power to unused parts of the processor, supporting low-power profiles), then we'd never have needed switching between two distinct GPUs. Why did that never happen?

    Now that Intel, and eventually AMD too, are integrating a low-power GPU inside the CPU itself, I guess there is no escaping from switchable graphics any more. But I just fail to see why NVidia or ATI couldn't have done it the proper way before.
  • AmdInside - Tuesday, February 09, 2010 - link

    Because it's hard to compete with a company that is giving away integrated graphics for free (Intel) in order to move higher-priced CPUs. In a way, AMD is doing the same with ATI: giving us great motherboards with ATI graphics at cheap, cheap prices (which in many ways are much better than Intel's much higher-priced offerings) in order to get you to buy AMD CPUs.
  • JarredWalton - Tuesday, February 09, 2010 - link

    Don't forget that no matter how many pieces of a GPU go into a deep sleep state (i.e. via power gate transistors), you would still have some extra stuff receiving power: VRAM, for example, plus various transistors/resistors. At idle, the CULV laptops are down around 8 to 10W; even the WiFi card can suck up 0.5 to 1W and make a pretty substantial difference in battery life. I'd say the best you're likely to see from a discrete GPU is idle power draw that's around 3W over and above what an IGP might need, so a savings of 3W could be a 30% reduction in power use.
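That 3W of idle overhead maps directly onto runtime. A quick worked example (the 10W system draw and 56Wh battery capacity are illustrative assumptions, not measurements from the review):

```python
def battery_hours(capacity_wh: float, draw_w: float) -> float:
    """Runtime in hours for a given average system power draw."""
    return capacity_wh / draw_w

base = battery_hours(56.0, 10.0)       # IGP-only idle: 5.6 hours
with_dgpu = battery_hours(56.0, 13.0)  # +3 W of dGPU idle overhead: ~4.3 hours
loss = 1.0 - with_dgpu / base          # ~23% of runtime gone to an idle dGPU
```

The exact percentage shifts with the baseline draw, but the shape of the result is the point: a few watts of always-on dGPU overhead costs a large fraction of idle battery life on a CULV machine.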
  • maler23 - Tuesday, February 09, 2010 - link

    I've been waiting for this article ever since it was hinted at in the last CULV roundup. The ASUS laptop is a little disappointing, especially the graphics card situation (the Alienware M11x kind of sucked up a lot of the excitement there). Frankly, I'd just take a discounted UL30Vt and deal with manual graphics switching.

    Couple of questions:

    -Any chance for a review of the aforementioned Alienware M11X soon?

    -I've seen a couple of reviews with display quality comparisons, including this one. How do the MacBook and MacBook Pros fit into the rankings?

    cheers!

    -J
  • jfmeister - Tuesday, February 09, 2010 - link

    I was anxious to get an M11x, but 2 things were bothering me:
    1- No DirectX 11 compatibility
    2- No Core i5/i7 platform

    Now there is another reason to wait for the refresh. But with Arrandale prices dropping, DX11 cards available, and Optimus, I would expect Alienware to get on the bandwagon fast with a new M11x platform and not wait 6 to 8 months for a refresh. This ultraportable is intended for gamers, and we all know that gamers are on top of their things. Optimus in the M11x's case should be a must.

    BTW, what I find funny is that Optimus looks like a revolution, but what about 3dfx 10 years ago with their 3D add-on card (Monster 3D 8MB ftw)? Switching was used back then... This looks like the same thing, except with HD video support! It took that long to come up with that?
  • JarredWalton - Tuesday, February 09, 2010 - link

    Remember that the switching back in the days of 3dfx was just in software and that the 3D GPU was always powered. There was the dongle cable situation as well. So the big deal here isn't just switching to a different GPU, but doing it on-the-fly and powering the GPU on/off virtually instantly. We think this will eventually make its way into desktops, but obviously it's a lot more important for laptops.
  • StriderGT - Tuesday, February 09, 2010 - link

    My take on Optimus:

    Optimus' roots lie with Hybrid SLI.
    Back then it was advertised as an NVIDIA-only chipset feature (NVIDIA IGP + NVIDIA GPU) for both desktops and notebooks.

    Currently NVIDIA is being rapidly phased out of PC x86 chipsets, so Optimus is the only way to at least put an NVIDIA GPU in an Intel IGP-based system. But:

    1. The only real benefit is gaming performance without sacrificing battery life in notebooks.
    2. Higher cost (in the form of the discrete GPU); Intel has 60%+ of the GPU market (all IGPs) because the vast majority do not care, or are uninformed, about game performance scaling.
    3. CUDA/PhysX are currently, and for the foreseeable future, irrelevant for mobile applications (gaming is much more relevant in comparison).
    4. Video decoding capabilities are already present in most current IGPs (except Pine Trail netbooks, which can acquire them with a cheaper dedicated chip).
    5. Netbooks will not benefit from Optimus because they lack the CPU horsepower to feed the discrete GPU and are very cost sensitive (the same reason ION1/2 is not the primary choice for netbook builders).
    6. In the desktop space, only some niche small-form-factor PC applications could benefit from such a technology; e.g., an SFF PC would need less cooling/noise during normal (IGP) operation and would become louder and more powerful while gaming (GPU).
    7. Idle/2D power consumption of most modern desktop GPUs is so low that the added complexity of a simultaneously working onboard IGP and the associated software is a no-benefit approach.
    8. Driver/application software problems might arise from the complexity of profiles and the vastly different workload/application scenarios.

    So in the end it boils down to how NVIDIA can convince the world that a discrete GPU, and its added cost, is necessary in every portable (netbook-sized and upwards) device out there. On the desktop side it will be even more difficult to push such a thing, with only noise reduction in small-form-factor PCs being of interest.

    BTW, at least now the manufacturers won't have any more excuses for the lack of a decent GPU inside some of the cheaper notebook models ($500-1000), citing battery life reasons.
    Oh well, I'll keep my hopes low after so much time being a niche market, since they might find some other excuse along the lines of the weight and space required for cooling the GPU during A/C operation... :-(

    PS: Initially posted on the Yahoo Finance forum
  • Zoomer - Tuesday, February 09, 2010 - link

    Not like it was really necessary; the Voodoo 2 used maybe 25W (probably less) and was meant for desktop use.
  • jfmeister - Tuesday, February 09, 2010 - link

    Good point! I guess I did not take the time to think about it. I was more into the concept than the whole technical side that you brought up.

    Thanks!

    JF
  • cknobman - Tuesday, February 09, 2010 - link

    Man, the M11x was the biggest disappointment out there. A weak-sauce, last-gen processor on a so-called premium, high-end gaming brand? I'll consider it once they get an Arrandale CULV and Optimus, 'cause right now, looking at the notebookreview.com forums, it is manual switching graphics, not Optimus.
  • crimson117 - Tuesday, February 09, 2010 - link

    Which processor should they have used, in your opinion?
  • cknobman - Tuesday, February 09, 2010 - link

    They should have waited another month to market and used the Core i7 ULV processors. There are already a few vendors using this proc (Panasonic is one).
  • Wolfpup - Tuesday, April 20, 2010 - link

    Optimus is impressive software, but personally I don't want it, ever. I don't want Intel graphics on my CPU. I don't want Intel graphics in my memory controller. I don't want Intel graphics. I want my real GPU to be my real GPU, not a helper device that renders something that gets copied over to Intel's graphics.

    I just do not want this. I don't like having to rely on profiles either--thankfully you can manually add programs, but still.
