GTX 980M and 970M Notebooks and Conclusion

Today's launch of the GTX 980M and 970M is about as close to a "hard launch" as we see with notebook GPUs. Quite a few notebooks should be available for order with the new chips, though it could take a couple of weeks or more for orders to process. We were hoping to have the MSI GT72 prior to today's launch, but as noted earlier it should arrive within the next day (or perhaps even within a few hours). We'll post a follow-up Pipeline article with performance results from some of our standard gaming and graphics benchmarks as soon as we're able. In the meantime, here's the current list of notebooks that support the new GPUs.

Upcoming GeForce GTX 980M/970M Notebooks

Manufacturer | Model | GPU | Size
ASUS | G751 | GeForce GTX 980M or GTX 970M | 17”
MSI | GT72 | GeForce GTX 980M or GTX 970M | 17”
MSI | GS60 | GeForce GTX 970M | 15”
MSI | GS70 | GeForce GTX 970M | 17”
Gigabyte | P35 | GeForce GTX 970M | 15”
Gigabyte | Aorus X7 | 2x GeForce GTX 970M (SLI) | 17”
Clevo | P150/P157 | GeForce GTX 980M or GTX 970M | 15”
Clevo | P170/P177 | GeForce GTX 980M or GTX 970M | 17”
Clevo | P650 | GeForce GTX 980M or GTX 970M | 15”

For their part, NVIDIA has provided performance numbers for both GPUs at different settings in a variety of games, but there's no comparison with other GPUs so the numbers are out of context. As a preview of what to expect, and considering several of the games use the built-in benchmark tools, here's what NVIDIA is reporting; all of the following games were tested at 1080p with the settings indicated:

NVIDIA Performance Results (1920x1080)

Game | Settings | GTX 980M (FPS) | GTX 970M (FPS)
Batman: Arkham Origins | Max, FXAA High, PhysX High | 60 | 45
Battlefield 4 | Ultra | 66 | 49
Bioshock Infinite | Ultra, DX11_DDOF | 91 | 69
Crysis 3 | Very High, 4xMSAA | 36 | 26
Far Cry 3 | Ultra, 4xMSAA | 51 | 38
Hitman Absolution | Ultra | 74 | 65
Metro: Last Light | Very High, SSAA | 36 | 27
StarCraft II | Max, 4xMSAA | 68 | 62
Tomb Raider | Ultimate | 69 | 51

Most of the games are apparently being run at near-maximum quality settings (Batman skips 4xMSAA but does have PhysX enabled), which is good for putting as much of the bottleneck on the GPU as possible. StarCraft II and Hitman Absolution appear to be CPU limited, which isn't too surprising for StarCraft II as it has always been heavily influenced by CPU performance. On average, the GTX 980M outperforms the GTX 970M by 28%, even including the CPU-limited games; if we ignore StarCraft II and Hitman Absolution, the 980M is 34% faster on average.
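
For anyone who wants to sanity check those averages against the table above, here's a minimal sketch that simply averages the per-game 980M/970M frame rate ratios. (The "all games" figure comes out around 29% rather than 28% here, presumably because NVIDIA's underlying results weren't whole FPS values.)

```python
# Frame rates from NVIDIA's table above: (GTX 980M, GTX 970M) at 1080p.
results = {
    "Batman: Arkham Origins": (60, 45),
    "Battlefield 4": (66, 49),
    "Bioshock Infinite": (91, 69),
    "Crysis 3": (36, 26),
    "Far Cry 3": (51, 38),
    "Hitman Absolution": (74, 65),
    "Metro: Last Light": (36, 27),
    "StarCraft II": (68, 62),
    "Tomb Raider": (69, 51),
}

def average_advantage(games):
    """Arithmetic mean of the 980M-over-970M ratio for the selected games."""
    ratios = [results[g][0] / results[g][1] for g in games]
    return sum(ratios) / len(ratios) - 1.0

cpu_limited = {"StarCraft II", "Hitman Absolution"}
gpu_limited = [g for g in results if g not in cpu_limited]

print(f"All nine games:        {average_advantage(list(results)):.0%}")  # ~29%
print(f"Excluding CPU-limited: {average_advantage(gpu_limited):.0%}")    # ~34%
```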

Update: Our own performance preview of the GTX 980M is now available. The short summary is that the GTX 980M performs at about the same level as the desktop GTX 770, though obviously with some newer features like DX12 support and VXGI. It's also twice as fast as the GTX 860M and 35% faster than the GTX 880M on average.

One of the problems we're starting to run into with mobile GPUs getting so fast is that many laptops still top out at a 1920x1080 display, and even at maximum detail there are plenty of games that will easily break 60 FPS and may start running into CPU bottlenecks. For that reason, NVIDIA is billing the GTX 980M as a mobile GPU that targets playable frame rates at resolutions beyond 1080p, and we'll likely see more high-end notebooks ship with 2560x1440, 3K, or even 4K displays. It's probably a bit too much to assume that 3K gaming at 60 FPS will happen on most titles at maximum quality with the 980M, as games like Metro: Last Light and Crysis 3 can be very taxing, but we're definitely getting close to being able to max out settings on most games.
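
To put those higher resolutions in perspective, here's a quick back-of-the-envelope comparison of pixel counts relative to 1080p. (Treating "3K" as 2880x1620 is an assumption on my part; laptop "3K" panels vary.)

```python
# Rough measure of extra GPU load: pixels rendered per frame vs. 1080p.
# "3K" is assumed to be 2880x1620 here; some laptop panels use other sizes.
resolutions = {
    "1080p (1920x1080)": (1920, 1080),
    "1440p (2560x1440)": (2560, 1440),
    "3K    (2880x1620)": (2880, 1620),
    "4K    (3840x2160)": (3840, 2160),
}

base_pixels = 1920 * 1080
for name, (width, height) in resolutions.items():
    ratio = (width * height) / base_pixels
    print(f"{name}: {ratio:.2f}x the pixels of 1080p")
```

Frame rates don't fall off perfectly linearly with pixel count, but with 4K pushing four times the pixels of 1080p it's easy to see why demanding titles like Metro: Last Light and Crysis 3 remain a stretch at 60 FPS.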

NVIDIA didn't provide specific numbers for their previous generation mobile GPUs, but they do note that the GTX 980M should be around 40% faster than the GTX 880M, which is no mean feat. Compared to the first-generation Kepler part, the GTX 680M, the difference is even larger: the 980M is roughly twice as fast as the GTX 680M that launched back in 2012. The GTX 970M is likewise supposed to be about 40% faster than the previous generation GTX 870M and on average twice as fast as the GTX 860M.

Wrapping up, we've provided a full gallery of slides from the NVIDIA presentation for those that are interested. We're very much looking forward to some hands-on time testing the GTX 980M, as it should prove to be quite a formidable GPU. That's not too surprising, as GM204 has already proven potent on the desktop, with a smaller and more efficient chip able to basically match and generally exceed the performance of the larger and more power-hungry GTX 780 Ti. The result is that this is as close as notebooks have come to matching desktop performance (for a single GPU) in as long as I've been reviewing notebooks.

Looking forward, performance is always improving and we'll certainly see even faster GPUs in the next year. We also know that NVIDIA is capable of making larger GPUs, so we're still missing the true "Big Maxwell" (i.e. GM200 or GM210). As with GF110 and GK110, I don't expect we'll ever see that chip in consumer notebooks, but we might see GM204 return with even more SMMs enabled. Until NVIDIA comes out with an even bigger and faster Maxwell variant, however, this is the top mobile GPU, and that means it will be priced as such.

We should see GTX 980M in gaming notebooks starting around the $2000 price point (give or take), with GTX 970M launching in notebooks starting at $1600. Based on MSI's pricing of their GT72, it also looks like the GTX 980M may have as much as a $350 price premium over the GTX 970M, or at least that's the difference in pricing for end users. (Ouch.) We're covering the notebooks that have been announced in separate Pipeline articles, and we should see some of them at the usual places like Newegg and Amazon. Stay tuned for our performance results from the MSI GT72, which will go up as soon as we get the laptop and can run some tests.

Comments
  • chizow - Tuesday, October 7, 2014

    Except most professionals don't want to be part of an ongoing beta project; they want things to just work. Have you followed the Adobe CS/Premiere developments and the OpenCL support fiasco with the Mac Pros, and how much of a PITA they have been for end-users? People in the real world, especially in these pro industries that are heavily iterative, time-sensitive, and time-intensive, cannot afford to lose days, weeks, or months waiting for Apple, AMD, and Adobe to sort out their problems.
  • JlHADJOE - Tuesday, October 14, 2014

    This. As cool as open standards are, it's also important to not get stuck. The industry has shown that it will embrace open when it works. The majority of web servers run open source software, and Linux has effectively displaced UNIX from all sectors except extreme big iron.

    But given a choice between open and "actually working", people will choose "working" every time. IE6 became the standard for so long because all of the "open" and "standards-compliant" browsers sucked for a very long time.
  • mrrvlad - Thursday, October 9, 2014

    I have to work with both CUDA and OpenCL (for AMD GPUs) for compute workloads. The main advantage of CUDA is their toolset - AMD's compiler is ages behind and does not give the developer sufficient control over the code being generated. It's more of a "randomizing" compiler than an optimizing one... I would never even think about using OpenCL for GPU compute if I were starting a new project now.
  • Ninjawithagun - Sunday, June 28, 2015

    The problem is that what you are stating is only half the story. Unfortunately, each company does have a superior solution. Going with OpenCL is a compromise at best because the code is not automatically optimized for each vendor's specific hardware architecture. Maybe in a perfect world we would have an open, non-proprietary standard across all programming schemes, but it's just not possible. Competition, and more importantly profit, is what drives these companies, and neither AMD nor Nvidia will budge. Both parties are just as guilty as the other in this respect.
  • atlantico - Wednesday, October 15, 2014

    Apple will *never* embrace CUDA. OpenCL is an important part of the future vision and strategy of Apple, whatever Nvidia is pushing, Apple is not buying.
  • RussianSensation - Tuesday, October 7, 2014

    If all Apple cared about was performance/watt, the Mac Pro would not feature AMD's Tahiti cores. There is even a paragraph dedicated to explaining the importance of OpenCL for Apple:

    "GPU computing with OpenCL.
    OpenCL lets you tap into the parallel computing power of modern GPUs and multicore CPUs to accelerate compute-intensive tasks in your Mac apps. Use OpenCL to incorporate advanced numerical and data analytics features, perform cutting-edge image and media processing, and deliver accurate physics simulations."
    https://www.apple.com/mac-pro/performance/

    Apple is known to switch between NV and AMD. Stating that AMD is not in good graces with Apple is ridiculous considering the Mac Pro uses the less power-efficient Tahiti rather than GK104. And that is for a reason -- because the HD 7990 beats the 690 at 4K and destroys it in compute tasks -- which is proof that performance/watt is not the only factor Apple looks at for their GPU selection.
  • Omoronovo - Wednesday, October 8, 2014

    I didn't mean to imply that it was *literally* the only factor taken into account; they clearly wouldn't use a GPU that cost $3,000 if a competing one with similar (but worse) performance/watt was $300.

    I was trying to emphasize that, all other factors being equal - i.e. standards compliance, compatibility, supply, etc. - performance/watt is the prime metric used to determine hardware choices. The Tahiti vs. GK104 comparison is a great one - AMD pushed OpenCL extremely heavily and their support for it was essentially across the board, whereas nVidia was slow on the uptake of OpenCL support as they were pushing CUDA.
  • bischofs - Tuesday, October 7, 2014

    I may be wrong, but it seems like the only reason the mobile chips are catching up to the desktop is that desktop cards haven't really improved in 5+ years. Instead of pushing the limits on the desktop - building architectures based on pure performance rather than efficiency and then scaling them down - they are doing the opposite, so the performance difference is shrinking. It is strange that they are marketing this as a good thing, given that a tower has far more power and cooling available and there should therefore be a large performance gap.
  • Razyre - Tuesday, October 7, 2014

    Not at all. If anything, Hawaii shows this: the 290X goes balls to the wall in OpenCL, while Nvidia's cards are more conservative and gaming optimised, yet they still pack an as-good and usually better punch in frame rates.

    Cards are getting too hot at 200-300W; you need silly cooling solutions which are either expensive, or make your card larger or louder.

    The Maxwell series is phenomenal; it drastically improves frame rates while halving the power consumption compared to the equivalent chips from two years ago.

    GPUs have come on SO far since 2009, yet you're claiming they've barely improved. Let's pit a 5870 against a 290X. The 7970, a 2012 GPU, is about twice as powerful as the 5870 (slightly less in places), and the current 290X is about 30% better than a 7970. So you're effectively looking at a theoretical improvement of roughly 160% over 4 years (I say this because the 290X is now a year old), which works out to an average of 30%ish improvement per year.

    Considering the massive R&D costs, and the costs associated with moving to smaller dies to fit more transistors on a chip (which increases heat - hence Nvidia's Maxwell being a great leap, since they can now jam way more transistors into the GK110 replacement), GPUs have come on leaps and bounds.

    The only reason it might look like they haven't is that instead of jumping from, say, 1680x1050 to 1920x1080, we jumped to 3840x2160 - a FOUR TIMES increase in pixel count.

    Mobile GPUs have made even more impressive progress, really. That chart showing the gap closing between mobile and desktop GPU performance isn't too far off.
  • bischofs - Tuesday, October 7, 2014

    I don't know much about the AMD stuff you are talking about, but I can offer some admittedly anecdotal evidence. Software, and more importantly games, on the PC have been pretty stagnant as far as resource requirements go; I used a GTX 260 for about 5 years and never had problems running anything until recently. Games are the largest driver of innovation, yet most games are built for consoles, with a large percentage also being built for mobile devices. I've been playing games that look pretty much the same at 1080p for 5+ years on my PC; the only thing that has been added is more graphical features. Processors further support my argument: I remember the jump from Core 2 to Nehalem was astounding, but from then on (I'm still running my i7-920 from 2008) it's been lower power consumption and more cores with only small changes in architecture. So you might throw some percentages around, but I just don't see it.
