As the GPU company that is arguably the most transparent about its long-term product plans, NVIDIA still manages to surprise us time and time again. Case in point: we have known since 2012 that NVIDIA’s follow-up architecture to Kepler would be Maxwell, but it’s only more recently that we’ve begun to understand the full significance of Maxwell to the company’s plans. Each and every generation of GPUs brings with it an important mix of improvements, new features, and enhanced performance, but fundamental shifts are few and far between. So when we found out Maxwell would be one of those fundamental shifts, it changed our perspective and expectations significantly.

What is that fundamental shift? As we found out back at NVIDIA’s CES 2014 press conference, Maxwell is the first NVIDIA GPU that started out as a “mobile first” design, marking a significant change in NVIDIA’s product design philosophy. The days of designing a flagship GPU and scaling it down had already come to an end with Kepler, when NVIDIA designed GK104 before GK110, but even then NVIDIA still designed a desktop GPU first, with mobile and SoC-class designs following. Beginning with Maxwell that philosophy too has come to an end: NVIDIA has embraced power efficiency and mobile-friendly designs as the foundation of its GPU architectures, and the company now designs its GPUs from the bottom up rather than from the top down.

Nevertheless, a mobile first design is not the same as a mobile first build strategy. NVIDIA has yet to ship a Kepler based SoC, let alone put a Maxwell based SoC on its roadmaps, so at least for the foreseeable future discrete GPUs will remain the first products built on any new architecture. While the underlying architecture may be more mobile-friendly than what we’ve seen in the past, what hasn’t changed is that NVIDIA is still getting the ball rolling on a new architecture with relatively big and powerful GPUs.

This brings us to the present, and the world of desktop video cards. Just under two years after the launch of the first Kepler part, the GK104 based GeForce GTX 680, NVIDIA is back and ready to launch its next generation of GPUs, based on the Maxwell architecture.

No two GPU launches are alike – Maxwell’s launch won’t be any more like Kepler’s than Kepler’s was like Fermi’s – but the launch of Maxwell marks an even greater shift than usual. Maxwell’s mobile-first design aside, it also arrives at a time of stagnation on the manufacturing side of the equation. Traditionally a new manufacturing node from TSMC would be ready to align with a new architecture, but just as AMD did with the launch of its GCN 1.1 based Hawaii GPUs, NVIDIA will be making do with the 28nm node for Maxwell’s launch. The lack of a new node meant NVIDIA could either wait until the next node was ready or launch on the existing node, and in the case of Maxwell NVIDIA has opted for the latter.

As a consequence of staying on 28nm, the optimal strategy for releasing GPUs has changed for NVIDIA. From a performance perspective the biggest improvements still come from a node shrink, with its resulting increase in transistor density and reduction in power consumption. But there is still room to maneuver within the 28nm node, improving power and density within a design without changing the node itself. Maxwell is just such a design, further optimizing the efficiency of NVIDIA’s GPUs within the confines of the 28nm node.

With the Maxwell architecture in hand and its 28nm optimizations in place, the final piece of the puzzle is deciding where to launch first. Thanks to the embarrassingly parallel nature of graphics and 3D rendering, at every tier – from SoC to Tesla – GPUs are fundamentally power limited: performance is constrained by the power required to reach it, whether that means limiting clockspeed ramp-ups or holding back on building a wider GPU with more transistors to flip. This is especially true in the world of SoCs and mobile discrete GPUs, where battery capacity and space limitations put a very hard cap on power consumption.

As a result, much like the mobile first strategy used in designing the architecture, NVIDIA is starting from the bottom when it comes to building its first Maxwell GPU. The bulk of NVIDIA’s GPU shipments have been smaller, cheaper, and less power hungry chips like GK107, which for the last two years has formed the backbone of NVIDIA’s mobile, cloud server, and mainstream desktop offerings. So when it came time to roll out Maxwell and its highly optimized 28nm design, there was no better and more effective place for NVIDIA to start than with the successor to GK107: the Maxwell based GM107.

Over the coming months we’ll see GM107 in a number of different products. Its destiny in the mobile space is all but set in stone as the successor to the highly successful GK107, and NVIDIA’s GRID products practically beg for greater efficiency. But for today we’ll be starting on the desktop with the launch of NVIDIA’s latest desktop video cards: GeForce GTX 750 Ti and GeForce GTX 750.


181 Comments


  • Kevin G - Tuesday, February 18, 2014 - link

    Very impressive performance for its power consumption. I can see an underclocked version of this card coming with a passive cooler for HTPC solutions. Perhaps that'd be a hypothetical GT740? I'm surprised that nVidia hasn't launched a mobile version of this chip. It seems like it'd be ideal for midrange laptops that still have discrete graphics.

    I suspect that the extra overclocking headroom is in reserve for a potential rebrand to a GTX 800 series product. (Though a straight die shrink of this design to 20 nm would provide even more headroom for a GTX 800/900 card.) nVidia could have held back to keep it below the more expensive GTX 660.

    Though ultimately I'm left wanting the bigger GM100 and GM104 chips. We're going to have to wait until 20 nm is ready but considering the jump Maxwell has provided in the low end of the market, I'm eager to see what it can do in the high end.
    Reply
  • DanNeely - Tuesday, February 18, 2014 - link

    ASUS has a 65W TDP GT 640 with a big 2 slot passive heat sink (GT640-DCSL-2GD3); with the 750 Ti only hitting 60W, a passive version of it should be possible at near stock performance. I suspect the 740 will be a further cut down 3 SMM model, which might allow a single slot passive design. Reply
  • PhoenixEnigma - Tuesday, February 18, 2014 - link

    Passive cooling was my first thought as well - I've been looking for something to replace the 6770 in my HTPC with, and I wanted something both faster and passively cooled. There are already passive 7750s on the market, and the numbers in Bench put the 750 Ti at about 9W more than the 7750 under real world load, so a vanilla 750 with a passive cooler should be entirely possible. Even a 750 Ti might be doable, but that could be pushing things a little far. Reply
  • evilspoons - Tuesday, February 18, 2014 - link

    I need a new half-height HTPC card, my 2.5 year old Asus Radeon 6570 bit the dust last month (sparkly picture, one particular shade of grey turned random colours). If they can work out the kinks in this thing and underclock it a bit, it sounds like a good candidate.

    It feels like it's been a long time since anything new showed up in the half-height video card game.
    Reply
  • TheinsanegamerN - Tuesday, February 18, 2014 - link

    Look at Sapphire's 7750. It's superior in every way to the 6570, it's single slot and low profile, and it overclocks like a champ. Reply
  • dj_aris - Tuesday, February 18, 2014 - link

    Sure, but its cooler is kind of loud. Definitely NOT a silent HTPC choice. Maybe an LP 750 would be better. Reply
  • evilspoons - Tuesday, February 18, 2014 - link

    Thanks for pointing that out. None of my local computer stores sell that, but I took a look on MSI's site and sure enough, there it is. They also seem to have an updated version of the same card being sold as an R7 250, although I'm not sure there's any real difference or if it's just a new sticker on the same GPU. Clock speeds, PCB design, and heat sink are the same, anyway. Reply
  • Sabresiberian - Tuesday, February 18, 2014 - link

    I'm hoping the power efficiency means the video cards at the high end will get a performance boost because NVIDIA is able to cram more SMMs on the die than the SMXs used in Kepler solutions. This of course assumes the lower power spec means less heat as well.

    I do think we will see a significant performance increase when the flagship products are released.

    As far as meeting DX11.1/11.2 standards - it would be interesting to hear from game devs how much this affects them. Nvidia has never been all that interested in actually meeting all the requirements for Microsoft to give them official status for DX versions, but that doesn't mean the real-world visual quality is reduced. In the end what I care about is visual quality; if it causes them to lose out compared to AMD's offerings, I will jump ship in a heartbeat. So far that hasn't been the case though.
    Reply
  • Krysto - Tuesday, February 18, 2014 - link

    Yeah, I'm hoping for a 10 teraflop Titan, so I can pair it with my Oculus Rift next year! Reply
  • Kevin G - Tuesday, February 18, 2014 - link

    nVidia has been quite aggressive with the major DirectX versions. They heavily pushed DX10 back in the day with the GeForce 8000/9000 series. They do tend to de-emphasize smaller updates like 8.1, 10.1, 11.1, and 11.2. This is partially due to their short life spans on the market before the next major update arrives.

    I do expect this to have recently changed, as Windows is moving to a rapid release schedule and it'll be increasingly important to adopt these smaller iterations.
    Reply
