Taking place next week is the 2016 Game Developers Conference in San Francisco. GDC has been an important show for some time, but in recent years it has taken on an even bigger role, as what happens and what is announced at GDC have greater implications not just for developers, but for end-users as well. GDC has been the backdrop for PC hardware launches, graphics API launches, and more. And GDC 2016 promises to be much the same, as developers in the PC world look to embrace DirectX 12, virtual reality, and other emerging technologies.

Ahead of next week’s show, I had a chance to sit down and talk shop with an interesting trio of techies: Brian Langley, Microsoft’s DirectX 12 lead; Max McMullen, Microsoft’s principal lead for Direct3D; and Dan Baker, co-founder and guru developer for Oxide Games. Microsoft of course is looking to further push the development of (and developers towards) DirectX 12 as the first games come out for the API. Meanwhile Oxide’s Ashes of the Singularity has been a common sight around here, as while it won’t claim the title of the first DX12 game – that technically goes to the new Windows 10 port of Gears of War – Ashes is arguably the first game to take meaningful advantage of the API. As a result there’s a lot of excitement surrounding Ashes, not only at Oxide but at Microsoft as well, ahead of its impending March 31st launch.

With the chance to talk to developers on both sides of the spectrum – API development at Microsoft and application development at Oxide – I wanted to ask the gathered gurus about their experiences with bringing up the API and implementing it in games, what their perceptions are of the wider market, what developer response has been like, and what’s in store next for DirectX 12. Though there are rarely grand revelations in brief conversations such as these, it was nonetheless an interesting view into how DirectX 12 has taken root since it officially shipped back in July with Windows 10.

DirectX 12 Adoption & Stability

It didn’t take long for our conversation to reach the point of discussing DirectX 12 adoption, both from a development standpoint and an end-user standpoint. Historically speaking it has taken many years for new versions of DirectX to be widely adopted by most games. The reasons for this are varied, but it’s often a mix of slow user adoption of new OSes, slow developer adoption when working with multi-platform titles – developers tend to stick to the API that most closely matches the consoles – and the fact that new versions of DirectX and new hardware standards have often gone hand-in-hand.

DirectX 12 is very different in that respect, both because it runs on 2012+ hardware and because the necessary OS upgrade is free. In fact free is likely playing a huge part here, as Baker mentioned that Oxide is seeing a “fairly strong uptake” of the new OS. For reference, Steam’s most recent hardware survey puts Windows 10 64-bit adoption at 34% of all machines surveyed, and with a sub-1% gap, it’s likely that it will overtake Windows 7 64-bit this month.

A relatively rapid adoption of Windows 10 by end-users means that developers can in turn make their own leaps sooner, as the necessary critical mass will be in place sooner than with past generations. Both Baker and Langley agreed that DirectX 12 will likely see faster adoption from developers than past generations have, as the user base is building up much sooner. Also helping matters is the fact that the consoles’ own low-level APIs (particularly the Xbox One’s) are so similar to DirectX 12, which means that developers can synchronize multi-platform titles around low-level APIs much more easily than in past generations, where the consoles lagged behind. The APIs won’t be perfectly identical due to some inherent platform differences such as memory management (more on this later), but Microsoft is looking to make the Windows and console APIs as close as reasonably possible to help facilitate this.

Microsoft for their part is of course pleased with this outcome, but even within the DirectX development team they have made it clear that they aren’t done yet, and that they want to do even more to drive the adoption of DirectX 12 among both end-users and developers, and to convince the holdouts to make the jump to Win10. Now that DX12 is out, they have been working on better tools for developers to make the API more approachable and easier to debug. At the same time, while Microsoft isn’t being specific, they are making it clear that they aren’t done adding features to the API, and that along with fixing bugs there’s more to come for DX12.

But what surprised me the most in our conversation on adoption was Baker’s comments on the state of DirectX 12. “DX12 is in far better shape than DX11 was in the first generation; it's way further along,” Baker said, and a lot of this has to do with the basic design of the API. Because DX12 is low-level and fairly thin, what bugs there are tend to be fairly straightforward. DirectX 11, by comparison, took years to sort out, and even then Baker doesn’t trust GPU drivers when it comes to DX11 multi-threading. DX12, meanwhile, is handling upwards of 16 threads from Ashes of the Singularity without encountering any issues. Which is not to say that DX12 is already perfect, but DX12 is on a path to quickly being in a better overall state than DX11 is, even more than 6 years after the latter’s introduction.
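To make that multi-threading concrete, here’s a minimal sketch of the pattern DX12 enables: each worker thread records into its own command allocator and command list, so no locking is needed during recording, and the finished lists are submitted to the queue in a single call. This is an illustration of the general technique rather than Oxide’s actual code, and the RecordFrame function and its parameters are hypothetical.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Hypothetical per-thread recording: each worker owns its own allocator and
// command list, so no cross-thread synchronization is needed while recording.
void RecordFrame(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    for (unsigned i = 0; i < threadCount; ++i) {
        workers.emplace_back([&lists, i] {
            // Each thread records its slice of the frame's draw calls here.
            // (Actual draws omitted; this shows only the threading structure.)
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Submit every recorded list in one call; the queue defines the order.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

Under DX11 the equivalent deferred-context path ran through driver heuristics that, per Baker, couldn’t be relied upon; here the threading behavior is entirely in the application’s hands.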

From Microsoft’s point of view, Langley echoed Baker’s statements. Working with developers, Microsoft is already finding that DX12 is typically a net win for CPU performance. Just how much any in-development title benefits from DX12 varies from game to game, but a common thread in all of these cases is that the earlier developers can implement it, the better. Games that add DX12 at the last moment are benefitting the least – and Microsoft is trying to help developers integrate it sooner – whereas games that integrate it early, like Ashes, are seeing much more significant benefits.

One question I threw at both groups was whether DX12’s lack of abstraction meant that developers were being exposed to any hardware bugs. And though there have been driver bugs, neither the developers Microsoft has worked with nor Oxide have run into notable hardware bugs. Given just how much hand-holding developers had to do under DX11 to adapt for implementation differences, the stricter implementation standards for DX12 have made things a lot easier in some ways, even with the intricacies of working at a lower level.

Ultimately not only is DirectX 12 likely to be faster than any version of DirectX before it, but there’s a very real possibility that DirectX 12 will become the baseline version of the API for major games (outside of internal Microsoft projects) far sooner than with DirectX 11. Though making it clear that it’s merely an option on the table at this time and not yet a decision made, Baker said that Oxide’s next game may go DX12-exclusive, as adoption is strong and doing so would give Oxide’s developers the freedom to implement some new rendering strategies that they can’t properly implement in a game that needs to support both DX11 and DX12. Similarly, multi-platform developers looking to synchronize their projects between the consoles and Windows will have further incentive to go with DX12 exclusively if it means they can reuse the vast majority of their existing low-level code; a DX11 path in this case would mean spending a lot more effort on a rendering path for a single platform.

Developing For DirectX 12

One point that has consistently been reiterated about DirectX 12 and other low-level APIs is that they’re not for the faint of heart, and that making effective use of them will require more guru-level programmers who can work with a video card without all of the hand-holding that came with DirectX 11 and earlier APIs. And though DirectX 11 isn’t going anywhere, in our chat Microsoft said that they want to help more developers make the jump.

One part of that is going to be improving the tools situation for DX12, in order to give developers better and easier-to-understand tools to work with. Though Microsoft isn’t being specific at this time – and from the sounds of it this is what part of their GDC presentation will be about – Langley said that the DirectX group “really wants to take [DX12] and broaden it, and make it the API that everyone uses to do all of their game development.” The path to DirectX 12 for many developers will still be through inheriting it from licensed engines, but for those developers who do go their own route, Microsoft wants to make the jump less painful.

Even so, for developers it has definitely been a learning experience. Making effective use of DX12 requires a better understanding of the underlying hardware and how best to treat it. Avoiding pathologically bad cases is one major hurdle for new developers, particularly those who don’t have a firm grasp on the hardware. The low-level nature of DX12 means that more control over optimizations will be in the hands of developers rather than video card drivers – and developers will need to rise to the challenge for the best results.

At the same time, however, it’s also a new world for driver developers, and while drivers overall are responsible for less of the optimization process, they do have their own role to play. Drivers are still responsible for exposing the various hardware queues and for HLSL shader compiling, not to mention implicit mode DX12 multi-adapter. So driver developers will still be a part of the optimization process, though in a different way than before.
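To give an idea of what those queues look like from the application side: D3D12 defines three engine types – direct (graphics), compute, and copy – and the driver maps them onto whatever hardware queues the GPU actually possesses. A minimal sketch of setting them up might look like this, with the EngineQueues struct being a hypothetical helper of my own:

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: one queue per D3D12 engine type. The driver decides
// how these map onto the GPU's real hardware queues.
struct EngineQueues {
    ComPtr<ID3D12CommandQueue> direct;   // Graphics + compute + copy work
    ComPtr<ID3D12CommandQueue> compute;  // Compute + copy (async compute)
    ComPtr<ID3D12CommandQueue> copy;     // DMA-style transfers
};

EngineQueues CreateQueues(ID3D12Device* device)
{
    EngineQueues q;
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.direct));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.compute));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.copy));

    return q;
}
```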

Meanwhile, in the case of Ashes of the Singularity, Oxide is in an interesting position, for both better and worse. As the first game to make extensive use of DX12’s strongest features, the game is a pathfinder for other games to follow. And at the other end, because so many eyes are on the game, Oxide has needed to walk a sometimes narrow path to avoid favoring one hardware vendor or another (or being seen as doing so). As Baker notes, since the PC is such a large and varied platform compared to the highly regulated consoles, “You can never perfectly optimize for every platform because it's too much work,” so instead the name of the game is making generic optimizations and trying to be as even-handed as possible. At the same time the company has also been atypically transparent with its code, sharing it with all of the GPU vendors so that they can see what’s going on under the hood and give feedback as necessary.

An unexpected outcome of this has been that as Baker and the rest of the Oxide crew have needed to learn more about GPUs to better write for DirectX 12, they have also learned some things that have helped them write a more efficient DX11 rendering path. Though DX11 abstracts a great deal from developers, from a broad perspective there are still some algorithms and techniques that are a better match for modern hardware than others, and with DX12 strongly pushing developers towards taking efficiency into their own hands, this has impacted DX11 development as well.

Memory: A Uniquely PC Perspective

While we were on the subject of developing for DirectX 12, the matter of memory management came up, and how the PC situation is still unique compared to all other platforms. The consoles are fixed hardware devices, with the most recent incarnations running games inside hypervisors with a fixed memory allocation since only one game can be running at a time. Developers in turn don’t get all 8GB a console offers, but what they do get they can count on getting virtually the entire time.

The PC on the other hand is a very different beast. Besides the obvious matter of having separate VRAM and system DRAM pools for the GPU and CPU respectively (for systems with a discrete GPU), PCs are also multi-tasking environments. Games aren’t running in a hypervisor, and they can’t be written counting on receiving a specific allocation of memory all to themselves. This is coupled with the fact that the amount of VRAM on a video card varies wildly – between 2GB and 8GB for most recent cards – so developers can’t even count on the GPU having all the resources they would like to use.

Consequently, memory management under DirectX 12 is still a challenge, albeit one that’s evolving. Under DirectX 11 memory management was typically a driver problem, and the drivers usually got it right – though as Baker noted in our conversation, even now they do sometimes fail when dealing with issues such as memory fragmentation. DX12 on the other hand gives all of this control over to developers, which brings both great power and great responsibility. PC developers need to be concerned with issues such as memory overcommitment, and how to gracefully handle it. Mantle users will be familiar with this matter: most Mantle games would slow to a crawl if memory was overcommitted, which, although better than crashing, is not necessarily the most graceful way to handle the situation.
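On the Windows side, one concrete tool developers have for staying ahead of overcommitment is DXGI’s video memory budget query, which reports how much local memory the OS is currently willing to grant the process. Below is a rough sketch of budget-aware allocation; the FitsInBudget helper and its 90% headroom policy are illustrative assumptions of mine, not a prescribed technique:

```cpp
#include <dxgi1_4.h>

// Sketch of budget-aware allocation: before creating a large resource, ask
// DXGI how much local (VRAM) budget the OS currently grants the process.
bool FitsInBudget(IDXGIAdapter3* adapter, UINT64 bytesNeeded)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
        return false;

    // Leave headroom: the budget can shrink at any moment as other
    // applications run. (The 90% margin is an arbitrary example policy.)
    return info.CurrentUsage + bytesNeeded < info.Budget * 9 / 10;
}
```

If the check fails, an engine might demote the resource to system memory, drop mip levels, or evict something else first – the graceful-degradation decisions that drivers used to make are now the application’s to make.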

As a result it’s still a learning process across the board for DX12 developers. In developing Ashes, Oxide has come up with new strategies to deal with memory management, though it has taken some time to do so. However, successfully tackling DX12 memory management also reaps its own rewards: since even automated DX11-style memory management is not without its faults, a well-tuned DX12 implementation has the potential to exceed DX11, offering better performance and avoiding DX11’s faults in the process.

Though even with better tools, this will always be something that sets the PC apart from the consoles in the low-level API space. As Microsoft noted in our call, their goal is to align the console and Windows DirectX APIs as closely as possible, but memory management will be one of a handful of areas where the two APIs still diverge.

Looking Towards the Future

Though much of our conversation was focused on the present, both Baker and the DirectX team are also looking towards the future of DirectX 12. I’ve previously mentioned Microsoft’s plans to improve the toolset available for DX12, but tools are only one part of the equation. At the end of the day DX12 is a very powerful API, and it’s up to developers to make the best possible use of it.

In Oxide’s case, Ashes is ahead of the curve in several ways. Along with utilizing DX12’s more fundamental multi-threading capabilities, it’s also been pushing the envelope on features such as asynchronous shading/compute and multi-GPU support. In fact both the DirectX team and Oxide were surprised with just how well the latter worked at this early stage, with Baker noting that the image quality from AMD and NVIDIA GPUs was closer than he expected. And though Ashes’s unique AFR multi-GPU support is one possible implementation, the wider development community also has its eyes on ways to meaningfully combine a dGPU and an iGPU, as virtually all dGPU systems have the latter, and it’s currently going unused.
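The starting point for any such heterogeneous setup is DX12’s explicit multi-adapter support, under which every GPU in the system appears as its own independent device. As a rough sketch – an illustration of the concept, not any shipping engine’s code – enumerating all adapters, iGPU and dGPU alike, and creating a device on each might look like this:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch of explicit multi-adapter bring-up: create an independent D3D12
// device on every adapter in the system. How work is split between the
// resulting devices is entirely up to the engine.
std::vector<ComPtr<ID3D12Device>> CreateAllDevices()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```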

As for asynchronous shading, for Ashes it’s primarily being used to improve performance by improving GPU utilization. However Baker believes this is just scratching the surface of the technology, and once DX12 becomes the baseline API for a game, there are far more exotic uses he wants to look into. This includes having the GPU work on more pure compute tasks, such as running first-order physics simulations or parts of the game simulation on the GPU rather than the CPU. And this wouldn’t just apply to clients; in games with a dedicated server, the server could be GPU accelerated as well, using the GPU in a pure GPGPU context to do some of the aforementioned compute work.
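Mechanically, asynchronous compute along these lines comes down to feeding a second, compute-only queue and fencing it against the graphics queue. The sketch below shows the general submission pattern under those assumptions – the function and its parameters are hypothetical, and this is an illustration of the concept rather than Oxide’s implementation:

```cpp
#include <d3d12.h>

// Sketch of async compute submission: a compute workload (say, a physics
// pass) runs on its own queue, concurrently with graphics, and a fence
// makes the graphics queue wait for the result on the GPU timeline.
void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* gfxList,
                 ID3D12CommandList* computeList,
                 ID3D12Fence* fence, UINT64 fenceValue)
{
    // Kick off the compute work; the GPU may schedule it alongside graphics.
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, fenceValue);

    // This wait happens on the GPU, not the CPU: anything submitted to the
    // graphics queue before this point is free to overlap the compute work.
    gfxQueue->Wait(fence, fenceValue);
    gfxQueue->ExecuteCommandLists(1, &gfxList);
}
```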

Though for the time being, it may be all that the rest of the PC ecosystem can do to keep up with DX12 as-is. While every game will be unique, in the case of Ashes Oxide has already run into situations where they are limited by both CPU memory bandwidth and CPU core count. Much of this has to do with the game’s expensive AI and simulation code paths, but as Baker was all too proud to recount, Ashes’s QA team had to go track down a more powerful system for multi-GPU testing, as their quad core systems were still CPU limited. DX12’s low-level nature is going to reduce CPU usage in some ways, but its multithreading capabilities are going to scale usage back up in other ways, which may very well push the limits of conventional quad core CPUs in other games as well.

Ultimately even pathfinder games like Ashes are still treating DX12 as a more advanced graphics API, which certainly reaps several immediate benefits, but isn’t the only thing the API is good for. As we’ve already seen in some instances with other low-level APIs such as Apple’s Metal, these kinds of APIs are a path towards using the GPU as a general compute processor, and game developers have not even begun to scratch the surface there. Once games start using DX12 as a baseline API, many more options become available to developers who are prepared to break traditional graphics rendering paradigms.

Wrapping things up, be sure to check back next week for our GDC 2016 coverage. With confirmed events from AMD, Crytek, Epic Games, Microsoft, and more, it should be another busy and interesting year for PC game development.

Comments

  • Jukens - Friday, March 11, 2016 - link

    DirectX 12 Rise of the Tomb Raider patch released, let's get a detailed review of the findings! I need stacked VRAM for SLI.
  • Flunk - Friday, March 11, 2016 - link

    The idea of "stacked VRAM" is based on a misconception. DirectX 12 doesn't magically allow both your video cards to access each other's RAM without penalty. It allows the developer to access the RAM of both cards independently and store different things on each card. Some things are still going to need to be doubled up or it will overload the PCI-E interface, and it all has to be specifically managed by the developer. DirectX 12 isn't magical, and implementing it doesn't instantly give the game extra features.
  • Jukens - Friday, March 11, 2016 - link

    It's not my misconception; I know that it requires the developer to implement it, but doesn't it go hand in hand with async compute? This is the quote from the Tomb Raider patch blog: "Another big feature, which we are also using on Xbox One, is asynchronous compute. This allows us to re-use GPU power that would otherwise go to waste, and do multiple tasks in parallel."
  • Jukens - Friday, March 11, 2016 - link

    Or maybe I am getting the terminology mixed up – the async compute feature isn't the same as asynchronous multi-GPU used to implement split-frame rendering?
  • tuxfool - Friday, March 11, 2016 - link

    Completely separate things. Think of Async Compute as SMT for your GPU.
  • Jukens - Friday, March 11, 2016 - link

    Thanks for clearing that up
  • bcronce - Friday, March 11, 2016 - link

    To go a bit deeper: prior to Async Compute, you could only do one job at a time. If a given job didn't have enough work to make use of your entire GPU, then part of your GPU would remain idle. tuxfool's analogy is a good one. Similar to SMT for CPUs, Async Compute allows work to be queued up, and when there is any free time, the scheduler will fill the holes.

    The reason it's async is because the old way was purely synchronous. One job after the other in determined orders. Async in this context just means "best effort" with no guaranteed order, which is typically the most efficient way to maximize throughput.

    Of course there are ways to guarantee ordering or even prioritization, but that always comes at a cost to throughput. It's the age-old low-jitter vs. high-throughput tradeoff. Maximizing one doesn't mean minimizing the other, but it does impose constraints.
  • hansmuff - Friday, March 11, 2016 - link

    It'd be great if you added some information about how Vulkan fits into the whole thing technology-wise. Are Vulkan and DX12 as divorced as OpenGL and DX11? Will it be easier for devs to address both APIs in the same game?
