The Secret of Denver: Binary Translation & Code Optimization

As we alluded to earlier, NVIDIA’s decision to forgo a traditional out-of-order design for Denver means that much of Denver’s potential is contained in its software rather than its hardware. The underlying chip itself, though by no means simple, is at its core a very large in-order processor. So it falls to the software stack to make Denver sing.

Accomplishing this task is NVIDIA's dynamic code optimizer (DCO). The DCO has two jobs: translating ARM code into Denver's native format, and optimizing that code to run better on Denver. With no out-of-order hardware on Denver, it is the DCO's job to find instruction-level parallelism within a thread to fill Denver's many execution units, and to reorder instructions around potential stalls, neither of which is a simple task.

Starting first with the binary translation side of the DCO, the translator is not used for all code. All code goes through the ARM decoder units at least once, and only after Denver realizes it has run the same code segment enough times does that code get kicked over to the translator. Translation and optimization are themselves software tasks, and as a result they require a certain amount of real time, CPU time, and power. This means it only makes sense to send code out for translation and optimization if it recurs, even though taking the ARM decoder path fails to exploit much of Denver's capabilities.
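
To make the tradeoff concrete, here is a minimal sketch in C of how a threshold-based hot-code dispatcher of this kind might work. NVIDIA has not published Denver's actual mechanism, so the structure, the helper functions (run_arm_decoder, dco_translate, run_native), and the HOT_THRESHOLD value are all hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helpers standing in for the hardware decoder path,
 * the software translator, and native execution. */
extern void  run_arm_decoder(uint64_t guest_pc);
extern void *dco_translate(uint64_t guest_pc);
extern void  run_native(void *native_code);

/* Assumed threshold: how many recurrences before translation pays off. */
#define HOT_THRESHOLD 50

typedef struct {
    uint64_t guest_pc;    /* ARM address of the code segment */
    uint32_t exec_count;  /* times the segment has hit the ARM decoder */
    void    *native_code; /* translated code, NULL until the DCO runs */
} code_segment;

void execute_segment(code_segment *seg)
{
    if (seg->native_code != NULL) {
        run_native(seg->native_code);       /* hot path: optimized code */
    } else if (++seg->exec_count >= HOT_THRESHOLD) {
        seg->native_code = dco_translate(seg->guest_pc); /* pay the cost once */
        run_native(seg->native_code);
    } else {
        run_arm_decoder(seg->guest_pc);     /* cold path: in-order decode */
    }
}
```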

This sets up some very clear best and worst case scenarios for Denver. In the best case, Denver is entirely running code that has already been through the DCO, meaning it's being fed the best code possible and isn't having to run suboptimal code from the ARM decoder or spend resources invoking the optimizer. The worst case, on the other hand, is code that never recurs. With non-recurring code the optimizer never gets used, and rightly so: the benefits of optimizing code that will never be seen again are outweighed by the cost of the optimization itself.

Assuming a code segment recurs enough to justify translation, it is then kicked over to the DCO for translation and optimization. Because this is itself a software process, the DCO is a critical component on two counts: the code it generates and the code it is itself built from. The DCO needs to be highly tuned so that Denver isn't spending more resources than necessary running it, and it needs to produce highly optimized code to ensure the chip achieves maximum performance. This becomes a very interesting balancing act for NVIDIA, as a longer examination of a code segment could potentially produce even better code, but would also increase the cost of running the DCO.

In the optimization step NVIDIA undertakes a number of actions to improve code performance. These include out-of-order optimizations such as instruction and load/store reordering, along with register renaming. The DCO also behaves as a traditional compiler would, undertaking actions such as unrolling loops and eliminating redundant or dead code that never gets executed. For NVIDIA this optimization step is the most critical aspect of Denver, as the chip's performance will live and die by the DCO.
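
A rough source-level analogy of these compiler-style transformations is shown below; it is purely illustrative, since the DCO actually operates on ARM instructions rather than C source. The first function contains redundant and dead work; the second shows the same loop after dead code elimination and a 4x unroll with independent accumulators, the kind of shape that lets a wide in-order machine issue several operations per cycle.

```c
/* Illustrative analogy only: the DCO works on ARM instructions, not C,
 * but the same transformations are easy to show at source level. */

/* Before: redundant and dead work mixed into the loop. */
int sum_before(const int *a, int n)
{
    int sum = 0;
    int unused = 0;                 /* dead: never read after assignment */
    for (int i = 0; i < n; i++) {
        sum += a[i];
        unused = sum * 2;           /* redundant: result never used */
    }
    return sum;
}

/* After: dead code eliminated, loop unrolled 4x with independent
 * accumulators so several adds can be issued in the same cycle. */
int sum_after(const int *a, int n)
{
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 3 < n; i += 4) {     /* four independent chains = ILP */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)              /* remainder loop */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}
```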


Denver's optimization cache: optimized code can call other optimized code for even better performance

Once code leaves the DCO, it is stored for future use in an area NVIDIA calls the optimization cache. The cache is a 128MB segment of main memory reserved to hold translated and optimized code segments for future reuse, with Denver banking on its ability to reuse code to achieve its peak performance. The presence of the optimization cache does mean that Denver suffers a slight memory capacity penalty compared to other SoCs; in the case of the Nexus 9, 1/16th (roughly 6%) of the device's memory is reserved for the cache. Also resident here is the DCO code itself, which is shipped and stored as already-optimized code so that it can achieve its full performance right off the bat.
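
As a sketch of how such a cache might be consulted, consider a simple direct-mapped index keyed by guest (ARM) program counter. NVIDIA has not documented the real layout, so the slot count, hash, and eviction policy below are assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of an optimization cache index; the real layout
 * is undocumented. */

#define OPT_CACHE_BYTES (128u * 1024u * 1024u) /* 128MB carved from DRAM */
#define OPT_INDEX_SLOTS 4096                   /* assumed index size */

typedef struct {
    uint64_t guest_pc; /* ARM address the translation corresponds to */
    void    *native;   /* pointer into the reserved code region */
} opt_entry;

static opt_entry opt_index[OPT_INDEX_SLOTS];

/* Direct-mapped lookup: hash the guest PC to a slot and check for a hit.
 * ARM instructions are 4-byte aligned, hence the >> 2. */
static void *opt_cache_lookup(uint64_t guest_pc)
{
    opt_entry *e = &opt_index[(guest_pc >> 2) % OPT_INDEX_SLOTS];
    return (e->guest_pc == guest_pc) ? e->native : NULL;
}

/* Insert a fresh translation, overwriting whatever previously mapped to
 * the same slot (the simplest possible eviction policy). */
static void opt_cache_insert(uint64_t guest_pc, void *native)
{
    opt_entry *e = &opt_index[(guest_pc >> 2) % OPT_INDEX_SLOTS];
    e->guest_pc = guest_pc;
    e->native   = native;
}
```

In practice the translated code bodies would live in the reserved 128MB region and the index would need associativity and smarter eviction; the point is simply that a hit turns an expensive retranslation into a pointer lookup.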

Overall the DCO ends up being interesting for a number of reasons, not the least of which are the tradeoffs made by its inclusion. The DCO's instruction window is larger than that of any comparable OoOE engine, meaning NVIDIA can look at larger code blocks than hardware reorder engines can, and potentially extract even better ILP and other optimizations from the code. On the other hand, the DCO can only work on code in advance, denying it the ability to see and react to code in real time as it executes, as a hardware out-of-order implementation can. In such cases, even with a smaller window to work with, a hardware OoOE implementation could produce better results, particularly in avoiding memory stalls.
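
The distinction is easiest to see with a load that may miss in the cache. A software scheduler like the DCO can hoist the load above independent work at translation time, as in this illustrative C sketch, but it must commit to that schedule in advance; a hardware OoOE core makes the equivalent decision anew on every execution, with knowledge of whether the load actually missed.

```c
/* Illustrative only: the DCO schedules ARM instructions, not C statements. */

/* Naive order: the load is issued last, so a cache miss delays the
 * result by the full miss latency. */
int combine_naive(const int *p, int x, int y)
{
    int a = x * y;   /* independent ALU work */
    int b = x + y;
    int v = *p;      /* load issued late */
    return a + b + v;
}

/* Statically scheduled order: the load is hoisted to the top, so its
 * latency overlaps the independent multiply and add. This is the kind
 * of reordering the DCO can bake in ahead of time; what it cannot do
 * is change the schedule at run time if *p unexpectedly misses. */
int combine_scheduled(const int *p, int x, int y)
{
    int v = *p;      /* load issued early, miss latency hidden */
    int a = x * y;
    int b = x + y;
    return a + b + v;
}
```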

As Denver lives and dies by its optimizer, NVIDIA is in an interesting position once again owing to their GPU heritage. Much of the above is as true for GPUs as it is for Denver, and while it's by no means a perfect overlap, it does mean that NVIDIA comes into this with a great deal of experience in optimizing code for an in-order processor. NVIDIA faces a major uphill battle here – hardware OoOE has proven itself reliable time and time again, especially compared to projects banking on superior compilers – so having that compiler background is incredibly important for NVIDIA.

Meanwhile, because NVIDIA relies on a software optimizer, Denver's code optimization routine has one last advantage over hardware: upgradability. NVIDIA retains the ability to upgrade the DCO itself, potentially deploying new versions farther down the line as improvements are made. In principle a DCO upgrade is not a feature you want to find yourself needing to use – ideally Denver's optimizer would be perfect from the start – but it's nonetheless a good feature to have for the imperfect real world.

Case in point, we have encountered a floating point bug in Denver that has been traced back to the DCO, and which under exceptional workloads causes Denver to overflow an internal register and trigger an SoC reset. Though this bug doesn't lead to reliability problems in real world usage, it's exactly the kind of issue that makes DCO updates valuable for NVIDIA, as they offer an opportunity to fix it. At the same time, however, NVIDIA has yet to take advantage of this opportunity; as of the latest version of Android for the Nexus 9 the issue still occurs. So it remains to be seen whether BSP updates will include DCO updates to improve performance and remove such bugs.

Comments

  • mkygod - Saturday, February 7, 2015 - link

    I think so too. The 3:2 ratio is one of the things that Microsoft has gotten right with their Surface Pro devices. It's the perfect compromise, IMO.
  • UtilityMax - Sunday, February 8, 2015 - link

    I am a little perplexed by this comment. A typical user will be on the web 90% of the time. Not only does the web browser not need to be natively designed or optimized for any particular screen ratio, it will also be more usable on a 4:3 screen. So will productivity apps. The only disappointment for me on a 4:3 screen would be watching widescreen videos or TV shows. Moreover, there is quite a bit of evidence that a lot of the next generation of tablets will be 4:3. Samsung's next flagship tablet supposedly will be 4:3.
  • gtrenchev - Wednesday, February 4, 2015 - link

    Anandtech has become more and more boring over the last year. Sparse on reviews, short on tech comments, lacking in depth and enthusiasm. I can see Anandtech has become just a job for you guys, not the passion it was for Anand :-) And yes, his absence is definitely noticeable.

    George
  • Ian Cutress - Wednesday, February 4, 2015 - link

    Was the Denver deep-dive not sufficient? We always welcome comments.
    As for timing, see Ryan's comment above.
    We've actually had a very good quarter content-wise, with a full review on the front page at least four out of every five weekdays, if not every weekday.
  • milkod2001 - Wednesday, February 4, 2015 - link

    Why not post some sort of suggestion box/poll on your forum where everyone could say what should get reviewed first, so some folks won't cry about where the review of their favorite toy is :) ?
  • Impulses - Wednesday, February 4, 2015 - link

    Because they'll still cry regardless, and they can't possibly work entirely based on readers' whims; it doesn't make sense logistically or editorially... Readers might vote for five things ahead of the rest that all fall in the same writer's lap, and they won't all get reviewed before everything else, or readers might not be privy to new hardware because of NDAs or cases where Anandtech can't source something for review.
  • tuxRoller - Thursday, February 5, 2015 - link

    While I enjoyed the review, I would've loved to have seen the kind of code driven analysis that was done with Swift.
    In particular, how long does it take for the DCO to kick in? What is the IPC for code that NEVER gets optimized, and conversely, what is the IPC for embarrassingly instruction-parallel code? Since it relies on RAM to store the uops, how long does the code need to run before it breaks even with the ARM decoder? Etc.
  • victorson - Wednesday, February 4, 2015 - link

    Are you guys kidding? Better late than never, but heck.. this is freaking late.
  • abufrejoval - Wednesday, February 4, 2015 - link

    Thanks for making it worth the wait!

    The in-depth analysis of Denver is uniquely Anandtech, because you can't get that anywhere else.

    And while Charly D. is very entertaining, the paywall is a bit of an impediment, and I once again quite like the Anand touch of trying to be as fair as possible.

    I was and remain a bit worried that there seems to be no other platform for Denver, which typically signals a deeper flaw with an SoC in the tablet and phone space.

    While I'm now somewhat less worried, since Denver might be acceptable as an SoC, the current Nexus generation is no longer attractive at these prices, even less so with the way the €/$ is evolving.
  • Taneli - Wednesday, February 4, 2015 - link

    An eDRAM cache à la Crystalwell would be interesting in a future Denver chip.
