A Brief History of Mali

ARM as a CPU designer needs no introduction. The vast majority of the world’s smartphones and tablets are powered either by an ARM-designed CPU or by a CPU based on ARM’s instruction sets. In the world of SoC CPUs, ARM is without a doubt the 800 lb gorilla.

In the world of SoC GPUs, however, while ARM is a major competitor, they are just one of several. In fact, from a technology perspective they are among the newest, with roots that go back over a decade but still not as far as those of the other major competitors. This is something of a point of pride for ARM’s GPU team, and as we’ll see, it results in a GPU design that’s not quite like anything else we’ve seen so far.

Like all good GPU stories, the earliest history of ARM’s GPU division goes back to the late 1990s. Originally a research project at the Norwegian University of Science and Technology, the core Mali group was spun off to form Falanx Microsystems in 2001. At first Falanx attempted to break into the PC video card market, a risky venture in the post-3dfx era that saw several other PC GPU manufacturers shaken out – S3, Rendition, Revolution, and Imagination among them – and ultimately a venture that crashed and burned as Falanx was unable to raise funding.

During what they call their “scrappy phase,” Falanx’s limited resources led them to pivot from designing PC GPUs to designing SoC-class GPUs and licensing those designs to SoC integrators, a much easier field to get into. From this change in direction came the first Mali GPUs, and ultimately Falanx’s first customers. Among these was Zoran, who used the Mali-55 in their Approach 5C SoC, which in turn ended up in a couple of notable products, including LG’s Viewty. With that said, Falanx never saw great success on their own, never landing the “big fish” they were hoping for.


LG's Viewty, One of Mali's First Wins

Ultimately, as the SoC industry began to heat up on the back of growing cell phone sales, ARM purchased Falanx in 2006. This gave ARM a capable (if previously underutilized) GPU division to create GPUs to go along with its growing CPU business. From a business perspective the two companies were a solid match: Falanx was already in the business of licensing GPUs, so for Falanx this was largely a continuation of the status quo. And the acquisition was just as much a net win for what was becoming ARM’s GPU division as it was for ARM, since it gave the Mali team access to ARM’s engineering and validation resources, capabilities that were harder to come by as a struggling third-party GPU designer.

Now as part of ARM, the Mali team released their first OpenGL ES 2.0 design in 2007, the Mali-200. Mali-200 and its immediate successors – Mali-300, Mali-400, and Mali-450 – were based on the team’s Utgard architecture. Utgard was a non-unified GPU (discrete pixel and vertex shaders) designed for SoC-class graphics, and over the years it received various upgrades to improve performance and scalability, most notably with Mali-400, which introduced multi-core support to the Mali line.

Even today, Utgard is arguably the Mali team’s most successful GPU architecture, owing in large part to the architecture’s no-frills ES 2.0 design and resulting small die footprint. Along with driving high-end SoCs of its era, including those that powered devices such as the Samsung Galaxy S II, Utgard remains a popular mid-range GPU, with Mali-450 securing a spot just this week in Samsung’s forthcoming Galaxy S5 Mini.


Mali-450 Lives On In Samsung's Galaxy S5 Mini

Now with a team of nearly 500 people and having shipped 400 million Mali GPUs in 2013 (or as close to “shipping” as an IP licensor can get), the Mali team’s latest architecture – and the subject of today’s article – is Midgard. Midgard ships as the basis of ARM’s Mali-T600 and Mali-T700 family GPUs, and while it was initially introduced as a high-end architecture, it is making the transition to cover both the high end and the mid-range through the recent introduction of more space/power/cost optimized designs. After initially replacing Utgard at the high end last year, this year will see Utgard finally replaced in the mid-range market as well.

Comments

  • kkb - Monday, July 7, 2014

    I hope you do understand how to read benchmark results. 3DMark and GFXBench (offscreen) results are resolution independent. Now go and check the results in the article.
    As for the T760, I will not comment on theoretical GFLOPS numbers unless there is a real product. Even the theoretical MAD GFLOPS are not so great (roughly half) compared to others. I don't think anyone is going to fall for the marketing gimmick of counting the dot product as an extra 40 GFLOPS.
  • darkich - Monday, July 7, 2014

    You said it definitely performs better, yet it loses in the T-Rex HD onscreen test and in Basemark X overall. That's not definite, you know.
    Regardless, even if a case can be made that it performs slightly better overall than the Mali-T628, it is without doubt outperformed by the:
    ULP GeForce 3
    Adreno 330
    SGX G6320

    ... and is *definitely* far outclassed by the Adreno 420, Kepler K1, SGX G6550, and Mali-T760MP6-10.

    Until Intel shows their next generation of ULP graphics, I don't see a point in comparing the current one.
  • darkich - Monday, July 7, 2014

    Correction: I believe the GPU in Tegra 4 is internally referred to as the ULP GeForce 4, not 3.
  • fithisux - Friday, July 4, 2014

    Could you provide an exposition of the C66x architecture, since it is suitable in my opinion for GPGPU tasks and realtime software rendering/raytracing?
  • jann5s - Friday, July 4, 2014

    lol, I thought this expression was wrong: "the proof is in the pudding", but in fact I was wrong: http://en.wiktionary.org/wiki/the_proof_is_in_the_...
  • toyotabedzrock - Friday, July 4, 2014

    I wish you had talked more about the GPU in the Nexus 10, since that is a shipping product. It would be nice to know how it differs from the newer Midgard designs.
  • seanlumly - Friday, July 4, 2014

    Another interesting point about the Mali architecture (which goes unnoticed, but is significant) is that its anti-aliasing is fully pipelined and tiled (read: zero bandwidth penalty for the op), and very fast. MSAA 4x costs 1 cycle, MSAA 8x costs 2 cycles, and MSAA 16x costs 4 cycles. This means that if you have a scene full of fragment shaders running for more than 4 cycles (which is not too complex these days), you get the benefit of ultra-high-quality 16x MSAA for FREE.

    There aren't many examples of 16x MSAA online, but even 8x MSAA performs very well, with sharp, non-blurry results, and is often the standard other techniques are compared against. MSAA produces very crisp edges, devoid of aliasing and crawling during animation.

    Of course, MSAA isn't perfect – it isn't terribly helpful for deferred renderers – but it certainly doesn't hurt them when it costs nothing, even if you are planning to do a screen-space pass in post.
  • toyotabedzrock - Friday, July 4, 2014

    Oddly, the best open-source driver is for the Adreno GPU; perhaps you should ask that person what he knows about it?
  • ol1bit - Saturday, July 5, 2014

    This is another fantastic job, guys! Thanks to you, and thanks to Midgard for sharing!
  • cwabbott - Saturday, July 5, 2014

    Well, if they won't cough up the information, then there's always freedreno... Rob Clark has reverse engineered basically everything you would want to know about the Adreno architecture, up to even more detail than this article. All that remains is to fill in the pieces based on the documentation he's written...
