Adaptive Scalable Texture Compression

As we’ve noted in our rundowns of OpenGL and OpenGL ES, the inclusion of ETC texture compression support in the core OpenGL standards has finally given OpenGL a standard texture compression format after a number of years without one. At the same time however, the ETC format itself is now several years old, and like S3TC it’s only designed for a limited number of cases. So while Khronos has ETC today, in the future they want better texture compression, and they are now taking the first steps to make that happen.

The reward at the end of that quest is Adaptive Scalable Texture Compression (ASTC), a new texture compression format first introduced by ARM in late 2011. ASTC was the winning proposal in Khronos’s search for a next-generation texture compression format, with the ARM/AMD bloc beating out NVIDIA and their ZIL proposal.

As the winning proposal in that search, if all goes according to plan ASTC will eventually become OpenGL and OpenGL ES’s mandatory next-generation texture compression format. In the meantime Khronos is introducing it as an optional feature of OpenGL ES 3.0 and OpenGL 4.3 in order to solicit feedback from hardware and software developers. Only once all parties are satisfied with ASTC to the point that it’s ready to be implemented in hardware can it meaningfully be moved into the core OpenGL specifications.
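
Because ASTC starts life as an optional feature, an application can’t assume support; it has to probe for the extension at runtime and fall back to another format if it’s absent. Below is a minimal sketch of what that check might look like under OpenGL ES, using the GL_KHR_texture_compression_astc_ldr extension string and the 8x8 block format token from the KHR extension; the helper names and the fallback path are our own illustrative assumptions, not a definitive implementation.

    #include <stdio.h>
    #include <string.h>
    #include <GLES3/gl3.h>  /* assumes an OpenGL ES 3.0 context is current */

    /* Format token from the KHR ASTC extension; defined here in case
       local headers predate the extension. */
    #ifndef GL_COMPRESSED_RGBA_ASTC_8x8_KHR
    #define GL_COMPRESSED_RGBA_ASTC_8x8_KHR 0x93B7
    #endif

    /* Returns 1 if the driver advertises ASTC LDR support. */
    static int has_astc_ldr(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext && strstr(ext, "GL_KHR_texture_compression_astc_ldr") != NULL;
    }

    /* Uploads pre-encoded ASTC 8x8-block data for a w x h texture;
       'size' must match the encoded payload, i.e.
       ceil(w/8) * ceil(h/8) * 16 bytes. */
    static void upload_astc_8x8(GLuint tex, GLsizei w, GLsizei h,
                                const void *data, GLsizei size)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        if (has_astc_ldr()) {
            glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                   GL_COMPRESSED_RGBA_ASTC_8x8_KHR,
                                   w, h, 0, size, data);
        } else {
            /* No ASTC: a real app would fall back to ETC2 (core in
               ES 3.0) or an uncompressed format here. */
            fprintf(stderr, "ASTC not supported by this driver\n");
        }
    }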

So what makes ASTC next-generation anyhow? Since the introduction of S3TC in the 90s, various parties have attempted to improve on texture compression with limited results. In the Direct3D world where S3TC is standard, we’ve seen Microsoft add specialized formats for normal maps and other non-color textures, but only relatively recently did they add another color map compression method with BC7. BC7 in turn is a high quality but lower compression ratio algorithm that solves the gradient issues S3TC faces; because it stores 16 bytes per 4x4 block (8bpp) versus S3TC’s 8 bytes (4bpp) for RGB data, a 24bit RGB texture compresses at only 3:1 versus 6:1 for S3TC (32bit RGBA fares better; both formats are 4:1).

ASTC Image Quality Comparison: Original, 4x4 (8bpp), 6x6 (3.56bpp), & 8x8 (2bpp) block size

Meanwhile in the mobile space we’ve seen the industry’s respective GPU manufacturers create their own texture compression formats to get around the fact that S3TC is not royalty free (and as such can’t be included in OpenGL). And while Imagination Technologies in particular has an interesting method in PVRTC – which unlike the other formats is not block based, and can thereby offer a 2bpp (16:1) compression ratio – it comes with its own pros and cons. Then of course there’s the matter of trying to convince the holders of these compression methods to freely license them for inclusion in OpenGL, when S3/VIA has over the years made a tidy profit off of S3TC’s inclusion in Direct3D.

The end result is that the industry is ripe for a royalty free next-generation texture compression format, and ARM + AMD intend to deliver on that with the backing of Khronos.

While ASTC is another block based texture compression format, it has some very interesting functionality that pushes it beyond S3TC or any other previous texture compression format. ASTC’s primary trick is that unlike other block based formats, it is not built around a fixed size 4x4 texel block. Rather every ASTC block is a fixed 128 bits (16 bytes) in size, while the texel footprint of a block varies from 4x4 up to 12x12, in effect offering 32bit RGBA compression ratios from 8bpp (4:1) all the way up to an incredible 0.89bpp (36:1), as the table below shows. The larger block sizes not only allow for higher compression ratios, but they also offer developers a much finer grained range of compression ratios to work with than previous texture compression formats.

Block Size   Bits Per Pixel   Comp. Ratio (vs 32bpp RGBA)
4x4          8.00             4:1
5x4          6.40             5:1
5x5          5.12             6.25:1
6x5          4.27             7.5:1
6x6          3.56             9:1
8x5          3.20             10:1
8x6          2.67             12:1
10x5         2.56             12.5:1
10x6         2.13             15:1
8x8          2.00             16:1
10x8         1.60             20:1
10x10        1.28             25:1
12x10        1.07             30:1
12x12        0.89             36:1
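
Since every block occupies the same 16 bytes, the figures in the table above fall out of simple arithmetic: bits per pixel is 128 divided by the number of texels a block covers, and the compression ratio is measured against a 32bpp RGBA source. Here is a quick sketch of that math in C; the function names and the edge-block rounding convention are our own illustrative assumptions:

    #include <stdio.h>
    #include <stddef.h>

    /* Every ASTC block is 128 bits regardless of its texel footprint,
       so bits per pixel depends only on the block dimensions. */
    static double astc_bits_per_pixel(int bw, int bh)
    {
        return 128.0 / (double)(bw * bh);
    }

    /* Compressed size of a w x h texture: blocks tile the image, and
       partial blocks at the edges still cost a full 16 bytes. */
    static size_t astc_compressed_size(int w, int h, int bw, int bh)
    {
        size_t bx = ((size_t)w + bw - 1) / bw;  /* round up */
        size_t by = ((size_t)h + bh - 1) / bh;
        return bx * by * 16;
    }

    int main(void)
    {
        double bpp = astc_bits_per_pixel(12, 12);
        printf("12x12: %.2f bpp, %.0f:1 vs 32bpp RGBA\n", bpp, 32.0 / bpp);
        printf("1024x1024 at 6x6 blocks: %zu bytes\n",
               astc_compressed_size(1024, 1024, 6, 6));
        return 0;
    }

A 12x12 block, for example, spreads 128 bits over 144 texels, which is where the 0.89bpp (36:1) figure at the bottom of the table comes from.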

Alongside a wide range of compression ratios for traditional color maps, ASTC would also support additional texture types. With normal map support ASTC would displace other texture compression formats as the preferred format for normal maps, and it would also be the first texture compression format with support for 3D textures. Even HDR textures are on the table, though for the time being Khronos is starting with support for regular (LDR) textures only. With any luck, ASTC will become the all-purpose texture compression format for OpenGL.

As you can imagine, Khronos is rather excited about the potential of ASTC. With their strong position in the mobile graphics space they need to provide paths to improving mobile graphics quality and performance amidst the realities of Moore’s Law and SoC manufacturing. Specifically, mobile GPU memory bandwidth isn’t expected to grow by leaps and bounds the way shading performance will, meaning Khronos and its members need to do more with what amounts to less memory bandwidth. For Khronos texture compression is key, as ASTC will allow developers to pack in smaller textures and/or improve texture quality without using larger textures, thereby making the most of the limited memory bandwidth available.

Of course the desktop world also stands to benefit. ARM’s objective PSNR data for ASTC has it performing far better than S3TC at the same compression ratio, which would bring higher quality texture compression to the desktop at the same texture size. And since ASTC is being developed by Khronos members and released royalty free, at this point there’s no reason to believe Direct3D couldn’t adopt it in the future; all major Direct3D GPUs also support OpenGL, so the requisite hardware would already be in place.
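
For context, PSNR (peak signal-to-noise ratio) is the standard objective metric behind comparisons like ARM’s: the mean squared error between the original and the decompressed texture, expressed in decibels relative to the 8-bit peak value, with higher numbers meaning higher fidelity. Below is a minimal sketch of the computation; this is our own illustrative code, not ARM’s test tooling.

    #include <math.h>
    #include <stddef.h>

    /* PSNR in dB between an original and a decompressed image, both
       8 bits per channel, over n total channel samples. Identical
       images yield infinity (lossless). */
    static double psnr_8bit(const unsigned char *orig,
                            const unsigned char *decomp, size_t n)
    {
        double mse = 0.0;
        for (size_t i = 0; i < n; i++) {
            double d = (double)orig[i] - (double)decomp[i];
            mse += d * d;
        }
        mse /= (double)n;
        if (mse == 0.0)
            return INFINITY;
        return 10.0 * log10((255.0 * 255.0) / mse);
    }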

With all of that said, there’s still quite a distance to go between where ASTC is today and where Khronos would like it to end up. For the time being ASTC needs to prove itself as an optional extension, so that GPU vendors are willing to implement it in hardware. Only after it becomes a hardware feature can ASTC be widely adopted by developers.

Comments

  • bobvodka - Monday, August 6, 2012 - link

    Firstly, using the Steam Hardware Survey, which is the correct metric as we are a AAA games studio, I'll grant you at most 5% of the market, the majority of which have Intel GPUs, for which the OpenGL implementation has generally been... sub-par, to put it mildly.

    Secondly, all console development tools are on the PC and based around Visual Studio, so we work in Windows anyway.

    Thirdly, the Windows version generally comes about because we need artist/developer tools. Right now it is also useful for learning about and testing 'next gen' ideas with an API which will be close to the Xbox API.

    Fourthly, we have a Windows version working which uses D3D11, and OpenGL offers no compelling reason to scrap all that work. Remember, D3D has had working compute shaders with a sane integration for some years now - OpenGL has only just got these, and before that doing the work with OpenCL was like opening a horrible can of worms due to the lack of standardised and required interop extensions (I looked into this at the back end of last year for my own work at home and quickly despaired at the state of OpenGL and its interop).

    Finally, OSX lags in OpenGL development. Currently OSX 10.7.3 (as per https://developer.apple.com/graphicsimaging/opengl... ) supports GL 3.2, and I see no mention of the version being supported in 10.8. Given that OpenGL 3.2 was released in 2009 and OSX 10.7 was released last year, I wouldn't pin my hopes on seeing 4.2 any time 'soon'.

    Now, supporting 'down market' hardware is of course a good thing to do; however, in D3D11 this is easy (feature levels), while in OpenGL different hardware + drivers = different features, which again increases engineering workload and the requirement for fallbacks.
    You could mandate 'required features' but at that point you start cutting out market share and that 5% looks smaller and smaller.

    Now, we ARE putting engineering effort into OpenGL|ES, as mobile devices are an important cornerstone from a business standpoint, thus the cost can be justified.

    In short: there is no compelling business or technical reason at this juncture to drop D3D11 in favor of OpenGL to capture a fragment of the 5% AAA 'home computer' market, when there are no side benefits and only cost.
  • powerarmour - Monday, August 6, 2012 - link

    Yes because Carmack is always 100% right about everything, and the id Tech 5 engine is the greatest and most advanced around.
  • SleepyFE - Monday, August 6, 2012 - link

    id Tech 5 is awesome!! I don't like shooters (except for Prey) but I played Rage just to see how much "worse" OpenGL is. The game looks GREAT. I can't tell it from any other AAA game from the graphics alone. And that means OpenGL is good enough and should be used more. Screw what someone says, try it yourself, then tell me OpenGL can't compete.
  • bobvodka - Monday, August 6, 2012 - link

    False logic - games are as good as their artwork.

    OpenGL has shaders, so yes, with good artwork it can do the same as D3D - however the API itself, the thing the programmers have to work with, isn't as good AND up until now it was lacking feature parity with D3D11.

    Feature wise OpenGL is there.
    API/usability wise - it isn't.

    FYI: I used OpenGL for around 8 years, from around 1.3 until 3.0 came out, and like a few others was so fed up with the ARB at that point that I gave up on GL and moved to a modern API - so I'm speaking from an interface design point of view.
  • Penti - Friday, August 10, 2012 - link

    Game engines are perfectly fine supporting different graphics APIs. Obviously non-Windows platforms won't run D3D; Microsoft does not license it. So while they do license stuff like ActiveSync/Exchange, exFAT (which should have been included in the SDXC spec under FRAND terms but isn't), NTFS, remote desktop protocols, OpenXML, binary document formats, SharePoint protocols, some of the .NET environment, etc., most of the vital tech is only available against paid licensing. They don't even specify the Direct3D APIs for implementation by non-hardware vendors. It's simply not referenced at all. OpenGL is thoroughly referenced in comparison.

    Even though PS3 isn't OGL (PSGL is OGLES based) you could still do Cg shaders, or convert HLSL or GLSL shaders or vice versa, so it's not like skills are lost. Tools should be written against the game engines and middleware anyway.

    Plus desktop OGL is compatible with OGLES when it comes to the newer releases such as 4.1 and 4.3, albeit with some tricks/configuration/compatibility modes. Then some implementations suck, but that will also be true for some graphics chips' support for DX.
  • inighthawki - Monday, August 6, 2012 - link

    The tessellation feature you're referring to is a vendor-specific hardware extension, and not in the same class as DirectX's tessellation. The tessellation hardware introduced for DX11 is a completely programmable pipeline that offers more flexibility. DirectX does not add support for hardware specific features for good reason.
  • djgandy - Tuesday, August 7, 2012 - link

    Tessellation was only added to the GL pipeline in 4.0. It was another one of those 'innovations' where GL copied DX, just like pretty much every other feature GL adds.

    What GL needs to do is copy DX's habit of removing stuff from the API. Scratch this stupid core/compatibility model, which just adds even more run-time configurations; remove all the old rubbish and do not allow mixing of new features with the old fixed function pipeline.
  • bobvodka - Tuesday, August 7, 2012 - link

    There was, 4 years ago, a plan to do just what you described in your second paragraph - Longs Peak was the code name, and it was a complete overhaul and cleanup of the API with a modern design, universally praised by those of us following the ARB's newsletters and design plans.

    In July 2007 they were 'close' to a release; in October they had 'some issues' to work out - they then went into radio silence, and 6 months later, without bothering to tell anyone what was going on, they rolled out 'OpenGL 3.0' aka 2.2, where all the grand API changes, worked on for 2 years, were thrown out the window, extensions were bolted on again, and no functionality was removed.

    At this point myself, and quite a few others, made a loud noise and departed OpenGL development in favour of D3D10 and then D3D11.

    Four years on the ARB are continuing down the same path and I wouldn't bet my future on them seeing sense any time soon.
  • djgandy - Tuesday, August 7, 2012 - link

    The ARB think they are implementing features that developers want, and maybe they are, but AFAIK they have very few big selling developers anyway.

    It seems the ARB is unable to see the reason behind this, maybe because they are so concerned about the politics of backwards compatibility, or at least certain members of it are. For me this is the hardest part to understand, since it is not even a real breaking of compatibility; it is simply ring-fencing new features from old features, thus saving a ton of driver-writing hell (i.e. what DX did). Instead you can still use begin/end with your GLSL ARB and geometry shaders, with a bit of fixed function fog over the top. How useful.

    I find it hard to even consider the GL API an abstraction layer, with the existing extension hell and the multiple profiles a driver can opt to support. The end result of this "compatibility" is that anyone actually wanting to sell software using OpenGL has to pick the lowest common denominator... whatever that actually is, because you don't even know what you are getting until run time with the newer API. So then you just pick the ancient version of the API, because at least you have a 99% chance that a GL 3.0 driver will be installed, with all the old fixed function crud that you don't actually need - but glVertex3f is nice, right?

    IMO GL's only hope is for a company like Apple to put it into a high volume product and actually deliver a good contract to developers (core profile only, limited extensions, and say GL 4.0).
  • bobvodka - Tuesday, August 7, 2012 - link

    Unfortunately Apple isn't very on the ball when it comes to OpenGL support.

    OSX 10.7, released last year, only supports OpenGL 3.2, a spec which was released in 2009 and had Windows support within 2 months.

    Apple are focusing on mobile, it would seem, where OpenGL|ES is saner and rules the roost.
