Complete guide to DirectX 11
22nd Nov 2009 | 09:00
Is the new tech really a revolution in graphics rendering?
Introducing DirectX 11
Have you read our coverage of AMD's new Radeon HD 5870 graphics card? If you have you'll already be marvelling at the sheer, giddy silliness of the frame rates it's capable of, not to mention basking in the glory of nearly three teraflops of raw processing power.
It's simply the sexiest slice of silicon since a sliver of Megatron's mainboard accidentally slipped down Megan Fox's frilly bits. Of course, the 5870 is also the very first graphics card to support DirectX 11.
For the uninitiated, that's the latest in a long line of multimedia APIs from Microsoft and apparently it's going to rock more than just your PC gaming world. At least that's what both AMD and Microsoft would have you believe.
But perhaps you've heard it all before. While similar claims were made for DirectX 10, that API pointedly failed to either ignite the imagination of gamers or generate enthusiastic support from game developers. Believe us, therefore, when we say we understand your scepticism.
Ultimately, only time will tell whether DirectX 11 really turns out to be a game changer.
There are many good reasons to expect that it will bring the biggest step forward in entertainment on the PC since programmability appeared in DX8, and quite possibly ever. Certainly, some aspects of DX11 have far wider-reaching implications than any previous DirectX API.
There's much more to DX11 than the usual tweaks to shader definitions or a spot of funky lighting technology.
Major new rendering tech
Not only does it debut a major new rendering technology in tessellation, it also aims to bring general purpose processing on the GPU into the mainstream, all the while delivering a more widely compatible DirectX API than ever before. It's ambitious stuff that's been a long time coming.
But with the arrival of Windows 7 and the Radeon HD 5800 series, along with the promise of a rapid uptake by game developers, there's plenty to get excited about.
Before we get down to business with details of DirectX 11, let's quickly remind ourselves of what exactly DirectX is and does.
In simple terms, it's a software layer known specifically in codemonkey jargon as an application programming interface, or API for short. Its job is essentially twofold.
Firstly, it makes it easier for application developers to access the multimedia capabilities of various PC components, including sound and graphics hardware. These days, it's most commonly associated with the latter, but graphics on the PC is actually only a subset of DirectX, known as Direct3D.
Anyway, along with providing a framework for software developers, DirectX sets the parameters for multimedia tasks on the PC in terms of the hardware definitions for various components, again most notably graphics and sound chips. Consequently, Microsoft works closely with the likes of Nvidia and AMD when developing the next version of DirectX.
Over time, the PC's remit has expanded to the point where today it has become arguably the most flexible and adaptable machine on the planet, equally adept at hardcore scientific number crunching and at keeping frag-happy teenagers entertained. Inevitably, as this remit has broadened, so has the scope of DirectX.
With the arrival of version 11 comes perhaps the most significant expansion yet for DirectX and the first of three key developments delivered by DX11. Known as Direct Compute, it effectively opens out DirectX, or perhaps more accurately the Direct3D pipeline, to almost any kind of computational task. The only really significant requirement is that the task lends itself to parallel processing.
The target component, of course, is the GPU, by far the most parallelised chip inside the PC and potentially the most powerful, if only its resources could be harnessed for general computing.
At this point, you may sense a whiff of déjà vu in the air. Isn't the idea of general purpose computing on the GPU already well established under the GPGPU banner?
There are indeed several ongoing GPGPU initiatives that predate the arrival of Direct Compute and DX11, the most high profile of which is Nvidia's CUDA platform. But the key difference with Direct Compute is that it sets out common standards to which both application and hardware developers must adhere.
In other words, where CUDA requires the presence of Nvidia graphics cards, Direct Compute will guarantee compatibility whatever the branding of your graphics card.
However, Direct Compute does more than just guarantee compatibility. It also provides hardware definitions that ensure graphics chips are actually up to the job of general purpose computation.
Admittedly, DirectX 10-class hardware also supported an early version of the Direct Compute standard. But it was, frankly, an afterthought that reflected the abilities of graphics-centric hardware rather than extending those hardware definitions to properly support general purpose processing.
To take just one example, DX11 chips must provide 32KB of shared memory for each group of threads. The previous DX10-class compute model, by contrast, let each thread write to a pitiful 256 bytes of shared space. Hence, you could say that all previous implementations of GPGPU have really been the side effect of efforts to create more programmable graphics rendering pipelines.
With Direct Compute 11, graphics vendors are required from the get-go to architect their graphics chips with general purpose computing in mind. Suffice to say here that the impact of a truly general purpose GPU on the likes of physics and AI simulation or media encoding will be massive.
For such highly parallel tasks, graphics chips could well turn out to be 10, 20, perhaps even 30 times faster than the most powerful CPUs. And it will be DirectX's Compute Shader that enables all that parallelised goodness.
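To get a feel for the programming model, here's a rough sketch in Python. This is plain CPU code, not a real compute shader, and the function name and group size are our own invention, but it mirrors the shape of a Compute Shader dispatch: the work is carved into groups of threads, each group cooperates through fast shared memory, and a final pass stitches the per-group results together.

```python
# Conceptual sketch of the Direct Compute model in plain Python (not GPU code).
# A dispatch is split into thread groups; threads in a group cooperate through
# fast "groupshared" memory, echoing the 32KB scratchpad DX11 hardware provides.

def dispatch_reduce(data, group_size=8):
    """Sum `data` the way a compute shader might: per-group partial sums first."""
    partial_sums = []
    for group_start in range(0, len(data), group_size):
        # Each group loads its tile into (simulated) groupshared memory...
        groupshared = data[group_start:group_start + group_size]
        # ...then reduces it locally, writing a single result back out.
        partial_sums.append(sum(groupshared))
    # A final pass combines the per-group results.
    return sum(partial_sums)

print(dispatch_reduce(list(range(100))))  # 4950, same as a plain sum
```

On a real GPU every group, and every thread within a group, runs concurrently, which is why the model only pays off for tasks that split cleanly into independent chunks.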
Tessellation in DirectX 11
Tiling the plane
Next up in our triumvirate of revolutionary DX11 features is a natty little thing known as tessellation. Normally we wouldn't get too excited about what, on the face of it, is just another isolated tweak to the Direct3D graphics pipeline. But this particular feature addresses the last great weakness of today's game engines in terms of graphical fidelity and realism.
With the advent of programmable shaders back in DX8, realism in games made a massive leap. Suddenly, surface effects such as reflectivity and opacity became possible, enabling the accurate simulation of materials such as metal, water and even human skin.
AVP: Even more realistic scenery and enemies await as a result of DX11 tessellation in the upcoming Aliens vs. Predator game
Likewise, programmable shaders have also enabled realistic dynamic lighting and therefore lifelike depth and contrast in rendered scenes.
At the same time, increases in memory bandwidth and pixel throughput have allowed much more detailed textures to be used, while image enhancements such as anti-aliasing and anisotropic filtering can increasingly be enabled with little to no performance hit.
But despite all this progress, one key aspect of generating a simulated 3D environment has fallen behind. Ironically enough, it's the very thing that gives rise to '3Dness'. That's right, it's geometry.
To be clear, the geometry throughput abilities of graphics chips haven't been standing still. Not only do new chips support ever more triangles per second, but the shader pipeline has also been commandeered to create simulated geometry, using methods such as bump mapping.
But the rub here is twofold. Firstly, when it comes to geometry, the real world is impossibly complex. The closer you look, the more geometrical detail you'll see. Simulating that well enough to fool the human eye is a huge task indeed.
Map my bump
Making matters worse, geometry approximations such as bump and parallax mapping are pretty clumsy kludges. They often create as many problems as they solve, particularly when it comes to factors such as occlusion and accurate shadow rendering.
This in turn leads to kludges built upon kludges and you get convoluted solutions, such as parallax occlusion mapping. In the meantime, you're eating up processing resources that are more suited to generating other visual effects. Something better is needed.
That something is tessellation. Instead of torturing pixels and textures in an effort to approximate geometry, tessellation in a graphics rendering context is the process of auto-generating geometrical features.
Put very simply, instead of hard coding each and every triangle required to simulate an object, you give the GPU the gist of what you're trying to render and let it handle the fine details and generate the required triangles. Consequently, much less data and processing is required for a given level of detail.
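That amplification idea can be sketched in a few lines of Python. This is a toy, not the actual DX11 hull/domain shader pipeline, but it captures the principle: store one coarse triangle, then let the machine split each triangle into four by joining edge midpoints, over and over.

```python
# Toy tessellator: each refinement level splits every triangle into four by
# joining its edge midpoints. One stored triangle becomes 4^levels on-chip,
# which is the essence of trading stored data for generated geometry.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def tessellate(triangles, levels):
    for _ in range(levels):
        refined = []
        for a, b, c in triangles:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        triangles = refined
    return triangles

coarse = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
print(len(tessellate(coarse, 3)))  # 64 triangles from a single stored one
```

Real tessellation also displaces the new vertices (using, say, a displacement map) so the extra triangles actually add shape rather than just density, but the data saving works the same way.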
Which leads us to the really clever bit about tessellation as implemented in DX11. Existing game engine data generated for the various mapping effects mentioned above can be used with minimal modification in the DX11 tessellation engine. Our understanding is that it's the work of only a few weeks to convert an existing mapping-heavy game engine for full tessellation. And don't forget, the geometry generated by the DX11 tessellation is composed of real triangles.
This means they behave in just the same manner (in terms of occlusion, self-shadowing and filtering) as the main or hard coded geometry.
The other kicker regarding tessellation is to do with performance. Take a heavily parallax-mapped scene in a typical DX10 game. Let's call that game Crysis – it certainly makes more widespread use of mapped 'geometry' than most.
Then convert the mapping data into tessellation data. Not only will you get a much more convincing, realistic result, but you'll also find the tessellated part of the scene runs around three times faster than the old mapped version. In turn, that allows you to ramp up the detail by a factor of three without slowing the rendering process down. In a word: win.
The last-but-not-least in our list of DX11 essentials is the trickiest to define, but it's crucial nonetheless. Broadly speaking, it covers the work Microsoft has done to ensure compatibility across operating systems and graphics hardware: good news for gamers, and warmly welcomed by the development community.
As good as previous versions of DirectX have been, they've often forced segregation. That could mean a gamer with a certain video card being cut off from a game or a version of Windows – or a developer from an API, and therefore a set of potentially beneficial technologies. Either way, it's bad news.
The first and most obvious step taken by Microsoft was to make DirectX 11 fully compatible with Windows Vista despite the fact that it's a key launch feature for Windows 7. By the time you read these words, Microsoft will have already released a redistributable enabling Vista to be updated with full DX11 support.
That's great for gamers who don't fancy forking out for a new operating system in order to make the most of the latest games. It also means developers know they'll be able to target the massive installed base of Vista machines when developing in DX11, not just the nascent Windows 7 ecosystem.
But that's just the beginning. One of the cleverest things Microsoft has done with DirectX 11 is to reduce the workload it presents to developers. When Vista was introduced, developers had to produce an extra codepath and executable if they wanted to support DX10, while retaining another for DX9 on Windows XP.
With DX11, a single executable supports DX9, DX10 and DX11 hardware on Vista and Windows 7. Admittedly, most developers will want to support older DX9 machines powered by Windows XP and that still requires its own codepath. But they won't need to make that tricky judgment call of whether or not it's worth creating an extra path to support a new operating system and API in the form of Windows 7 and DX11.
It's a familiar refrain – gaming is the one major software group on the PC that has so far failed to really make the most of modern multi-core CPUs.
It's one of the reasons why we recently hailed Intel's new quad-threaded Core i5 processor as the best gaming CPU you can buy, while demoting its superficially superior eight-threaded Core i7 sibling to second spot. There's only so many threads that today's games can make good use of.
However, there's little doubt that massively multi-core is the future of the PC processor. It would be good if games could play nicely with that fact. Funnily enough, one of the biggest obstacles to achieving that to date has been DirectX itself.
The problem is, all versions of DX up to 11 have restricted the rendering part of a game engine to a single thread. In practice, this has meant that at best, games can achieve what is known as task-level parallelism.
It's a rudimentary sort of multi-threading where each major task, for example, rendering, AI, physics and gameplay, gets its own thread. That may sound like a decent solution. But in reality you end up with unevenly matched threads.
Typically the rendering thread is far more demanding than the others. So what, you wonder? Well, games are essentially rendered frame by frame. What this means is that each thread must complete its work before the next frame can be rendered. And so you end up with a situation where several threads must sit idle and twiddle their virtual thumbs as they wait for the rendering thread to come lumbering over the horizon.
But not with DX11. Microsoft has introduced a new technology known as multi-threaded display lists. It allows for data-level multi-threading of the rendering engine, which in turn enables truly multi-threaded games to be implemented on the PC for the first time.
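The division of labour can be sketched in Python (standing in for the C++ a real engine would use; the scene chunks and command strings here are invented for illustration). The expensive part, recording rendering commands, happens on any core; only the final playback is serialised, and it runs in a fixed order no matter which worker finishes first.

```python
# Sketch of the idea behind multi-threaded display lists: worker threads
# *record* rendering commands in parallel, then the main thread *plays them
# back* in a deterministic order, so the frame comes out the same every time.
import threading

def record_commands(scene_chunk, out_lists, slot):
    # Recording is the heavy lifting, and it can run on any core.
    out_lists[slot] = [f"draw {obj}" for obj in scene_chunk]

chunks = [["terrain", "water"], ["buildings"], ["characters", "effects"]]
command_lists = [None] * len(chunks)
workers = [threading.Thread(target=record_commands, args=(c, command_lists, i))
           for i, c in enumerate(chunks)]
for w in workers: w.start()
for w in workers: w.join()

# Only submission is serialised; slot order, not thread timing, decides the
# final sequence.
frame = [cmd for cl in command_lists for cmd in cl]
print(frame)
```

That separation is what frees the other threads from waiting on one monolithic rendering thread: they each fill their own command list and only queue up at the very end.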
Even better, this feature is dependent only on the API, not your graphics hardware. So long as you're running DirectX 11 with a compatible graphics driver, you can enjoy the benefit of any game that takes advantage of this improved multi-threading support. You don't need a DX11 graphics card.
It's certainly nice to know you'll get a benefit from DirectX 11 whatever your current graphics card. But the real fun will come with fully compatible hardware. If nothing else, we're bored stiff of waiting for GPGPU to make the move from concept to widespread reality. Here's hoping, then, that DX11 turns out to be the tipping point on that subject.
Still, one thing we're extremely confident of is the impact tessellation will have on graphics quality. Pre-DX11 games will soon seem so blocky and square, they could well be known forever more as those that came BT (Before Tessellation).
First published in PC Format Issue 233