The ultimate guide to graphics cards

30th Sep 2008 | 16:37

What you need to get the best out of the latest games

There's a good chance the most powerful chip inside your PC, in raw computational terms, is on your graphics card. So, how did graphics get so powerful, what are graphics cards good for right now and how on earth do you choose from the baffling array of 3D chipsets on offer?

A little history

The origin of today's 3D monsters can be traced back to ancient 2D cards designed to improve the resolution and colour fidelity of the PC. The very earliest cards had specifications that seem impossibly modest by today's standards.

The first truly modern graphics processing units (GPUs) arrived in 2001. Nvidia's GeForce 3 and the Radeon 8500 from ATI were the world's first GPUs to support so-called programmable shaders designed to enable more realistic lighting within graphical 3D simulations. Since then, no other company has been able to keep up with the relentless pace of those two brands (though ATI was bought by AMD in 2006).

Early programmable chips could typically apply their shading effects to just four pixels per operating cycle. Since then, GPUs have become ever more powerful, programmable and, most of all, parallel. AMD's Radeon HD 4800, for instance, packs a ludicrous 800 shader units (also known as stream processors in a nod to their increasingly programmable nature).

Current cards also sport huge memory buffers as big as 1GB, enabling them to drive extremely high-resolution displays, and are backed up by massive bus bandwidth thanks to the PCI Express interface in its latest 2.0 format. Finally, the very latest development in graphics technology is support for multiple cards sharing the rendering load.

But today's GPUs don't just pack painfully powerful 3D pixel-pumping engines. They also support 2D video-decode acceleration for modern HD video codecs like H.264 and VC-1, as used on Blu-ray movie discs.

That's how graphics cards got to where they are today. But what makes the latest chips tick?

3D rendering

This is the biggy: 3D rendering is the graphics card's raison d'etre. With the launch of Windows Vista, Microsoft introduced the 10th iteration of its DirectX graphics API. The DX10 API is now well established and fully compliant cards are extremely affordable. There's no need to compromise on DirectX support.

Whether it's pixel, vertex and geometry shaders, or support for high-quality anti-aliasing and high-dynamic-range lighting, they're present in all DX10 GPUs. What you do need to worry about, however, is raw rendering horsepower, and below is everything you need to know to judge a chip's performance.

Pixel throughput

Broadly speaking, it's the texture units and render output units (ROPs) that define the number of pixels a graphics chip can spit out every cycle. And remember, a 1,920 x 1,200 pixel grid on a typical 24-inch monitor works out at over 2.3 million pixels. For smooth gaming you'll need to refresh that at least 30 times a second. In other words, nearly 70 million pixels per second.
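The arithmetic above is simple enough to sketch directly, using the figures quoted in this article (a 1,920 x 1,200 panel refreshed 30 times a second):

```python
# Rough pixel-throughput requirement for smooth gaming, using the
# article's figures: a 1,920 x 1,200 grid refreshed 30 times a second.

def pixels_per_second(width, height, fps):
    """Pixels the GPU must deliver each second at a given resolution and frame rate."""
    return width * height * fps

total = pixels_per_second(1920, 1200, 30)
print(f"{total:,} pixels per second")  # 69,120,000
```

Bump the target to 60 frames per second and the requirement doubles to nearly 140 million pixels per second, which is why high-refresh gaming leans so heavily on ROP and texture-unit counts.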

Nvidia's top chip, the £300+ GeForce GTX 280, has a huge 32-strong array of ROPs and no fewer than 80 texture sampling and address units. AMD's best, the Radeon HD 4800 series, has 16 ROPs and 40 texture units, facts reflected in pricing that kicks off around the £180 mark.

Mid-range cards like Nvidia's GeForce 9600 series and the Radeon HD 4600 from AMD, typically have significantly reduced ROP and texture unit counts.

Shader processing

This is where the real computational grunt is housed and where those shimmering, shiny visual effects that dominate the latest games are processed. Intriguingly, AMD's relatively affordable Radeon HD 4800 packs 800 shaders to the Nvidia GTX 280's 240 units.

However, not all shaders are equal and it's worth noting that Nvidia's GPUs usually boast much higher shader operating frequencies than competing AMD chips. Again, mid-range chips typically suffer from cut-down shader counts in an effort to reduce chip size and cost.
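A rough way to compare unequal shader arrays is peak arithmetic throughput: shader count times clock speed times operations per cycle. The figures below are commonly quoted specs for the two cards, assumed here for illustration, so treat the result as a theoretical ceiling rather than real-world performance:

```python
# Theoretical peak shader throughput: shaders x clock (MHz) x flops per
# cycle, in GFLOPS. Clock and per-cycle figures are assumed from commonly
# quoted specs for these cards - check the actual datasheet.

def peak_gflops(shaders, clock_mhz, flops_per_cycle):
    return shaders * clock_mhz * flops_per_cycle / 1000.0

hd4870 = peak_gflops(800, 750, 2)    # Radeon HD 4870: multiply-add each cycle
gtx280 = peak_gflops(240, 1296, 3)   # GeForce GTX 280: higher shader clock, MAD + MUL
print(f"HD 4870: {hd4870:.0f} GFLOPS, GTX 280: {gtx280:.0f} GFLOPS")
```

On paper AMD's part comes out ahead despite the much lower shader clock, which is exactly why raw shader counts alone can mislead.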


Memory

The importance of memory for a graphics card is twofold. First, it's important to have enough memory on board the card itself to store all the data required to render a given 3D scene. The alternative is dipping into the PC's main memory, and that means latency, lag and stuttering frame rates. Treat 512MB as a minimum for decent performance in modern games.
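A back-of-envelope frame-buffer budget shows why. The byte counts below are illustrative assumptions (4 bytes per sample for colour, 4 for depth/stencil, with 4x multisample anti-aliasing multiplying the sample count):

```python
# Rough frame-buffer budget at 1,920 x 1,200 with 4x multisample
# anti-aliasing. Byte counts are assumptions for illustration: 4 bytes of
# colour plus 4 bytes of depth/stencil per sample.

def framebuffer_mb(width, height, bytes_per_sample=8, aa_samples=4):
    return width * height * bytes_per_sample * aa_samples / (1024 ** 2)

print(f"{framebuffer_mb(1920, 1200):.0f} MB")  # ~70 MB
```

That's roughly 70MB gone before a single texture is loaded, and it's textures that eat the bulk of the remaining memory, hence the 512MB minimum.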

The other half of the story is the related issue of bandwidth. Keeping all those shaders and ROPs fed with pixel data takes some serious throughput. The latest cards therefore pack ultra-fast memory chips that run at 1GHz or more and transfer data at least twice per cycle (hence the term DDR, or double data rate). The latest GDDR5 (the G stands for graphics) is actually capable of four transfers per cycle. Bus width is another factor that affects bandwidth: the more bits the bus supports, the more data it can pump per cycle.

The biggest current bus is the GeForce GTX 280's 512-bit beast. However, large memory buses take up a huge amount of space on a graphics chip, so the introduction of GDDR5 memory will probably see bus technology scale back to 256-bit with the next generation of big GPUs.
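Putting those three factors together, bandwidth is simply clock speed times transfers per cycle times bus width in bytes. The example clock speeds below are assumed from commonly quoted specs for the cards this article mentions:

```python
# Memory bandwidth sketch: clock x transfers per cycle x bus width (bytes).
# Clock figures are assumptions based on commonly quoted specs for the
# cards mentioned in the article.

def bandwidth_gbs(clock_mhz, transfers_per_cycle, bus_bits):
    return clock_mhz * 1e6 * transfers_per_cycle * (bus_bits / 8) / 1e9

gtx280 = bandwidth_gbs(1107, 2, 512)  # GDDR3: double data rate, 512-bit bus
hd4870 = bandwidth_gbs(900, 4, 256)   # GDDR5: four transfers, 256-bit bus
print(f"GTX 280: {gtx280:.0f} GB/s, HD 4870: {hd4870:.0f} GB/s")
```

Note how GDDR5's four transfers per cycle let a 256-bit bus get within striking distance of a 512-bit GDDR3 design, which is precisely why the big buses are expected to be scaled back.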


Multi-GPU

The joker in the GPU pack is undoubtedly multi-GPU technology. Both of the big boys of PC graphics, AMD and Nvidia, offer multi-GPU platforms in the shape of CrossFire and SLI respectively. The idea is simple enough - use multiple GPUs in parallel to provide even more rendering oomph.

When they work, the results can be spectacular. The problem is, all too often they don't and you are left with the performance of a single card or worse. Also note that special supporting motherboards are required and, in the case of Nvidia, that exclusively takes the form of an Nvidia motherboard chipset.

Integrated graphics

A final word, in terms of 3D performance, should go to integrated graphics as found on motherboards. In theory they offer the same feature set as discrete GPUs. However, in order to make integrated GPUs small enough and cheap enough for motherboards, the number of functional units is brutally cut down, typically by a factor of 20 or worse, compared with the fastest stand-alone solutions.

2D features: 2D acceleration

First up is hardware video acceleration. Here, the two big players are once again level pegging. All the latest DX10 boards from both AMD and Nvidia have built-in 2D engines dedicated to accelerating modern and demanding codecs such as H.264 and VC-1.

2D features: video ports

VGA may have been revolutionary in 1987, but it looks pretty laughable compared to modern digital interfaces. Today, DVI remains the dominant standard on the PC and in dual-link form is good for resolutions up to 2,560 x 1,600 pixels. The HDMI standard as used on TVs is also creeping onto some cards, especially those designed for use in home theatre PCs, and carries both digital video and audio signals.

Joining these two well-established interfaces is DisplayPort. Think of it as a cross between DVI and HDMI and you'll get the idea. It's intended to be more flexible and support higher resolutions than either DVI or HDMI.

Finally, there's the question of support for HDCP encryption (required for Blu-ray playback and other protected content). Most modern cards are HDCP compliant, but it's a feature that's always worth checking.

Form factor and power

Gone are the days of simple, single-slot boards that drop into almost any system. Today's cards vary wildly in size and shape. The biggest boards occupy the space of two PCI Express slots and may be long enough to cause fitting issues in standard ATX chassis.

Modern cards also often have mammoth power requirements. At least one six-pin supplementary power cable is usually required for a performance card and sometimes more.


GP GPU

Bringing graphics technology full circle is the idea of general-purpose computing on the GPU, or GP GPU for short. As graphics chips have become more programmable, the possibility of harnessing their immense parallel-processing capabilities for tasks other than graphics has become more attractive.

Early applications are likely to be multimedia related - video encoding, photo editing, in-game physics and artificial intelligence, for example. Nvidia is currently leading the way in GP GPU, but such is the expectation of its importance that Intel has felt the need to get in on the game. Late next year, Intel's Larrabee chip is due to appear with a remit covering both graphics and GP GPU processing.

The final recce

If that's the state of play in graphics, what are the current best-buy boards? Nvidia's GeForce GTX 280 series boards are awfully quick, but they are also awfully pricey. For that reason, AMD's Radeon HD 4870 is our pick from the top end. It's damned close for performance and nearly half the price at around £180.

Down around the £100 mark the 4870's cheaper 4850 sibling can just about be had and delivers fantastic performance for the money. Dipping below £100 brings Nvidia's 8800 GT into play. It's a slightly older chipset, but still a great all-rounder, especially now that it can be had for a piffling £80. Just be sure to get the full 512MB version and not the horrible 256MB hack.

And if even £80 is too much, do not despair. AMD has just released the new Radeon HD 4600 series. It's a very decent performer thanks to no less than 320 stream shaders, but it is imperative to go for the faster 4670 variant, yours for less than £60, rather than the bandwidth-hobbled 4650.

Read TechRadar's ultimate guide to motherboards and the ultimate guide to networking
