Turn your Linux PC into a gaming machine

23rd Dec 2012 | 10:00

Linux-based PCs aren't second-tier machines any longer

Introduction

Linux-based operating systems have long been the alternative OS for us PC users, but there are several reasons why they haven't garnered the mainstream following they perhaps deserve.

Most of the issues stem from the unfamiliar way they work compared with the operating system we've all used a million times before: Windows.

For most of its life, Linux has demanded that its users get elbow deep into the command line - something most of us forgot about when Windows 95 happened. It's also had very patchy support for different components, and gaming on a Linux box has generally been an exercise in needless frustration.

Things are changing quickly though: the latest versions of Linux stalwart Ubuntu have become increasingly familiar and functional and, more importantly, easier for the general public to use and get their heads around.

With Windows 8 radically changing the way you interact with your operating system - demanding that you learn its new ways before you can feel comfortable in its new surroundings - there's never been a better time to pick up a Linux distribution and learn the ropes on something that's a lot more customisable.

When it comes to gaming, though, things are still a little tricky. Gaming through WINE remains rather hit and miss, but the good news is that there's a version of Steam built specifically for Linux setups that could make Linux gaming a viable alternative to the Microsoft-dominated PC scene.

Parts for penguins

Component compatibility is something the manufacturers will have to become more au fait with though, and we're going to investigate just how well the big players are doing right now.

We've picked some of the most vital parts for penguin-based systems - the processor, graphics card and solid state drive - to see how they fare in the alternative operating system.

We'll see how they get on with the latest manufacturers' drivers and, where we can, with any open source drivers that might be available.

Can you still be a gaming, component-swapping guru running a Linux-based PC? We say hell yes, but you're going to have to be picky with the parts for penguins.

CPUs

There was a time when CPU performance came down to one thing: clock speed. A faster CPU could perform more operations in a given amount of time, and could therefore complete a given task before a slower CPU.

Clock speed is measured in hertz - the number of clock cycles per second, which roughly corresponds to the number of instructions completed each second (OK, we're simplifying a bit here - some instructions take more than one clock cycle).

Most modern processors run at a few gigahertz (1GHz = 1,000,000,000Hz). What constitutes an instruction depends on the type of processor. We'll be looking at the x86 processor family, which is used in most desktops and laptops.
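
If you're curious what your own chip reports, the kernel exposes this information directly - a quick check, assuming the standard /proc filesystem and the lscpu tool (part of util-linux) are available:

# Show the model name and current clock speed of each core
grep -E "model name|MHz" /proc/cpuinfo

# Or get a one-page summary, including architecture and core count
lscpu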

This instruction set started in 1978 on the 16-bit Intel 8086 chip. The main instructions have stayed the same, and new ones have been added as new features have become necessary. The ARM family of processors (used in most mobile devices) uses a different instruction set, and will deliver different performance at the same clock speed.

As well as the number of operations, different processors perform the operations on different amounts of data. Most modern CPUs are either 32- or 64-bit - this is the number of bits of data used in each instruction.

So, 64-bit should be twice as fast as 32-bit? Well, no. It depends on how much you need - if you're performing an operation on a 20-bit number, it will run at the same speed on 64- and 32-bit machines.

This word length can also affect how much RAM the CPU can address. One of the biggest aspects of CPU performance is the number of cores. In effect, each core is a processor in its own right that can run software with minimal interference from the other cores.
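
You can check both of these things from a terminal - a quick sketch using standard tools (nproc, getconf and uname are all part of a default install):

nproc             # number of processing units the kernel can schedule on
getconf LONG_BIT  # 32 or 64, depending on the installed system
uname -m          # machine architecture, such as x86_64 or i686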

Threadbare

As with the word length, the number of cores can't simply be multiplied by the clock speed to determine the power of the CPU.

A task can take advantage of multiple CPU cores only if it has been multi-threaded. This means that the developer has divided the program up into different sub-programs, each of which can run on a different core.

Not all tasks can be split up in this way, though. Running a single-threaded program on a multi-core CPU will not be any faster than running it on a single core. However, you will be able to run two single-threaded programs on a multi-core CPU faster than the two would run on a single core.
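
A large compile is the classic example of a multi-threaded job, and make's -j flag is an easy way to see the difference on your own machine - a rough illustration, assuming you're sitting in a source tree with a Makefile:

# Build on a single core
time make -j1

# Build with one job per available core
time make -j"$(nproc)"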

We tend to think of memory as something a computer has a single lump of, and divides up among the running programs. But it's more nuanced than this.

Rather than being a single thing, it's a hierarchy of different levels. Typically, the faster the memory the more expensive it is, so most computers have a small amount of very fast memory, called cache, a much larger amount of RAM, and some swap that is on the hard drive and functions as a sort of memory overflow.

When it comes to CPUs, it's the cache that's most important, since this is on the chip. While you can add more RAM and adjust the amount of swap, the cache is fixed. Cache is itself split into levels, with the lower ones being smaller and faster than higher ones.
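
You can see the cache hierarchy on your own CPU without opening the case - a quick look, assuming a reasonably modern kernel and util-linux:

# Summary of L1, L2 and L3 cache sizes
lscpu | grep -i cache

# Per-level detail for the first core
grep . /sys/devices/system/cpu/cpu0/cache/index*/size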

Configuration

So, in light of all this, it can be difficult to know how different configurations will perform in different situations, and when you throw in an operating system that doesn't always have the most reliable driver sets, things can get even more confusing.

We've taken a triumvirate of different processors to see how they cope with the vagaries of Linux.

We're running all of them at their native clock speeds. You can always overclock later through the Bios, but for now we need to make sure the setup is as stable as we can get. Later on, when you're full of Linux love but craving a little more speed, you can dive into the Bios and start messing around with clock speeds and multipliers as much as you like.
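
It's also worth checking that the kernel's frequency scaling isn't quietly moving your clocks around mid-benchmark - a quick check, assuming the cpufreq interface is available on your system:

# Which scaling governor is in charge of the first core
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Current and maximum frequencies, in kHz
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq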

We can see that the Intel processor out-performed the AMD ones in almost every area. This isn't surprising, as it costs twice as much as the cheapest one, but it also shows that Intel's Linux drivers are strong enough to maintain the same large lead it enjoys with its processing tech under Windows.

In a few areas - the Apache static page test, for example - it performed twice as well. The Sandy Bridge CPU almost always outperformed the Phenom II X6, despite having two fewer cores and only a slightly higher clock speed.

The only significant exceptions to this were the John the Ripper password cracking test and some of the GraphicsMagick tests. These are highly parallel benchmarks, which take full advantage of the higher thread count of the Phenom II X6.

Not all of the speed differences here are down to the CPU though. The different boards have different hardware on them, despite both running similar technology. The differences in the way the storage drives operated were pronounced.

This resulted in dramatically faster read speeds for files under 2GB, though there was no difference in files above this size. Write speeds were roughly even across the different setups too.

The choice of CPUs available today is probably more complex than it has ever been. There has been growth in simpler, low-power CPUs, complex processors, highly parallelised graphics chips and clusters.

More than ever, the question isn't "Which is the best processor?", but "What is the right solution for the task?" Answering this requires knowledge of what chips are on the market, what they cost and how they perform at different tasks.

The high-end Intel cores are the most powerful for everyday tasks, but that speed comes at a price. The extra cores in the X6 proved enough to match, and sometimes outperform, the i5 in the GraphicsMagick benchmarks, which simulate image manipulation, while leaving a significant chunk of cash in your wallet.

Unless you use KDE with every widget and effect though, the X4 is more than capable of performing most day-to-day computing tasks.

Benchmarks

[CPU benchmark charts]

GPUs

Perhaps the most subjective component in any hardware discussion is the one responsible for generating the graphics. This is because the best choice for you will depend on how important graphics are in your system.

If you use the command line or a simple window manager, for example, an expensive, powerful card will be a waste of money. This is because it's in the realm of 3D graphics that most graphical processing units (GPUs) differ, and they often differ dramatically.

Although 3D rendering capabilities used to be important solely for running 3D games, the mathematical powerhouses contained within a GPU are now used for lots of other tasks, such as HD video encoding and decoding, mathematical processing, the playback of DRM-protected content, and those wobbly windows and drop shadows everyone seems to like on their Ubuntu desktops.

A better hardware specification not only means that games run at a higher resolution, at a better quality and with a faster framerate - all of which adds to your overall enjoyment - it now means you also get a better desktop experience.

Processing

Like CPUs, the development of GPUs never seems to reach a plateau. Their power seems to double every 18 months, and this is both a good and a bad thing. The good is that last year's models usually cost half as much as they did when they were released. The bad is that your card is almost always out of date, even when you buy the most recent model.

For those reasons, and because most Linux gamers won't want cutting-edge gaming technology when there are no cutting-edge titles to use it on (unless you dual-boot to Windows), we're going to focus our hardware on value, performance, hardware support and compatibility.

At the value end of the market, we're going to look at models slightly off the cutting edge, including a couple of cheap options and a few that are more expensive. For performance, we've run each device against version 3.0 of Unigine's Heaven benchmark.

This is an arduous test of 3D prowess, churning out millions of polygons complete with ambient occlusion, dynamic global illumination, volumetric cumulonimbus clouds and light scattering. It looks better than any Linux-native game, and it tests both for hardware capabilities and the quality of the drivers.

As the Unigine engine is used by a couple of high-profile games, including Oil Rush, its results should give a good indication of how well a GPU might perform with any modern games that appear.

However, we also wanted to test our hardware on games that you might want to play now. We tested the latest version of Alien Arena, for example, as well as commercial indie titles such as World of Goo.

More importantly, we also tested the kit with some games from Steam running on WINE. Steam is a games portal for Windows, and it has become the best way of buying and installing new games for that operating system.

There's some very strong evidence that Steam will be coming to Linux before the end of 2012. If that happens, its WINE performance should give us some indication of how certain Steam titles will run on Linux.
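
If you want to try this yourself, installing the Windows Steam client under WINE is straightforward - a rough sketch, assuming WINE is installed and you've downloaded the installer from Valve's site (the exact filename and install path may differ on your system):

# Run the Windows installer inside the default WINE prefix
wine SteamSetup.exe

# Launch the installed client afterwards
wine ~/.wine/drive_c/Program\ Files/Steam/Steam.exe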

Hardware

We tested five different GPUs. The first two are integrated, which means they're part of the CPU package rather than being discrete cards that you slot into your motherboard. These CPU and graphics packages were the norm in the old 2D days, but since 3D gaming demanded more power it went discrete. Now we're seeing more powerful GPUs integrated on die.

We started with Intel's HD 3000 on the i5-2500K CPU, and because Intel takes Linux driver development seriously, we expected great results from a single package. The other integrated part we tested has a much better specification on paper: it's the one that comes with AMD's A8-3850 APU package (aka AMD Fusion).

This is the rumoured core of a PlayStation 4, and although the GPU on our model is likely to be less powerful than the one in Sony's eventual console, it will still be possible to combine its computational power with another external Radeon card using the hybrid CrossFire option enabled from the Bios. It's listed as an AMD Radeon HD 6550D, and we used it with 512MB of assigned VRAM.

The remainder of the cards we looked at were discrete, and connect to a spare PCIe slot on the motherboard. If you go this route, you need to make sure you've got two spare slots - a graphics card will often occupy an adjacent slot for extra cooling - and that your power supply is capable of providing enough raw energy.

We used a 600W supply, with two separate 12V rails for powering graphics hardware. Our cards needed additional power: a single six-pin connector for most, or two connectors for the most power-hungry - the Nvidia card.

The models we looked at were the cheap AMD Radeon HD 6670 (which is one of the cards designed to work with the A8-3850 APU), the more powerful AMD Radeon HD 6850 and the Nvidia GTX 570, and we tested with both open source and proprietary drivers.

Testing: value cards

Results were mixed with Intel's HD 3000. Running Mesa 8.0.2, the Unigine benchmark barely ran, which means many modern games will be impossible to play. We had better luck with Alien Arena, which gave a comfortable 60fps, but we started to form an opinion that if you want to play games, you're going to need a proprietary driver.
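
Before benchmarking anything, it's worth confirming which driver and Mesa version are actually in use - a quick check, assuming the mesa-utils package (which provides glxinfo) is installed:

# Report the vendor, renderer and OpenGL version in use
glxinfo | grep -E "OpenGL (vendor|renderer|version)"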

The first Radeon GPU we tested was the HD 6550D integrated GPU on the A8-3850 APU, with version 0.4 of the Gallium open source driver. Desktop performance was good, and accelerated Unity on Ubuntu worked without any problems (as it did on the Intel).

Almost as impressively, the Heaven benchmark ran better than it did on Sandy Bridge - which is more than can be said for the same demo on our ancient Nvidia 7600GTS - but the rendering was still broken.

We watched silhouettes move across the screen at seven frames per second, rather than colourful textures. That's why we then used the Catalyst proprietary drivers, which we installed manually.
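
You don't necessarily have to install Catalyst by hand: on Ubuntu, the packaged proprietary driver can be pulled straight from the repositories - a sketch assuming Ubuntu 12.04, where the package is called fglrx:

# Install the packaged Catalyst driver, then reboot to load it
sudo apt-get install fglrx
sudo reboot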

Our next test was with Alien Arena, which ran at a surprisingly low 25fps - more than adequate for a bit of office mayhem, but nowhere near as good as Sandy Bridge. With the Heaven benchmark, however, the proprietary drivers rendered the graphics correctly, and also delivered a benchmark score of 10fps.

This might seem low, but when you consider it's an integrated chipset and the benchmark itself isn't optimised for playability, it's a good result. We tried the same test with both Unity 3D and Unity 2D to see if there was any difference when the desktop was using OpenGL, and we found none - proof that the recently-released Unity 5.12 did fix the problems with OpenGL performance.

We got a small step up in performance when we tested the Radeon HD 6670 1024MB. Alien Arena was now running at 55fps, and the Heaven benchmark gave us 25.3fps, with a low of 11fps and a high of 46fps. This is a great result for a budget card, and if you opt for the passively cooled version, it would make an ideal option for a Linux games PC and movie player.

Testing: power cards

This leaves us with the two most powerful cards at our disposal - the Radeon HD 6850 1024MB and the Nvidia GTX 570. We started with the Radeon, and it quickly scored dramatically better results in the Heaven benchmark, returning 46.2fps, with a minimum of 15fps and a maximum of 78.8fps.

Emboldened by this result, we thought we'd try a couple of other tests, firstly with the native (and ancient) version of Darwinia. This ran at an exceptional 160-250fps, which means this card won't have any difficulty with older games.

However, we did experience problems when we then tried Steam. To get Bioshock to work, for example, we had to quit Unity 3D first. But even when it did work, the graphics weren't rendered correctly.

It was better news for Source games, though, as both Half-Life 2 and the Lost Coast stress test yielded good results - the latter running at 47.91fps despite its still-spectacular rendering quality.

Now we get to the most expensive card in our set, Nvidia's GTX570 with 1,280MB of RAM. We first tried it with the open source nouveau drivers, but we had no success running our benchmarks, Darwinia or Steam games, and we guess that if you're intending to spend a considerable sum on graphics, you'll want the best possible drivers.
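
As with the AMD cards, the packaged proprietary driver is the easiest route - a sketch assuming Ubuntu 12.04, where the package is called nvidia-current:

# Install the packaged proprietary Nvidia driver, then reboot
sudo apt-get install nvidia-current
sudo reboot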

There are other advantages to using Nvidia's proprietary drivers, too. The custom settings utility, for example, which can be installed alongside the drivers, is a surprisingly powerful tool.

You can enable TwinView, which we've always found more stable than Xinerama for multiple screens, and switch between various resolutions for each screen without requiring a restart. The Catalyst drivers can do this too, but with Nvidia's you can also overclock your hardware and monitor the temperature of your GPU.
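
The same tool can be driven from the command line as well as the GUI - for example, to read the GPU temperature it monitors (a sketch; the attribute name assumes a current proprietary driver):

# Launch the graphical settings utility
nvidia-settings

# Query the GPU core temperature from a terminal
nvidia-settings -q GPUCoreTemp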

It's also quite handy for troubleshooting, and we've used the Settings tool to download EDID data from our screens and force other screens to use the same EDID data. With proprietary drivers, the GTX 570 was a clear winner.

It gave a strong result in the Heaven benchmark, at 66.6fps, and Bioshock ran perfectly from Steam running on WINE, so Nvidia hardware looks like the way to go for native versions of Steam.

As to whether it's worth the extra money, this depends on how important gaming is to you.

Benchmarks

[Graphics benchmark chart]

SSDs

While processors, graphics cards, RAM and network connections have all got faster over the years, hard drive technology seems to have moved on very little.

Hard disk drives still use mechanical parts, and are therefore among the heaviest, slowest, least reliable and most power-hungry components in a typical computer.

SSDs (solid state drives) are changing that though, and are one of the most exciting developments in PC hardware in the past five years.

In this section, we're going to look at these miraculous devices. As well as comparing the two drives we have here, we're going to answer the most common questions people have about SSDs: "Are they worth it?", "How long will they last?" and "How can I get the best out of mine?"

Are SSDs worth it?

Traditional hard drives contain a spinning disk, which is coated with a magnetic material. This magnetic material gets manipulated by a read/write head as it flies over the disk, and is what stores the data.

In contrast, SSDs have no moving parts. Instead, they're made of millions of tiny transistors (of the floating gate variety), each one capable of storing one bit of information. Because they have no moving parts, they're quieter, lighter, more energy efficient, more durable and faster.

This is obviously great if you're intending to use the drive in a laptop, where space, energy use and noise are all major considerations. The increased speed of the drive will also have a huge impact on PC and application startup times (and any other operation that reads from the disk a lot), and can make your computer feel dramatically quicker.

All of these benefits sound great, but SSDs are not without their downsides, and you should take these into consideration before deciding to invest. Most notably, you can't buy SSDs that are as large as traditional-style mechanical hard drives, and they're much more expensive.

For example, the Crucial M4 128GB that we have on test costs around £80; the same cash will net you a 2TB hard drive. If you need a lot of space or are on a very tight budget, an SSD might not be for you.

The answer to the question of whether SSDs are worth it, then, is: "It depends on how you use your computer."

Lifespan

Two common concerns that people have about SSDs are how long they last, and whether the performance you get when they're new will last all the way to old age. These concerns certainly aren't unfounded.

The transistors in an SSD will last only for approximately 10 years, or 10,000 writes, whichever comes first - so they have a limited life. What's more, in some early models, badly designed firmware meant that performance could degrade significantly over time.

In modern drives, with a modern operating system and filesystem, the significance of these problems has been reduced massively thanks to something called TRIM. This lets the filesystem tell the drive which blocks are no longer in use, helping the firmware manage the allocation of data blocks so that writes are spread evenly across the transistors without performance degrading over time.

How big an impact does TRIM have? In one of the most authoritative articles on the subject, Anand Lal Shimpi found that on an aged drive, write performance was just 52 per cent that of a clean drive without TRIM; with TRIM, the aged drive performed at 98 per cent that of the clean one. TRIM is worth enabling.

So, how do you get TRIM working? The first thing to do is make sure that your drive supports it. If it has been bought in the last few years, it almost certainly will, but anything older and you'll need to check whether it's supported. You can do this with the hdparm command, as follows:

hdparm -I /dev/<ssd> | grep "TRIM supported"

Remember to replace <ssd> with the device name of your SSD. If that command returns something, then you're ready to enable TRIM in the operating system.

To do this, you need to format your partitions with either the ext4 or btrfs filesystem. These are the only two that support TRIM. Here at PCF towers, we use ext4, since btrfs is still lacking a stable repair tool, which makes it less able to recover from disaster, and we recommend that you do the same.
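
If you're setting the drive up from scratch, formatting a partition as ext4 is a one-liner - be sure you've picked the right device, because this destroys any data already on it (replace /dev/sdX1 with your SSD's partition):

# Create an ext4 filesystem on the SSD partition (destructive!)
sudo mkfs.ext4 /dev/sdX1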

Modify the mount

After that, you will need to modify the mount options of the filesystems, as they don't enable TRIM support by default. This can be done by editing the /etc/fstab file. Before making any modifications to the file, however, make sure you create a backup, as if you get things wrong in this file, it can stop you from booting.

cp /etc/fstab /etc/fstab.bk

If anything goes wrong at this point, you can always boot to a live CD, reverse the copy and then reboot your system to get it working again. With the backup in place, you need to modify, on each line that describes a partition on your SSD, the part that has the word 'defaults' in it. To this, you want to add ,discard, so that the entire line looks something like this:

/dev/sda1 / ext4 defaults,discard 0 1

That's it. Now you just need to save the file, reboot, and your drive has TRIM support enabled. This is the most important tweak to apply to your SSD.
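
If you'd rather not take it on trust, the fstrim tool from util-linux will trim a mounted filesystem on demand and report how much space it discarded - a quick check, assuming a reasonably recent util-linux:

# Manually trim the root filesystem and report what was discarded
sudo fstrim -v /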

Extending life

There are other ways to tweak your drive and extend its life still further.

The easiest of these other techniques is to add the noatime option to your mount options, just as we did with discard above. Normally, Linux filesystems store both the last time a file was read and the last time it was modified. With the noatime option, only the modification time is stored, reducing the number of writes needed to keep this metadata up to date and so increasing the life of your drive.

A word of warning, however: older applications, such as Mutt, won't function properly if you enable noatime, so first check application compatibility.
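
Assuming your applications are happy with it, noatime goes in the same place as discard did, so the relevant fstab line ends up looking something like this (the device and mount point will match your own setup):

/dev/sda1 / ext4 defaults,noatime,discard 0 1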

You can also increase the life of your drive by thinking carefully about what partitions you put on it. For instance, if you have a traditional hard drive available on your system as well, you might consider using the SSD for filesystems that don't change frequently, such as / and /home, while putting things such as /var, /tmp and swap on the spinning disk.

If this isn't an option, you can make other changes to reduce the frequency of writes to these directories. For instance, you can raise the minimum severity of log messages that get recorded by editing the /etc/rsyslog.conf file (see man rsyslog.conf for details), or you can decrease your system's 'swappiness', which encourages it to use swap space less frequently. You can do this by executing:

echo 1 > /proc/sys/vm/swappiness
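
That change needs a root shell and only lasts until the next reboot - to apply it with sudo and make it permanent, you can use sysctl and /etc/sysctl.conf instead (a sketch assuming a standard Ubuntu setup):

# Apply the setting immediately
sudo sysctl vm.swappiness=1

# Make it survive reboots
echo "vm.swappiness = 1" | sudo tee -a /etc/sysctl.conf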

The underlying storage technology in most SSDs varies little. What makes the biggest difference to their performance is the controller and firmware - the hardware that decides how and where to write your data on the drive. A bad controller can slow your drive down, particularly as it ages, and can lead to varying performance across different-sized writes (like 4k vs 9k).

The two test drives that we have represent two competing controller solutions. The Crucial M4 uses a Marvell controller, while our Intel 330 uses a Sandforce one. These same controllers are used on many different drives, so our results will be able to inform your buying decisions, even if you don't choose either of the specific drives we have on test.

We tested the drives using the Postmark, Compile Bench and Kernel Unpacking tests in the Phoronix Test Suite, with a view to seeing how the drives performed in real situations. All of the tests were carried out on an Ubuntu 12.04 system, with ext4 and the discard option set in /etc/fstab.
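
If you want to run the same comparisons on your own hardware, the Phoronix Test Suite is packaged for most distributions - a rough sketch, assuming the test profile names are unchanged from when we ran them:

# Install the suite, then run the disk benchmarks we used
sudo apt-get install phoronix-test-suite
phoronix-test-suite benchmark pts/postmark pts/compilebench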

The Compile Bench test is perhaps the most interesting, as its operations attempt to simulate operations that age a filesystem - the most likely scenario to tax the controller. On these tests, the Intel drive, with a Sandforce controller, performed much better. That said, the Crucial drive was much quicker when it came to dealing with many small files in the PostMark test, and marginally better when unpacking the kernel.

Both drives are in the same price bracket, being available online anywhere from £84 and upwards.

Benchmarks

[SSD benchmark chart]
