How to upgrade your Linux box for Steam
12th Nov 2012 | 11:38
Beef up your Linux box for some extra FPS
Considering that none of us would have much interest in Linux if it wasn't for the hardware it runs on, x86 hardware gets relatively little attention.
This might be because Linux is now so stable, and performs well enough on older hardware, that we seldom need to think about it.
But as true as this is, we think there's another reason. And that's compatibility and performance. Despite compatibility being less of an issue than it was 10 years ago, none of us want to spend money on hardware with dubious Linux support, whether that's Intel's latest chipset, the graphics card or solid state storage. Which is why we've borrowed as many components as we could get hold of and tested them for both compatibility and performance.
To make our choices as practical as possible, we've avoided the cutting edge of technology, such as the latest CPUs and graphics cards. Not only does this give Linux and its distributions the chance to catch up with drivers and support, it also makes the prices of these peripherals much more reasonable.
We've also tried to cover competing products, such as AMD and Nvidia graphics cards, and Intel and AMD processors, with the hope of providing a more varied overview of what works well and what might not.
We've tested the difference between 32-bit and 64-bit performance, the enhancement an onboard SSD cache might make to your file-system, and whether open source graphics card drivers are good enough. And while we've not drawn any definitive conclusions on which hardware you should purchase, we've made our opinions clear on what we think works, and what doesn't.
Hardware: A complete guide
Let's start with the peripheral to which all other components are attached
1. Form factor
Motherboards come in all shapes and sizes, but most conform to one of the ATX form factors. These define where the power connectors should lie and how the board mounts in the case.
The most common used to be the standard ATX size, and this is still used by many regular desktop and power users because it allows the largest amount of expansion.
But Micro-ATX is popular, especially in set-top boxes or machines that need to be self-contained. Mini-ITX is found in embedded systems, and anything smaller is the domain of specialists. For our purposes, you'll need an ATX or Micro-ATX board.
2. CPU socket (cooling)
There are just two CPU manufacturers worth your consideration on the x86 platform, AMD and Intel, and both use a multiplicity of CPU sockets and cooling connectors. Which you need will depend on your CPU, and you'll need a motherboard to match.
Intel's latest socket is called LGA1155, and it supports both last year's Sandy Bridge processors and the just-released Ivy Bridge. AMD's latest CPU socket is AM3+, which we've used to look at the AMD Phenom II processor.
Both sockets require compatible coolers, although modern designs can be adapted for both with a screwdriver.
3. Power connectors
Modern machines need a modern power supply. Alongside the regular, wide 24-pin connector, which sometimes splits into blocks of 20 plus 4, you'll need an 8-pin 12V connector for the CPU. The connectors are keyed, so it's usually impossible to fit the wrong connector to the wrong socket.
Low-end graphics cards seldom need extra power, but mid-range cards might need an extra 6-pin PCI Express power lead, while a powerful card can require two. These need to come from your PSU, and for a powerful desktop we'd recommend one that can output 600W, with separate 12V rails for the graphics card.
4. Memory slots
Memory is tied closely to the CPU, so it needs to be chosen specifically to match your platform. This is a lot easier now that modern motherboards for Intel and AMD use the same DDR3 packages (until DDR4 is available later this year), and you just need to buy memory at least as fast as your platform requires.
If your memory is too slow for your processor, it either won't work or won't be used to its full potential. If it's faster, your only loss is paying too much. We used 4GB of G.Skill Ripjaws Gaming Series memory (F3-12800CL7D), which has a clock speed of 1600MHz. Most motherboards will now support up to 32GB of memory.
5. SATA ports (2 and 3)
Very few motherboards come with the old IDE connectors for storage and optical drives. Everything now needs to use the much easier SATA connectors.
Despite all SATA revisions using the same cables, most devices are SATA 2-compatible, which has a theoretical transfer limit of 3Gb/s, although all the boards we looked at also offered SATA 3, which doubles this limit to 6Gb/s if you use compatible storage.
6. USB ports
Similarly, now that everyone has settled into using USB 2 connections for all of their devices, this is being slowly supplanted by USB 3. USB 3 boosts the 480Mb/s transfer speed of the older standard to 5Gb/s, which both puts it on a par with SATA 3, and makes it considerably faster than Firewire 800.
However, data transfer is never a completely black and white subject. Instead, it depends on your operating system, the devices involved, the drivers for your chipset and your usage. Many video editors are convinced that Firewire 800 delivers better performance than USB 3, for instance.
7. PCI slots
You're now likely to connect your expansion cards using the PCI Express x1 or PCI Express x16 slots. Boards usually include one high-powered x16 slot for a graphics card, labelled 'PCIEX16' and situated closest to the CPU, plus some slower slots, labelled 'PCIEX4' or 'PCIEX1'.
8. Video out
Now that many Intel and AMD platforms contain a GPU for graphics, it's common to find a video out connector. These are usually either DVI or HDMI connectors for easy interfacing with a television or modern screen, and the latter will also contain the digital audio output.
9. Audio out
You'll find analogue outputs as well as digital, usually in the form of optical or coaxial connections for an amplifier. Many motherboards use a Realtek chipset for sound, and this can produce multi-channel audio. The best bet is to keep audio within the digital domain, as it won't require any conversion if you're playing movies with a compatible amplifier, and won't suffer interference.
10. Ethernet
You're probably familiar with Ethernet/wired network connections. Transfer limits haven't changed for a while, which means the speed of your network depends on the speed of the connected devices. All modern boards will support 10/100 and 1000Mb/s (Gigabit) connections.
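If you're curious what speed your NIC has actually negotiated, the kernel exposes it through sysfs, in Mb/s. A minimal sketch - interfaces that are down, or virtual ones such as 'lo', have no meaningful speed and are skipped:

```shell
# Print the negotiated link speed of each wired network interface.
for iface in /sys/class/net/*; do
    [ -r "$iface/speed" ] || continue
    speed=$(cat "$iface/speed" 2>/dev/null) || continue
    # Down or virtual interfaces report -1 or nothing; skip those.
    [ "$speed" -gt 0 ] 2>/dev/null || continue
    echo "$(basename "$iface"): ${speed}Mb/s"
done
echo "Scan complete"
```

On a Gigabit link you'd expect to see 1000 here; anything lower usually points at an old switch or a damaged cable.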
There's more to processing power than the speed of the processor
There was a time when CPU performance came down to one thing: clock speed. A faster CPU could perform more operations in a given amount of time, and therefore could complete a given task before a slower CPU.
Clock speed is measured in Hertz: the number of clock cycles per second. Roughly one instruction completes per cycle (OK, we're simplifying a bit here - some instructions take more than one clock cycle). Most modern processors run at a few Gigahertz (1GHz = 1,000,000,000Hz).
What constitutes an instruction depends on the type of processor. We will look at the x86 processor family, which are used in most desktops and laptops. This instruction set started in 1978 on the 16-bit Intel 8086 chip. The main instructions have stayed the same, and new ones have been added as new features have been needed.
The ARM family of processors (used in most mobile devices) uses a different instruction set, and so will have a different performance for the same clock speed. As well as the number of operations, different processors perform the operations on different amounts of data.
Most modern CPUs are either 32- or 64-bit – this is the number of bits of data used in each instruction. So, 64-bit should be twice as fast as 32-bit? Well, no. It depends on how much you need - if you're performing an operation on a 20-bit number, it will run at the same speed on 64- and 32-bit machines.
This word length also affects how the CPU addresses RAM. See 64- vs 32-bit processors, below, for how different word lengths affect performance.
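You can check the word length your own system is running with two standard commands - getconf reports the userland, uname the kernel:

```shell
# Report the word length of the installed userland and the
# kernel's machine architecture (e.g. x86_64 or i686).
echo "Userland word length: $(getconf LONG_BIT)-bit"
echo "Kernel architecture:  $(uname -m)"
```

A 64-bit kernel will happily run a 32-bit userland, so the two can legitimately disagree.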
One of the biggest aspects of CPU performance is the number of cores. In effect, each core is a processor in its own right that can run software with minimal interference with the other cores. As with the word length, the number of cores can't simply be multiplied by the clock speed to determine the power of the CPU. A task can take advantage of multiple CPU cores only if it has been multi-threaded. This means that the developer split the program up into different sub-programs that can each run on a different core.
Not all tasks can be split up in this way. Running a single-threaded program on a multi-core CPU will not be any faster than running it on a single core - however, you will be able to run two single-threaded programs on a multi-core CPU faster than the two would run on a single core.
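You can see this effect from the shell: two independent jobs launched with '&' are scheduled on separate cores, so together they take roughly the time of one. A small sketch using a CPU-bound shell loop:

```shell
# A deliberately CPU-bound job: count in pure shell arithmetic.
busy() {
    i=0
    while [ "$i" -lt 200000 ]; do i=$((i + 1)); done
}

echo "Cores available: $(nproc)"

# Run two copies in the background. With two or more cores, the
# kernel runs them simultaneously, so the wall-clock time is close
# to that of a single run rather than double it.
start=$(date +%s)
busy & busy & wait
end=$(date +%s)
echo "Two parallel jobs took $((end - start))s of wall-clock time"
```

Repeat the experiment with the second 'busy &' removed and compare the times to see the benefit of the extra core.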
We tend to think of memory as something a computer has a single lump of, and divides up among the running programs. But it's more nuanced than this. Rather than being a single thing, it's a hierarchy of different levels.
Typically, the faster the memory the more expensive it is, so most computers have a small amount of very fast memory, called cache, a much larger amount of RAM, and some swap that is on the hard drive and functions as a sort of memory overflow.
When it comes to CPUs, it's the cache that's most important, since this is on the chip. While you can add more RAM and adjust the amount of swap, the cache is fixed.
Cache is itself split into levels, with the lower ones being smaller and faster than higher ones. So, in light of all this, it can be difficult to know how different configurations will perform in different situations. Rather than try to work out how computers should perform with different CPU configurations, we've run a series of tests on them to find out how they perform.
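On Linux you can inspect this hierarchy directly, since the kernel exposes one sysfs directory per cache level. A sketch (the exact set of index directories varies by CPU):

```shell
# List each cache level, its type and its size for the first core.
for d in /sys/devices/system/cpu/cpu0/cache/index*; do
    [ -r "$d/level" ] || continue
    level=$(cat "$d/level")
    type=$(cat "$d/type")      # Data, Instruction or Unified
    size=$(cat "$d/size")
    echo "Level $level ($type): $size"
done
```

On a typical chip you'll see small separate level 1 data and instruction caches, a mid-sized level 2, and a large unified level 3 shared between cores.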
The processors we're looking at are:
AMD Phenom II X4 quad core, 3.4GHz (cache: 4x64KB level 1, 4x512KB level 2 and 6MB level 3) £79.00
AMD Phenom II X6 six core, 3.3GHz (cache: 6x64KB level 1, 6x512KB level 2 and 6MB level 3) £100.27
Intel Core i5-2500K quad core, 3.3GHz (cache: 4x32KB level 1, 4x256KB level 2 and 6MB level 3) £162.43
We're running all of them at their recommended clock speeds. Over-clocking is an art in itself, and could squeeze additional performance out of each of these processors, but it's beyond the scope of this article.
In an ideal world, we'd test each of these with exactly the same motherboard so we could eliminate any differences here. However, different CPUs have different pin setups, and so they don't fit physically into the same motherboards (and they wouldn't work if they did).
We can see that the Intel processor outperformed the AMD ones in almost every area. This isn't surprising, as it costs twice as much as the cheapest one. In a few areas - the Apache static page test for example - it performed twice as well.
What is perhaps more surprising is that it almost always outperformed the Phenom II X6, despite having two fewer cores and only a slightly faster clock speed. The only significant exceptions were the John the Ripper password cracking test and some of the GraphicsMagick tests. These are highly parallel tests, which could take full advantage of the extra processing power of the X6.
Not all of the speed differences here are down to the CPU. As we mentioned, we tested them on different motherboards. The Intel motherboard had an onboard SSD that it used for caching data sent to the main SSD. This resulted in dramatically faster read speeds for files under 2GB, while there was no significant difference in files above this size. Write speeds were roughly even across the different setups.
The choice of processing units available today is probably more complex than it has ever been. There has been growth in simpler, low-power CPUs, complex processors, highly parallelised graphics chips and clusters. More than ever, the question isn't 'which is the best processor?', but 'what is the right solution for the task?'.
Answering this question requires knowledge of which chips are on the market, what they cost and how they perform at different tasks. The high-end Intel cores are the most powerful for everyday tasks, but that speed comes at a price. And the extra cores in the X6 proved enough to match, and sometimes even outperform, the i5 in the GraphicsMagick benchmarks, which simulate image manipulation, while leaving a significant chunk of cash in your wallet.
But then, unless you use KDE with every widget and effect, the X4 is more than capable of performing most day-to-day computing tasks.
How multiple cores affect performance
We can see how adjusting the number of cores affects performance by using VirtualBox to simulate different CPUs. We can allocate a certain number of cores from the host to a guest, and so see how the system will perform with an arbitrary number of cores.
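If you'd like to repeat the experiment, VirtualBox's command-line tool can change a guest's core count without touching the GUI. A sketch, where 'benchvm' is a hypothetical VM name of your own (the VM must be powered off first), guarded so it's harmless where VirtualBox isn't installed:

```shell
# Allocate two host cores to the guest 'benchvm', then confirm
# the change took effect.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage modifyvm "benchvm" --cpus 2
    VBoxManage showvminfo "benchvm" | grep -i "Number of CPUs"
else
    echo "VirtualBox not installed; skipping"
fi
```

Re-run your benchmark inside the guest after each change to build up a table like ours.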
Here you can see how the system performed in the benchmarks with between one and three cores. The performance difference from increasing the number of cores varied significantly depending on the task.
In several cases, increasing the number of cores slowed down the execution because of the overheads of scheduling processes across several cores. In other cases, such as password cracking, we saw a roughly linear improvement as we increased the processing units available.
It's worth noting that we performed these tests sequentially. Had we performed more than one task at a time, we would expect the results to favour the multi-core approach more strongly.
When selecting a CPU, it's worth considering how many intensive tasks you'll be running at once. For server use, check whether the particular services you use can take advantage of the number of cores in the CPUs you're considering. Tasks that perform well on multi-core machines often perform even better when running on graphics cards using CUDA or OpenCL.
64- vs 32-bit processors
Even if you have a 64-bit processor, you may not be taking advantage of its 64-bit features. To keep backwards compatibility, 64-bit processors were designed to also run 32-bit code.
Here, we've run the set of benchmarks using a 64-bit processor running both 32- and 64-bit versions of Linux to see how this affects performance. 64-bit generally runs faster, but not that much faster for most tasks. For general day-to-day computing, you're unlikely to notice much difference, but if you're crunching numbers, then the longer word length will speed things up.
Hardware: Graphical processing units
With Steam coming to Linux, and a renaissance in indie game development, now's the time to upgrade your graphics hardware. Perhaps the most subjective component in any hardware discussion is the one responsible for generating the graphics. This is because the best choice for you will depend on how important graphics are in your system.
If you use the command line or a simple window manager, for example, an expensive, powerful card will be a waste of money. This is because it's in the realm of 3D graphics that most graphical processing units (GPUs) differ, and they often differ dramatically.
And while 3D rendering capabilities used to be important solely for running 3D games, the mathematical powerhouses contained within a GPU are now used for lots of other tasks, such as high-definition video encoding and decoding, mathematical processing, the playback of DRM-protected content and those wobbly windows and drop shadows everyone seems to like on their desktops.
A better hardware specification not only means games run at a higher resolution, at a better quality and with a faster framerate - all of which adds to the enjoyment of playing a game - it now means you also get a better desktop experience.
Like CPUs, GPU development never seems to plateau. GPU power seems to double every 18 months, and this is both a good and a bad thing.
The good is that last year's models usually cost half as much as they did when they were released. The bad is that your card is almost always out of date, even when you do buy the most recent model. For those reasons, and because most Linux gamers won't want cutting-edge gaming technology when there are no cutting-edge titles to use it on (unless you dual-boot to Windows), we're going to focus our hardware on value, performance, hardware support and compatibility.
For value, we're going to look at models slightly off the cutting edge, including a couple of cheap options and a couple that are more expensive. For performance, we've run each device against version 3.0 of the Unigine 'Heaven' benchmark. This is an arduous test of a GPU's 3D prowess, churning out millions of polygons complete with ambient occlusion, dynamic global illumination, volumetric cumulonimbus clouds and light scattering. It looks better than any Linux-native game, and it tests both hardware capabilities and the quality of the drivers.
As the Unigine engine is used by several high-profile games, including Oil Rush, its results should give a good indication of how well a GPU might perform with any modern games that appear.
However, we also wanted to test our hardware on games you might want to play now. We tested the latest version of Alien Arena, for example, as well as commercial indie titles such as World of Goo. More importantly, we also tested the kit with some games from Steam running on Wine.
Steam is a games portal for Windows, and it has become the best way of buying and installing new games for that operating system. There's some very strong evidence that Steam will be coming to Linux before the end of 2012. If that happens, its Wine performance should give us some indication of how certain Steam titles will run on Linux.
We tested five different components. The first two are integrated, which means they're part of the CPU package rather than being extra cards you slot into your motherboard. These CPU and graphics packages are often referred to as APUs - accelerated processing units.
We started with Intel's Sandy Bridge APU on the i5-2500K CPU, running at 3.30GHz, and because Intel takes Linux driver development seriously, we expected great results from a single package.
The other APU we tested has a much better specification on paper: the one that comes with AMD's A8-3850 APU package (aka AMD Fusion). This is the rumoured core of the PlayStation 4, and although the GPU on our model is likely to be less powerful than the one in Sony's eventual console, it can be combined with a compatible external Radeon card using the Hybrid CrossFire option enabled from the BIOS. It's listed as an AMD Radeon HD 6550D, and we used it with 512MB of assigned RAM.
The remainder of the cards we looked at were external, and connect to a spare PCIe slot on the motherboard. With this method, you need to make sure you've got two spare slots, because a graphics card will often occupy an adjacent slot for extra cooling, and that your power supply is capable of providing enough raw energy.
We used a 600W supply, with two separate 12V rails for powering graphics hardware. Our cards needed additional power: a single 6-pin connector, or two for the most power-hungry - the Nvidia card.
The models we looked at were the cheap AMD Radeon HD 6670 (which is one of the cards designed to work with the A8-3850 APU), the more powerful AMD Radeon HD6850 and the Nvidia GTX570, and we tested with both open source and proprietary drivers.
Testing: value cards
Results were mixed with Sandy Bridge. Running Mesa 8.0.2, the Unigine benchmark barely ran, which means many modern games will be unplayable. We had better luck with Alien Arena, which gave a comfortable 60fps, but we began to form the opinion that if you want to play games, you're going to need a proprietary driver.
The first Radeon GPU we tested was the HD 6550D integrated GPU, with version 0.4 of the Gallium open source driver. Desktop performance was good, and accelerated Unity on Ubuntu worked without any problems (as it did on the Intel).
Almost as impressively, the 'heaven' benchmark did run better than Sandy Bridge, which is more than can be said for the same demo on our ancient Nvidia 7600GTS, but the rendering was still broken. We watched silhouettes move across the screen at seven frames per second, rather than colourful textures. Which is why our next test was to use the Catalyst proprietary drivers, which we installed manually.
Our next test was with Alien Arena, which ran at a surprisingly low 25fps - more than adequate for a bit of office mayhem, but nowhere near as good as Sandy Bridge. With the 'heaven' benchmark, however, the proprietary drivers rendered the graphics correctly, and also delivered a benchmark score of 10.3. This might seem low, but when you consider it's an integrated chipset and the benchmark itself isn't optimised for playability, it's a good result.
We tried the same test with both Unity 3D and Unity 2D to see if there was any difference when the desktop was using OpenGL, and we found none - proof that the recently-released Unity 5.12 did fix the problems with OpenGL performance.
We got a small step up in performance when we tested the Radeon HD 6670 1024MB. Alien Arena was now running at 55fps, and the heaven benchmark gave us 25.3fps, with a low of 11fps and a high of 46fps. This is a great result for a budget card, and if you opt for the passively cooled version, it would make an ideal option for a Linux games PC and movie player.
Testing: power stations
This leaves us with the two most powerful cards at our disposal - the Radeon HD6850 1024MB and the Nvidia GTX570. We started with the Radeon, because we had the drivers installed, and it was quickly scoring dramatically better results with the 'heaven' benchmark, returning a value of 46.2 fps, min 15 and max 78.8.
Emboldened by this result, we thought we'd try a couple of other tests, firstly with the native (and ancient) version of Darwinia. This ran at an exceptional 160-250fps, which means this card won't have any difficulty with older games. However, we did experience problems when we then tried Steam.
To get BioShock to work, for example, we had to quit Unity 3D first. But even when it did work, the graphics weren't rendered correctly. It was better news for Source games, though, as both Half-Life 2 and the Lost Coast stress test yielded good results - the latter running at 47.91fps despite its still-spectacular rendering quality.
Now we get to the most expensive card in our set, Nvidia's GTX570 with 1.25GB of RAM. We first tried it with the open source nouveau drivers, but we had no success running our benchmarks, Darwinia or Steam games, and we guess that if you're going to spend a considerable sum on graphics, you'll want the best drivers.
And there are other advantages to using Nvidia's proprietary drivers. The custom settings utility, for example, which can be installed alongside the drivers, is a surprisingly powerful tool. You can enable TwinView, which we've always found more stable than Xinerama for multiple screens, and switch between various resolutions for each screen without requiring a restart. The Catalyst drivers can do this too, but with Nvidia's you can also over-clock your hardware and monitor the temperature of your GPU.
It's also quite handy for troubleshooting, and we've used the Settings tool to download EDID data from our screens and force other screens to use the same EDID data.
With proprietary drivers, the GTX570 was a clear winner. It returned a strong 66.6fps in the Heaven benchmark, and BioShock ran perfectly from Steam running on Wine, so Nvidia hardware looks to be the way to go for native versions of Steam. As to whether it's worth the extra money, that depends on how important gaming is to you.
Solid State Storage
Upgrade your storage to a drive that's driven only by electrons
While processors, graphics cards, RAM and network connections have all got faster over the years, hard drive technology seems to have moved on very little. They're still using mechanical parts, and as such are among the heaviest, slowest, least reliable and most power-hungry components in a typical PC.
SSDs (Solid State Drives) are changing that, however, and are one of the most exciting developments in PC hardware in the past five years. In this section, we're going to look at these miraculous devices. As well as comparing the two drives we've got, we're going to answer all those perennial questions that people have about SSDs: 'are they worth it?', 'how long will they last?' and 'how can I get the best out of mine?'.
Are SSDs worth it?
Traditional hard drives contain a spinning disk coated in a magnetic material, and it's this material that stores the data. Read/write heads manipulate the material as they fly over the disk.
In contrast, SSDs have no moving parts. Instead, they're made of millions of tiny transistors (of the floating gate variety), each one capable of storing one bit of information. Because they have no moving parts, they're quieter, lighter, more energy efficient, more durable and faster. This is obviously great if you're planning on using it in a laptop, where space, energy use and noise are all major considerations.
The increased speed of the drive will also have a huge impact on PC and application startup times (and any other operation that reads from the disk a lot), and can make your computer feel dramatically quicker.
All of these benefits sound great, but SSDs are not without their downsides and you should take these into consideration before deciding to buy. Most notably, you can't buy SSDs that are as large as traditional-style hard drives, and they're much more expensive.
For example, the Crucial M4 128GB that we have on test costs £107.99; for £79.99, you can get a 2TB hard drive. If you need a lot of space or are on a very tight budget, an SSD might not be for you.
The answer to the question of whether SSDs are worth it, then, is 'it depends on how you use your computer'.
Two common concerns that people have about SSDs are how long they last, and whether the performance you get when they're new will continue into old age. These concerns aren't unfounded.
The transistors in an SSD will last only for about 10 years, or 10,000 writes, whichever comes first - so they have a limited life. What's more, in some early models, badly designed firmware meant that performance could degrade significantly over time.
In modern drives, with a modern operating system and file-system, the significance of these problems has been reduced massively thanks to something called TRIM. TRIM lets the operating system tell the drive's firmware which blocks are no longer in use, so the firmware can manage the allocation of blocks sensibly - spreading writes evenly across the transistors without degrading performance.
How big an impact does TRIM have? In one of the most authoritative articles on the subject, Anand Lal Shimpi found that on an aged drive, write performance was just 52% that of a clean drive without TRIM; with TRIM, the aged drive performed at 98% that of the clean one. TRIM is worth enabling.
So, how do you get TRIM working? The first thing to do is make sure that your drive supports it. If it has been bought in the last few years, it almost certainly will, but anything older and you'll need to check if it's supported. You can do this with the hdparm command, as follows:
hdparm -I /dev/<ssd> | grep "TRIM supported"
replacing <ssd> with the device name of your SSD.
If that command returns something, then you're ready to enable TRIM in the operating system. To do this, you need to format your partitions with the ext4 or btrfs filesystems. These are the only two that support TRIM.
Here at LXF towers, we use ext4, since btrfs is still lacking a stable repair tool, making it less able to recover from disaster - we recommend that you do, too.
Modify the mount options
After that, you will need to modify the mount options of the file-systems, as they don't enable TRIM support by default. This can be done by editing the /etc/fstab file.
Before making any modifications to the file, however, make sure you create a backup, because a mistake in this file can stop your system from booting:
cp /etc/fstab /etc/fstab.bk
If anything goes wrong now, you can boot into a live CD, reverse the copy, reboot, and your system will be working again.
With the backup in place, you need to modify, on each line that describes a partition on your SSD, the part that has the word 'defaults' in it. To this, you want to add ',discard', so that the entire line looks something like:
/dev/sda1 / ext4 defaults,discard 0 1
That's it. Save the file, reboot, and your drive has TRIM support enabled. This is the most important tweak to apply to your SSD.
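After the reboot, you can confirm the kernel really did mount your filesystems with discard, since /proc/mounts shows the live mount options. A quick check:

```shell
# List any filesystems currently mounted with the discard option.
if grep -w discard /proc/mounts; then
    echo "TRIM (discard) is active on the mounts above"
else
    echo "No filesystem is mounted with discard"
fi
```

If nothing shows up, re-check your /etc/fstab edit for typos before assuming the drive is at fault.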
There are other ways to tweak your drive and extend its life still further. The easiest of these other techniques is to add the noatime option to your mount options, just like we did with discard above.
Normally, Linux file-systems store the last time a file was read from, and the last time it was modified. With the noatime option, it will store only the last time it was modified, reducing the number of writes in order to keep this metadata up to date and increasing the life of your drive.
A word of warning, however: older applications, such as Mutt, won't function properly if you enable noatime, so first check application compatibility.
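As with discard, /proc/mounts will tell you whether noatime actually took effect; note that most distributions now default to the half-way house relatime. A sketch that checks the root filesystem:

```shell
# Show the atime-related mount option for the root filesystem.
opts=$(awk '$2 == "/" {print $4; exit}' /proc/mounts)
case "$opts" in
    *noatime*)  echo "root is mounted with noatime" ;;
    *relatime*) echo "root is mounted with relatime (the usual default)" ;;
    *)          echo "root updates atime on every read" ;;
esac
```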
You can also increase the life of your drive by thinking carefully about what partitions you put on it. For instance, if you have a traditional hard drive available on your system as well, you might consider using the SSD for filesystems that don't change frequently, such as / and /home, while putting things such as /var, /tmp and swap on the spinning disk.
If this isn't an option, you can make other changes to reduce the frequency of writes to these directories. For instance, you can raise the minimum severity of log messages that get recorded by editing the /etc/rsyslog.conf file (see man rsyslog.conf for details), or you can decrease your system's 'swappiness', which encourages it to use swap space less frequently. You can do this by executing:
echo 1 > /proc/sys/vm/swappiness
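That echo needs root, and only lasts until the next reboot. You can read the current value without any privileges, and make a change permanent via /etc/sysctl.conf; a sketch, with the permanent step left commented out so nothing is modified by accident:

```shell
# Read the current swappiness (0-100; lower makes the kernel
# avoid swap more aggressively).
if [ -r /proc/sys/vm/swappiness ]; then
    echo "Current swappiness: $(cat /proc/sys/vm/swappiness)"
fi

# To make the article's suggested value survive reboots, append it
# to /etc/sysctl.conf (needs root; uncomment to use):
# echo "vm.swappiness = 1" | sudo tee -a /etc/sysctl.conf
```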
Our test drives
The underlying storage technology in most SSDs varies little. What makes the biggest difference to their performance is the controller and firmware - the hardware that decides how and where to write your data on the drive.
A bad controller can slow your drive down, particularly as it ages, and can lead to varying performance across different-sized writes (eg, 4k vs 9k).
The two test drives we have represent two competing controller solutions. The Crucial M4 uses a Marvell controller, while our Intel 330 uses a SandForce one. The same controllers are used on many different drives, so our results should inform your buying decisions even if you don't choose either of the specific drives we have on test.
We tested the drives using the PostMark, Compile Bench and Kernel Unpacking tests in the Phoronix Test Suite, with a view to seeing how the drives performed in real situations. All of the tests were carried out on an Ubuntu 12.04 system, with ext4 and the discard option set in /etc/fstab.
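If you'd like to reproduce our numbers on your own hardware, the Phoronix Test Suite can download and run the same profiles from the command line; the profile names below are the ones current at the time of writing and may change between suite releases. Guarded so it does nothing where the suite isn't installed:

```shell
# Run the same disk benchmarks used in this article. The suite
# will prompt for any per-test options before running.
if command -v phoronix-test-suite >/dev/null 2>&1; then
    phoronix-test-suite benchmark pts/postmark pts/compilebench
else
    echo "phoronix-test-suite not installed; skipping"
fi
```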
The Compile Bench test is perhaps the most interesting, as its operations attempt to simulate the way a file-system ages - the scenario most likely to tax the controller. In these tests, the Intel drive, with its SandForce controller, performed much better.
That said, the Crucial drive was much quicker when it came to dealing with many small files in the PostMark test, and marginally better when unpacking the kernel.
Both drives are in the same price bracket, being available online anywhere from £84 and upwards.