Solid state drives: all you need to know
13th Oct 2009 | 09:30
Don't waste cash on an SSD that doesn't live up to its hype
The trouble with SSDs
With processors and graphics chips becoming faster seemingly by the day, the relatively slovenly development of messy mechanical hard disks has become a serious drag.
Sure, hard drives have gotten a lot bigger in recent years, but they're barely any faster. As a result, you might achieve great frame rates in-game, but you'll still be waiting just as long for those tedious level loads.
Enter the solid state drive. By replacing conventional hard disks based on spinning magnetic platters with integrated circuits, SSDs were supposed to be the final piece of the PC performance puzzle.
At last, storage would benefit from the same ever smaller, faster and cheaper electronics that enable CPUs and GPUs to pretty much double in performance and all-round prowess every couple of years.
Factor in better reliability, less noise and even reduced power consumption and a shift to solid state technology for storage is quite simply a no-brainer. Unfortunately, it hasn't quite worked out that way. In fact, the early history of SSD technology has been a big, smelly letdown.
Specifically, SSDs have often flattered to deceive with great out-of-the-box performance rapidly turning into a laggy, stuttering mess with extended use. Matters have been made worse due to confusion caused by firmware updates and a general lack of transparency regarding the problems afflicting SSDs and the steps being taken to address them.
The SSD lottery
In short, buying an SSD currently feels like a total lottery. You're not quite sure what you're getting and whether it's going to keep on working properly. With all that in mind, what exactly has been holding solid state storage technology back, what's being done about it and when will it be safe to go solid state?
To understand why SSDs have been slightly sucky, you have to appreciate the foibles of the flash memory that provides the storage. The first problem springs from the odd fact that flash memory wears out with use. Write and erase data from a flash memory cell enough times and it'll eventually become unresponsive.
Typical multi-level cell memory, as used in consumer SSDs, has a life expectancy of around 2,000 to 10,000 write-and-erase cycles per cell. The solution is so-called wear levelling: the intelligent management of the available cells.
The drive's controller chipset keeps track of cell usage and adjusts write and erase calls with a view to spreading wear evenly. The point is that in an attempt to keep memory cells healthy, commonly used data sets may have to be regularly shunted around the drive.
That in turn translates into disk activity that isn't directly related to getting data in and out of the drive. And that means less performance during periods of peak disk activity.
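The principle is easy to sketch in code. The following is a toy illustration only, assuming a simplified drive with a handful of blocks; real controllers use far more sophisticated (and proprietary) algorithms, but the core idea of steering writes to the least-worn block is the same.

```python
# Toy wear levelling: logical writes are steered to the least-worn
# free physical block, so erase cycles spread evenly across the drive.
# (Illustrative sketch only, not any vendor's real algorithm.)

class ToyWearLeveller:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks  # wear per physical block
        self.mapping = {}                     # logical -> physical block

    def write(self, logical_block):
        # Pick the free physical block with the fewest erase cycles so far.
        used = set(self.mapping.values())
        free = [b for b in range(len(self.erase_counts)) if b not in used]
        target = min(free, key=lambda b: self.erase_counts[b])
        # Rewriting a logical block frees (and wears out) its old home.
        old = self.mapping.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
        self.mapping[logical_block] = target
        return target

leveller = ToyWearLeveller(num_blocks=4)
for _ in range(100):
    leveller.write(0)          # hammer one logical block repeatedly
print(leveller.erase_counts)   # wear is spread, not piled on one block
```

Even though the workload hammers a single logical block, the erase counts end up nearly identical across all four physical blocks; without the remapping, one block would soak up all 99 erases and die early.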
Writing data to SSDs
The other major issue involves the mechanics of how data is written to and stored in flash memory. Memory cells are organised into blocks, typically 512KB in size. The problem is that while data can be written in much smaller chunks, a block can only be erased in its entirety.
In other words, even a small write of a few KB ties up a whole memory block. That's just fine when you have plenty of spare, empty blocks. But when you don't, it becomes necessary to reuse partially filled blocks. And that means copying a block's existing contents to cache, merging in the new data, erasing the block and then writing the whole lot back. What a palaver.
If that wasn't bad enough, current SSDs generally don't actually erase blocks when data is deleted from them. Blocks are simply marked as available for writing by the file system. Erasing only happens when the time comes to refill the blocks with data. Put it all together and you have a perfect storm of stuttering disk access.
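The cost of that shuffle is easy to put numbers on. Here's a rough back-of-envelope sketch, assuming the typical figures above (512KB erase blocks, written in 4KB pages); the exact sizes vary by drive, but the ratio is what matters.

```python
# A minimal sketch of the read-modify-write penalty, assuming 512KB
# erase blocks filled in 4KB pages (typical figures, not drive-specific).

BLOCK_SIZE = 512 * 1024
PAGE_SIZE = 4 * 1024
PAGES_PER_BLOCK = BLOCK_SIZE // PAGE_SIZE  # 128 pages per block

def bytes_moved_for_write(new_pages, pages_already_in_block, block_is_empty):
    """Rough count of bytes the controller shuffles for one small write."""
    if block_is_empty:
        # Fresh block: just write the new pages straight in.
        return new_pages * PAGE_SIZE
    # Reused block: read existing pages out, erase, write everything back.
    read_out = pages_already_in_block * PAGE_SIZE
    write_back = (pages_already_in_block + new_pages) * PAGE_SIZE
    return read_out + write_back

# Writing 4KB to an empty block vs a block already half full (64 pages):
fresh = bytes_moved_for_write(1, 0, block_is_empty=True)
dirty = bytes_moved_for_write(1, 64, block_is_empty=False)
print(fresh // 1024, "KB vs", dirty // 1024, "KB")  # 4 KB vs 516 KB
```

A 4KB write into a half-full block forces the drive to move over a hundred times more data than the same write into a fresh one, which is exactly why a well-used SSD stutters where a brand-new one flies.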
Imagine requesting lots of small, individual disk writes. Each one might require juggling all kinds of partially filled and marked-for-deletion blocks. We therefore put it to you that it's easy to see why SSD performance goes down the crapper as free space dwindles. What, then, is the answer?
Improved wear-levelling algorithms help. Intel's X25-M is a case in point. Early examples of that drive suffered from rapid and rather hideous performance degradation. Intel has since released new firmware with improved wear levelling that did a very good job of cleaning up performance.
As for the problems relating to write and erase methods as capacity is used up, there are a number of different efforts in various stages of development, some more effective than others (see the "Give your SSD a TRIM" and "Heal the pain" sections on the next page).
But the overall moral is that the race is on and progress is being made. It's just possible that a year from now, all these SSD woes will be but a distant memory.
TRIM comes to the rescue
Give your SSD a TRIM
As we've explained, one of the biggest problems affecting SSD performance involves memory blocks being filled with redundant data. When the time comes to put new data in these blocks, they must first be erased before refilling. It all takes time.
And time means performance in SSD land. Now, you might think this could be easily solved by the drive itself. Just delete the frigging data in the first place, you cry. Sadly, it's not that simple. Some of this redundant data is hidden by the operating system's file system. In other words, the SSD itself doesn't actually know it's dead data until the time comes to rewrite.
SHAVE AND A HAIRCUT: TRIM – a handy disk command coming soon to a sluggish SSD near you
Clearly, an OS-level solution is required. That's exactly what's coming in the form of a new disk command known as TRIM. Due to be supported by Windows 7, though possibly in an update rather than at launch, TRIM takes into account the needs of SSDs.
Instead of just updating the file system, the OS uses TRIM to tell the drive which blocks contain deleted data, so they can be erased ahead of time. The downside is that deleting files can take slightly longer, but generally it's write performance that is most critical.
Already two of the key SSD controller makers, Samsung and Indilinx, have announced their intention to support TRIM. Intel hasn't confirmed one way or the other, but we expect it to jump on-board soon enough. TRIM looks like being an essential SSD feature for the future.
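The change in behaviour can be modelled in a few lines. This is a deliberately simplified toy, not the real ATA command set: the point is only that with TRIM, erasure moves off the write path and into the drive's idle time.

```python
# Toy model of what TRIM changes: deletion notifies the drive, so
# stale blocks can be erased during idle time instead of in the
# middle of a later write. (Hypothetical sketch, not the real ATA
# DATA SET MANAGEMENT command.)

class ToySSD:
    def __init__(self):
        self.stale = set()   # blocks holding deleted-but-unerased data
        self.erased = set()  # blocks ready for immediate writing

    def trim(self, blocks):
        # The OS tells the drive these blocks no longer hold live data.
        self.stale.update(blocks)

    def idle_garbage_collect(self):
        # With TRIM, erasure happens in the background, off the write path.
        self.erased.update(self.stale)
        self.stale.clear()

    def write(self, block):
        # Pre-erased blocks take the fast path; anything else needs an
        # erase first, which is what causes the stutter without TRIM.
        if block in self.erased:
            self.erased.discard(block)
            return "fast write"
        return "erase-then-write (slow)"

ssd = ToySSD()
ssd.trim({7, 8})            # OS deletes two files
print(ssd.write(7))         # erase-then-write (slow): not cleaned up yet
ssd.idle_garbage_collect()  # drive tidies up while the system is idle
print(ssd.write(8))         # fast write: block was pre-erased
```

Without the `trim` notification the drive never learns which blocks are dead, so every reuse pays the slow path; with it, idle-time housekeeping keeps a pool of pre-erased blocks ready.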
Heal the pain
So, you've had your SSD for a few months and your drive has lost its pep. Its memory blocks are used and tired, write speeds are on the wane and the honeymoon period is well and truly over. Is there anything you can do?
If you own an older drive, you're out of luck. Chalk it up to experience and the cost of being an early adopter. However, with some of the latest drives, there is hope, at least in theory. Samsung's second generation SSD technology has a feature known as 'self healing'.
PROBLEM SOLVED?: 'Self healing' and 'wiping' – the solution to SSD lag or papering over the cracks?
To cut a long story short, it's claimed to be able to detect memory blocks with redundant data and then completely erase them, ready for faster rewriting. The tricky bit is that it only operates after a cold boot, following which the system must be left idle for an hour. It doesn't just happen as you go along.
Following our suite of benchmarks for our SSD group test, we tested this feature courtesy of Corsair's P128 128GB drive. It's essentially a rebadged second generation Samsung SSD.
Having cold booted the drive, we left it to idle for several hours. The time taken for the healing process to complete varies depending on how "used" the drive is. What's more, there's no way to know for sure that it's happened or if it has finished the process.
In any case, our follow-up testing did indeed reveal significant improvement. Admittedly, peak sequential reads and writes didn't change much. But some of the real-world tests, such as file decompression, were as much as 30 per cent faster. It's a nice feature; we only wish it were more transparent in use.
Samsung aside, SSD controller maker Indilinx has developed a "wiper" program that does much the same job for drives using its Barefoot controller, such as the OCZ Vertex and Patriot Torqx. The only difference is that it must be run manually by the end user.
In our testing, it proved similarly effective to Samsung's semi-automated self-healing feature. If that sounds promising, there's a catch with all these healing measures: it can take as little as a few hours of use for the "restored" performance of these drives to begin to noticeably drop off again.
In other words, these features are very much a temporary solution. What we want is a technology that prevents performance degradation in the first place. Here's hoping the upcoming TRIM command really delivers.
First published in PC Format Issue 231