Thanks for the memory: the story of storage

25th Dec 2008 | 14:00

The history of computer storage is littered with bizarre ideas

In an age when RAM is measured in gigabytes and disk space is measured in terabytes, it's hard to imagine a time when storage had to be built by hand and every bit was sacred.

But the history of computers is also the story of our ability to store data in myriad forms.

It's arguable that the rapid development of early RAM and permanent storage devices accelerated the development of computer technology as much as the introduction of the transistor did for CPU speeds.

Initially, however, computers didn't have memories as we would recognise them.

When the iconic Manchester Baby computer first ran in 1948, it was revolutionary because it stored its programs in RAM. It sounds obvious now, but if you wanted to run a fresh program on a computer at the time, weeks of rewiring were usually required. Baby changed all that: now you could enter and run new programs in a matter of hours.

Baby's amazing ability was down to an ingenious storage device called the Williams Tube. The memory worked on the principle that when a beam of electrons was fired down a vacuum tube and hit a phosphorescent coating at the other end, small static charges built up at the points where the beam hit the phosphor.

A set of pickup plates in front of the coating then detected the charges. However, because the charges faded quickly, a refresh circuit needed to read which bits were set and use the electron beam to refresh them every few milliseconds. Williams Tubes could store around 1Kb and, although they sound cumbersome, have a modern parallel in today's DRAM chips, which work by storing tiny electrical charges in microscopic capacitors that must themselves be topped up every few milliseconds.
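
To make the refresh principle concrete, here's a minimal Python sketch. Everything in it – the decay rate, the threshold, the RefreshedMemory class itself – is an invented illustration rather than a model of any real Williams Tube or DRAM circuit; the point is simply that stored charges leak away, so every cell must be read and rewritten before a 1 fades below the read threshold.

```python
# Toy model of refresh-based storage (Williams Tube and DRAM alike).
# All numbers here are illustrative, not taken from real hardware.

DECAY_PER_TICK = 0.2     # fraction of charge lost each time step
READ_THRESHOLD = 0.5     # below this, a stored 1 is no longer readable

class RefreshedMemory:
    def __init__(self, bits):
        # A stored 1 is a full charge; a 0 is no charge.
        self.cells = [1.0 if b else 0.0 for b in bits]

    def tick(self):
        """Charge leaks away as time passes."""
        self.cells = [c * (1 - DECAY_PER_TICK) for c in self.cells]

    def read(self, i):
        return 1 if self.cells[i] >= READ_THRESHOLD else 0

    def refresh(self):
        """Read every cell and rewrite it at full strength."""
        self.cells = [1.0 if self.read(i) else 0.0
                      for i in range(len(self.cells))]

mem = RefreshedMemory([1, 0, 1, 1])
for step in range(10):
    mem.tick()
    if step % 2 == 1:   # refresh often enough to outrun the decay
        mem.refresh()
print([mem.read(i) for i in range(4)])   # -> [1, 0, 1, 1]
```

Skip the refresh calls and the same run ends with every bit reading 0 – exactly the fate of an unrefreshed tube or DRAM chip.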

During the post-war years, the Americans also used phosphor dots to store data. Encouraged by the Institute for Advanced Study's computing pioneer John von Neumann, the Radio Corporation of America (RCA) began work on its Selectron tube in 1946.

This space-age device was about the size of a child's forearm and, with a cathode running up the middle, was packed with electronics. Different models could store from 256 to 4,096 bits of data on individual phosphor dots. The 256-bit Selectron was projected to cost about $500 to build, and it was both faster and more reliable than the Williams Tube.

However, the Selectron was complex to make and expensive to produce, so engineers began to develop other weird and wonderful forms of memory. Delay lines – the invention of computer pioneer J Presper Eckert – must rank among the strangest.

The idea was to convert individual bits into mechanical vibrations and send these one by one through a dense medium – such as a tank of mercury – so that they travelled relatively slowly. When each vibration reached the other end, a piezoelectric crystal picked it up, converted it back into an electrical impulse and sent it back to the start again. Delay-line memory therefore refreshed itself constantly and, unlike modern RAM, it was serial access.

To access a certain bit in a delay-line memory, the computer had to wait a few milliseconds until the relevant vibration reached the end of the tank. Delay-line memory also required complex equipment to focus the vibrations so that they didn't reflect off the inside walls of the tank and cause interference. Because of this, delay-line memory was too bulky and limited to survive.
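
A small Python sketch makes the cost of serial access visible. The DelayLine class and its timings are illustrative assumptions rather than figures from a real mercury tank: to read a given bit, the computer simply has to wait while the line circulates until that bit emerges.

```python
# Toy delay line: bits circulate in a loop and can only be read
# as they emerge at the far end. The timing figure is invented.
from collections import deque

class DelayLine:
    def __init__(self, bits, bit_time_us=10):
        self.line = deque(bits)         # line[0] is about to emerge
        self.bit_time_us = bit_time_us  # time for one bit to emerge

    def step(self):
        """One bit emerges, is re-amplified and re-enters the line."""
        self.line.rotate(-1)

    def read(self, index):
        """Wait until the wanted bit emerges, counting elapsed time."""
        waited_us = 0
        for _ in range(index):
            self.step()
            waited_us += self.bit_time_us
        return self.line[0], waited_us

line = DelayLine([1, 0, 0, 1, 1, 0, 1, 0])
value, waited_us = line.read(5)
print(f"bit = {value}, waited {waited_us} microseconds")
```

Contrast this with true random access, where fetching bit 5 costs no more than fetching bit 0.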

Both the Selectron and the Williams Tube were superseded in the market by a far more convenient and cost-effective form of main memory that was about to take the world of computing by storm.

Hard core

From its introduction in the early 1950s, so-called 'core' memory quickly became the dominant memory technology. Non-volatile and cheap to make, it survived well into the late '70s – even beyond the introduction of DRAM chips.

In a core memory, ferromagnetic rings just a few millimetres across are threaded onto wires running vertically, horizontally and diagonally to form a large mesh. The horizontal and vertical wires are called X and Y respectively, and are used to address individual bits for both reading and writing. The diagonal wires are called 'sense wires'. A current representing the present state of individual bits is induced in these sense wires when the X and Y wires are active.

Flipping a core's magnetisation requires a certain threshold of current, and by supplying only half that current to one X wire and one Y wire, only the core at their intersection receives enough combined current to change state – the other cores along those wires remain untouched. The problem is that the core being addressed is also overwritten when its bit is read. After reading the bit's value in such a system, it must be restored by a refresh circuit that repeats the addressing procedure and returns the core to its original magnetic state.
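
The half-current trick and the destructive read can be captured in a few lines of Python. This CorePlane class is a deliberately simplified sketch – a real plane used analogue drive currents, a shared sense wire and an inhibit wire, none of which are modelled here – but it shows why every read has to be followed by a rewrite.

```python
# Toy model of coincident-current core addressing.

FLIP_CURRENT = 1.0   # current needed to change a core's magnetisation

class CorePlane:
    def __init__(self, rows, cols):
        self.core = [[0] * cols for _ in range(rows)]

    def _drive(self, x, y, value):
        """Put half the flip current on one X and one Y wire. Only
        the core at (x, y) sees the full current and changes state;
        every other core on those wires sees 0.5 and stays put."""
        for r in range(len(self.core)):
            for c in range(len(self.core[0])):
                current = (0.5 if r == x else 0.0) + (0.5 if c == y else 0.0)
                if current >= FLIP_CURRENT:
                    self.core[r][c] = value

    def write(self, x, y, bit):
        self._drive(x, y, bit)

    def read(self, x, y):
        old = self.core[x][y]
        self._drive(x, y, 0)       # reading forces the core to 0 ...
        if old:
            self._drive(x, y, 1)   # ... so a stored 1 must be rewritten
        return old

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
print(plane.read(2, 3), plane.read(2, 3))   # -> 1 1: the rewrite worked
```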

Despite this complicated method of storing and retrieving data, early core memory had a read/write cycle time of just six microseconds. A Java applet hosted by the US National High Magnetic Field Laboratory in Florida enables you to see the magnetic core memory process in motion: head over to www.tinyurl.com/5w4dcj to take a look at it.

By the time it was finally usurped as the prime RAM technology, core memory access times were down to nearly a single microsecond – just 1.2µs. However, the time of core memory was very nearly up. A team at Bell Labs had been developing a revolutionary technology – the transistor – since 1948. Just over 20 years later, that technology would change the face of computing almost overnight.

The first RAM chip

The first commercial transistors were small, cheap and, above all, reliable. Transistors soon took over from bulky and unreliable glass valves as the main component of the logic gates and registers in the CPUs of the 1950s.

They could switch at far higher frequencies than valves – which made for CPUs that could go faster – but took a fraction of the power. However, partly because of the amount of work required to create large memories from umpteen identical transistor-based circuits, it wasn't until 1970 that core memory saw its position as the dominant form of RAM seriously threatened.

It was then that Intel released the first general-purpose commercial DRAM chip: the model 1103. It held just 1,024 bits, but its physical size (about 25mm in length), low power consumption and reliability changed computing as much as core memory had done in the 1950s. With each bit formed from just one microscopic transistor-and-capacitor pairing fabricated on a silicon chip containing thousands of identical pairings, the 1103 was as simple to make as a microprocessor.

By 1974, the combination of increasingly capacious DRAM chips and low-cost microprocessors made possible the first mass-produced home computers. Yet again, storage had led the way to increasing global computing power.

Paper memories

The development of main memory is paralleled by the need to store programs, data and results permanently for easy access. Even by the late 1940s, entering all of this data by hand was becoming a serious bottleneck, limiting the amount of work that each new computer could do – even when working 24 hours a day.

As scientists and industrialists began to realise what computers could do for them, problems ranging from calculating an entire payroll on time with 100 per cent accuracy to the fiendishly complex calculations required to make hydrogen weapons all found solutions – but too slowly. After all, computers were growing up during the height of the Cold War.

A technological lag by the West could spell doomsday. Punched cards and paper tape were the most obvious low-tech solutions for data and program input, and they were almost universally adopted. In fact, the original Colossus machines at Bletchley Park used paper tape input to prevent synchronisation problems.

As mainstream computer use exploded, lowly data entry clerks would transfer information from written forms, punching holes into cards and paper tape ready for loading into the computer. The cheapness of this method meant that paper-based storage survived well into the late 1970s. In fact, my first experience of computing at secondary school was punching cards for my O-level Computer Studies assignments and sending them to Manchester University to be loaded into one of its computers.

Punched cards had one notorious drawback: each 80-column card corresponded to a single statement, so a finished program was a stack of punched cards. Problems occurred if you dropped or knocked over the stack, which then needed to be put back in order before it could be loaded into the computer's card reader.
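
One common safeguard of the era – an addition here, not something the article above describes – was to punch a sequence number into each card's final columns (conventionally columns 73 to 80 of an 80-column card), so that a shuffled deck could be re-sorted mechanically. A Python sketch of the idea:

```python
def card(statement, seq):
    """Lay out an 80-column card: statement text in columns 1-72,
    then a sequence number punched into columns 73-80."""
    return statement.ljust(72)[:72] + f"{seq:08d}"

# A three-card deck, 'dropped' so the cards are out of order.
deck = [card("      PRINT *, 'HELLO'", 20),
        card("      PROGRAM DEMO", 10),
        card("      END", 30)]

# Restoring the order is just a sort on columns 73-80.
for c in sorted(deck, key=lambda c: c[72:80]):
    print(c.rstrip())
```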

Going magnetic

Magnetic tape came into use from the early 1950s as a general data-storage medium. Though it was fast, could store far more data than paper tape and was rewriteable, it was still only capable of serial access. This meant that if you wanted to insert a record into the data stored on a tape, you generally read the data on one tape drive and wrote it to a tape mounted on another.

At the appropriate point, you inserted the new record into the data stream. Though a large tape library gave computers access to huge amounts of backing storage, what was also required was a form of storage that didn't waste time waiting for an operator to fetch it from the library, and which was also truly random access.
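
That read-one-tape, write-another routine is easy to sketch in Python. The record layout and key function below are invented for illustration; the point is that inserting a single record means copying the entire tape.

```python
# Toy tape-to-tape insertion: records stream serially off the source
# 'tape' and onto the output 'tape', with the new record spliced in
# at the point where its key fits.

def insert_via_copy(source_tape, new_record, key=lambda rec: rec[0]):
    output_tape = []
    inserted = False
    for record in source_tape:   # serial read, start to finish
        if not inserted and key(new_record) < key(record):
            output_tape.append(new_record)
            inserted = True
        output_tape.append(record)
    if not inserted:             # new record belongs at the very end
        output_tape.append(new_record)
    return output_tape

tape = [(1, "ADAMS"), (3, "BAKER"), (7, "CLARK")]
print(insert_via_copy(tape, (5, "DAVIS")))
# -> [(1, 'ADAMS'), (3, 'BAKER'), (5, 'DAVIS'), (7, 'CLARK')]
```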

The first solution was magnetic drum storage, which became available from the mid-1950s. Inside the unit, each drum – which was coated with iron oxide – rotated several thousand times per minute. A row of fixed read-write heads ran the length of the drum – one for each track – and read or wrote information quickly and at will.

Magnetic drums quickly led to the development of the virtual memory that we still see in use in today's operating systems. Computer manufacturers realised that when a program runs, it doesn't need all of its code or working data in RAM all of the time. Because access to data stored on a drum was fast, RAM could be freed up by copying blocks of memory out onto the drum until the operating system required them. Suddenly, computers could have huge 'virtual' memories and run programs larger than the physical RAM would normally allow.
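
Here's a deliberately tiny Python sketch of demand paging in that spirit. The two-frame RAM, the page contents and the first-in-first-out eviction policy are all illustrative assumptions rather than a model of any real drum-based system, but the mechanism – fetch a page from backing store on a fault, evicting another to make room – is the same one modern operating systems still use.

```python
# Toy demand paging: a tiny RAM backed by a large 'drum'.

RAM_FRAMES = 2   # the RAM holds only two pages at a time

class VirtualMemory:
    def __init__(self, num_pages):
        self.drum = {p: f"page-{p}-data" for p in range(num_pages)}
        self.ram = {}      # page number -> contents, currently resident
        self.loaded = []   # pages in the order they were loaded

    def access(self, page):
        if page not in self.ram:             # page fault
            if len(self.ram) >= RAM_FRAMES:  # RAM full: evict the oldest
                victim = self.loaded.pop(0)
                self.drum[victim] = self.ram.pop(victim)
            self.ram[page] = self.drum[page] # fetch from the drum
            self.loaded.append(page)
        return self.ram[page]

vm = VirtualMemory(num_pages=8)   # 'program' four times bigger than RAM
for page in [0, 1, 0, 5, 1]:
    vm.access(page)
print(sorted(vm.ram))             # only two pages resident at once
```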

Into the light

The physical properties of materials have been a fertile ground for computer scientists looking to increase both storage density and speed of data access. Now scientists are beginning to investigate the possibilities of using light as a storage medium in the not-too-distant future.

The data density offered by optical storage dwarfed that of PC hard disks when the Compact Disc was introduced in 1982. At a time when hard disks still held around 20MB, the first CD-ROMs could store 650MB. Though magnetic disks have since regained the storage crown, optical disks could eclipse them once again. As scientists learn more about how to exploit light's properties, the possibilities for not only storage and communications but also computing itself are becoming increasingly apparent.

Upcoming light-based storage methods easily outstrip the capacity of current hard disks. Holographic techniques, for example, promise optical disks that are able to hold 3.9TB. With research into light-based microprocessors advancing every year, we may finally witness the "white heat of technology" talked of in the 1960s. Back then, no one could have predicted where the story of storage has already led us – and who knows where we'll be in 2050.

-------------------------------------------------------------------------------------------------------

First published in PC Plus Issue 277
