Beyond silicon: the processor tech of 2035

18th Nov 2011 | 12:30


The sci-fi tech that will revolutionise the PC

Next-gen computing: Introduction

Boffins and analysts regularly suggest that silicon's days are numbered. For various technical reasons that we'll explore, they consign silicon to the dustbin.

We wanted to know if they were right to sound the death knell. Assuming silicon chips really have evolved as far as they can, what's going to replace them?

Will the processors of 2035 be optical, biological or quantum in nature, or are neural networks set to rule the world?

As we wrote this piece, we couldn't help smiling when the following story broke: IBM creates chip that mimics the human brain. This is SyNAPSE – a project that aims to create chips capable of 'cognitive computing', which means they instantly react and re-engineer themselves based on external stimuli and events.

Two prototype chips have been created, using advanced algorithms and silicon circuitry to recreate the neurons and synapses of biological systems. This lets the processors 'learn' from experiences, find correlations, create hypotheses and mimic the brain's structural and synaptic plasticity.

Dharmendra Modha, the leader of the project, explained that IBM aims to create "a system that not only analyses complex information from multiple sensory modalities, but also dynamically rewires itself as it interacts with its environment – all while rivalling the brain's compact size and low power usage."

Modha illustrated the idea behind the chips with the concept of traffic lights that are able to "integrate sights, sounds and smells, and flag unsafe intersections before disaster happens".

What amazes us about the prospect of a post-silicon era of computing is that it won't just be about speed. Today we're well used to chips that can muscle their way through floating point mathematics at ever increasing rates.

Tomorrow's machines won't just be faster – they may be radically different. By aping how the brain works or making use of sci-fi grade goo, tomorrow's computers will likely be unrecognisable by today's standards and definitions. In 25 years' time we may look back on silicon in the same way we now regard Babbage's Difference Engine.

However the future shapes up, one thing is for certain: it'll be amazing fun being part of it. To the future!

The next generation of processors


The phrase 'silicon chip' is associated so closely with computing that most people assume it's always been that way and always will be. Yet in the early days of computing, several technologies came and went before microprocessors established themselves as the dominant force.

First we had valve-based computers, then computers made by wiring together individual transistors. This approach then gave way to the use of small integrated circuits before the microprocessor came along. Now some experts are suggesting that the silicon chip's 40 years in the limelight might be drawing to a close, as several competing technologies are waiting in the wings.

We'll take a look at some of these alternatives, but first we need to understand why the silicon-based microprocessor might just be running out of steam.

The microprocessor's history is one of phenomenal innovation. After all, in those 40 years chips have increased in speed a couple of million times – a feat that has involved, among other things, shrinking the minimum feature size from 10 microns (10,000nm) to 32nm and increasing the transistor count from 2,300 to around one billion.
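Those figures are easy to sanity-check. Here's a quick back-of-envelope sketch in Python, assuming the classic Moore's law doubling of transistor counts roughly every two years:

```python
# Rough sanity check of the figures above, assuming transistor counts
# double roughly every two years (Moore's law).
years = 2011 - 1971
doublings = years / 2                # ~20 doublings in 40 years
print(2_300 * 2 ** doublings)        # ~2.4 billion - the right order of magnitude
print(10_000 / 32)                   # 10,000nm to 32nm: a ~300x linear shrink
```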

Other trends haven't continued though – most notably that of increasing clock speed, which has been stuck at slightly over 3GHz for almost a decade. This hints at the possibility that other obstacles may prove equally insurmountable.

Take feature size as an example. Certainly the imperative to miniaturise chips to clock them ever faster no longer applies, but unless features continue to shrink, the race to provide ever larger caches, more cores and innovative new architectural features will result in some seriously large chips. Because semiconductors are manufactured using an optical process to etch the circuit onto the silicon, the progressive reduction in feature size has involved using light of an ever-shorter wavelength.

Today, deep ultraviolet is used instead of visible light, and the technology has been stretched out by employing clever techniques that allow the creation of features much smaller than the wavelength of the light. Even so, we have to accept that eventually the fundamental laws of physics will draw the era of fabrication using photolithography to a close, and even with the planned shift to extreme ultraviolet, it's been suggested that 11nm could be about the limit.

After that, we're well into uncharted territory. There's been talk of using electron beams or X-rays, and IBM has carried out research into chips that assemble themselves using molecular 'scaffolding', but so far all these alternatives are nothing more than research projects.

Hot topic

Then we have power consumption – the very thing that prevented chips from being clocked much faster than 3GHz. The more electrical power a processor consumes, the more heat it generates, and unless that heat can be removed it will fry.

Admittedly, advanced techniques for moderating power consumption have been introduced in recent years and, as a result, top-end chips have dropped to 130W from around 150W a few years ago, despite massive performance gains. Even so, unless manufacturers have further power-saving tricks up their sleeves, it's clear that there's going to be trouble ahead.

Needless to say, if everything else stays the same, doubling the number of cores doubles the power consumption. However, if the feature size shrinks from 32nm to 22nm in the process, the area of silicon stays about the same.
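The arithmetic behind that claim is simple enough to sketch in a few lines of Python – the process nodes are the ones quoted above, the rest is geometry:

```python
# Shrinking a transistor's linear dimensions shrinks its area by the
# square, so a 32nm -> 22nm move roughly halves the area per transistor.
area_scale = (22 / 32) ** 2
print(area_scale)        # ~0.47: each transistor needs about half its old area
print(1 / area_scale)    # ~2.1: room for twice the cores in the same silicon
```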

Fortunately, with the move to 22nm, Intel will be introducing its lower-power 3D transistors. This will cancel out that doubling of power from the same area, but it's a trick that can be done only once. When we bear in mind that the power density is already approaching 100W/cm² – a figure significantly greater than that of an electric hotplate – it's clear that the future is going to be a hot one.

There's a host of esoteric cooling methods, some of which are well known in the overclocking community, but as the cost of preventing processors burning escalates, we have to question whether there's a better way. All we've seen so far could be described as technological hurdles, but there are more fundamental limits to what the laws of physics will allow, and these are even more worrying.

The fact is – and you'll have to take our word on this – as dimensions get smaller, electrons start to behave in ways that can only be described as downright weird because quantum effects come into play. Even the renowned physicist Niels Bohr admitted that quantum behaviour was peculiar when he famously declared, "If quantum mechanics hasn't profoundly shocked you, you haven't understood it yet".

For example, electrons can be in two places or in two states at the same time. This can be used to good effect, as we'll see when we look at quantum computers, but on the reverse side of the coin there's a major drawback for electronic circuits: electrons can miraculously appear on the opposite side of a thin barrier even though there's no hole in it – a phenomenon known as quantum tunnelling. If that thin barrier happens to be a layer of silicon dioxide, which is used as an insulator in chips, let's just say we've got problems.

Rising prices

Ironically though, for an industry that turns over almost $50 billion, economists believe that financial rather than technological or scientific hurdles might herald the end of the line for silicon chips. Intel is reportedly spending $6-8 billion on new facilities to manufacture chips based on the forthcoming 22nm process.

Yet this is the cost of an upgrade that could be described as evolutionary rather than revolutionary. We'll let you draw your own conclusions as to how much manufacturing costs would escalate if photolithography as we know it had to give way to a totally new method of producing silicon microprocessors.

We might have painted a rather bleak picture here, but there is a glimmer of hope – or, to be accurate, several specks of light – on the horizon.

In research establishments around the globe, scientists are intent not only on giving silicon technology an extra lease of life, but on offering a totally new technology for when today's silicon is eventually pensioned off.

Some initiatives stick with electronics but provide a more efficient material than silicon from which to build circuits. Others abandon the flow of electrons entirely, looking instead to the use of photons or chemicals to carry the signals required to perform arithmetic or logical operations. Others still – and here we could mention quantum and hypercomputers – adopt an entirely alien model of computation.

As we turn our attention to technologies that might knock the silicon chip off its pedestal, don't be too quick to dismiss them on the basis of performance. Some of the alternatives have shown speed beyond our wildest expectations while others are pedestrian, but many of these technologies are still in their infancy and all we've seen are their first steps.

Indeed, some experts doubtless dismissed the Intel 4004 – the world's first microprocessor, introduced in 1971 – for much the same reasons. It averaged just 60,000 four-bit instructions per second and could address 4kB of memory, but IBM's contemporary 360/195 mainframe could rattle through 10-15 million 32-bit instructions in the same time and access 4MB of memory. From humble beginnings…

Computing in a test tube


Chemistry experiments might not seem to have much in common with electronic circuits, but scientists at the California Institute of Technology are using chemical reactions to perform arithmetic functions. More specifically, their chemical computations involve DNA: the building block of life and a hugely complicated molecule.

Without wires

Professor Erik Winfree's work uses short single strands of DNA as data, and short double strands as processing elements. Further reporter molecules are used to communicate the result of the computation by fluorescing in various colours.

Unlike an electronic circuit, those processing elements don't have to be wired up somehow – if properly designed, their presence in the test tube is enough to ensure the correct operation of a logical function.

In one demonstration, Professor Winfree used a combination of 74 DNA molecules to create the six logic gates needed to calculate the square root of a four-bit binary number that was represented by another eight DNA molecules. This wasn't a quick calculation though, and took between six and 10 hours. According to Professor Winfree, this is about what you'd expect of a DNA hybridisation reaction.
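To give a feel for how little logic the task actually needs, here's the same four-bit square root expressed as conventional Boolean gates in Python. Note that this is just one possible decomposition of ours – the Caltech team's DNA circuit used its own arrangement of gates:

```python
# floor(sqrt(n)) of a 4-bit number only needs a 2-bit answer (0-3).
# One possible gate-level decomposition - not necessarily Caltech's.
def sqrt4(b3, b2, b1, b0):
    y1 = b3 or b2                                  # result is 2 or 3 whenever n >= 4
    y0 = (b3 and (b2 or b1 or b0)) or \
         ((not b3) and (not b2) and (b1 or b0))
    return int(y1), int(y0)

for n in range(16):
    bits = [(n >> i) & 1 for i in (3, 2, 1, 0)]
    y1, y0 = sqrt4(*bits)
    assert 2 * y1 + y0 == int(n ** 0.5)            # matches floor(sqrt(n))
```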

So will it ever compete with electronic computers for speed? "Not in my lifetime," Professor Winfree told us. "I'd put money on it." Even so, there are potential ways to increase the speed. "It would certainly be interesting to exploit massive parallelism in the future," he said, in a reference to earlier work by Professor Leonard Adleman of the University of Southern California.

Travelling salesman


Adleman's work, which involved the very first DNA computation, was designed to solve one particular problem. Known formally as the Hamiltonian path problem – a close relative of the famous Travelling Salesman Problem – it asks whether there's a route from one specified city to another, using scheduled flights, visiting a list of other cities once and once only en route.

The conventional solution involves trying out every possible permutation. The snag is that the processing time escalates rapidly with the number of cities so, for example, if you've managed to solve it for 49 cities and decide to attempt a solution for 50 cities, the processing time will immediately become 50 times longer. This is why it's considered a particularly difficult computational problem, and is exactly the reason Professor Adleman chose it for his demonstration of DNA computing.
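The brute-force approach is easy to express in code, and just as easy to see why it scales so badly. The flight map below is a made-up example of ours:

```python
from itertools import permutations
from math import factorial

# A made-up flight map: try every ordering of the intermediate cities.
flights = {('A', 'B'), ('B', 'C'), ('C', 'D'), ('A', 'C'), ('B', 'D')}

def find_route(start, end, others):
    for order in permutations(others):
        route = (start,) + order + (end,)
        if all(leg in flights for leg in zip(route, route[1:])):
            return route
    return None

print(find_route('A', 'D', ['B', 'C']))   # ('A', 'B', 'C', 'D')

# The escalation described above: the 50th city multiplies the work by 50.
print(factorial(50) // factorial(49))     # 50
```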

Adleman's solution involved mixing together various DNA strands, some representing a connection between two cities and others the cities themselves, so they would react with each other to create various routes. Chemical reactions were then carried out to eliminate any that didn't meet the criteria, such as those that visited a city more than once. Any that remained were valid results.

What made the DNA solution so efficient is that in just a gram or so of chemicals, Adleman introduced trillions of molecules that reacted at the same time. This is a much greater degree of parallelism than even the fastest multi-processor supercomputers.
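'Trillions' is, if anything, an understatement, as a little Avogadro arithmetic shows – here we've assumed a short strand of about 20 bases, at a rough textbook figure of 330g/mol per nucleotide:

```python
# How many short DNA strands fit in a gram? Assume ~20 bases per strand
# and a rough 330 g/mol per nucleotide (approximate textbook figure).
AVOGADRO = 6.022e23
molar_mass = 20 * 330                  # g/mol for one 20-base strand
print(f"{AVOGADRO / molar_mass:.1e}")  # ~9.1e19 molecules per gram
```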

The DNA PC

Adleman's DNA computer might have offered massive parallelism, but in creating logic gates, Professor Winfree's solution is more flexible and doesn't exclude parallelism. As Winfree pointed out though, it's still far from being a general-purpose computer.

"A general purpose computer must be programmable and capable of carrying out programmed instructions, which in the molecular world, one might expect to be encoded on a piece of DNA. Our small DNA circuits are, well, just small circuits," he explained.

And while Professor Winfree was dismissive of our questions about applications, suggesting there's merit in pure scientific research, don't forget that mighty oaks from little acorns grow. Even so, it's probably a bit early to start stocking up on DNA cartridges for your molecular laptop just yet.

The memristor alternative


Even those who have never dabbled with electronics will probably have heard of resistors, capacitors and inductors – generally accepted as the three fundamental types of passive component. It now transpires that a fourth component called a memristor makes up the set.

Its resistance increases when an electrical current passes through it in one direction, and decreases as a current passes in the opposite direction. What's more, if you remove that current it remembers its resistance, hence the name memristor, which is a contraction of the words 'memory resistor'.
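That behaviour is simple enough to caricature in code. Here's a toy model of ours – the numbers are illustrative, not HP's device physics:

```python
# A toy memristor: current in one direction drags its resistance down,
# current in the other direction pushes it back up, and with no current
# the state (and hence the resistance) simply stays put.
R_ON, R_OFF = 100.0, 16_000.0          # illustrative values only

class Memristor:
    def __init__(self):
        self.w = 0.5                   # internal state, 0 (off) to 1 (on)

    def apply_current(self, amps, dt=1e-3, k=10.0):
        self.w = min(1.0, max(0.0, self.w + k * amps * dt))

    @property
    def resistance(self):
        return self.w * R_ON + (1 - self.w) * R_OFF

m = Memristor()
print(m.resistance)                           # 8,050 ohms
m.apply_current(+20.0); print(m.resistance)   # lower
m.apply_current(-20.0); print(m.resistance)   # back up - and it stays put unpowered
```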

Today's memristors are fabricated from 2-3nm islands of titanium dioxide sandwiched between platinum electrodes, and can be stacked in three dimensions for phenomenal circuit densities.

Fast and reliable

Needless to say, memristors are being promoted as memory elements, but their potential goes far beyond providing a replacement for today's dynamic RAM chips. According to memristor expert Stan Williams of HP Labs, "This opens up a whole new door in thinking about how chips could be designed and operated."

Since memristors remember their resistance even when the power is removed, they could replace both system memory and non-volatile storage like hard disks. They are potentially faster than magnetic media, which could lead to PCs that start up instantaneously and, being more fault tolerant, would provide greater reliability. They also consume less energy and occupy less space than flash memory.

This is just a start to the promise of the memristor though – the wonder component can also act as a logic element.

Two-in-one


What's more, in offering an alternative technology for both processor and memory, the memristor could provide a solution that's greater than the sum of its parts. "The processor and memory could be exactly the same thing," Dr Williams suggested. "That allows us to think differently about how computation could be done."

Unlike today's technology, in which the processor and memory are separate, both could be on the same chip, offering huge gains in terms of speed and power usage. This might be some way off but even in the short term, memristor memory and silicon transistors could be used in combination, as Dr Williams explained.

"A hybrid integrated circuit containing both memristors and transistors should, in principle, be much more efficient than those containing transistors alone. It should be possible to reach into a particular integrated circuit, rip out 10 transistors and replace them with one memristor."

Unlike some of the other technologies we've looked at, memristors in home PCs don't seem too far away from becoming a reality. While an integrated chip containing memory and processing elements could be a decade away, Stan Williams suggested that they're now moving out of the lab and towards fabrication – and according to Dr Williams, "It could provide a way of getting a ridiculous amount of memory on a chip."

Biological processors


DNA computing uses the molecule on which life depends and artificial neural networks mimic the operation of the brain, but Professor Andrew Adamatzky of the University of the West of England in Bristol is performing computations using real, living organisms.

That might conjure up memories of Frankenstein's Monster, but the living organism in question is called Physarum polycephalum. Otherwise known as slime mould, it's a microbe that can be seen without a microscope and is found on decaying leaves or wood.

Adamatzky has built fully programmable machines from slime mould that can solve many problems of computational geometry. They've also been used to create logic gates, carry out the programmable transportation of substances, generate music and create physical imitations of man-made transport networks.

So what makes this substance so suitable for these types of application? "Slime mould acts as an amorphous massively parallel computing device," Professor Adamatzky says. "It senses inputs with its whole branching body, makes a decision using propagation of excitation and contractile waves, and can implement parallel actuation."

Adamatzky envisages niche applications only. "Physarum machines will not compete with conventional PCs, but they can form a new family of soft-bodied robots with embedded computing power, which will be capable of entering any confined spaces and implement sophisticated tasks."

Working at the speed of light


According to Albert Einstein, nothing travels faster than light and, unlike the electrons in an electronic circuit, the passage of photons doesn't generate heat. Light is faster and cooler than the electronic equivalent, so surely an all-optical computer would provide the best of both worlds?

It will come as no surprise that research into optical computers has a long heritage, but so far there's been little to show for that work – until recently, as Professor Vahid Sandoghdar of the Max Planck Institute for the Science of Light explained.

Small problems

We started by asking why progress has been so slow. "One of the main challenges in making competitive optical computers is miniaturisation," he told us. "While an electronic wire can be as narrow as a few tens of nanometres, an optical wire – a waveguide – is much thicker. In addition, no one has managed to transmit light around sharp bends. Another major difficulty is the miniaturisation of elements like transistors."

Referring to work he started while at ETH Zurich, Professor Sandoghdar suggested that all this could be about to change. "Our work has shown that a single quantum emitter like a single molecule can function as a transistor because its interaction with light is intrinsically non-linear.

"However, packaging is still an issue because you can easily perturb the optical fields around the emitter if you bring anything else too close to it. Furthermore, our system has the practical problem that it works at liquid helium temperature. In all these efforts, our goal is to optimise the efficiency of the interaction between photons and emitters. This means that losses become less important."

Atoms as memory


But there's another key element, as Sandoghdar reminded us. "Optical memories, at least for transitory purposes, are of course also important, but I could imagine that you could use micro-resonators for times as long as a microsecond. But again, these things are huge compared to their equivalent electronic components. Here it's thinkable to use single atoms, ions, or colour centres as memory in a similar fashion as we have demonstrated transistor and phase shifting functionalities."

So does this mean that the all-optical computer is on the way? "I certainly believe that it should be manageable to build an optical computer one day," Professor Sandoghdar said, "but I really can't say anything about its size and cost. To that end, it's hard to believe my laptop would be replaced by such a device."

On the other hand, a recent IBM development suggests a halfway solution might be closer. Its so-called silicon nanophotonic chip keeps silicon for the switching, but connects everything together using high-speed optical links.

Where common sense ends


Although particles like electrons and photons behave in what we think of as a common-sense fashion en masse, individually they obey the laws of quantum physics, which are counter-intuitive.

Qubits explained

For example, electrons have a property called spin, which can be up or down, but a single electron can also have an up and a down spin at the same time. This is called a state of superposition, and it forms the basis of quantum computing.

If you use electrons to represent binary data, they can represent either a 0 or a 1, or they could be in superposition. Because of this latter possibility, the term 'qubit' (quantum bit) is used instead of the more familiar 'bit'. Now let's imagine a 64-qubit computer.

Because each qubit can represent both a 0 and a 1 simultaneously, a 64-qubit register is capable of holding 18,446,744,073,709,551,616 (that's 2^64) different values at the same time. If you were to carry out some computation, it would therefore be carried out on all those values at once.

This might be the ultimate in terms of parallelism, but the technique isn't without its problems. First you have to work out some pretty clever algorithms, because as soon as you try to read the result of any computation the state of superposition collapses and you end up seeing just one result. Much the same problem makes it rather tricky to even build a quantum computer.

Absolutely any interaction with the outside world destroys the superposition and preventing this from happening gets more difficult each time you add another qubit to the register width. For this reason, most claims from academia have involved modest numbers of qubits, demonstrations often involving fewer than eight.
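You can simulate a small qubit register classically, which also shows exactly why big ones are out of reach: n qubits need 2^n amplitudes. A minimal sketch using numpy:

```python
import numpy as np

# An n-qubit register is a vector of 2**n complex amplitudes - which is
# why classically simulating 64 qubits (2**64 amplitudes) is hopeless.
n = 4
state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition

# 'Reading' the register collapses it: you see just one value, chosen
# with probability |amplitude|^2 - everything else is lost.
probs = np.abs(state) ** 2
print(np.random.choice(2 ** n, p=probs))

print(2 ** 64)   # 18,446,744,073,709,551,616
```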

Despite this, a company called D-Wave Systems launched the world's first commercially available quantum computer earlier this year. Called the D-Wave One, it boasts 128 qubits, but many in the research community are sceptical.

Silicon, but not as we know it


Silicon is a type of element known as a metalloid, found in group IV of the periodic table. It's a poor electrical conductor, but can be made more conductive by doping – adding small amounts of elements from adjacent groups of the periodic table.

Doping with a group V element (often phosphorus or arsenic) produces an n-type semiconductor, while doping with a group III element (typically boron) produces a p-type semiconductor. Combining n-type and p-type semiconductor materials allows the basic building block of digital electronics, the MOSFET transistor, to be created.

Silicon isn't the only element with semiconductor properties; in fact, some of the first transistors were made of germanium. Recently though, attention has turned to compound semiconductors – semiconductors made from two or more elements, usually one from group III and one from group V, although compounds of three or more elements are also possible.

Switching speed

As we look at the performance of new types of transistor (including ones made of materials other than compound semiconductors, which we investigate elsewhere), we'll refer to the maximum switching speed – but don't confuse this with processor clock speeds. Although microprocessors have plateaued at just less than 4GHz, this is by no means a fundamental speed limit for silicon.

Individual silicon transistors have been demonstrated at over 100GHz – the fact that processors are slower than this is mainly due to how difficult it would be for millions of them to work on the same chip without overheating.

Using a single chemical element as a semiconductor doesn't give designers a lot of choice, especially when they have so few to play with – essentially they're stuck with what's available. But lots of alloys have semiconductor properties and, since the constituent elements and their ratios can all be controlled, this allows the various properties (band gap, carrier mobility and lattice constant) to be fine-tuned in the search for the perfect transistor.

As Rob Willoner, Technology Analyst at Intel's manufacturing group, explained: "The main advantage of compound semiconductors is better mobility. Electrons flow more smoothly than through silicon, much like a particularly well-oiled machine. This higher mobility means faster switching speeds and lower power (think less friction) at the same voltage, the same speed at a lower voltage, or a combination."
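Rounded textbook figures give a feel for the scale of that advantage – these are approximate room-temperature electron mobilities, quoted here only for comparison:

```python
# Approximate room-temperature electron mobilities in cm^2/(V.s) -
# rounded textbook values, quoted only to show the scale of the gap.
mobility = {'silicon': 1_400, 'germanium': 3_900, 'GaAs': 8_500, 'InSb': 77_000}
for material, mu in mobility.items():
    print(f"{material:9s} {mu:>7,}  ({mu / mobility['silicon']:.0f}x silicon)")
```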

Compound semiconductors like these are usually referred to by the chemical symbols of their constituents, and include GaAs, InP, AlGaAs and GaInNAs.

Inside Intel

Intel has a well-established compound semiconductor programme, and in 2005 it announced an InSb (indium and antimony) transistor that was a remarkable five times faster than its silicon equivalent, with a 10th of the power consumption.

More recently, Intel has been working with InGaAs (indium, gallium and arsenic) and, in some variants of the technology, combining it with germanium, but except for referring to "very high performing devices", it has been tight-lipped about what sort of performance it's seen.

With the world's largest manufacturer of x86 processors apparently committed to compound semiconductors, it might seem reasonable to assume that they'll soon be the new silicon. However, Intel's Rob Willoner indicates a rather more cautious technical approach.

"Basically, compound semiconductors are being looked at as a replacement for the silicon channel of transistors – the region where current flows when transistors are on," he says. "They aren't being looked at as a replacement for silicon substrates. Wafers will continue to be made of pure silicon, because that's what the industry has about half a century of experience with."

Semiconductor PCs

So what does all this mean to the likelihood of finding a compound semiconductor processor in a mainstream PC in the near future?

Again Rob Willoner was cautious in his outlook. "We have been working on this for many years, and there are many years of work still ahead before this technology gets into production, if ever," he told us.

"A huge challenge is integrating these new materials with the silicon substrate. They have different lattice spacings – that's the spacing between the atoms – and this leads to dislocations at the junction." The research is promising, but domestic applications are a long way off.

Tiny terahertz tubes


For years, two forms of carbon were known: diamond and graphite. This all changed in 1985 with the discovery of Buckminsterfullerene – a form of carbon with 60 atoms arranged as a spherical cage.

Lots of other forms of carbon have been produced since then, and two of them – graphene and carbon nanotubes – are showing potential as the basic material of high-speed transistors.

Graphene is a two-dimensional form of carbon, in which each atom bonds to its three closest neighbours in a hexagonal arrangement to form huge flat sheets just one atom thick. Dr Xiangfeng Duan of the University of California in Los Angeles says it's a wonder material for creating electronic circuits.

"Graphene is of significant interest for highspeed transistor applications because of its combination of several important characteristics, including the highest carrier mobilities of any known material" he told us.

Carrier mobility measures how quickly charge carriers (electrons or 'holes') move through a metal or a semiconductor when an electric field is applied. Partly as a result of this super-high mobility, switching speeds of 300GHz have already been demonstrated, and over 1THz (1,000GHz) is thought to be entirely possible.
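In equation form, mobility is simply the ratio between drift velocity and field: v = μE. A quick illustration – the field strength here is an arbitrary choice of ours, and real devices saturate at high fields:

```python
# Drift velocity = mobility x electric field (v = mu * E).
mu_silicon = 1_400     # cm^2/(V.s), approximate for electrons in silicon
mu_graphene = 100_000  # cm^2/(V.s), graphene can reach this and beyond
E = 1_000              # V/cm - an arbitrary illustrative field

print(mu_silicon * E)  # 1.4e6 cm/s in silicon
print(mu_graphene * E) # ~70x faster in graphene under the same field
```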

But although graphene transistors are fast, they don't switch on and off positively enough, so they're currently being targeted at analogue applications like generating the radio signals for Wi-Fi and other data communication equipment.

Mind the band gap

That doesn't mean scientists have given up on them for logic applications though, and Dr Duan described several initiatives that might produce the necessary 'band gap', to use the technical jargon.

"There is still a substantial challenge for application in processors", he said – and it doesn't end with the band gap. "In general, the conventional dielectric integration and device fabrication processes cannot be readily applied to graphene transistors because they can often introduce defects into the monolayer of carbon lattices and degrade device performance."

However, if these obstacles are overcome, the benefits could be huge, as Dr Duan explained. "If implemented for logic applications, it could create a paradigm-shift by moving electronics from a single crystal substrate onto glass or a flexible substrate."

Dr Duan also enthused about faster, lower cost processors and devices with a variety of form factors including flexible, wearable and disposable computing devices. As for whether graphene will supplant silicon, we got a familiar response. "It's going to impact various areas of electronics, but it's difficult to say whether it would eventually replace silicon."

Carbon nanotubes can be thought of as long, hollow cylinders made by rolling up sheets of graphene. In fact there are lots of different types of nanotube, which differ in their diameter and chirality – the angle at which they're rolled up with respect to the molecular lattice. Some types of nanotube are inherently semiconducting, even without doping – the deliberate addition of impurities that silicon requires before it can be used in transistors.

Fast and efficient

Professor Subhasish Mitra of Stanford University is one of the leading researchers in this field and gave us a feel for what could be just round the corner.

"If you do the correct optimisations, then potentially you can obtain a nanotube processor that can run five times faster than a silicon-CMOS processor at 11nm technology at the same power."

Indeed, figures as high as a terahertz have been suggested as the potential for carbon nanotube transistors, though the Stanford researchers are more concerned with the density at which nanotubes can be grown to form a chip, making the design immune to imperfections, making good connections to the nanotubes and engineering the doping process.

Professor Mitra was quick to point out that good progress is being made. Indeed, moving on from the individual transistors demonstrated several years ago, the team at Stanford have managed to create working arithmetic circuits and memory, and have developed techniques for building 3D chips.

"I think we have come a long way, and this is an exciting time", Professor Mitra said. "Carbon nanotube technology should have the ingredients to be a great complement to silicon CMOS."

Plastic chips


Although most everyday plastics are electrical insulators, several plastics have been produced that are conductive or have similar semiconductor properties to silicon. As a result, complete electronic circuits can be made out of plastics and, because they can be printed from solution using equipment similar to a desktop inkjet printer, manufacturing costs are minuscule compared to fabricating silicon chips.

What's more, the circuits are flexible – they've already found applications in roll-up displays for use in e-readers. The difficulty in this technique is that plastic transistors aren't as reliable as silicon ones – they don't all turn on and off at the same voltage.

What's more, unlike a stuck pixel in a display, which barely matters, a single non-functioning transistor in a processor is a show-stopper. Earlier this year, scientists at Belgium's Imec research centre announced the first plastic processor, using an extra gate to tame errant transistors.

With rock-bottom prices, plastic processors will find new applications, as one of its developers explained: "Wrapped around food and pharmaceuticals, they might indicate that your tuna is rancid or that you forgot to take your pills".

Cloud computing


Provided it did the job, the majority of people wouldn't be particularly bothered whether the processor in their PC was made from silicon or nanotubes, or whether it worked by simply executing a series of instructions in sequence or by combining esoteric chemicals. The ultimate in removing us, as users, from the nitty gritty of what goes on behind the scenes, though, is the concept of cloud computing.

Locally, all you have on your tablet or netbook is sufficient computing power to drive the display and transmit data to and from the cloud. You'll never know exactly what's in the cloud, but in addition to the storage that today's internet provides, it also offers enough computing resources to carry out the task at hand.

Cloud computing is in its infancy, but even so, the chances are that not all of the computing resources you currently use will involve x86 processors running Windows. Sure, they'll contain silicon chips, but they may well be UNIX or Linux servers.

If some of the technologies we've discussed here come to fruition, they too could go into the cloud melting pot. Thanks to memristors, artificial neural networks might come of age and analogue computers may be reborn.

Neither is a general-purpose model of computation though, so they're not going to replace silicon. And unless you're serious about simulation or facial recognition, you're not going to want to buy a neural or analogue co-processor for your laptop. In the cloud, however, people and businesses could pay for these specialist resources only when they needed them.

The new analogue


Back in the early days of electronic computing, digital computers lived alongside their analogue counterparts. The idea was that while digital computers were good at some tasks, analogue computers were good at others.

Analogue computers were used mostly for simulation, and were programmed using patch leads to wire up elements like summers and integrators that worked on continually varying voltages to solve differential equations.
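What the patch leads built can be recreated digitally in a few lines. Here's a sketch of ours: two integrators wired in a feedback loop solve the differential equation x'' = -x, a simple harmonic oscillator. An analogue machine would compute both integrals continuously with op-amp circuits rather than in discrete time steps:

```python
# Two 'integrators' in a feedback loop solving x'' = -x.
dt = 0.001
x, v = 1.0, 0.0                   # initial position and velocity
steps = int(2 * 3.14159265 / dt)  # run for one full period
for _ in range(steps):
    v += -x * dt                  # first integrator: velocity from acceleration
    x += v * dt                   # second integrator: position from velocity
print(round(x, 2))                # ~1.0: back where it started after one cycle
```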

Simulation could be done digitally, but without fast computers this was a very slow job. As digital computers became ever faster, analogue computers were eventually sidelined – until now.

As we explained above, memristors are an up-and-coming technology for digital computers, but they may just give a new lease of life to analogue computers too. Digital computers might have come on in leaps and bounds since the days of their analogue counterparts, but solving some of the planet's toughest simulation exercises still requires vast computing resources like the Japanese K computer, which uses 68,544 eight-core processors and cost $1.25bn.

According to the authors of a recent academic paper on memristors, a new generation of analogue computers, working alongside digital supercomputers, could be exactly what's needed for large-scale simulations like those used in climate change research.

-------------------------------------------------------------------------------------------------------

First published in PC Plus Issue 313.
