Taking Control of Mobile Network Power Consumption

11th Feb 2014 | 13:00

New techniques to improve the energy efficiency throughout the mobile network

Explosive growth in mobile Internet traffic is driving up network power consumption and focusing attention on new techniques to improve energy efficiency throughout the network - particularly in basestation RF and baseband circuitry.

The Downside of Explosive Mobile-Data Growth

The tremendous popularity of mobile devices such as smartphones and tablets, and of cloud-based services for business and personal use, is driving more and more subscribers to access the Internet over wireless connections.

The load on mobile networks is increasing rapidly. Globally, about 1.6 exabytes of mobile data are generated per month, according to Cisco's Visual Networking Index (VNI) report, and this is having a dramatic effect on mobile network power consumption.

Today, operators are spending an estimated $2 billion a year to power their networks, and basestations are consuming a high proportion of that budget. According to figures from Vodafone, basestations account for almost 60% of total mobile network power consumption, while 20% is consumed by mobile switching equipment and around 15% by the core infrastructure.

Low efficiency in the RF and baseband processing stages is the main reason for the basestation's relatively high power consumption. The IET has reported that a typical 3G basestation uses about 500W of input power to produce only about 40W of output RF power.

The heat generated by inefficient operation must also be removed, typically by air conditioning, which adds further to the basestation's overall power consumption. The typical average annual energy consumption of a 3G basestation is around 4.5MWh.

Hence, for a 3G mobile network covering an area such as the UK, which has around 12,000 basestations, the total energy consumed will be more than 50GWh per year. This produces a large amount of CO2 emissions as well as adding to the network's operating costs.
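As a sanity check, the per-station and network-wide figures above follow almost directly from the quoted 500W input power; the short calculation below uses only the article's own estimates (the gap to the ~4.5MWh quoted earlier is plausibly the cooling overhead):

```python
# Rough sanity check of the figures quoted above; all inputs are the
# article's estimates, not measured data.

input_power_w = 500            # typical 3G basestation input power
rf_output_w = 40               # RF output power actually delivered
hours_per_year = 24 * 365      # 8760 hours

efficiency = rf_output_w / input_power_w
annual_mwh = input_power_w * hours_per_year / 1e6   # Wh -> MWh

stations_uk = 12_000
network_gwh = annual_mwh * stations_uk / 1000       # MWh -> GWh

print(f"RF efficiency: {efficiency:.0%}")                 # 8%
print(f"Per-station energy: {annual_mwh:.1f} MWh/year")   # 4.4 MWh/year
print(f"UK network total: {network_gwh:.0f} GWh/year")    # 53 GWh/year
```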

In developing countries, by contrast, direct electricity connections are not always readily available. According to Scientific American (Tweed, 2013), there are around 5 million basestations globally, of which 640,000 are not connected to an electrical grid and are largely run on diesel-powered generators.

The costs of fuel and transportation, and the logistical challenges involved in keeping basestations operating continuously, are high.

As mobile usage worldwide increases, and 4G/LTE rollout gathers pace in developed countries, wireless data demand will continue to display explosive growth. Hence, the cost of powering and cooling wireless basestations is set to go on increasing.

Although basestation efficiency is known to have improved from 3% in 2003 to 12% in 2009, further significant improvements are needed in order to limit overall network power consumption and hence enable operators to assert control over their operating costs.

Improving Network and Basestation Architectures

Several improvements are being pursued to reduce the power consumed by mobile basestations. The network is evolving towards a heterogeneous network (HetNet) architecture comprising basestations of various sizes, as shown in figure 1.

In a HetNet, smaller cells, served by micro or pico basestations with shorter range and lower capacity, and hence lower power consumption, are deployed throughout the network in addition to macrocells.

This allows the network to adapt continuously to subscriber usage, and meet subscriber demands using reduced power resources.

In addition to allowing more efficient wireless access over short ranges, smaller cells offload traffic from macrocells, allowing greater use of power management to reduce each macrocell's power consumption further.

Within the basestation, improvements to the main processor in the baseband unit can raise data-handling efficiency. To maximise the advantages of such improvements, the ability to scale the processor architecture for use in smaller basestations or macrocells allows the same power-saving techniques to be applied quickly and cost effectively throughout the network.

Power Amplifier Efficiency

The RF part of the basestation consumes more than half of the overall power, mainly due to inefficiency in the power amplifier (PA), as demand for higher data rates and broader bandwidth forces the amplifier to operate in its non-linear region. The peak-to-average power ratio (PAPR) can be around 6-10dB.
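The scale of the PAPR problem can be illustrated with a quick numerical sketch: an OFDM-style signal built from random QPSK subcarriers (an arbitrary illustrative choice, not tied to any particular air standard) shows peaks well above its average power:

```python
import numpy as np

# Illustrative PAPR calculation for an OFDM-like signal: many independent
# subcarriers sum to a waveform whose peaks far exceed its average power.
rng = np.random.default_rng(0)
n_subcarriers = 1024

# Random QPSK symbols, one per subcarrier
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_subcarriers)

# The IFFT turns the frequency-domain symbols into the time-domain waveform
waveform = np.fft.ifft(symbols)

power = np.abs(waveform) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())

# Typically lands in the high-single-digit to low-double-digit dB range,
# consistent with the 6-10dB figure quoted above
print(f"PAPR: {papr_db:.1f} dB")
```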

Techniques to allow the PA to operate in its most efficient region close to the 1dB compression point, with lower PAPR, can reduce the RF power consumption significantly.

Commonly used techniques include Digital Pre-Distortion (DPD), which pre-compensates the PA's non-linearity, and Crest Factor Reduction (CFR), which clips the peaks of the signal while keeping the error vector magnitude (EVM) within 3GPP limits. DPD and CFR algorithms can be executed in a Digital Front End (DFE), typically implemented in the basestation's radio module.
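The clipping idea behind CFR can be sketched as follows. This is a minimal illustration assuming a simple hard clip to a target PAPR; a production CFR stage would also filter the clipped signal to control spectral regrowth and verify the EVM against the 3GPP limits:

```python
import numpy as np

def clip_to_papr(signal: np.ndarray, target_papr_db: float) -> np.ndarray:
    """Hard-clip the signal envelope so peaks stay near a target PAPR."""
    avg_power = np.mean(np.abs(signal) ** 2)
    threshold = np.sqrt(avg_power * 10 ** (target_papr_db / 10))
    envelope = np.maximum(np.abs(signal), 1e-12)
    # Scale down only the samples whose envelope exceeds the threshold;
    # multiplying by a real gain preserves each sample's phase.
    scale = np.minimum(1.0, threshold / envelope)
    return signal * scale

def papr_db(s: np.ndarray) -> float:
    p = np.abs(s) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Illustrative complex-Gaussian baseband signal (unit average power)
rng = np.random.default_rng(1)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
y = clip_to_papr(x, target_papr_db=6.0)

print(f"PAPR before: {papr_db(x):.1f} dB, after: {papr_db(y):.1f} dB")
```

Note that the PAPR after clipping sits slightly above the 6dB target, because clipping also lowers the average power a little.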

A conventional hard-wired DFE is limited in its ability to support various frequency bands, air-interface standards and PA types. To overcome these challenges, LSI's SoftDFE® technology presents an agile solution that allows engineers to develop application-specific, power-efficient RF DFE modules for the PA.

It supports all the algorithms, such as DPD and CFR, that a DFE requires to increase the amplifier's efficiency to over 50%.
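To illustrate the principle behind DPD (not LSI's implementation), a memoryless polynomial pre-distorter can be fitted to a toy compressive PA model; the cubic PA characteristic and polynomial order below are arbitrary choices for this sketch, whereas a real DFE would use measured PA behaviour and adaptive estimation:

```python
import numpy as np

def pa_model(x):
    """Toy compressive amplifier: gain rolls off at high amplitude."""
    return x * (1.0 - 0.2 * np.abs(x) ** 2)

# Characterise the amplitude response over the operating range, then fit
# the inverse gain (input amplitude needed per unit of desired output)
# as a polynomial in the desired output amplitude.
amp_in = np.linspace(0.05, 1.0, 200)
amp_out = pa_model(amp_in)
inv_gain = np.polyfit(amp_out, amp_in / amp_out, deg=5)

def predistort(x):
    # Pre-scale each sample so the PA's compression is cancelled; a real
    # gain multiplier preserves the phase of complex samples.
    return x * np.polyval(inv_gain, np.abs(x))

# Compare the residual non-linearity with and without pre-distortion
test_amp = np.linspace(0.1, 0.75, 50)
raw_err = np.max(np.abs(pa_model(test_amp) - test_amp))
dpd_err = np.max(np.abs(pa_model(predistort(test_amp)) - test_amp))
print(f"max amplitude error without DPD: {raw_err:.4f}, with DPD: {dpd_err:.6f}")
```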

Moreover, SoftDFE is programmable, allowing the algorithms to be optimised, and updated in the field if necessary, more readily than is possible with an alternative FPGA-based implementation.

The LSI SoftDFE block can be implemented in a standalone device, such as may be used in a remote radio head or the radio card of a macro basestation, or can be integrated as part of a larger System on Chip (SoC) together with a pico basestation's baseband unit (BBU) designed around LSI's high-efficiency Axxia® baseband processor architecture.

Better Baseband Processing

A typical basestation BBU is responsible for Layer-1, Layer-2 and Layer-3 processing, and supports wireless protocols such as LTE, LTE-A, WCDMA (and WiFi in small cell basestations). The choices open to equipment designers, as far as baseband processing is concerned, have typically been either a programmable FPGA-based solution or a custom ASIC and programmable DSPs.

More recently, large integrated multi-core CPU and DSP devices operating at a high clock frequency have been positioned as a baseband processing solution.

The established FPGA or ASIC options tend to force designers to trade off the adaptability that an FPGA provides, for example to support emerging standards and protocols, against the performance and efficiency advantages traditionally offered by a custom ASIC-based solution. However, neither approach satisfies every need for high performance, flexibility and energy efficiency.

LSI's Axxia technology introduces a new architecture that satisfies the increasing performance demands brought about by ubiquitous mobile data services while providing the flexibility and scalability to adapt and optimise the baseband processing and support efficient power management.

In addition, the architecture is scalable to address the processing demands both of macrocells and of smaller cells used to improve overall network efficiency.

Combining flexibility with high data-processing performance, the platform utilises up to 16 ARM® Cortex™-A15 cores, as shown, creating an extremely power-efficient solution for data-plane processing tasks.

These cores are combined with LSI's Virtual Pipeline™ technology which comprises highly efficient hardware-based network accelerators for high-performance data-plane processing.

These accelerators include a multi-protocol packet processing engine capable of throughput up to 50Gbit/s, a 20Gbit/s security engine, and dedicated traffic management, content inspection and intelligent packet switch functions.

The Axxia platform represents the first instantiation of ARM's high-performance CoreLink™ CCN-504 QoS-aware low-latency coherent memory interconnect. This interconnect is used to link all of the main processing and memory elements of the SoC.

LSI's Virtual Pipeline technology and dedicated acceleration engines enable an extremely power-efficient solution compared to devices that rely solely on large numbers of CPU cores, which must typically run at high frequencies, consuming large quantities of power, to perform data-plane tasks to which they are not ideally suited.

Moreover, the Virtual Pipeline acceleration engines are designed to be modular, enabling the processor to be scaled to higher performance levels without a disproportionate increase in power consumption.

Scaling the number of Cortex-A15 control-plane cores and Virtual Pipeline resources of the Axxia architecture enables engineers to create a suitable system-on-chip solution for any size basestation, from a 3G or LTE macrocell to the smaller cell sizes used in a heterogeneous architecture.

The number of physical-layer interfaces and digital functions integrated on chip can also be optimised. In addition, OEMs have the flexibility to add proprietary functions for baseband processing as fixed function hardware blocks, programmable DSP cores and software, to customise the SoC or increase integration.

Software support for common transport and security protocols such as IPsec, IPv4, IPv6 and the LTE MAC is already provided, giving extra flexibility to adjust power and performance by offloading further data-plane functions.

Integrated Baseband-Unit SoC

Together, LSI's Soft DFE technology and Axxia processing platform can achieve up to a 50% reduction in the power consumed by the RF and baseband circuitry of a 3G or LTE macrocell.

When combined with the efficiency improvements provided by the HetNet architecture, taking advantage of smaller-scale Axxia basestation processors, a significant increase in overall power efficiency can be achieved - giving network operators valuable extra control over operating expenses.


As the popularity of wireless Internet access continues to grow explosively across the globe, the power consumed by cellular basestations represents a rapidly growing proportion of total network power consumption. It is vital for network operators to improve the efficiency of their networks and basestations.

Complementing the migration of the network to a more efficient HetNet architecture, an innovative basestation processor architecture that delivers an enhanced combination of hardware efficiency with the flexibility and adaptability of a reprogrammable solution can help regain control over the spiralling power consumption of today's basestation RF and baseband circuitry.

  • Ed Saba is product manager of the Axxia Communication Processor family within the Networking Components Division of LSI Corporation.
  • Steve Vandris is director of Strategic Planning for the Networking Components Division of LSI Corporation.