Alvin Toffler's seminal book, Future Shock, discusses the disruptive societal effects of accelerating technological change. Today, such effects are manifest in the rapid evolution of job skills needed for sustainable employment, the globalization of manufacturing and supply chains, the shift to knowledge economies, and the rising cost of technology-intensive health care. Rapid technology change also stresses existing organizational processes and structures.
Spectrum Scarcity
One telling example of technological stress is the interplay between the explosive growth of smartphones and traditional models of spectrum allocation. Feature-rich smartphones place unprecedented demands on wireless communication networks to support text services, social networking, and streaming media. If you have ever experienced service loss in a densely populated area, you understand the problem. In response, cellular carriers are scrambling to build new cell towers and deploy 4G technologies.
However, all of this cellular infrastructure investment centers on a very small fraction of the total radio spectrum, allocated via regulatory and auction processes. This historical approach, which dates almost from the days of Marconi, partitions the radio spectrum and allocates frequency bands to different purposes (e.g., government, public safety, military, television, radio, unlicensed consumer). The figure below shows the frequency allocations in the United States, and there are similar patterns in other parts of the world, with standards harmonized by the International Telecommunication Union (ITU).
Fixed partitioning was the appropriate mechanism when transceivers were designed to operate in limited frequency bands and all functionality was embedded in hardware. However, it has led to inefficient spectrum use, with some bands heavily subscribed while others are rarely used. Indeed, several studies have shown that large parts of the available spectrum are unused most of the time at most locations, within a reasonable detection threshold.
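As a rough illustration of how such occupancy studies work, the sketch below runs an energy-detection pass over simulated power readings and reports the fraction of time a band appears occupied. The threshold value and the synthetic data are assumptions for illustration, not figures from any particular study.

```python
import numpy as np

def band_occupancy(power_dbm, threshold_dbm=-95.0):
    """Fraction of samples in which measured power exceeds the detection
    threshold, i.e. the band appears to be in use."""
    power_dbm = np.asarray(power_dbm)
    return float(np.mean(power_dbm > threshold_dbm))

# Simulated sweep: mostly noise near -110 dBm with sporadic transmissions.
rng = np.random.default_rng(0)
samples = rng.normal(-110.0, 3.0, size=10_000)
samples[rng.random(10_000) < 0.05] += 40.0  # occasional strong signals

print(f"Estimated occupancy: {band_occupancy(samples):.1%}")
```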
Cognitive Radio
With the rise of inexpensive, high-performance microprocessors and radio frequency (RF) system-on-a-chip (SoC) designs, more nimble, cognitive radio designs are now possible that can operate across wide portions of the spectrum. Software defined radio (SDR), which implements all signal processing except analog-to-digital conversion in software, is perhaps the most general cognitive radio example. Intermediate implementations mix hardware and software functions to achieve adaptability: the device identifies what portion of the spectrum can be used at its current location and then negotiates access (e.g., period of use, power level, and priority). White spaces communication, made possible by the transition to digital television, is one early example.
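A simplified sketch of that sense-then-negotiate step might look like the following. The candidate channel list, detection threshold, and the negotiation stub are illustrative assumptions; real cognitive radios would measure RF energy in hardware and follow a defined access protocol.

```python
import random

DETECTION_THRESHOLD_DBM = -95.0  # assumed energy-detection threshold

def sense(channel_mhz):
    """Stand-in for an RF measurement: observed power (dBm) on the given
    channel at the device's current location."""
    return random.uniform(-120.0, -60.0)

def find_idle_channel(candidates_mhz):
    """Scan candidate channels and return the quietest one below the
    detection threshold, or None if all appear occupied."""
    readings = {ch: sense(ch) for ch in candidates_mhz}
    idle = {ch: p for ch, p in readings.items() if p < DETECTION_THRESHOLD_DBM}
    return min(idle, key=idle.get) if idle else None

def negotiate_access(channel_mhz):
    """Placeholder for negotiating period of use, power level, and priority."""
    return {"channel_mhz": channel_mhz, "duration_s": 60,
            "max_power_dbm": 20, "priority": "secondary"}

channel = find_idle_channel([470, 482, 494, 506])  # e.g., TV white-space channels
if channel is not None:
    print(negotiate_access(channel))
```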
Because cognitive radios can share spectrum with other devices and adapt to changing circumstances as needed, they can reduce contention within fixed spectral bands. Even more importantly, they are one of the enabling technologies for a new generation of client plus cloud services based on the Internet of Things (IoT). Imagine adaptive traffic routing for electric vehicles based on traffic sensors, vehicle charge monitoring, and driver calendar schedules. Or imagine in-home wellness monitoring for an aging population based on biomedical and environmental sensors, profile and mobility assessment, and electronically mediated health care interactions.
Nimble Policy
Cognitive radio technologies are necessary but not sufficient. The spectrum future shock of exponentially rising communication demands challenges our existing spectrum allocation mechanisms. We must also adapt our policy and regulatory frameworks to enable deployment of cognitive communication technologies that span licensed and unlicensed spectrum. Both technology and policy will be needed to realize this vision of everywhere, anytime communication.
One of the major secondary drivers of fixed partitioning is the need for service providers to be able to control their spectrum resources. Today, there are agreed-upon conventions for determining the value of a spectrum asset based upon the size of the spectrum allocation, the population covered, the spectral efficiency of currently available technologies, and the spectrum band.
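To make that concrete, here is a rough valuation sketch in the spirit of the common $/MHz-pop metric, using the factors listed above. The adjustment factors and the example numbers are illustrative assumptions, not an industry-agreed formula.

```python
def spectrum_value(bandwidth_mhz, population, price_per_mhz_pop,
                   band_factor=1.0, efficiency_factor=1.0):
    """Rough $/MHz-pop style valuation: allocation size times population
    covered, scaled by assumed adjustments for the band's propagation
    characteristics and the spectral efficiency of deployable technology."""
    return (bandwidth_mhz * population * price_per_mhz_pop
            * band_factor * efficiency_factor)

# Hypothetical 10 MHz license covering 5 million people at $0.50/MHz-pop,
# in a sub-1 GHz band (favorable propagation) with LTE-class efficiency.
value = spectrum_value(10, 5_000_000, 0.50,
                       band_factor=1.3, efficiency_factor=1.1)
print(f"Estimated license value: ${value:,.0f}")
```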
Technologies that enable sharing of spectrum will require service providers to take a leap of faith, something they're not likely to do on their own. A case in point: Verizon's CEO recently asserted that there was already sufficient spectrum available even as the FCC was planning to double spectrum resources for consumer broadband. I'm fairly certain this statement was based upon a competitive analysis, not on future capacity requirements.
So, the technologies need to be developed, yes. Regulators could, as a first step, make some spectrum available as excess capacity to entice its use. Service providers will be naysayers until they have a capacity hotspot with a technology and excess spectrum to fix it. If the access standards support it and the spectrum is available, it will be embraced, if perhaps a bit slowly at first.
As an aside, Self-Organizing Networks (SON) have a huge potential to enable this kind of spectrum sharing, but without an open-standards paradigm around SON, it threatens to be a vendor-specific feature of access networks. The concept of a central intelligent system that could facilitate this kind of spectrum sharing requires things such as interference mitigation and load balancing across a multi-vendor (and perhaps multi-service-provider) network. Vendors today are embracing open standards at the edge of the access network but, at the same time, building completely proprietary SON solutions in the core. The appropriate pressure by regulators could steer this in the right direction.
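As a toy sketch of the kind of centralized coordination that comment describes, the snippet below has a controller assign channels to cells so that neighboring (potentially interfering) cells never share a channel, while spreading load across the available pool. The topology, channel set, and greedy policy are invented for illustration; real SON systems are far more involved.

```python
def assign_channels(cells, neighbors, channels):
    """Greedy central controller: give each cell the least-loaded channel
    that none of its already-assigned neighbors is using."""
    usage = {ch: 0 for ch in channels}
    assignment = {}
    for cell in cells:
        taken = {assignment[n] for n in neighbors.get(cell, []) if n in assignment}
        options = [ch for ch in channels if ch not in taken]
        if not options:
            raise RuntimeError(f"no interference-free channel for cell {cell}")
        choice = min(options, key=lambda ch: usage[ch])  # balance load
        assignment[cell] = choice
        usage[choice] += 1
    return assignment

cells = ["A", "B", "C", "D"]
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(assign_channels(cells, neighbors, channels=[1, 2, 3]))
```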
Posted by: Chad Pralle | July 06, 2010 at 11:13 PM