Much has been written about what constitutes the baseline for acceptable broadband network speeds. The U.S. National Broadband Plan includes the following recommendations:
Every household and business location in America should have access to affordable broadband service with the following characteristics:
- Actual download speeds of at least 4 Mbps and actual upload speeds of at least 1 Mbps
- An acceptable quality of service for the most common interactive applications
The FCC should review and reset this target every four years.
The latter – reviewing and updating the target regularly – is essential. What was broadband in the past is woefully inadequate today. Not long ago, a 1200-baud dial-up modem was an object of technological envy, and a 64 Kbps ISDN connection was beyond the reach of mere mortals. Some of us even remember acoustic couplers. Today, they are all artifacts of our Internet history.
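To make that shift concrete, here is a back-of-the-envelope sketch in Python (the 5 MB file size is my own illustration, and the rates are nominal, ignoring protocol overhead) of how long a single download takes at each era's speed.

```python
# Rough download-time comparison across connection eras.
# The 5 MB file size and nominal rates are illustrative assumptions.

FILE_BYTES = 5 * 1000 * 1000  # a 5 MB file

# Nominal link rates in bits per second, ignoring protocol overhead.
links = {
    "1200 bps dial-up modem": 1_200,
    "64 Kbps ISDN":           64_000,
    "4 Mbps broadband":       4_000_000,
}

for name, bps in links.items():
    seconds = FILE_BYTES * 8 / bps
    print(f"{name:24s} {seconds:>10,.0f} seconds")
```

The same file that arrives in about ten seconds at the plan's 4 Mbps baseline would have tied up that 1200-baud modem for roughly nine hours.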
Supercomputing and Internet History
Today's Internet traces its origins to the early ARPANET and its progeny, notably NSFNET. Although both the ARPANET and the early NSFNET were continent-spanning networks, the ARPANET backbone was based on 56 Kbps links. That remained the standard backbone speed even as NSFNET was deployed to interconnect the U.S. supercomputing centers.
I was at the University of Illinois when we began the NSFNET transition to a T1 (1.5 Mbps) backbone at NCSA. The iconic image of the early NSFNET T1 backbone, shown here, was the work of my former NCSA colleagues Donna Cox and Bob Patterson. The world changed quickly after that, with the deployment of a new T3 (45 Mbps) NSFNET backbone, the birth of the Mosaic web browser fueling the dot-com boom, and the rapid deployment of dark fiber.
By 2001, Charlie Catlett and I were negotiating access to four 10 Gbps lambdas for the NSF TeraGrid, connecting NCSA in Illinois and SDSC in California. We had initially pursued options to light ten lambdas (i.e., 100 Gbps) because we were worried that 40 Gbps might not be enough to support large-scale data sharing and high-fidelity scientific visualization. We were both right and wrong. The scientific data did grow, but the scientific applications took longer to develop.
At about the same time, when I picked the name TeraGrid for the distributed grid we were building to connect the NSF supercomputing centers, Ian Foster asked me if we might later regret using the "tera" prefix in a world where supercomputing performance changed so rapidly. I replied that I thought it would last long enough, and it has, far longer than I might have expected.
Though we are now deep into the petascale computing era, we are by no means in the terabit era. Although network backbone speeds have continued to rise, data volumes and user expectations have risen even more rapidly, both in scientific computing and in business and consumer web services. We are now building petabyte islands and content distribution networks (CDNs) connected by relatively slow multi-gigabit networks. I keep wondering when multi-terabit networking will be commonplace.
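To see why those islands feel isolated, consider a rough sketch in Python (assuming a single ideal link, fully utilized, with no protocol overhead) of how long one petabyte takes to move at various sustained rates.

```python
# Time to move one petabyte at various sustained link rates,
# assuming a single ideal link with no protocol overhead.

PETABYTE_BITS = 1e15 * 8  # one petabyte (10**15 bytes), in bits

for gbps in (10, 40, 100, 1000):  # multi-gigabit through terabit
    seconds = PETABYTE_BITS / (gbps * 1e9)
    print(f"{gbps:>5} Gbps: {seconds / 86_400:5.2f} days")
```

Even a 100 Gbps link needs the better part of a day per petabyte; only at terabit rates does the transfer shrink to a couple of hours.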
A Devices and Services World
Inexpensive broadband access has made possible Internet searches, cloud services, and always-connected PCs, tablets, and smartphones. It has transformed our society, shifted our notions of commerce, shaken governments, and brought McLuhan's global village within reach.
Thus, it is no exaggeration to say that broadband networks, wired and wireless, are the oxygen that lets this world of devices and services breathe. However, the air remains perilously thin. Billions of devices are being added to the network, data volumes are rising exponentially, and expectations for anytime, anywhere access are growing worldwide. For many, digital inclusion remains a dream, rather than a reality.
If we are to meet burgeoning demand, extend the benefits of broadband services to all of the world's population, and enable a new generation of data-rich services and experiences, we need to approach the problem of broadband deployment in new ways.
We need both top-down and bottom-up strategies that encourage innovation and more rapid deployment of last-mile connectivity. We must rethink our approaches to spectrum allocation and access, embracing more nimble and adaptive schemes. We must expand the capabilities of our backbone networks to meet demand.
Above all, we need to remember that the definition of broadband is ever shifting. We will always want more.