When Larry Smarr, the founding director of the National Center for Supercomputing Applications (NCSA), quietly confided to me in the fall of 1999 that he was moving to UCSD, I was Head of the Department of Computer Science in the midst of the first dot-com boom, itself enabled by NCSA Mosaic and the first web revolution. Netscape and its Illinois alumni were remaking Silicon Valley; Max Levchin had just founded PayPal while a student in the department; alumnus Tom Siebel was flying high with new CRM technology at Siebel Systems; and we had just celebrated HAL's birthday (from 2001: A Space Odyssey) with a massive Cyberfest event that attracted worldwide attention. ("I am a HAL 9000 computer, production number 3; I became operational at the HAL plant in Urbana, Illinois, on January 12, 1997.")
Students were beating down the doors, begging to get into computer science; we were hiring faculty at a frantic pace, unable to keep up with demand; venture capitalists were calling, and virtual reality (VR) was all the rage. In many ways, it was much like today's deep learning and VR boom, powered by cloud services and big data. As Yogi Berra might say, it's like déjà vu all over again, as we consistently underestimate the power of computing to reshape our society.
When Larry and I talked about the future and the NCSA directorship in the fall of 1999, I already had had a long association with NCSA. I had worked on joint research projects, and I had done some of the early visualization of web traffic when NCSA's web server had been the world's busiest. I was also the leader of the data management team – what we would now call big data – for the National Computational Science Alliance (the Alliance), which was anchored by NCSA as part of the NSF Partnerships for Advanced Computational Infrastructure (PACI) program. Based on that conversation with Larry, it was clear that both NCSA and my life were about to change in some profound ways. However, despite the universal sense that computing was a revolutionary force, none of us could have predicted just how profound those changes would be. I am very proud of the NCSA team and humbled by what they accomplished.
In the space of four years, we broke ground on three new buildings, deployed the first Linux clusters in national allocation by NSF, negotiated creation of a 40 gigabit per second transcontinental optical network for research, developed an early GPU cluster based on Sony PlayStation 2s, launched the NSF TeraGrid, and committed to support the Large Synoptic Survey Telescope (LSST). In between all that, I joined the President's Information Technology Advisory Committee (PITAC), there were national meetings (the High-End Computing Revitalization Task Force (HECRTF)) and Congressional hearings about the future of supercomputing, and plans for petascale systems set the stage for what would become Blue Waters. It was, in a very real way, an inflection point for the nature of computational science and scientific computing that continues today.
At the end of the 1990s, custom-built massively parallel processors (MPPs) such as the Thinking Machines CM-5 and SGI Origin 2000 dominated high-performance computing, having displaced vector machines such as the Cray Y-MP. As commodity PCs increased in performance, NCSA and the Alliance began experimenting with home-built Linux clusters, and in 2001, we deployed the first two of these – a one teraflop IA-32 cluster and a one teraflop IA-64 Itanium cluster.
As strange as it may seem now, with Linux clusters the unquestioned standard for scientific computing, this was then viewed as a radical and highly risky decision. People often asked me, "Who do you call for support when something fails?" and "Who is responsible for the software?" The answers, of course, were that we were galvanizing a community and building a new support model, as we reinvented the very nature of scientific computing. NCSA led the way, as it has throughout its history.
What did not happen is not as well known. We had an opportunity to launch a big data initiative, in partnership with Microsoft. In 2000, we secured a contingent $40M gift from Microsoft for a Windows-based data analytics system that would complement the Linux computational infrastructure, bringing database technology to high-performance computing. Alas, we did not receive the necessary federal funding. I will always wonder if we might have kickstarted the big data revolution a decade earlier. The irony is not lost on me that a few years later I would find myself leading the eXtreme Computing Group at Microsoft to design next-generation cloud computing infrastructure in support of deep learning and cloud services, using lessons and ideas from high-performance computing.
Despite the data analytics setback, the success of the two Linux clusters provided the evidence needed for us to propose and deploy the NSF TeraGrid, connecting NCSA, SDSC, Argonne and Caltech in the world's largest open computing infrastructure for scientific discovery. Anchored by NCSA and SDSC, and in partnership with Intel and IBM, the TeraGrid brought distributed Grid services into national production, and its subsequent expansion as the Extended Terascale Facility (ETF) created what became the NSF XSEDE program.
The TeraGrid also brought a dramatic increase in national bandwidth, as Charlie Catlett and I negotiated with Qwest to create a 40 gigabit per second wide area network that connected the Illinois and California TeraGrid sites. Our goal was to catalyze a new way of thinking about big data pipes. Ironically, Charlie and I almost nixed this deal, as we had been seeking an even faster 160 gigabit per second connection, only to settle for 40. To put this in perspective, at the time, the Internet2 transcontinental backbone operated at only 2.5 gigabits per second.
While system building, we also had great fun, made possible by the truly wonderful people who were and are the NCSA faculty, staff and students. We built (display) walls and played with smart badges for the SC02 conference. Donna Cox and her team created incredible videos of severe storms and galactic evolution that illuminated the beauty of scientific discovery. Rob Pennington and his staff bought PlayStation 2 systems on eBay and built a GPU-accelerated Linux cluster that caught the attention of the NY Times and helped usher in today's GPU clusters.
In the midst of all this, we ran out of space – offices and computing facilities. The University of Illinois committed to an expansion of the Advanced Computation Building (ACB) that allowed us to deploy the terascale Linux clusters and the TeraGrid. ACB had originally been built to house ILLIAC IV and served as the machine room for NCSA, but the new clusters required more power, more cooling and more space. We designed the ACB expansion with plans for yet another expansion, one rendered unnecessary by the construction of the National Petascale Computing Facility.
I must give enormous credit to the university leadership team, because they built the ACB expansion before we had secured the NSF funding for new machines. I made a handshake $100M deal with then-Provost Richard Herman: if he built the ACB expansion, NCSA would fill it with hardware. Both of us took a huge political risk, but we honored that commitment. It was just one event in a long history that reflects the repeated willingness of the University of Illinois to invest in the future.
Collectively, the team also secured funding for the NCSA office building and the new Siebel Center for Computer Science, creating an informatics quadrangle that continues to support student, faculty and staff discovery. During this time, I distinctly remember talking to Tom Siebel about the old Digital Computer Laboratory (DCL), which he described as looking like a prison. I remarked that he could help fix that, which began a long series of campus conversations that culminated in his remarkable $32M gift. Likewise, I was pleased that the NCSA building finally came to fruition, after many years of delays. It is not well known, but the two buildings were designed to allow them to be connected via an extension to the NCSA building and an arch extending from the northwest corner of Siebel Center.
It was an exhilarating and exciting time. None of it would have happened without an incredible team. We built on Larry Smarr's groundbreaking vision for a supercomputing center. We created machines, software and tools; we engaged researchers, scholars and artists; we invented the future. That is – and always will be – NCSA's mission. Congratulations on 30 great years!