As consumers, we have access to diverse software and a wide array of devices on which to use it. ("There's an app for that!") The virtuous cycle between consumer demand and application development ensures that the software ecosystem is rich and responsive. At the other extreme, on the world's most powerful high-performance computing systems, experienced teams of computational scientists create research applications that target some of our most complex and pressing problems: energy, environment and climate change, computational fluid and structural dynamics for building and aircraft design, and biological models that illuminate the future of personalized medicine.
Ah, but betwixt and between ubiquitous consumer software and the ethereal realm of ultra-high-performance computing lies the excluded middle, the world of day-to-day computational science problems. This "no man's land" lacks both the ready availability of application software suitable for solving problems in a host of domains (advanced manufacturing, materials analysis, and biomedical assessment) and the cadre of computational science experts who might develop custom software solutions. The result has been limited uptake of computational science by companies of all sizes.
The U.S. Council on Competitiveness launched a high-performance computing (HPC) initiative whose goal was to "… facilitate wider usage of HPC across the private sector to propel productivity, innovation and competitiveness." The initiative conducted several surveys and business roundtables to understand what would accelerate adoption of high-performance computing and computational science by business. The overarching insights from these studies were (a) the difficulty in using current HPC systems and (b) the dearth of individuals with the necessary skills to develop computational models.
I recently attended a workshop on advanced manufacturing, organized by the President's Council of Advisors on Science and Technology (PCAST), where the use of computational science was also a topic. The chief technology officers (CTOs) of major multinational companies echoed the same observations: they would love to exploit HPC for business advantage, but it is too difficult to use and the requisite talent too hard to find.
In other contexts, I have repeatedly been told by both business leaders and academic researchers that they want "turnkey" HPC solutions that have the simplicity of desktop tools but the power of massively parallel computing. Such desktop tools would allow non-experts to create complex models quickly and easily, evaluate those models in parallel and correlate the results with experimental and observational data.
Unlike ultra-high-performance computing, this market is about maximizing human productivity rather than extracting the largest possible fraction of HPC platform performance. Most often, users will trade hardware performance for simplicity and convenience. This is both an opportunity and a challenge: an opportunity to create domain-specific tools with high expressivity, and a challenge to translate the output of those tools into efficient, parallel computations.