It's a simple question, really: as a government research funding agency, given $N to support M scholars, where M is large and N is not large enough, how do you make wise and judicious investments? It is a multivariate optimization problem. Some investigators need large, expensive instruments to conduct leading-edge research and scholarship. Others can (and do) conduct breakthrough research with nothing more than pencil and paper. Some work alone, others work in teams of thousands, and there are always more good research ideas and active researchers than available funds. It is a global optimization problem, whether in the U.S., Europe, or Asia.
Moreover, all funding agencies have only fuzzy predictors of the likely success of proposed projects – after all, that's why it's called research, rather than development. Also, every funding agency knows from experience that scientific impact is not necessarily correlated with funding scale. Some small projects yield stunning results, and some large projects are pedestrian; the converse is also true. Put another way, these funding investments and outcome yields are described by power laws, with noisy data and uncertain, time-varying coefficients.
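To build intuition for that heavy-tailed behavior, here is a minimal simulation sketch. It is a toy model, not actual grant data: the Pareto tail exponent and portfolio size are illustrative assumptions, chosen only to show how a small fraction of projects can dominate total scientific yield.

```python
import random

random.seed(42)  # reproducible toy run

ALPHA = 1.5        # assumed power-law tail exponent (illustrative only)
N_PROJECTS = 1000  # hypothetical portfolio of funded projects

# Draw a hypothetical "scientific yield" per project from a Pareto
# distribution: most projects are modest, a few are spectacular.
yields = sorted((random.paretovariate(ALPHA) for _ in range(N_PROJECTS)),
                reverse=True)

total = sum(yields)
top_decile = sum(yields[:N_PROJECTS // 10])
print(f"Top 10% of projects account for {100 * top_decile / total:.0f}% "
      f"of total yield")
```

The heavier the tail (alpha closer to 1), the more extreme the concentration, which is one reason mean-based forecasts of research outcomes are so noisy.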
In each case, the goal is a distribution of limited research funds that maximizes scientific output, subject to three critical constraints. First, one must maintain community vibrancy via funding continuity (i.e., one cannot starve some disciplines to fund others). Second, and equally importantly, one must invest in research projects that also train new generations of scientists, lest disciplines fade away. Third, one must balance economic and national competitiveness objectives. This balance is especially pronounced when funding is limited.
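To make the allocation problem concrete, here is a minimal sketch of one naive formulation (mine, not any agency's actual process): greedily fund proposals by expected yield per dollar, after first reserving a continuity floor of one funded project per discipline. The disciplines, costs, and yield scores are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical proposals: (discipline, cost in $M, expected yield score).
proposals = [
    ("physics", 120, 300), ("physics", 8, 40),
    ("biology", 60, 90),   ("biology", 5, 35),
    ("cs", 15, 60),        ("cs", 3, 25),
    ("math", 1, 12),
]
BUDGET = 100  # $M available: far less than total demand

funded, spent = [], 0

# Pass 1 (continuity constraint): fund the cheapest proposal in each
# discipline so that no field is starved entirely.
by_discipline = defaultdict(list)
for prop in proposals:
    by_discipline[prop[0]].append(prop)
for field, props in sorted(by_discipline.items()):
    cheapest = min(props, key=lambda p: p[1])
    if spent + cheapest[1] <= BUDGET:
        funded.append(cheapest)
        spent += cheapest[1]

# Pass 2: greedily fund remaining proposals by expected yield per dollar.
remaining = [p for p in proposals if p not in funded]
for prop in sorted(remaining, key=lambda p: p[2] / p[1], reverse=True):
    if spent + prop[1] <= BUDGET:
        funded.append(prop)
        spent += prop[1]

print(f"Funded {len(funded)} of {len(proposals)} proposals; "
      f"${spent}M of ${BUDGET}M spent")
```

A real agency faces an integer program with fuzzy, time-varying coefficients and far softer constraints; greedy selection by yield-per-dollar is simply the most transparent stand-in.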
Given the current values of N and M in the U.S., no more than 5 to 10 percent of promising ideas can be funded in many domains. (I leave it as an exercise for the reader to compute the probability of successful funding after submitting K separate proposals, where each proposal review is independent and unbiased. From there, it's just a fancy ramble to random walks and diffusion processes.) Even more perniciously, given this highly competitive environment, the mean age at which many researchers receive their first single-investigator funding is now over 40 (i.e., a substantial fraction of their professional careers has already passed).
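For the impatient reader, the exercise has a compact answer: if each of K independent, unbiased reviews succeeds with probability p, the probability of at least one award is 1 - (1 - p)^K, and the expected number of submissions before the first award is 1/p. A minimal sketch, using the 5 to 10 percent success rates above as the assumed values of p:

```python
# Chance of at least one award after K independent submissions, each
# reviewed with an unbiased per-proposal success probability p.
def p_at_least_one_award(p: float, k: int) -> float:
    return 1.0 - (1.0 - p) ** k

for p in (0.05, 0.10):  # the 5-10 percent funding rates cited above
    print(f"p = {p:.0%}: expected submissions to first award = {1 / p:.0f}")
    for k in (1, 5, 10, 20):
        print(f"  K = {k:2d}: P(>= 1 award) = {p_at_least_one_award(p, k):.2f}")
```

At a 5 percent success rate, even ten independent submissions leave a researcher with only about a 40 percent chance of any award at all.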
Though the funding allocation question may seem superficially simple, its solution is remarkably complex. Indeed, it is the essence of science and national competitiveness policy – judiciously allocating funds to research projects with widely varying funding scales and yield outcomes. At the high end, instruments may cost hundreds of millions to billions of U.S. dollars, and at the extreme scale, humanity can invest in at most one such instrument in a given domain. The multinational Large Hadron Collider (LHC) is the poster child of such limits, albeit one that supports thousands of scholars and recently yielded evidence of the long-sought Higgs boson.
As scientific instruments grow in scale and complexity and the number of practicing scientists also expands, balancing single-researcher and small-group support against large, collaborative projects is of ever-growing importance. In some cases, U.S. funding agencies now find themselves facing an extraordinarily difficult choice: whether to terminate active and productive scientific instruments to fund new projects, or to limit "new starts" to sustain existing ones. All of this is occasionally exacerbated by cost overruns on large instrument construction (e.g., the James Webb Space Telescope), which squeeze disciplinary research budgets or even lead to project termination (e.g., the Superconducting Super Collider).
Nor are scientific disciplines immune to hype and "irrational exuberance" that sometimes skew investments. In Gartner hype cycle parlance, some disciplines experience both the peak of inflated expectations and the trough of disillusionment before (one hopes) ascending the slope of enlightenment.
After the discovery of high-temperature superconductors, the 1987 "Woodstock of physics" meeting reflected enormous research excitement and a subsequent redirection of research funding. Computing survived the dot-com bubble, albeit with many untoward consequences. Similarly, artificial intelligence is now booming, with dramatic research advances and widespread applications. However, it earlier suffered through multiple cycles of what has been ruefully called the AI Winter, due to over-promises in the 1970s and 1980s.
Finally, as I described in a 2014 essay entitled Quantifying Innovation and Investment, these science policy and priority questions are with us always. They are similar to those posed by the late Jack Marburger when he was director of the White House Office of Science and Technology Policy (OSTP):
How much should a nation spend on science? What kind of science? How much from private versus public sectors? Does demand for funding by potential science performers imply a shortage of funding or a surfeit of performers?
These big questions never change, but the answers do, and we must choose wisely as $N (funding supply) and M (number of potential researchers) change. That is the essence of science policy leadership.