The Journal of Things We Like (Lots)
Bhaven N. Sampat, Serendipity (Mar. 8, 2014), available at SSRN.

“Serendipity, the notion that research in one area often leads to advances in another, has been a central idea in the economics of innovation and science and technology policy.” Yet, Bhaven Sampat argues, “serendipity is a hypothesis, not a fact,” and it therefore needs to be tested. So Sampat set out to do just that, designing an impressive study to measure serendipity in pharmaceutical research. In this context, Sampat defines serendipity in terms of cross-disease spillover: results are serendipitous when research that was originally funded for one purpose (to target a certain disease) turns out to be useful for another purpose (the treatment of a different disease).

Sampat tests the serendipity hypothesis by comparing the disease foci of NIH grants to the research output from those grants. Because most of the NIH Institutes and Centers are explicitly disease-oriented, Sampat is able to compare the disease foci of the granting Institutes to the foci of publications that result from the grants, to citations to those publications in patents, and then to marketed drugs associated with those patents. Finally, Sampat focuses on the subset of drugs whose underlying patents are held by NIH-funded researchers themselves (the Bayh-Dole drugs). Publications, patents, and drugs with disease foci different from those of the granting Institute are deemed evidence in support of the serendipity hypothesis.

This approach to measuring serendipity is useful because, despite good evidence that spillovers in the innovation context have significant overall economic benefits, evidence of the kind of serendipity to which Sampat refers has been mostly anecdotal. Viagra, for instance, was originally developed as a blood pressure medication and only later discovered to be useful in treating erectile dysfunction.

Sampat ultimately finds strong evidence of serendipity by all of his measures. In particular, 50–60% of the publications resulting from grants were outside the granting Institute’s main disease area, and nearly 50% of drugs link back to a grant in a different disease area. The story is similar for drugs based on patents owned by NIH researchers. About 30% of those drugs link back to “basic” grants with no disease focus, and almost a quarter (23%) of the drugs that can be traced back to a disease-focused grant relate to a different disease area than the drug’s first indication.

It’s worth noting that, as Sampat acknowledges, this kind of cross-disease spillover isn’t necessarily a result of serendipity, which implies an unintended outcome. Some of the NIH Institutes explicitly fund basic research, which makes it harder to know whether research funded by those Institutes was actually intended for any specific purpose, let alone one that differed from the outcome. Some Institutes also fund research across several diseases, so it should not be surprising to find more diverse research outputs from those Institutes. It seems unlikely, however, that these factors fully explain the magnitude of cross-disease spillover that Sampat finds. Some significant amount of that spillover is very likely serendipitous.

Sampat’s research is quite timely, as there recently has been a renewed interest in innovation institutions beyond intellectual property, and government grants play an important role in the overall innovation ecosystem. But grants can be structured in a variety of ways, and differently structured grants will interact with other innovation institutions (including patent law) in different ways. Certain combinations of institutions will take us down one innovation path; other combinations will take us down a different path. One important consideration in choosing the mix of institutions is the extent to which we believe we can determine, ex ante, which specific research outputs we want, or whether instead we are better off choosing policies that leave open the greatest number of potential outcomes.

One way to read Sampat’s research is as evidence that policymakers simply can’t fully direct innovation because the research process is inherently unpredictable. On that view, less may be at stake in structuring innovation institutions than previously assumed – even when government grants are intended for particular ends, we can still expect to see a broader range of outputs simply because the research process is not linear. In this respect, those concerned about innovation policy being managed in an excessively top-down manner might have less to worry about than they feared.

But whether that is right depends a lot on whether these research outputs really were serendipitous, or simply look that way because researchers had to find a way to get their projects funded. To put it somewhat crudely, some applications for disease-focused grants might have been directed to particular Institutes simply because that is where the money was. The extent to which we should still worry about focusing research grants also depends on the range of serendipity that might have resulted under some less restrictive institutional arrangement – whether, for example, more research would have been undertaken that had no clear objective at all, or whether the research would have generated an even wider range of outputs simply because the researchers were aiming at a more diverse range of targets. In this sense, the range of targets might anchor the outputs such that, even if the results turn out to be broader than the focus of the grants, they are still more limited than they might otherwise have been.

Of course, none of this answers the question of how much we should want to focus innovation. For while it might be true that less restrictive institutional arrangements would allow for more serendipity, they might also make us less likely to get specific research outputs that we can be confident we want. We might, for example, get a wider range of cancer treatments, but less effective treatments for particular types of cancer. Which kind of system we should prefer is an important normative question that Sampat doesn’t try to answer. But we need a meaningful and empirically grounded sense of the tradeoffs involved in order to think clearly about our priorities, and Sampat has done us a great service by beginning to fill the empirical vacuum.

Cite as: Mark McKenna, Measuring Serendipity, JOTWELL (January 20, 2015) (reviewing Bhaven N. Sampat, Serendipity (Mar. 8, 2014), available at SSRN).