The Journal of Things We Like (Lots)
Ryan Hill & Carolyn Stein, Race to the Bottom: Competition and Quality in Science (Jan. 5, 2021).

Significant new technologies have often been invented nearly simultaneously, and some scholars have worried that patent law’s reward to the first filer creates incentives to race to the patent office rather than to refine the invention. Similar concerns have been voiced about competition for academic priority leading to rushed, low-quality publications. But measuring whether competition for IP or academic credit actually decreases quality has proven difficult, and this difficulty limits the usefulness of models of innovation races.

In a creative and important new working paper, Race to the Bottom: Competition and Quality in Science, economists Ryan Hill and Carolyn Stein tackle this empirical challenge. They focus on structural biologists, whose research deciphering protein structures has advanced drug and vaccine development (including for COVID-19) and led to over a dozen Nobel Prizes. Journals and funding agencies generally require structural biologists to deposit their structures for proteins and other biological macromolecules in a worldwide repository, the Protein Data Bank (PDB). Using this rich dataset, Hill and Stein have documented that structures with higher expected reputational rewards induce more competition and are completed faster—but at lower scientific quality. Recognizing and navigating this tradeoff is important for scholars and policymakers concerned with allocating rewards among competing innovators through policy instruments ranging from academic credit to intellectual property.

Three key features of the PDB make it a viable setting for this research. First, it offers objective measures of project quality: the quality of a PDB structure is measured by how well it fits the underlying experimental data, yielding quantitative, unbiased quality metrics. Second, it provides measures of project timelines. The authors could observe both the time between collecting experimental data and depositing a structure (a measure of project speed) and the time between a first deposit and the deposit of similar structures (a measure of competition). Third, it enables estimates of the expected reputational reward from winning the priority race to deposit a given protein structure. The PDB’s detailed descriptive data allow a structure’s potential to be estimated from information that would have been known to researchers before they began working, including the protein type, organism, and prior related papers.
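To make these timeline measures concrete, here is a minimal sketch of how the speed and competition measures might be computed from deposit records. The field names and dates are hypothetical illustrations, not the authors’ code or the PDB’s actual schema.

    # Illustrative sketch only: hypothetical fields, not Hill and Stein's
    # code or the PDB's real schema.
    from datetime import date

    deposits = [
        {"protein": "P1", "collected": date(2014, 3, 1),
         "deposited": date(2015, 1, 10), "quality": 0.42},
        {"protein": "P1", "collected": date(2014, 6, 1),
         "deposited": date(2015, 4, 2), "quality": 0.55},
    ]

    def speed_days(deposit):
        """Project speed: days from data collection to deposit."""
        return (deposit["deposited"] - deposit["collected"]).days

    def competition_days(deposits, protein):
        """Competition: days between the first deposit of a protein's
        structure and the next deposit of a similar structure."""
        dates = sorted(d["deposited"] for d in deposits
                       if d["protein"] == protein)
        return (dates[1] - dates[0]).days if len(dates) > 1 else None

    print(speed_days(deposits[0]))           # 315
    print(competition_days(deposits, "P1"))  # 82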

If scientists can choose whether to invest in a research project and how long to refine their work before publishing, then the projects with the highest potential reputational rewards should induce the most entry—but entrants concerned about being scooped may also rush to publish their work prematurely. And this is exactly what Hill and Stein find. Structures in the 90th versus the 10th percentile of the potential distribution induce more competition (30% more deposits), are completed faster (by 2 months), and have lower scientific quality (by 0.7 standard deviations). The fact that high-potential projects are completed more quickly suggests these results aren’t driven by high-potential projects being more complex. Additionally, the authors show that these correlations are smaller for scientists who receive lower reputational rewards from publication and priority: researchers at government-funded structural genomics consortia, who are focused on achieving a comprehensive protein catalog rather than publishing individual results.
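As a stylized illustration of the reduced-form comparison behind these estimates (the data and variable names below are invented for exposition, not drawn from the paper), one could regress an outcome such as quality on a structure’s estimated ex ante potential:

    # Stylized sketch, not the authors' specification: regress a toy
    # quality outcome on estimated structure potential with NumPy OLS.
    import numpy as np

    rng = np.random.default_rng(0)
    potential = rng.uniform(0, 1, 500)                  # ex ante potential
    quality = -0.7 * potential + rng.normal(0, 1, 500)  # simulated outcome

    # OLS of quality on potential (with an intercept).
    X = np.column_stack([np.ones_like(potential), potential])
    beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
    print(beta[1])  # negative slope: higher potential, lower quality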

The welfare implications of rushed, low-quality protein structures appear significant. Improving a structure generally requires inefficient reinvestment of the same costs expended by the original research team. But optimizing existing incentives is challenging. Hill and Stein consider increasing the share of credit allocated to the second-place team—such as through recent journal policies that treat scooped papers on equal footing with novel papers—and conclude that if the total rewards are fixed (as seems plausible with scientific credit), the quality improvement might be outweighed by decreased investment. As another option, they argue that both investment and quality could be improved by barring entry by competitors once one team has started working on a protein structure—a sort of academic prospect theory. Such entry restrictions were the norm in the early days of structural biology, before the field grew too large for them to be enforced. Importantly, this result depends on the specific nature of their model, in which quality differences are driven more by rushed work to avoid being scooped than by differences in the skill of the research team. Reintroducing this kind of entry barrier for academic research would be challenging (and problematic under antitrust law), but the result may inform debates over the optimal timing of awarding patent rights.

Hill and Stein’s rigorous empirical evidence that innovation races can lead to lower-quality scientific work is a welcome addition to the innovation-racing literature, not least because many racing models omit this consideration altogether. Their paper is also well worth reading for its thoughtful discussion of key factors for allocating rewards among competing innovators. First, how easy is it to build on incomplete work, both scientifically and legally? Unlike in structural biology, follow-on work is not always particularly costly; for example, if an ornithologist releases an incomplete dataset of bird species, a subsequent team can pick up the project relatively seamlessly, increasing the value of early disclosure. Second, how important are differences in research skill relative to the decline in quality caused by rushing? Ending innovation races early may be effective in structural biology, but in many cases, giving the first team time to complete its work well may not be worth the cost of preventing a better team from stepping in. Third, are rewards fixed? Creating additional academic credit may be difficult, but financial rewards—including government prizes and subsidies—can be used to increase the second team’s payoff without reducing the first’s.

Before reading this paper, I had thought about the problem of rewards for incomplete research primarily in terms of quality thresholds such as patentability criteria, but choosing a threshold that applies across projects of varying difficulty is challenging in practice. Hill and Stein have given me a richer understanding of the relevant variables and policy instruments for tackling this challenge, and I look forward to seeing the impact this work has on the innovation law community.

Cite as: Lisa Larrimore Ouellette, How Do Innovation Races Affect Research Quality?, JOTWELL (April 30, 2021) (reviewing Ryan Hill & Carolyn Stein, Race to the Bottom: Competition and Quality in Science (Jan. 5, 2021)), https://ip.jotwell.com/how-do-innovation-races-affect-research-quality/.