The Journal of Things We Like (Lots)

Innovation Policy Pluralism, or Innovation Policy Hybridism?

Daniel J. Hemel and Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. (forthcoming), available at SSRN.

In previous work, Daniel J. Hemel and Lisa Larrimore Ouellette explored the range of tools available to regulators interested in promoting innovation. (See Ted Sichelman’s jot.) While legal scholars addressing innovation policy frequently focus solely on patent law—in fact, the term “intellectual property” is often employed as a synecdoche to refer to the broader scholarly field of innovation policy—Hemel and Ouellette argued that viewing patents, prizes, grants, and tax credits as imperfect substitutes allows the public goods problem that underlies innovation policy to be solved in a variety of different ways, each with its own advantages and disadvantages.

In their most recent work, Innovation Policy Pluralism, Hemel and Ouellette push their earlier argument one step further. They again increase the number of tools in the innovation-policy toolkit by developing a divide-and-recombine approach to intellectual property and its quasi-substitutes. They argue that any given tool for promoting innovation has two “separate and separable” components. First, it has an innovation incentive or a “payoff structure for the producers of knowledge goods.” Second, it has an allocation mechanism that “establish[es] the conditions under which consumers can use knowledge goods.” Hemel and Ouellette provide a thorough, clearly argued, and convincing analysis of the combinatorial possibilities that arise from this finer-grained analysis of the components of innovation-policy regimes.

More specifically, Hemel and Ouellette identify three conceptually distinct types of combinatorial possibilities. First, matching combines components of different types: it allows the innovation-incentive component from one conventional innovation-policy tool to be coupled to the allocation-mechanism component from another. For example, if the government were to buy patents from innovation producers upon issuance at the market price and make the innovation freely available to the public, the innovation incentive would resemble the innovation incentive created by patent law (because it rewards producers with a fraction of the social value that their innovation generates, as measured by consumers' willingness to pay), but the allocation mechanism would resemble the open-access regime conventionally associated with prizes and grants. Second, mixing treats each of the components as a continuous rather than binary variable and allows amalgams of distinct components of the same type. For example, focusing on the innovation-incentive component, an upfront grant that does not cover the full cost of research and development may be combined with patent exclusivity that is weaker than conventional patent law provides. Third, layering recognizes that different types of innovation incentives and allocation mechanisms can be deployed at the national and international levels.

While they do produce and analyze the pros and cons of an array of specific proposals, Hemel and Ouellette view their primary contribution as a conceptual framework for talking about innovation policy in a more precise and nuanced manner. The framework helps us to both conceive new innovation-policy proposals and evaluate their normative merits. Just as importantly, it also helps us to understand the jumble of laws that currently promote innovation in the United States. To prove this latter point, Hemel and Ouellette use their framework in a final section to describe the innovation-policy regime that already exists in the pharmaceutical industry more succinctly and accurately than would be possible if taxes, grants, and patents were conceived of as siloed innovation-policy regimes.

To my mind, the article’s title is a bit misleading. (This is akin to praising with faint damnation, I know.) Hemel and Ouellette’s earlier work was about innovation-policy pluralism. There is no one-size-fits-all optimal solution to a public goods problem; different circumstances may call for different tools from the toolkit; there is value in recognizing the diversity of tools available for remedying an innovation-policy problem. Taking the argument to the next level, Innovation Policy Pluralism embraces what I think could more tellingly be labeled innovation-policy “hybridism.” Here, the very notion that there is a finite set of discrete tools in the toolkit becomes a contingent artifact of the conceptual framework conventionally used to think about innovation policy. By breaking these tools down into the components that they all share, the conceptual framework developed by Hemel and Ouellette reveals that we can match, mix and layer our way to hybrid tools that are more perfectly crafted to the specifications of any given innovation-policy problem.

Cite as: Kevin E. Collins, Innovation Policy Pluralism, or Innovation Policy Hybridism?, JOTWELL (July 17, 2018) (reviewing Daniel J. Hemel and Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. (forthcoming), available at SSRN), https://ip.jotwell.com/innovation-policy-pluralism-or-innovation-policy-hybridism/.

IP Norms’ Dark Side

Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018).

The early legal literature on law and social norms tended to paint a rosy picture. Social norms were generally depicted as an optimal set of organically developed rules, informed by the experience of a close-knit community, and thus superior to formal law as a way to regulate behavior. Later scholars came to realize that nothing guarantees the optimality of social norms: they may perpetuate practices that no longer make sense, or they may advance the interests of certain groups but not social welfare. In such cases, formal law holds the promise of overriding suboptimal norms and moving society to a better place.

A few years back, I wrote, together with Christopher Sprigman, one of the first papers on social norms in intellectual property law, in which we explored how standup comedians informally regulate the ownership and transfer of rights in jokes and comedic routines. While we thought that in that particular case the system of social norms largely produced good outcomes in terms of inducing creativity, we reached that conclusion only after reviewing the norms' advantages relative to copyright law and, importantly, their relative disadvantages as well. For example, we noted aspects of the norms system that we thought were unattractive for creativity relative to formal copyright law: the comedians' norms system does not recognize a term limit (so it discourages dissemination and reuse of old material), does not include a concept of fair use (so it discourages derivative and sequential creativity), and grants a scope of protection that exceeds what is available under copyright's idea/expression doctrine (so it discourages borrowing high-level concepts and themes from others).

Since we wrote our paper, the law and social norms field of IP has grown considerably, and it is, I believe, one of the most important developments in IP theory over the last decade, not to mention a fun and interesting one. Others have explored social norms that are copyright-like, patent-like, and trademark-like. More recently, the literature has moved toward more general claims and observations based on a rich array of norms systems and case studies. One of the latest and most important additions to this literature is the paper by Stephanie Bair and Laura Pedraza-Fariña, which argues that social norms more generally have a dark side: they can be counterproductive for creativity. In doing so, it builds on a substantial body of social science and psychology literature.

Bair and Pedraza-Fariña emphasize that anti-innovation norms are harmful because they tend to hinder breakthrough innovation, namely innovation that recombines knowledge across the boundaries of (often distant) creative communities and industries. Such norms create barriers between bodies of knowledge, reinforce the use of specialized, field-specific terminology rather than terminology shared across fields, and tend to be over-enforced.

Anti-innovation norms come in three types: research priority norms, methodology norms, and evaluation norms. Research priority norms determine which problems are worth artistic or technological attention. The dark side is that they implicitly suggest which problems are not worth members’ time and attention. This is particularly costly when norms discourage members from exploring intersectional problems.

Methodology norms dictate the proper ways, tools, and routines for solving the problems that community members have identified as important. Such norms can have negative consequences to the extent that they teach against the use of alternative, non-conforming methodologies. Intersectional problems pose a particular challenge, as they are often best addressed through a combination of methodologies.

Lastly, evaluation norms dictate how a creative community assesses the contributions of its members. Good work may be that which is patented, or published in certain fora, or meets some other community standard. Their dark side lies in reinforcing research priority and methodology norms and in discouraging cross-disciplinary innovation.

Two psychological biases explain why anti-innovation norms emerge and persist. First, the status quo bias describes a preference for the current state of affairs over change. Second, the conformity bias describes individuals' tendency to forgo exercising their own judgment in favor of following social norms.

Having illuminated social norms’ dark side, Bair and Pedraza-Fariña propose solutions. They believe that IP law, which affects the appropriability of the fruits of one’s labor, is limited in its ability to incentivize cross-boundary innovation. Instead, they emphasize the need to turn to other mechanisms that affect innovation, such as government grants and tax credits.

As the first general treatment of the topic, this article is important not only for drawing attention to, and advancing our understanding of, social norms' failures, their causes, and their effects, and for suggesting possible solutions to them, but also because the analysis opens the door to many potential follow-on research paths, of which I would like to suggest a few. First, the article reinforces the understanding that law and social norms are just two ways to foster innovation, and that each comes with relative, rather than absolute, advantages and disadvantages. Law and social norms affect not only how much is created, but also the type of innovation that takes place. For example, in standup comedy, early 20th-century norms treated copying and reuse as legitimate, and humor tended to be generic; late 20th-century norms made originality key, and the humor created tends to be personal, observational, and point-of-view driven. The law and social norms literature suggests that in some cases law and social norms may work in tandem and reinforce each other, but that in others they may crowd one another out. After recognizing that social norms have their disadvantages too, an additional step forward would be to develop a theory of the optimal mix of innovation law (IP and non-IP) and social norms.

Second, the authors focus on social norms’ dark side in hindering cross-boundary research. It is at least conceivable that social norms also have negative effects within disciplinary boundaries (as the last comedy example may suggest). Examining norms’ failure in other communities may suggest whether this is a more general point, and perhaps help to enrich the general theory of norms’ failure.

Third, and relatedly, since the article comes at a stage where a considerable number of industry-specific case studies already exist, it presents an opportunity to further contextualize and concretize the discussion. For example, for all or some of the communities explored in the literature, scholars could identify particular anti-innovation norms, map the categories they fall into, highlight the psychological biases that reinforce them, and propose the particular IP and non-IP reforms that would be most effective in countering them. Indeed, it seems that there is much to be gained from paying attention to anti-innovation norms in IP.

Cite as: Dotan Oliar, IP Norms' Dark Side, JOTWELL (June 26, 2018) (reviewing Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018)), https://ip.jotwell.com/ip-norms-dark-side/.

Does Copyright Have a Framing Problem?

Margot E. Kaminski & Guy A. Rub, Copyright’s Framing Problem, 64 UCLA L. Rev. 1102 (2017).

Numerous provisions of the Copyright Act of 1976 (“1976 Act”) use the term “work” as a key referent for determining copyrightability, ownership, scope of rights, limitations on scope, and remedies. Yet, Congress did not provide a general-purpose definition of what counts as a “work,” even though it defined a plethora of arguably much less important terms. When the parties in litigation explicitly or implicitly disagree about the fundamental issue of what the plaintiff’s or defendant’s work is, what is a court to do?

This is a big and important question. While Kaminski and Rub do not provide a full response, they do frame the problem and illustrate how it plays out in many contexts. They demonstrate that courts have considerable flexibility in how to define the relevant work. Sometimes, courts use this flexibility to “zoom in” on particular facets of, for example, a design on a carpet that mixes public domain and original elements. Other times they “zoom out” to consider a work’s total concept and feel. Courts rarely defend their framing of the relevant work, and when they do, they do not use a shared set of criteria to justify their choices.

Kaminski and Rub have three goals for this article. First, they review numerous cases that illustrate the range of legal issues for which a court’s framing choice is consequential. They show the inconsistency of judicial framing choices and the rarity of justifications of these choices. Second, they argue that the doctrinal flexibility judges have exercised is more a feature than a bug in copyright law because it supplements a court’s ability to tailor the scope of rights or remedies in particular cases to better achieve copyright’s goals. Third, they argue that courts should more self-consciously exercise this framing flexibility and provide a justification for their choices. The authors close with a set of criteria courts might rely on to provide such a justification.

In their survey of the choices that are available to courts in framing the definition of the “work,” the authors start with the cases involving copying that is not wholesale, so-called “non-literal infringement.” The legal test is whether the two works are “substantially similar,” and how this is applied depends greatly on how the plaintiff’s and defendant’s works are framed. One of the more difficult issues in this analysis involves whether a character is a work independent of the larger narrative in which the character appears, and if it is a work, how is that work defined when the character, say, Sherlock Holmes, or the Batmobile, evolves? The article briefly discusses how the Ninth Circuit “zoomed out” from the many iterations of the Batmobile to focus on its general characteristics in the Batman series to define the work in DC Comics v. Towle.1 As a reader, I wanted the authors to dive a little deeper into this issue to compare and contrast some of the other character cases.

The article then turns to questions of authorship, showing that courts have decided in motion picture cases to zoom out and treat the dominant author of the motion picture as the author, rejecting the theory that an actor creates a distinct work in a scene for which the actor is primarily responsible. The article then dives deeper into the useful article doctrine, arguing that the Supreme Court, in its enigmatic recent decision in Star Athletica LLC v. Varsity Brands, Inc.,2 sent conflicting signals about how to frame the design features and the utilitarian aspects by zooming in on the chevrons on a cheerleading uniform in parts of the discussion and zooming out to look at the uniform as a whole in others.

Turning to the framing tests courts have used, or could use, Kaminski and Rub argue that most courts use no test at all, citing as an example the inconsistent framing in cases involving photographs copied from collections of photographs. Courts sometimes ask whether something is a work or a constituent element by testing it for copyrightability, but the authors persuasively argue that this test does too little work. A market-based approach has some appeal, and some courts ask whether the putatively independent works can "live their own copyright life [sic]."3 A variant on this test is whether the putative work has independent economic value. The authors acknowledge the appeal and some utility of this approach, but they rightly caution that with increasing variety in the bundling and unbundling of works in digital form, market signals are less likely to provide stable guidance.

The closing section of the article argues that judicial flexibility in framing the work is beneficial because it allows courts to tailor protection and remedies in light of policy considerations such as the incentives-access tradeoff, managing transaction costs, and channeling creativity. As a result, permitting inconsistent framing depending on the doctrinal context provides some benefit. For example, small units may be copyrightable, and therefore works, for purposes of copyrightability; but if each copyrightable unit were eligible for a separate award of statutory damages, existing problems with proportionality in statutory damages would be greatly magnified.

The authors set an ambitious task for themselves, and their ability to range across the copyright caselaw is impressive. Two caveats: As a reader, I would have liked to see greater engagement with Justin Hughes's article on microworks,4 which the authors acknowledge they are building on, particularly when considering the available framing tests. I also think that the framing choices they discuss in the subsections on fair use, statutory damages, and Section 201(c) are qualitatively related but distinct, because the decisions in the cases discussed there turned more on choosing which work – the collective work or the contribution to the collective work – to focus on than on whether to zoom in or out on aspects of a work, as in the substantial similarity or useful article cases. That said, Kaminski and Rub make a persuasive case that courts should more explicitly identify and justify their framing choices about the relevant work(s) when applying the wide range of doctrines tied to the "work."

  1. 802 F.3d 1012 (9th Cir. 2015).
  2. 137 S. Ct. 1002 (2017).
  3. Gamma Audio & Video, Inc. v. Ean-Chea, 11 F.3d 1106, 1116 (1st Cir. 1993).
  4. Justin Hughes, Size Matters (or Should) in Copyright Law, 74 Fordham L. Rev. 575 (2005).
Cite as: Michael W. Carroll, Does Copyright Have a Framing Problem?, JOTWELL (May 29, 2018) (reviewing Margot E. Kaminski & Guy A. Rub, Copyright’s Framing Problem, 64 UCLA L. Rev. 1102 (2017)), https://ip.jotwell.com/does-copyright-have-a-framing-problem/.

When Social Norms for Innovation Don’t Maximize Social Welfare

Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018).

For more than a decade, intellectual property scholars have demonstrated how informal social norms can efficiently substitute for formal legal protections to promote creativity and innovation. Now come Stephanie Bair and Laura Pedraza-Fariña to show that social norms can have a less happy effect. In some instances, they argue, informal social norms may impede innovation. Their article, Anti-Innovation Norms, explores a selection of these norms and what might be done about them.

According to the standard economic story, IP rights exist to solve a public goods problem. In the absence of IP rights, rampant copying would reduce the ability of creators to recoup their investments, and society would face underinvestment in new works and inventions. IP rights solve this problem by enabling creators to charge supramarginal prices for those works and inventions. Although IP rights are costly, they produce net benefits by encouraging creativity that we wouldn’t otherwise have.

Yet a still-expanding list of studies has shown that in many industries, including fashion design, cooking, stand-up comedy, and tattooing, the lack of formal IP protection has not killed creativity. In these and other fields, robust social norms that promote creativity and attribution and that shun certain kinds of copying can replace formal legal protection more flexibly and at lower social cost. Norms can make free riding costly, and they support those engaged in creative work.

Professors Bair and Pedraza-Fariña do not gainsay the importance of social norms to limit the effects of free riding in many low-IP fields, but they suggest that scholars’ focus on free riding has caused them to miss other, less salutary, social norms that affect innovation. In particular, Professors Bair and Pedraza-Fariña demonstrate the existence of a group of norms that undermine an especially valuable source of innovation—disciplinary boundary crossing. While many of the most important innovations in science, technology, and the arts come from the merging of insights from previously unrelated fields, creators can face substantial disincentives to engage in boundary-crossing work.

Professors Bair and Pedraza-Fariña have, themselves, not been dissuaded from crossing disciplinary boundaries to substantiate their account. They turn first to sociological studies of science and the arts in order to identify three anti-innovation norms: research priority norms, methodology norms, and evaluation norms. These norms tell people within a given field (1) to work on projects that the field deems valuable, (2) to apply methods deemed acceptable to those projects, and (3) to judge the value of contributions to the field on terms that the field prefers. Drawing on examples from physics, medicine, and contemporary visual art, the authors demonstrate how anti-innovation norms create field-specific professional and reputational costs that discourage creators from pursuing potentially more valuable opportunities at or beyond the borders of their fields.

Continuing their own boundary crossing, Professors Bair and Pedraza-Fariña next turn to social psychology to explain possible sources of anti-innovation norms. They focus on two aspects of individual decision-making, the status quo bias and the conformity bias, as the likely culprits. For a variety of reasons, people tend to be strongly attached to the ways things are now, and it can be surprisingly difficult to get people to switch away from their present status. In addition, people generally like to conform their behavior to those in their social groups. Conformity aids group membership and solidarity and the benefits they confer. The authors speculate that these two tendencies can make it harder for individual creators to break away from accepted paths and procedures and to risk forging their own identities.

Having established the existence of anti-innovation norms, Professors Bair and Pedraza-Fariña return to their disciplinary home to consider whether law can do anything to solve the problems they've articulated. In large part, they are skeptical of IP law's ability to encourage boundary-crossing innovation in particular, because copyright and patent tend to reward innovations without regard to whether they combine diverse insights. They argue that IP's reliance on decentralized market signals, which is normally considered a feature, can also be a bug when those signals are filtered through a norm-inflected view of the technological or artistic environment. Although the authors suggest some tweaks to IP doctrine, their policy proposals focus instead on using grant funding and tax incentives to encourage and reward boundary-crossing creativity.

Professors Bair and Pedraza-Fariña have helpfully reminded IP scholars that social norms are not always a good thing for creativity. In addition to the norms against boundary crossing, other norms may discourage certain kinds of creators or certain kinds of creativity from emerging. As we develop a richer view of the sociology and psychology of innovation, we should pay attention to the costs of informal ordering as well as to its benefits.

Cite as: Christopher J. Buccafusco, When Social Norms for Innovation Don’t Maximize Social Welfare, JOTWELL (April 30, 2018) (reviewing Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018)), https://ip.jotwell.com/when-social-norms-for-innovation-dont-maximize-social-welfare/.

Designing Design Patent Subject Matter

Sarah Burstein, The Article of Manufacture in 1877, 32 Berkeley Tech. L.J. 1 (2017).

Designs for articles of manufacture are eligible for design patent protection under section 171 of the Patent Act if they are “new, original and ornamental.” As is clear on the face of this provision, design patent subject matter is limited to designs for “articles of manufacture”—other types of design, including architectural design, would seem to be excluded. But what exactly is included? What is a “design for an article of manufacture”? In an important new article, The Article of Manufacture in 1877, Sarah Burstein provides critical historical context on that question.

Courts and the Patent Office have in recent years given “article of manufacture” a very broad interpretation. They have, for instance, accepted as design patent subject matter the design of a spray pattern for a water fountain, the design of the shank portion of a drill bit, and graphical user interface (GUI) designs (so long as they are depicted in a little rectangle to indicate—nominally—that they show up on a screen). As a subject matter threshold, then, the article of manufacture requirement has in recent years excluded very little from design patent protection.

But the Supreme Court's decision in Apple v. Samsung made the relevant article of manufacture central to determining damages, putting new pressure on design patent law's understanding of that concept. The issue in Apple v. Samsung was the appropriate way to calculate damages in a case involving a "partial design"—that is, a case in which the design patent claimed only part of the article to which it was applied. Design patent has a special damages rule under which an infringer is "liable to the owner to the extent of his total profit, but not less than $250." 35 U.S.C. § 289. Congress added that rule to the Patent Act after the Supreme Court's decision in Dobson v. Hartford Carpet Co., 114 U.S. 439 (1885), in which the Court refused to allow the owner of a patented carpet design to recover the full profits from the sale of infringing rugs. According to the Court, profits attributable to the design had to be distinguished from those attributable to the "intrinsic merits of quality and structure" of the article. Thus, to recover the entire profits from the sale of articles bearing the patented design, a patentee had to show that those articles sold for higher prices than articles without the design. Dobson, 114 U.S. at 444.

Since there is little dispute that Congress intended § 289 as a rejection of the result in Dobson, courts have long understood that section to prohibit apportionment of damages. In the view of the Federal Circuit, that meant that damages in a design patent case were calculated based on sales of the entire commercial product, even if the design patent at issue claimed only a small part of the design of that product (say, the rounded corners of a device, to pick a not-so-random example). But as the Supreme Court seemed to understand, that approach implicitly attributes zero value to the function of the device or any of its other features, a consequence that seems indefensible, particularly in the context of something so complex as a smartphone. Indeed, applying its “no apportionment” rule, the Federal Circuit upheld a jury award of Samsung’s entire profit on the sale of the infringing smartphones, even though those devices employed hundreds or thousands of other patented and unpatented components.

The Supreme Court purported to solve this problem by focusing on the meaning of “article of manufacture.” While the statute requires courts to award the defendant’s “total profits,” those profits must be determined in relation to the relevant article of manufacture to which the patented design was applied. Crucially, and contrary to the Federal Circuit’s assumption, the article of manufacture need not be the entire commercial product sold to consumers; it might be something less than that. So far, so good. Unfortunately, the Court gave no guidance as to when courts should treat the relevant article of manufacture as the commercial product or as some smaller unit, nor did it explain how one should identify the relevant article if it is less than the product as a whole.

It seems likely, however, that the claim will prove important here: Because a design must be for an article of manufacture, the article of manufacture will be defined in relation to the claimed aspects of the broader article shown in the drawings. So understood, “partial design” is a misnomer—patents that claim only a component of the article are not partial designs of an article; instead the article is, at least in some cases, defined by reference to the claim.

If it can be operationalized, the Court’s reconceptualization of the article of manufacture may prove an elegant solution to the remedies problem. But as Burstein shows in her article, if history is our guide to the proper definition of “article of manufacture,” there is more the Court could have done here. For while it’s true that not all articles of manufacture are complete commercial products, neither should all components be considered articles of manufacture.

Surveying early case law, Burstein concludes that “article of manufacture” was a term of art that did not simply refer to any article that was manufactured. (P. 42.) “Instead, it referred to a tangible item—other than a machine or a composition of matter—with a unitary structure made by humans and complete in itself for use or for sale.” (P. 43.) That definition didn’t exclude all components; indeed, several of the cases Burstein cites involve design patents on parts. But only those components that were manufactured separately and identifiable as discrete parts (even if ultimately combined into a bigger commercial product) counted.

That’s obviously not a self-applying rule. Among other things, its workability depends on courts’ ability to identify the claimed design as a relevant part of a product and not just as a feature of the product. That may sometimes be quite difficult. Still, conceiving of the article of manufacture in this way frames the inquiry very differently, and the implications go far beyond remedies. As Burstein highlights, the remedies problem is illustrative of a much bigger issue: Because the PTO and the courts have paid so little attention to what should count as a relevant article of manufacture, they have extended design patent protection to a number of designs that are subject to the Apple v. Samsung problem. Rather than solving it at the remedies stage, perhaps the better approach is to reconsider the scope of design patent subject matter.

Rethinking subject matter would go a long way toward addressing a number of other problems in design patent law. Accepting that one can patent small portions of the design of an article has made claiming more mysterious. That, in turn, has made it difficult for courts to compare claimed designs to the prior art to assess novelty and obviousness. It has exacerbated the difficulties courts face in giving a valid design its proper scope. Burstein acknowledges that there may be reasons to depart to some degree from the 1877 approach, particularly in light of the range of designs now impacted by design patent. But we can do so, she argues, without surrendering the idea that the article of manufacture has limiting content—the “article of manufacture” cannot just be whatever the claimant suggests it is. That’s a lesson that could profoundly affect all of our thinking about design patents for years to come.

Cite as: Mark McKenna, Designing Design Patent Subject Matter, JOTWELL (March 22, 2018) (reviewing Sarah Burstein, The Article of Manufacture in 1877, 32 Berkeley Tech. L.J. 1 (2017)), https://ip.jotwell.com/designing-design-patent-subject-matter/.

What Is the Payoff from Public R&D Investments?

Pierre Azoulay, Joshua S. Graff Zivin, Danielle Li & Bhaven N. Sampat, Public R&D Investments and Private-Sector Patenting: Evidence from NIH Funding Rules (revised 2017), available at NBER.

Intellectual property scholars have increasingly recognized that IP is only one of many legal tools for incentivizing innovation. In addition to facilitating transfers from consumers to innovators through a “shadow tax” on IP-protected goods, the U.S. government also directly rewards innovators with public funds through R&D tax incentives, innovation prizes, and—most significantly—well over $100 billion per year in direct federal and state R&D support. This direct public science funding, which primarily goes to grants and national laboratories, has long been viewed as important to U.S. economic growth. But federal R&D spending has been declining and is the subject of an increasingly partisan divide, making this a key moment to ask: What is the public’s return on this investment?

In an outstanding empirical analysis, corresponding author Danielle Li at MIT Sloan and her three coauthors—Pierre Azoulay at Sloan, Joshua Graff Zivin at UC San Diego Economics and Public Policy, and Bhaven Sampat at Columbia Public Health (collectively, “AGL&S”)—have tackled this question for grant funding by the U.S. National Institutes of Health (NIH). With a budget of over $30 billion per year for biomedical research, the NIH is the single largest research funder in the world. But assessing the causal impact of this investment is difficult, even when focusing only on its effect on private-sector patents. How can one measure information flows in fields full of serendipity and spillovers? Are grants merely correlated with private-sector advances? Does public funding “crowd out” private investment? AGL&S’s empirical design makes progress on each of these issues, and they conclude that each $10 million in NIH funding in fact generates 2.7 additional private-sector patents.

Prior work has studied outcomes in specific areas of NIH grant funding (which misses advances in different fields) or citations to university patents (which misses grant-funded academic results that are not patented). Instead, AGL&S look to publications resulting from these grants, and then to patents that cite these publications, which have been shown in a validation against survey results to be a better measure of the influence of public sector research than patent-to-patent citations. Out of the 153,076 NIH grants funded between 1980 and 2005, about 40% led to a publication cited by a patent; and out of the 232,276 private-sector life-science patents issued by the USPTO between 1980 and 2012, about 40% directly cited an NIH-funded publication.

A second empirical challenge is that public funding may target the most promising research areas, so increased grant funding might simply be correlated with increased patenting in that area rather than causing it. AGL&S take on this problem in two ways. First, they classify grants by disease (e.g., cancer), scientific approach (e.g., cell signaling), and time (e.g., 1990), allowing them to include fixed effects to account for these potential sources of endogeneity in funding. Second, they find effectively random variation in funding at the disease/science/time level based on grant applications whose review scores were just above or just below the NIH’s funding cutoffs, and they use the “windfall” funding received by a disease/science/time above the cutoffs as an instrument for total funding. Under both approaches, they find that additional funding increases the number of patents that cite NIH-funded publications.

The third challenge noted above is that grant-funded research might crowd out private-sector funding, such that the increase in private-sector follow-on patenting is accompanied by a decrease in patents that do not rely on NIH-funded results. To be clear, substitution of patents with grants is not necessarily problematic from an economic perspective: both are costly, and neither is optimal in all circumstances. But AGL&S show that grant funding spurs rather than substitutes for private-sector R&D. To study this effect, they use a separate dataset of patents that cite publications related to NIH-funded publications (using a keyword-based similarity measure). If NIH funding merely crowded out private research, the authors argue, one would not expect the award of an NIH grant to increase the number of patents in a research area; instead, they find an increase here as well.

These results come with a number of caveats, the most important of which is that AGL&S focus on the outcome of only private-sector patenting, which ignores the many other benefits of public biomedical research. In the life sciences, more innovations are probably captured by the patent system than in other fields, but there are certainly medical innovations such as ICU hygiene checklists that have enormous social benefit that is not easily excludable with patents. This problem would be exacerbated in other areas of research, which is one reason that it would be difficult to replicate AGL&S’s analysis for grants from other funding agencies.

Of course, even within the life sciences, turning this result into a rate of return is difficult. AGL&S’s back-of-the-envelope calculations lead to tentative estimates on the order of a $20–$30 million increase in firm market value for each $10 million NIH grant. The true social rate of return might be significantly higher (e.g., if social value is significantly higher than firm market value, or if firms often build on uncited knowledge) or lower (e.g., if patents frequently cite to papers that are not real intellectual influences). But even if one focuses only on the results that can be determined with more precision—the positive effect on private-sector patenting and the lack of measurable crowding out—the results in this paper are important data points for the increasingly polarized discussion of direct government R&D funding.

Cite as: Lisa Larrimore Ouellette, What Is the Payoff from Public R&D Investments?, JOTWELL (February 28, 2018) (reviewing Pierre Azoulay, Joshua S. Graff Zivin, Danielle Li & Bhaven N. Sampat, Public R&D Investments and Private-Sector Patenting: Evidence from NIH Funding Rules (revised 2017), available at NBER), https://ip.jotwell.com/payoff-public-rd-investments/.

Will Algorithms Take the Fairness Out of Fair Use?

Dan L. Burk, Algorithmic Fair Use, U. Chi. L. Rev. (forthcoming), available at SSRN.

As part of an invited symposium, organized by The University of Chicago Law Review, on whether artificial intelligence will spell the end of one-size-fits-all laws, Dan Burk has written a terrific essay explaining why he’s skeptical that AI or machine learning will lead to better copyright fair use decisions. In the essay, Algorithmic Fair Use, Professor Burk identifies three main bases for his concerns.

First, Professor Burk is skeptical that American fair use law, which is articulated as a relatively open-textured standard (as compared with U.K.-style “fair dealing” provisions that set out a laundry list of tightly specified circumstances in which portions of a copyrighted work may be used without permission), could ever be reproduced with much fidelity as a set of software rules. The resistance of American fair use to ruleification, and therefore to automation, runs deep – not least because the content of any fair use rule depends upon antecedent considerations that are themselves resistant to distillation into rules:

Determining the impact of the unauthorized use of a work on the actual or potential market for the underlying work requires a model of the market and decisions about the data that properly populate that model. The amount of the work used can be mapped to the percentage of lines or words or pixels or bits taken for a given use, but some weight or significance must be accorded that number, whether defined by explicit programming values or by algorithmically learned data patterns. The type of work used and the use to which the protected taking is put require some categorization of works and uses. These and a multitude of other design choices made in advance would determine the allowance or disallowance of uses for protected content; algorithms do not make judgments; they are rather the products of human judgment.

Second, and perhaps more importantly given the direction of technology at the moment, Professor Burk is skeptical of fair use automation through machine learning. Machine learning does not depend on ruleification but would instead seek to determine whether a use was fair by comparing it to patterns that correlate with uses judged to be fair within a large dataset of unauthorized uses. But a machine’s ability to produce relevant and reliable results through pattern matching presumes that the machine has been fed a dataset that is representative of the fair use determinations, and the facts underlying them, as they exist out in the world.

Getting the dataset right, Professor Burk argues, is likely to be expensive and difficult. But the problem runs deeper than just cost or the technical difficulties of assembling a reliable dataset. The fundamental conceptual difficulty is that the output of a machine learning algorithm is just a correlation. It isn’t a judgment about whether that correlation is meaningful. In an entertaining but important aside, Professor Burk refers to a famous instance where data mining showed a strong correlation between movements in the S&P 500 stock index and the production of butter in Bangladesh. In that case, he notes, “a human decisionmaker is required to designate the trend as spurious rather than meaningful.” The same would be true of fair use determinations made by a machine learning algorithm – human intervention would be required to check that the machine’s output makes any sense outside the confines of the machine’s dataset.

Third, and finally, Professor Burk is wary of proposals to automate fair use because he fears that encoding fair use into the operation of machines will shape human expectations and behavior in ways that are difficult both to predict in advance and to contest ex post. He outlines this reservation in part by quoting from a video creator's writings describing how Google's Content ID system has shaped creativity on YouTube:

“You could make a video that meets the criteria for fair use, but YouTube could still take it down because of their internal system (Copyright ID) [sic] which analyzes and detects copyrighted material. So I learned to edit my way around that system. Nearly every stylistic decision you see about the channel — the length of the clips, the number of examples, which studios’ films we chose, the way narration and clip audio weave together, the reordering and flipping of shots, the remixing of 5.1 audio, the rhythm and pacing of the overall video — all of that was reverse engineered from YouTube’s Copyright ID. I spent about a week doing brute force trial-and-error. I would privately upload several different essay clips, then see which got flagged and which didn’t. This gave me a rough idea what the system could detect, and I edited the videos to avoid those potholes.”1

Of course, machines are not the only mechanism for shaping behavior. That’s what law does too; indeed, that is the very point of having laws. An advantage of the conventional legal system is that when laws and legal reasoning are more readily accessible and comprehensible, they are more easily contested. The inscrutable outputs of trade-secret-protected algorithms or invisible data sets, by contrast, are likely to obscure the ways in which law shapes behavior. In the end, Professor Burk is profoundly pessimistic: “[I]mplementation of algorithmic fair use,” he says, “will inevitably, and probably detrimentally, change the nature of fair use.”

I am not so sure that we know enough yet to judge whether Professor Burk’s intuition is right. It does seem likely that automation will create pressure to “ruleify” fair use, that is, to turn it into a more elaborated version of U.K.-style fair dealing. But what is our normative takeaway if that happens? Is ruleified fair use, where enforcement is done cheaply by machines, necessarily worse than our current fair use standard?

Current American fair use law is more flexible than any set of imaginable fair use rules, yet (in part because of that flexibility) enforcement is expensive and undertaken only on the comparatively rare occasions where a user has both the incentive and the means to engage in federal court litigation. Thus, fair use as we know it in the U.S. is flexible, but inaccessible.

Ruleified fair use administered by machines promises to solve the accessibility problem. But will that gain come only at the expense of a bowdlerized set of fair use rules? That depends in part on who would be making the rules that automation demands, and what the process looks like for creating new rules. Would the rule-maker be open to input from users as well as content owners? And would the rule-maker be obliged to periodically revisit the rules to make sure that new exceptions could be added as needed, and exceptions that had proved ill-advised removed?

These are among the important questions that Professor Burk’s provocative essay raises, and they should command the attention of the copyright academy in the years to come.

  1. Tony Zhou, Postmortem: Every Frame a Painting, Medium (Dec. 2, 2017), https://medium.com/@tonyszhou/postmortem-1b338537fabc.
Cite as: Christopher J. Sprigman, Will Algorithms Take the Fairness Out of Fair Use?, JOTWELL (January 15, 2018) (reviewing Dan L. Burk, Algorithmic Fair Use, U. Chi. L. Rev. (forthcoming), available at SSRN), https://ip.jotwell.com/will-algorithms-take-fairness-fair-use/.

Questioning Trade Secret Privileges in Criminal Cases

Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. (forthcoming 2018), available at SSRN.

How pervasive are software-based information technologies in the criminal justice system these days? And what role does trade secrecy law play in blocking defendants’ access to technical information about the design of these technologies from which to assess their objectivity and accuracy?

The short answer to both questions is much more than you may have expected, as you will learn when you read Wexler’s fascinating and powerful new article on intellectual property in the criminal justice system. Among other things, the article shows just how ubiquitous these technologies have become in virtually every stage of the criminal justice system: from police investigations to risk assessment for bail setting to evidence prepared for or presented at trial to sentencing and parole decisions. When defense lawyers raise questions about the accuracy of the outputs of these technologies and seek access to source code or other technical documentation, courts have been quite receptive to arguments that non-disclosure is justified because the information is a trade secret. Wexler’s thesis is that there should be no trade secrecy privilege in criminal cases. Period. Protective orders can adequately protect any trade secrets implicated by a criminal defense discovery request.

An especially interesting part of the article comes from its revelations about the evolution of trade secrecy privileges in civil and criminal cases. Wexler did considerable archival research and, among other things, discovered that courts initially recognized no such privilege. Getting at the truth was more important than protecting a witness’ secrets. When courts did begin to take trade secrecy into account, the initial rationale for doing so was to shield witnesses from self-incrimination (i.e., if revealing a commercial secret in court would put the witness at risk for civil or criminal liability).

In the early twentieth century, a lively debate cropped up between evidence treatise author John Wigmore and Judge Learned Hand over whether a trade secrecy privilege should be recognized in civil litigation. Hand opposed such a privilege because he valued truth-seeking over protection of commercial secrets. Wigmore, by contrast, supported legal recognition of a trade secret privilege. During the course of an American Law Institute (ALI) model code project on evidence, members of the ALI were split on the issue, some favoring Hand's view and some favoring Wigmore's. Eventually the model code adopted Wigmore's position, perhaps in part as a result of technology industry lobbying. But even then, the privilege extended only to the non-disclosure of trade secrets in civil litigation.

There is obviously a much stronger rationale for recognizing a trade secret privilege in civil cases. These lawsuits often involve disputes between competitors or would-be competitors in the commercial marketplace. Fair competition would be undermined if, for example, the very trade secrets at issue were dissipated in a litigation aimed at protecting them. But as Wexler points out, that risk does not exist in criminal cases because defendants and defense lawyers who seek access to secret information about technologies that police or prosecutors are using to arrest, convict, and punish these defendants are unlikely to become commercial competitors of the trade secret claimants.

Not until 1991 did any state have a trade secrecy privilege for evidence proffered in criminal cases. In that year, the California legislature modified its evidence statute to recognize such a privilege. The goal of this change, though, was to prevent public disclosure of the secrets in the course of criminal proceedings, not to serve as a limit on what information defendants could obtain about technologies being used to convict or punish them.

The first appellate court decision applying this trade secret privilege to block defense efforts to obtain disclosure of information used to convict a defendant was not rendered until 2015. A lawyer for Martell Chubbs asked a court to order disclosure of the source code of a forensic program that had produced evidence used to support Chubbs’ conviction in a death penalty case. The trial court agreed with Chubbs’ lawyer that unless the defense had access to this code, Chubbs would be denied the right to confront and cross-examine relevant witnesses against him. The developer of the software, however, refused to turn over the source code on the ground it was a trade secret. On appeal, a California court overturned the disclosure order on the ground that the source code was a trade secret, which could be disclosed only if the defense met a high standard to justify such a disclosure. The Chubbs case has influenced courts in some other jurisdictions, although a few courts have been more receptive to source code disclosure requests.

Those who support the existence of a trade secret privilege for source code and other technical documentation argue that without it, there will be too little incentive for developers to invest in the creation of software that can produce objective evidence that will make criminal trials more accurate and effective. Wexler challenges this argument on several grounds. She explains various limits that already substantially curtail discovery in criminal cases, the discretion courts have to deny abusive discovery requests, and the availability of protective orders to protect secrets that have commercial value. She argues that greater transparency of technical information about forensic and other technologies used in the criminal justice system might even spur more innovation, as companies will no longer be able to overprotect their programs through aggressive trade secret claims. Wexler points to other mechanisms, such as prizes and government grants, that, as Natalie Ram has shown in another stimulating article, Innovating Criminal Justice, could support the development of criminal justice system software.

Wexler argues that recognition of the trade secret privilege in the criminal justice system is both unnecessary and harmful. Because this privilege is a creature of the common law in most states, the ongoing common law process can rectify the mistaken belief that the trade secret privilege is sound and has a long pedigree. She offers a vision about how to move forward with better public policy on this important but mostly unnoticed dimension of our criminal justice system. This article opens up a welcome new line of research in the intellectual property field.

Cite as: Pamela Samuelson, Questioning Trade Secret Privileges in Criminal Cases, JOTWELL (December 1, 2017) (reviewing Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. (forthcoming 2018), available at SSRN), https://ip.jotwell.com/questioning-trade-secret-privileges-in-criminal-cases/.

Creative Communities and Intellectual Property Law

Betsy Rosenblatt, Belonging as Intellectual Creation, 82 Mo. L. Rev. 91 (2017).

In late November 2016, recreational knitters Krista Suh and Jayna Zwieman conceived of The Pussyhat Project — a way for knitters and crocheters to participate in the January 21, 2017, Women’s March on Washington by creating a simple hat for marchers to wear. To facilitate the project, there was a website (featuring several patterns for free download, the first created by yarn store owner Kat Coyle), an Instagram account, and a hashtag. There wasn’t, however, a focus on a particular level of output. Rather, the goal of the project was to foster community through creative work, building on existing networks of knitters and highlighting the ways in which knitting circles are often “powerful gatherings of women, a safe space to talk.”

The community’s boundaries were porous and self-policed. Anyone was welcome to claim membership; the only requirement was to create or be the recipient of creation. Although the basic form of the hat was loosely defined — pink in color and rectangular in shape — individual knitters were free to stylize their hats in any way they wished. Patterns were freely shared, and distribution took place via a voluntary infrastructure. The community that resulted produced tens of thousands of hats in two months, and representative hats now reside in the collections of major museums across the country.

Scholars will undoubtedly have much more to say about this movement as its history is written, including critiques involving, inter alia, race, class, gender identity, and the sociology of protest movements. For now, the project is worth adding to our consideration of other organic communities that have inspired creativity without a focus on commercialization — even if they also feature stronger policing mechanisms (Wikipedia), more reliance on traditional IP inputs (fan edits and cosplay), or more emphasis on reputation building (message boards and Facebook posts). What do these community gardens of creativity — unburdened by concerns about monetization or propertization — tell us about what the goals of intellectual property law should be?

Professor Betsy Rosenblatt suggests in her recent article that the law has too narrow a focus. Creating with and for others, research shows, promotes a sense of belonging, which, in turn, motivates and improves the results of creativity. Indeed, for the pussyhat knitters, a sense of belonging to a social movement likely provided the entire motivation to create. (I should make clear here that the use of the Pussyhat Project as an example throughout is mine, not Professor Rosenblatt's.) So if the law focuses only on the tangible results of creativity — what Professor Rosenblatt refers to as "stuff" — and fails to consider the importance of belonging, it might incentivize less creativity than it otherwise would.

What does it mean for a creator to belong? Professor Rosenblatt writes that a sense of belonging is both personal and “contextually mediated.” It arises when an individual feels “included, valued, and respected” by members of a group to which she is connected and with which her “values are in harmony.” (P. 96.) Belonging may be, at times, officially determined, but a sense of belonging can be only a psychological and emotional phenomenon. One cannot experience a sense of belonging unless one perceives oneself to belong, a perception that depends on “the individual’s subjective experience of interactions with in-group and out-group members.” (P. 100.) Whether a group offers official validation in the form of an award or membership criteria, unofficial acceptance in the form of discussions and ratings of one’s creative efforts, or simply the ability to become part of the group through an act of creativity, the emotion that is likely to spur creativity is the (scientifically tested) feeling of connection these activities inspire.

So if a sense of belonging both inspires and is inspired by creativity, and if creativity is what our intellectual property system aims to incentivize, at least in part, shouldn’t our system take belonging more into account, along with other noneconomic motivations? If it did so, what might such a system look like?

We would start by determining what conditions facilitate a sense of belonging. Research on organizational membership might help us to understand the psychology behind individuals’ decisions to join a community, commit to that community, or take a leadership role in that community. Shared semiotics and discourse no doubt create the bonds that allow an individual to feel included. Reputational and other benefits from a creative community may create a desire (or perceived obligation) to return those benefits in kind. We might also consider, as Professor Rosenblatt suggests, the importance of shared endeavors, participation in decision-making, psychological reward, and community values. (Pp. 101-03.) (The success of the Pussyhat Project, for example, seemed to result from all four — indeed, the freedom participants were given to make their creativity manifest was probably a significant validating factor.)

This is an important, albeit challenging, project, and, to her credit, Professor Rosenblatt remains cautiously optimistic about intellectual property law’s ability to respond fully to these questions. Focusing primarily on copyright law, she suggests that greater attention should be paid to attributional and reputational interests and to rules that encourage collaboration and playfulness — all areas that align with the inherently psychological nature of belonging. Yet, even starting modestly, we cannot ignore some important threshold questions that will help to anchor foundational principles. Should intellectual property law remain agnostic if, for example, the sense of belonging it promotes results in a group that is socially undesirable and/or dangerous to others? Should it matter whether intellectual property law is used to foster a sense of belonging or to assert a preference for disassociation? Without a firm sense of our values regarding these questions, our ability to foster the conditions for socially beneficial creativity will be hampered.

So what might we learn from the Pussyhat Project, insofar as it serves as one case study for Professor Rosenblatt’s proposal? The creative productivity generated by the pussyhat knitters would not have resulted without a sense of belonging. It also likely would not have resulted if the project had been bound by formal rules, by a sense of ownership, by exclusion, or by a definition of “the work” — all part of intellectual property law’s vocabulary. Not all communities, of course, are creative (or creative in the same way), and not all creators benefit from community (or benefit from it in the same way). But Professor Rosenblatt’s thoughtful article reminds us that the intersection of creativity and community is a subject worthy of our continued attention. It asks us to consider whether the law should pivot toward engaging inspiration more actively, focusing on the psychological motivations that, for many individual creators, are central to creativity — or whether the law should instead recognize its limits and simply do its best not to get in the way.

Cite as: Laura A. Heymann, Creative Communities and Intellectual Property Law, JOTWELL (November 1, 2017) (reviewing Betsy Rosenblatt, Belonging as Intellectual Creation, 82 Mo. L. Rev. 91 (2017)), https://ip.jotwell.com/creative-communities-and-intellectual-property-law/.

Three Strikes for Copyright

Abhishek Nagaraj, Does Copyright Affect Reuse? Evidence from Google Books and Wikipedia, Mgmt. Sci. (forthcoming 2017), available at abhishekn.com.

How should copyright law change to take account of the internet? Should copyright expand to plug the internet’s leakiness and protect content that the internet would otherwise make more freely available? Or should copyright relax its strict liability regime, given the diverse and productive reuses of the internet age and the benefits that networked diffusion provides to users and second-generation creators? Answering these questions depends on what we think copyright is for and how it is used and confronted by creators and audiences. In a new article studying these questions in the very focused setting of Wikipedia articles about baseball and baseball players (there are more than you might imagine!), Professor Abhishek Nagaraj demonstrates that where production of new knowledge depends on pre-existing information, strong copyright law can reduce both the quality and quantity of new content.

Professor Nagaraj studies the intersection of digital access and information diffusion. In the paper reviewed here, Nagaraj takes advantage of the fact that copyrights in works published before 1964 were not automatically renewed, a lapse that left many of those works in the public domain, to estimate the effect of access to public domain material on the quality of Wikipedia pages. His findings both confirm other studies in this area and raise new lines of inquiry.

This paper tells several stories. The first starts in 2008, when Google Books digitized all of the issues of Baseball Digest published between 1940 and 2008. Of these, the pre-1964 issues are in the public domain; the rest remain under copyright. This first story explains how Wikipedia articles about baseball cite to the public domain Baseball Digest issues twice as often as to the in-copyright issues. It is no surprise that digitization of an important source of information about baseball enables access and encourages the reuse of this resource on Wikipedia, the fifth-most visited website (with about 10 billion page views monthly). After digitization, citations to Baseball Digest increased 300% over pre-digitization levels. What surprises more is that the public domain sources (which are also older) are cited more frequently than the in-copyright sources, despite both having been fully digitized by Google Books. As Nagaraj demonstrates, relying on quantitative analysis of citation frequency and on open-ended survey questions posed to Wikipedians, copyright is a barrier to citation and reuse of the digitized material, and Wikipedians are paying attention to those barriers.
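To make that comparison concrete, here is a minimal sketch, in Python with pandas, of how one might cross-tabulate citations to Baseball Digest by copyright status and by whether they were added before or after the 2008 digitization. The tiny dataset, the column names, and the workflow are my own illustration under assumed data, not Nagaraj’s replication code:

```python
import pandas as pd

# Hypothetical data: one row per Wikipedia citation to a Baseball Digest issue.
# "issue_year" determines copyright status (pre-1964 issues are public domain);
# "cited_after_2008" marks whether the citation appeared after Google Books
# digitized the magazine. All values are invented for illustration.
citations = pd.DataFrame({
    "issue_year":       [1950, 1958, 1962, 1970, 1985, 1955, 1990, 1948],
    "cited_after_2008": [True, True, False, True, False, True, True, False],
})

# Pre-1964 issues of Baseball Digest were not renewed and are in the public domain.
citations["public_domain"] = citations["issue_year"] < 1964

# Cross-tabulate citation counts by copyright status and period, mirroring the
# paper's comparison of public-domain vs. in-copyright reuse around digitization.
summary = (
    citations
    .groupby(["public_domain", "cited_after_2008"])
    .size()
    .unstack(fill_value=0)
)
print(summary)
```

The resulting two-by-two table captures the intuition behind the comparison: if copyright were no barrier, citations to in-copyright and public-domain issues should grow similarly after digitization.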

This paper tells another story about the consequence of the copyright barrier: it diminishes the quality of the Wikipedia pages for certain baseball players who played after 1964. For pages about baseball players who are neither famous nor obscure (e.g., the average player about whom a Wikipedia page would come in handy), players who made their debuts before 1964 have higher-quality pages than those who began playing after 1964. How does Nagaraj measure quality? Through what he argues is circumstantial evidence of higher-quality content: citations to Baseball Digest, the number of images on the page, and the number of visitors to the page (as a measure of reader utility). Pre-1964 player pages for well-known (but not superstar) players have almost twice as many citations to Baseball Digest, an average of 1.78 images compared to 0.92 for in-copyright player pages, and about forty-seven more visitors per month.
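As a rough illustration of that quality comparison, a sketch along the following lines (again in Python with pandas, and again with invented page-level numbers rather than Nagaraj’s data or code) would compute the average of each quality proxy for pages about players who debuted before and after 1964:

```python
import pandas as pd

# Hypothetical page-level data for mid-tier players; all numbers are invented.
pages = pd.DataFrame({
    "debut_year":       [1955, 1960, 1958, 1968, 1972, 1980],
    "digest_citations": [4, 3, 5, 2, 1, 2],
    "images":           [2, 2, 1, 1, 0, 1],
    "monthly_visitors": [310, 280, 295, 240, 230, 255],
})

# Players who debuted before 1964 are covered by public-domain Baseball Digest issues.
pages["pre_1964_debut"] = pages["debut_year"] < 1964

# Average each quality proxy for the pre- and post-1964 groups, echoing the
# paper's comparison of citations, images, and visitor counts.
quality = pages.groupby("pre_1964_debut")[
    ["digest_citations", "images", "monthly_visitors"]
].mean()
print(quality)
```

Nagaraj’s actual analysis, of course, adds controls and robustness checks; the sketch only shows how the three proxies line up across the two groups.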

Nagaraj describes this effect in terms of a welfare impact, suggesting that pages negatively affected by copyright are unable to fully capture and deliver value to end users. In intellectual property debates, we often worry about quality versus quantity: whether the “progress” at which the Constitution’s intellectual property clause aims is more stuff or better stuff, “better” being a tricky term. We also worry about the relevance of citation counts. A helpful and intriguing feature of Nagaraj’s paper is a quality metric that is both quantitatively measurable and qualitatively significant to the community the content serves (baseball fans).

The third story this paper tells is the most interesting of all. It concerns the differential impact of copyright restrictions on images versus text, a difference that drives the first story described above. Generally, digitization should lower the costs of reuse for both types of media, but Nagaraj shows that, from the in-copyright Baseball Digest issues, text is cited and reused at a significantly higher rate than images. This means that digitization benefits textual content more than images (photographic or otherwise). Or, more precisely, the negative effects of copyright on citation and use disappear for text and are driven by a lack of reuse of post-1964 images. Nagaraj hypothesizes and then demonstrates that because images require more “transformation” (under copyright fair use) than text to avoid infringement liability, the gains in access won by digitization are mitigated, for the in-copyright images, by copyright’s imposition of greater transaction costs. It follows that the “reuse of out-of-copyright content is likely to be higher for images … than for text” (P. 16), suggesting that public-domain status matters more for the diffusion and reuse of images. In fact, Nagaraj shows that the “likelihood that an image will be reused from a post-1964 [in-copyright] issue of Baseball Digest is very close to zero, even after digitization [whereas] … the pattern for text citations is quite different. … In-copyright and out-of-copyright text citations track each other pretty closely, suggesting that copyright has very little impact on preventing the reuse of digitized textual material.” (Pp. 17-18.) A chart in the paper illustrates this point.

This last story raises fascinating questions. First, in a networked digital age in which photographs, logos, memes, and other graphic images have been shown to circulate more swiftly and broadly than text, and in which images dominate the semantic web (including, importantly, journalism and social media), it is surprising that copyright may control the use of images as forcefully as Nagaraj documents it does on Wikipedia. That control is a significant threat to truthful and free speech.

Second, the collection and management of photographs online is growing in concentration and expense. There are a lot of free images on the web, but they serve to entertain or illustrate, not to inform or contribute facts for debate. (There is a reason we see the same pictures of Mitch McConnell, Antelope Canyon, or the Lincoln Memorial on news sites.) New, timely, and authenticated photos are harder to source; there are fewer photojournalists and even fewer aggregators and news agencies managing the collection and distribution of news. If, as Nagaraj’s study demonstrates, internet users rely on photographs to anchor and explicate the information they seek, if copyrighted photographs circulate less freely online than text (a statement many photographers would dispute, but that’s another story), and if news outlets now less frequently pay for and distribute photojournalistic images, then we have a significant information problem in the digital age.

Third, is it possible that Wikipedians hew so closely to copyright’s fair use analysis, under which it is harder to claim transformative use of photographs than of text, that they refrain from reusing in-copyright images? In my research, I found that creative and innovative communities followed idiosyncratic norms of copying (or not copying) that did not align with intellectual property law. The story Nagaraj tells about the non-use of photographs versus the use of text on Wikipedia is, by contrast, a story of behavior arguably aligning with copyright doctrine. Reusing copyrighted photographs verbatim is harder to justify under fair use than quoting or paraphrasing parts of a text. But if you surf the internet, with its seemingly uncontrolled reproduction of photographs, you would be forgiven for thinking that copyright law doesn’t act as a barrier to copying and distributing photographs at all. So what explains Wikipedians’ careful non-use of in-copyright photographs? Banners on Wikipedia pages indicate that editors are indeed knowledgeable about complex copyright rules. But, perhaps more importantly for the welfare question, Nagaraj asks: is Wikipedians’ behavior, which may be copyright compliant (although arguments exist on both sides), good for their encyclopedic project of producing and disseminating free, high-quality, and comprehensive information to readers worldwide? These are big and important questions, and Nagaraj tackles them admirably. For anyone interested in a model of robust quantitative experimentation in intellectual property, with qualitative implications and analysis for further study, I highly recommend Nagaraj’s newest paper.

Cite as: Jessica Silbey, Three Strikes for Copyright, JOTWELL (October 13, 2017) (reviewing Abhishek Nagaraj, Does Copyright Affect Reuse? Evidence from Google Books and Wikipedia, Mgmt. Sci. (forthcoming 2017), available at abhishekn.com), https://ip.jotwell.com/three-strikes-for-copyright/.