The Journal of Things We Like (Lots)

The Costs of Trademarking Dolls

Rebecca Curtin, Zombie Cinderella and the Undead Public Domain, 86 Tenn. L. Rev. __ (forthcoming 2018), available at SSRN.

Professor Curtin’s article, Zombie Cinderella and the Undead Public Domain, takes a recent case from the Trademark Trial and Appeal Board (TTAB) as the basis for an argument that trademark doctrine needs stronger protection against the exclusive commercial appropriation of characters that are in the public domain. In that case, a doll manufacturer sought to register the term “Zombie Cinderella” for a doll that was zombie-ish and princess-like. The examiner refused registration because the term “Zombie Cinderella” for this kind of doll was confusingly similar to the mark for Walt Disney’s Cinderella doll. Although the TTAB overturned the examiner’s “refusal to register” determination, it did so on the ground that Disney’s Cinderella mark is a conceptually weak indicator of Disney as the source of dolls. This leaves open the possibility that Disney could build a stronger association between its mark and its dolls and eventually monopolize the term “Cinderella” as a mark for princess dolls. Professor Curtin’s article argues that leaving this opportunity open would be bad policy and should be precluded under a proper application of trademark law.

There are several aspects of this article that make it worth reading. First, it is a deep dive into a single case at the PTO, teaching readers about trademark registration and appeals from adverse rulings. The article reads as a compact case history from beginning to end. It appropriately balances the technical aspects of trademark practice with conceptual dimensions of trademark theory, such as aesthetic functionality (a doctrine that bars trademark protection for design and word elements that confer a non-reputationally-related advantage on the trademark holder). Second, drawing from her scholarly expertise in literary history, Professor Curtin provides rich and appropriately expansive details about Cinderella’s provenance as a character dating from 1558. The cultural history alone makes this article a joy to read, as Professor Curtin traces the Cinderella character forward to Georges Méliès’s 1899 film Cinderella and backward to a ninth-century Chinese folk tale. This sets up the issue of what Disney can properly claim to own of the character’s name (as a trademark for its own princess doll toy) after the 1950 release of its film Cinderella.

The central problems this article addresses are “overreaching trademark claims that quash expressive uses of public domain characters” and “the competitive needs of multiple producers to make reference to public domain characters in the names and designs of their products.” (P. 3.) Overreaching trademark claims undermine the law’s goal of promoting competition through the use of distinctive marks that designate the source of goods. Trademarks that monopolize valuable public domain elements and undermine other manufacturers’ ability to name and describe their goods injure both consumers and competition. The argument underlying Curtin’s description of this problem is that there is, and should be allowed to be, more than one “Cinderella doll.” Disney may make such a doll and brand it as Disney’s Cinderella doll, but other manufacturers may also have Cinderella dolls and call them, for example, Zombie Cinderella or Dolly Pockets Cinderella Doll.

Trademark law does not permit restriction of the making of the goods themselves; it restricts only the use of confusingly similar marks on goods. Indeed, trademark law intends to enable copying of goods – e.g., there are and should be many kinds of bottled water with their negligibly perceptible differences in ingredients and taste, and yet on the shelves we can distinguish among Dasani, Poland Spring, and Fiji. Likewise, Curtin argues, there are and should be many kinds of Cinderella dolls. Just as the bottled-water product is the water itself, however it is purified and whatever additives it contains, the product here is the doll with characteristics from the Cinderella fairy tale: rags-to-riches details, princess-like dress, fairy godmother, glass slippers, and so on. But if Disney owns the mark “Disney Cinderella” for dolls that refer to and look like the fairy tale Cinderella, and other manufacturers can make dolls that look like Cinderella but cannot call them Cinderella dolls because of the broad scope of Disney’s trademark, competitors are inhibited from competing in the marketplace for Cinderella-like dolls.

This central problem of assessing the plausible scope of Disney’s Cinderella mark for dolls leads to other challenges related to the technical aspects of trademark practice – e.g., how do you prove competitive need or injury to limit a trademark holder’s claim? This is a question of evidence and proof at the registration (and appeal) stage of trademark practice and also informs the complex (and confused) federal case law on aesthetic functionality.1 Professor Curtin criticizes the TTAB’s guidance that the commercial weakness of Disney’s Cinderella mark was all that stood in the way of successful opposition to Zombie Cinderella’s registration because it “sends the message that it is possible to appropriate the public domain character like Cinderella, even for products like dolls, if only your marketing and enforcement strategies are aggressive enough.” (P. 25.) Instead, the article urges movement in the other direction: application of the aesthetic functionality doctrine at the registration stage to limit the appropriation of public domain characters and features when their depiction is a useful product feature conferring a non-reputationally-related advantage (e.g., a non-trademark benefit) on the trademark holder. As Curtin writes, “If a doll can’t be called ‘Cinderella’ or incorporate a glass slipper into its packaging, it can’t compete effectively with Disney’s Cinderella dolls, because resonance with that fairy tale is a source of the attraction to the product, a source that Disney was free to exploit and that second (or third or fourth) comers should be equally free to exploit.” (P. 43.)

Restricting competition is not the only problem with allowing trademark holders to appropriate public domain characters. Curtin’s description is irresistible: “a trademark holder who is successful in altering the meaning of a character so its primary significance is source identification impoverishes the public domain of the character’s living meaning, rendering it an inanimate corpse. No one, not even the potential trademark holder really wants to fully replace the cultural meaning in a public domain character with the reputational meaning or source identification of the trademark holder.” (P. 34.) Once appropriated, the public domain character dies because it is rendered inert by the trademark, whose meaning must remain constant for it to function as the predictable source identifier of the product.

Professor Curtin carefully stays within trademark doctrine for most of the article, mentioning the rule against generic trademarks as a cousin to aesthetic functionality that some have argued can be helpfully applied to characters to limit trademark scope.2 In addition to aesthetic functionality, she might consider arguing more forcefully for trademark law’s disclaimer practice, which, as with marks that contain generic words, requires applicants to limit their mark to that which excludes the generic word or uses it only in a specific combination. But she might also draw helpful support from well-established copyright doctrines for the principle that the public domain must remain available to all comers. The scènes à faire doctrine requires stock characters and plots (e.g., a wicked witch and star-crossed lovers) to stay in the public domain as critical building blocks of cultural expression. And the separability doctrine applied to useful articles requires separating the useful features of the copyrighted work from the expressive ones (e.g., the useful cut of a dress from the design elements applied to dress fabric) for the purposes of protecting only that which is expressive and not useful to the article’s function.3 As an article about the difficulty of persuading courts of the commercial and cultural values of aesthetics, it seems worth emphasizing through comparative analysis that trademark law resembles other intellectual property regimes by relying on, and therefore protecting as a resource, the vibrancy of literary culture in the public domain. In other words, the doctrine of aesthetic functionality is more familiar, conventional, and fundamental than the scholarly and case-law debates over the doctrine would have us believe.

Recently, Professor Curtin has had the opportunity to test her analysis of aesthetic functionality as applied to literary characters in the public domain. For the sequel to Zombie Cinderella and the Undead Public Domain, see her opposition (filed along with legal clinic students at Suffolk University Law School) to United Trademark Holdings’ trademark application for Rapunzel for a line of dolls that depict the Rapunzel character.4

  1. See Justin Hughes, Cognitive and Aesthetic Functionality, 36 Cardozo L. Rev. 1227 (2015); Robert Bone, Trademark Functionality Reexamined, 7 J. Legal Analysis 183 (2015); Mark McKenna, (Dys)Functionality, 48 Houston L. Rev. 823 (2011-2012).
  2. Betsy Rosenblatt, Adventure of the Shrinking Public Domain, 86 U. Colo. L. Rev. 561 (2015).
  3. But see Star Athletica v. Varsity Brands. As the debate between the majority and dissenting opinions explains, separability is not always a self-evident exercise. Compare id., slip op. at 13-14 with Breyer dissent, slip op. at 3-5, 10-12.
  4. Maria Cramer, Rapunzel, Rapunzel, Let Down Your Trademark Restrictions, Boston Globe (June 5, 2018); Karen Katz, Loletta Darden & Rebecca Curtin, Rescuing Rapunzel, IPWatchdog (June 16, 2018).
Cite as: Jessica Silbey, The Costs of Trademarking Dolls, JOTWELL (November 6, 2018) (reviewing Rebecca Curtin, Zombie Cinderella and the Undead Public Domain, 86 Tenn. L. Rev. __ (forthcoming 2018), available at SSRN), https://ip.jotwell.com/the-costs-of-trademarking-dolls/.

Creative Vigilantism

Amy Adler & Jeanne C. Fromer, Taking Intellectual Property into Their Own Hands, 107 Cal. L. Rev. __ (forthcoming 2019), available at SSRN.

It’s no longer news that a major proportion of property regulation happens outside the bounds of the law thanks to social norms and their extralegal enforcement. Yet legal scholars continue to find new and fascinating ways to advance this insight. The latest installment in the conversation about the norm-based regulation of intangible property is Amy Adler and Jeanne Fromer’s Taking Intellectual Property into Their Own Hands.

This sparkling article1 adds a novel perspective to the dialogue that has been developing for more than a decade about the extralegal regulation of creative production. Most of this work considers how a given group regulates its distinctive works via norms, without recourse to copyright or trademark law. This move has been made with respect to recipes developed by French chefs, roller derby skaters’ nicknames, clowns’ face makeup, tattoo artists’ ink designs, and many others.

Adler and Fromer add an important new dimension to the creative norms literature by focusing on extralegal enforcement rather than the substantive rules governing a particular category of work. As they show, many owners who could state plausible claims for copyright or trademark infringement increasingly choose not to file suit, but instead to deploy one of two surprisingly effective means of striking back at the purported infringer: shaming and retaking the copy.

It’s hard to choose, but in my view the best part of this article is Adler and Fromer’s luminous catalogue of these two novel forms of non-legal enforcement. Most informal norms are enforced to some extent by shaming sanctions, but no one seems better at deploying shame online than the operators of the website Diet Prada, who inflict humorless wrath on any fashion designer they perceive to have copied another without credit.

In terms of retaking the copy, the Suicide Girls finally figured out a way to outflank notorious reappropriation artist Richard Prince. They appropriated his unauthorized use of their Instagram posts, added the words “true art” in a comment, and sold the resulting near-exact copies of Prince canvases for a mere $90, undercutting the market for Prince’s works and giving all profits to charity. Adler and Fromer illustrate all of this with photographs, tweets, and other visuals that make their story all the more vivid.

One of the major insights that Adler and Fromer extract from this fascinating story of creative vigilantes is that this extralegal enforcement operates “without the backdrop of a single close-knit community.” This feature is important because, they point out, “legal scholars tend to see” such close-knit communities “as prerequisite to enforcing extralegal norms.”

While Adler and Fromer’s work is not the first to explore norm enforcement outside the context of close-knit communities,2 the question is indeed a fascinating one. The answer to this puzzle may not be that norm enforcement does not require a close-knit community, but that in the internet age what makes a community has changed. For the Shasta County ranchers Ellickson studied in Order Without Law, geographic proximity and relative isolation meant that shaming sanctions by one’s neighbors could be socially isolating.

But in the internet age, community is less a function of place than of shared interests and values. So if the people in your online community of fashionistas decide to shame you on Twitter or Instagram, you may not even know the identity behind the online persona responsible, but their message of viral opprobrium among those who share your professional tastes and ambitions could spell ostracism nonetheless.

Adler and Fromer also do admirable work to frame this article about extralegal activity in terms of law. They argue that shaming and retaking the copy can advance the same aims as copyright and trademark law: remuneration and attribution (as well as avoiding misattribution). This account fits with some of their examples. 1980s designer Dapper Dan enjoyed a renaissance of popularity—and, presumably, income—when people shamed Gucci for failing to acknowledge its debt to him in a recent fashion show. And James Turrell’s surprisingly hip rebuke of Drake’s rip-off of his installations in the “Hotline Bling” video subtly but unmistakably made clear that Drake’s use of Turrell’s work was unlicensed and unauthorized.

Other examples, though, fit less cleanly with Adler and Fromer’s claim that extralegal copyright and trademark enforcement approximates the goals of those doctrines—namely, remuneration and attribution. Putting aside for a moment due skepticism of the claim that either copyright or trademark chiefly seeks to give creators attribution,3 consider the Suicide Girls. Their retaking of Richard Prince’s unauthorized copies got them zero money since all proceeds for sales of their reappropriated works went to charity. Nor did attribution seem to be an issue since Prince’s uses did not erase the owners’ Instagram usernames.

What is really going on may be bigger than Adler and Fromer’s legalist framing suggests. What the shamers and copy retakers seem to want is not just their own version of what copyright and trademark law promise, but something simpler and less admirable: good old-fashioned revenge. The Suicide Girls’ delicious judo move got them neither money nor much else other than sticking it to Prince so poetically that their scheme could have supplied the plot of a Quentin Tarantino film. This story seems less about creative incentives or avoiding consumer confusion, and much more about righting a wrong felt on a deeply visceral level, showing that moral intuitions about wrongful transgression animate owners of intangible property just as much as owners of physical property.

And to Adler and Fromer’s credit, their article evinces due care about the dark side of extralegal copyright and trademark enforcement. They acknowledge that both copyright and trademark law are calibrated to balance owners’ rights with the public interest in free access to information, and that aggrieved owners are unlikely to take the latter consideration into account when striking back at an infringer. Leaving enforcement to creative vigilantes threatens to enforce their interests more broadly than law would allow.

There’s another downside to self-help that Adler and Fromer don’t feature as prominently but that merits mention: it often gets things wrong. This is familiar in the crime context, as when online sleuths misidentified a purported terrorist, leading to his suicide. But creators who seek to call out unauthorized copying can also err. The Rubinoos, for example, accused Avril Lavigne of ripping off their musical work in her song “Girlfriend” (and got a decent settlement from her). But closer analysis of the central chord progressions in the purportedly infringed Rubinoos song “I Wanna Be Your Boyfriend” showed that it bore an eerie similarity to the sound of the Rolling Stones’ “Get Off of My Cloud.”

The authors explain that self-help often misfires because owners have a “folk sense of law,” not an accurate sense of what law actually is. My observations of norm-based systems, however, suggest that the concern is a bit different, and perhaps more problematic. Such extralegal regulation of creative production does not seek to approximate law or substitute for it. Rather, it runs entirely independently of law and its entitlements. I’d wager that the Suicide Girls’ outrage at Prince was not rooted in any sense of law at all, but rather in a felt affront at a property transgression that is entirely unmoored from, and prior to, federal law’s entitlements to works or marks.

The remoteness of law to social norms systems is one of the major contributions that Ellickson in particular emphasized. Lawyers, of course, tend to be legal centralists. We read the world through the lens of the law and assume that others do as well. But most work in this area reveals that law is marginal at best to individuals’ decisions to regulate their property via norms. Norms arise out of strongly felt moral intuitions about right and wrong that hold true for people independently of what rights law promises them.

So when the authors criticize Diet Prada because on that site “copying is almost always assumed to be wrong,” they conflate morality and legality. Diet Prada seems to seek only to vindicate a gut-level sense of what’s right and wrong. The law’s carefully crafted copyright and trademark doctrines, by contrast, are not meant to model moral intuitions but to achieve socially desirable allocations of private rights in information. So while the authors seek to emphasize the overlap between the agendas of creative vigilantes and the remedies supplied by copyright and trademark law, I wonder whether the differences swamp the similarities.

My thoughts and feelings about this article are many, so I’ll rein them in here with one final thought. Among the valuable insights of Adler and Fromer’s article is that they distinguish between different kinds of creative self-help, chiefly shaming the infringer and reappropriating the copy. They further show that shaming strategies tend to be costlier and riskier given the downsides of self-help, while reappropriating the copy tends to be more constructive as well as consonant with the goals of copyright and trademark law.

The puzzle this leaves the reader with is: What can law do about this? If it’s right that copy retaking is a better extralegal enforcement strategy than shaming, is there a way to change copyright and trademark to encourage the former and/or discourage the latter? The answer to that question may merit a separate article, but the suggestion is that law may have some role to play even in this world that operates outside legal bounds.

  1. The authors term their piece an essay. I have never quite understood what separates an article from an essay but the latter is supposed to be shorter and less formal. Adler and Fromer’s piece warrants all kinds of superlatives, but at 77 heavily footnoted pages, short and breezy are not among them. It’s an article.
  2. Lior Strahilevitz called the effectiveness of norms outside close-knit communities “the puzzle crying out for an explanation” in a 2003 article about cooperation on file-sharing networks. See also Dave Fagundes, Queues.
  3. In some ways copyright infringement causes attribution to happen, but that’s not the same as showing that the aim of the doctrine is to provide authors with proper attribution.
Cite as: David Fagundes, Creative Vigilantism, JOTWELL (October 3, 2018) (reviewing Amy Adler & Jeanne C. Fromer, Taking Intellectual Property into Their Own Hands, 107 Cal. L. Rev. __ (forthcoming 2019), available at SSRN), https://ip.jotwell.com/creative-vigilantism/.

Decoupling Intellectual Property’s Incentive and Allocation Functions

Daniel J. Hemel & Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. __ (forthcoming 2018), available at SSRN.

It’s rare for two JOTWELL editors to choose the same article to review. When that happens, it’s surely a sign of an “instant classic.” So even if you’ve read Kevin Collins’s laudatory jot of Daniel J. Hemel and Lisa Larrimore Ouellette’s superb piece a few months ago, you should read this one, too. And, if you didn’t read that review, you should definitely read this one.

If double coverage weren’t enough, three years ago my jot reviewed Hemel and Ouellette’s brilliant article, Beyond the Patents-Prizes Debate. Besides explaining the importance of considering the full panoply of tools to incentivize innovation—such as patents, prizes, grants, and tax credits—Hemel and Ouellette showed that these tools could be decoupled and refashioned to create effectively new, mutant-like rights with effects potentially superior to those of their “pure” forms.

In this follow-up article, Hemel and Ouellette insightfully discern the broad theoretical ramifications of their previous IP reconstructions. Because Kevin Collins’s jot lucidly summarizes the expanse of the article’s exposition, I focus on the article’s most salient insight—namely, that IP’s “incentive” function is separable from its “allocation” function. Specifically, the “incentive” function refers to the market-based financial reward provided to innovators for producing an innovation (and here I elide the distinction between R & D-based “inventions” and commercialized “innovations”). The “allocation” function concerns the payment of a proprietary price by consumers (and intermediaries) to access innovations covered by IP rights.

Traditionally, IP scholars have recognized these two functions, but have assumed that they are essentially coupled. Indeed, the oft-stated benefit of IP rights over prizes and grants is that IP offers market-based rewards generated (if the rights are useful) by supracompetitive payments from the market to the innovator. Yet, as Hemel and Ouellette show, IP’s incentive and allocation functions are wholly separable.

Critically, Hemel and Ouellette rightly contend that many criticisms (and proposed reforms) of the IP system often concern either the incentive or allocation function, but not both. For instance, consider the criticism that intellectual property rights punish the less well-off by allowing for supracompetitive pricing, pricing out consumers who otherwise would have bought the protected goods in a competitive market. This is a criticism of IP’s allocation function, but not its incentive function.

For instance, once an innovator obtains a patent from the Patent Office, the government could acquire the patent (via mandate) at fair market value and then make the invention available to commercializers for free. To the extent that the fair market value roughly represents the profits or licensing fees the innovator would have earned by retaining the patent, the reward would be essentially the same as in a market system. Yet the allocation of the product for free to intermediaries and end-consumers substantially differs from IP’s traditional market-based allocation system. Namely, in place of market users paying supernormal prices for the patented product, the government would collect the funds for payment to the innovator by other means, such as general taxation.

Because taxation may offer efficiency and distributive benefits relative to a patent system, such an approach may be preferable to traditional allocation schemes. Although my own view is that governments cannot easily determine fair market value—and, indeed, there is a rich literature on the pros and cons of “patent buyouts” of this sort—Hemel and Ouellette are the first to recognize the core theoretical and policy implications of this sort of “pluralistic” “mixing” and “matching” of innovation incentives.

Hemel and Ouellette’s framework allows IP scholars to better theorize the function and role of IP, and to identify reasons why different combinations of IP and non-IP mechanisms may be optimal in specific circumstances. For example, market-based allocation mechanisms may be preferable when the value of the innovation is difficult to ascertain ex ante. Yet, because inventors are not usually the best commercializers of their inventions, it may sometimes be more sensible to provide the inventor a fixed payout from a patent auction—in turn offering a limited-term “commercialization” patent to the winning bidder simply to commercialize and sell the invention, which may reduce overall deadweight losses (see my piece, Commercializing Patents, for a similar proposal, though one not as well theorized as the proposals in Hemel and Ouellette’s article).

The meshing of IP and non-IP tools, which the authors term innovation policy “pluralism” (and Collins aptly terms “hybridism”), is not merely a theoretical refinement designed for forward-looking policymaking; it also provides a more nuanced account of today’s innovation landscape. In addition to the Bayh-Dole Act, prizes, grants, and patent auctions, the theory helps one to understand such diverse areas as the Hatch-Waxman Act, price caps on patented products, research tax credits, standards essential patents, compulsory licensing for digital audio streaming, and academic publishing. Given its broad theoretical and practical import, and immediate recognition among the community of IP scholars, this not-even-yet-published article will surely be a canonical work for many years to come.

Cite as: Ted Sichelman, Decoupling Intellectual Property’s Incentive and Allocation Functions, JOTWELL (September 3, 2018) (reviewing Daniel J. Hemel & Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. __ (forthcoming 2018), available at SSRN), https://ip.jotwell.com/decoupling-intellectual-propertys-incentive-and-allocation-functions/.

Innovation Policy Pluralism, or Innovation Policy Hybridism?

Daniel J. Hemel and Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. (forthcoming), available at SSRN.

In previous work, Daniel J. Hemel and Lisa Larrimore Ouellette explored the range of tools available to regulators interested in promoting innovation. (See Ted Sichelman’s jot.) While legal scholars addressing innovation policy frequently focus solely on patent law—in fact, the term “intellectual property” is often employed as a synecdoche to refer to the broader scholarly field of innovation policy—Hemel and Ouellette argued that viewing patents, prizes, grants, and tax credits as imperfect substitutes allows the public goods problem that underlies innovation policy to be solved in a variety of different ways, each with its own advantages and disadvantages.

In their most recent work, Innovation Policy Pluralism, Hemel and Ouellette push their earlier argument one step further. They again increase the number of tools in the innovation-policy toolkit by developing a divide-and-recombine approach to intellectual property and its quasi-substitutes. They argue that any given tool for promoting innovation has two “separate and separable” components. First, it has an innovation incentive or a “payoff structure for the producers of knowledge goods.” Second, it has an allocation mechanism that “establish[es] the conditions under which consumers can use knowledge goods.” Hemel and Ouellette provide a thorough, clearly argued, and convincing analysis of the combinatorial possibilities that arise from this finer-grained analysis of the components of innovation-policy regimes.

More specifically, Hemel and Ouellette identify three conceptually distinct types of combinatorial possibilities. First, matching combines components of different types: it allows the innovation-incentive component from one conventional innovation-policy tool to be coupled to the allocation-mechanism component from another. For example, if the government were to buy patents from innovation producers upon issuance at the market price and make the innovation freely available to the public, the innovation incentive would resemble the innovation incentive created by patent law (because it rewards producers with a fraction of the social value that their innovation generates, as measured by consumers’ willingness to pay) but the allocation mechanism would resemble the open-access regime conventionally associated with prizes and grants. Second, mixing treats each of the components as continuous rather than binary variables and allows amalgams of distinct components of the same type. For example, focusing on the innovation-incentive component, an upfront grant that does not cover the full cost of research and development may be combined with patent exclusivity that is weaker than conventional patent law provides. Third, layering recognizes that different types of innovation incentives and allocation mechanisms can be deployed at the national and international levels.

While they do produce an array of specific proposals and analyze their pros and cons, Hemel and Ouellette view their primary contribution as a conceptual framework for talking about innovation policy in a more precise and nuanced manner. The framework helps us to both conceive new innovation-policy proposals and evaluate their normative merits. Just as importantly, it also helps us to understand the jumble of laws that currently promote innovation in the United States. To prove this latter point, Hemel and Ouellette use their framework in a final section to describe the innovation-policy regime that already exists in the pharmaceutical industry more succinctly and accurately than would be possible if taxes, grants, and patents were conceived of as siloed innovation-policy regimes.

To my mind, the article’s title is a bit misleading. (This is akin to praising with faint damnation, I know.) Hemel and Ouellette’s earlier work was about innovation-policy pluralism. There is no one-size-fits-all optimal solution to a public goods problem; different circumstances may call for different tools from the toolkit; there is value in recognizing the diversity of tools available for remedying an innovation-policy problem. Taking the argument to the next level, Innovation Policy Pluralism embraces what I think could more tellingly be labeled innovation-policy “hybridism.” Here, the very notion that there is a finite set of discrete tools in the toolkit becomes a contingent artifact of the conceptual framework conventionally used to think about innovation policy. By breaking these tools down into the components that they all share, the conceptual framework developed by Hemel and Ouellette reveals that we can match, mix and layer our way to hybrid tools that are more perfectly crafted to the specifications of any given innovation-policy problem.

Cite as: Kevin E. Collins, Innovation Policy Pluralism, or Innovation Policy Hybridism?, JOTWELL (July 17, 2018) (reviewing Daniel J. Hemel and Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. (forthcoming), available at SSRN), https://ip.jotwell.com/innovation-policy-pluralism-or-innovation-policy-hybridism/.

IP Norms’ Dark Side

Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018).

The early legal literature on law and social norms tended to paint a rosy picture. Social norms were generally depicted as an optimal set of organically developed rules, informed by the experience of a close-knit community, and thus superior to formal law as a way of regulating behavior. Later scholars came to realize that nothing guarantees the optimality of social norms: they may perpetuate practices that no longer make sense, or they may advance the interests of certain groups but not social welfare. In such cases, formal law holds the promise of overriding suboptimal norms and moving society to a better place.

A few years back, I wrote, together with Christopher Sprigman, one of the first papers on social norms in intellectual property law, in which we explored how standup comedians informally regulate the ownership and transfer of rights in jokes and comedic routines. While we thought that in that particular case the system of social norms largely brought good outcomes in terms of inducing creativity, that conclusion came at the end of reviewing their advantages relative to copyright law, but, importantly, also their relative disadvantages. For example, we noted aspects of the norms system that we thought were unattractive for creativity relative to formal copyright law: the standups’ norms system does not recognize a term limit (so it discourages dissemination and reuse of old materials), does not harbor a concept of fair use (so it discourages derivative and sequential creativity), and grants a scope of protection that exceeds what is available under copyright’s idea/expression doctrine (so it discourages borrowing high-level concepts and themes from others).

Since we wrote our paper, the law and social norms field of IP has grown considerably, and is, I believe, one of the most important developments in IP theory over the last decade, not to mention a fun and interesting one. Others have explored social norms that are copyright-like, patent-like, and trademark-like. More recently, the literature has moved to make more general claims and observations based on a rich array of norms systems and case studies. One of the latest and most important additions to this recent literature is the paper by Stephanie Bair and Laura Pedraza-Fariña, which makes a more general claim about the dark side of social norms, i.e., their capacity to be counterproductive for creativity. In doing so, it builds on much social science and psychology literature.

Bair and Pedraza-Fariña emphasize that anti-innovation norms are harmful because they tend to hinder breakthrough innovation, namely innovation that recombines knowledge across the boundaries of (often distant) creative communities and industries. They create barriers between bodies of knowledge, reinforcing the use of specialized, field-specific terminology rather than terminology shared among fields. They also tend to be over-enforced.

Anti-innovation norms come in three types: research priority norms, methodology norms, and evaluation norms. Research priority norms determine which problems are worth artistic or technological attention. Their dark side is that they implicitly suggest which problems are not worth members’ time and attention. This is particularly costly when norms discourage members from exploring intersectional problems.

Methodology norms dictate the proper ways, tools, and routines appropriate to solve the problems community members have identified as important. Such norms can have negative consequences to the extent that they teach against the use of alternative, non-conforming methodologies. Intersectional problems pose a specific challenge, as they are often best addressed through a combination of methodologies.

Lastly, evaluation norms dictate how a creative community assesses the contributions of its members. Good work may be that which is patented, or published in certain fora, or meets some other community standard. Their dark side lies in reinforcing priority and methodology norms and in discouraging cross-disciplinary innovation.

Two psychological biases explain why anti-innovation norms emerge and persist. First, the status quo bias describes a preference for the current state of affairs over change. Second, conformity bias describes individuals’ tendency to forgo exercising their own judgment in favor of following social norms.

Having illuminated social norms’ dark side, Bair and Pedraza-Fariña propose solutions. They believe that IP law, which affects the appropriability of the fruits of one’s labor, is limited in its ability to incentivize cross-boundary innovation. Instead, they emphasize the need to turn to other mechanisms that affect innovation, such as government grants and tax credits.

As the first general treatment of the topic, this article is important not only for drawing attention to, and advancing the understanding of, social norms’ failures, their causes and effects, and for suggesting possible solutions, but also because the analysis opens the door to many potential follow-on research paths, of which I would like to suggest a few. First, the article reinforces the understanding that law and social norms are just two ways to foster innovation, and that each comes with relative, rather than absolute, advantages and disadvantages. Law and social norms do not only affect how much is being created, but also the type of innovation that is taking place. For example, in standup comedy, early 20th-century norms suggested that copying and reuse were legitimate, and humor tended to be generic; late 20th-century norms suggested that originality was key, and the humor created tends to be personal, observational, and point-of-view driven. The law and social norms literature suggests that in some cases law and social norms may work in tandem and reinforce each other, but that in others they may crowd out one another. After recognizing that social norms have their disadvantages too, an additional step forward would be to develop a theory of the optimal mix of innovation law (IP and non-IP) and social norms.

Second, the authors focus on social norms’ dark side in hindering cross-boundary research. It is at least conceivable that social norms also have negative effects within disciplinary boundaries (as the last comedy example may suggest). Examining norms’ failures in other communities could show whether this is a more general point, and perhaps help to enrich the general theory of norms’ failure.

Third, and relatedly, since the article comes at a stage where a considerable number of industry-specific case studies already exist, it likely presents the opportunity to further contextualize and concretize the discussion. For example, for all or some of the communities explored in the literature, scholars could identify particular anti-innovation norms, map the categories they fall into, highlight the psychological biases that reinforce them, and propose particular IP and non-IP reforms that would be most effective in countering them. Indeed, it seems that there is much to be gained from paying attention to anti-innovation norms in IP.

Cite as: Dotan Oliar, IP Norms’ Dark Side, JOTWELL (June 26, 2018) (reviewing Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018)), https://ip.jotwell.com/ip-norms-dark-side/.

Does Copyright Have a Framing Problem?

Margot E. Kaminski & Guy A. Rub, Copyright’s Framing Problem, 64 UCLA L. Rev. 1102 (2017).

Numerous provisions of the Copyright Act of 1976 (“1976 Act”) use the term “work” as a key referent for determining copyrightability, ownership, scope of rights, limitations on scope, and remedies. Yet, Congress did not provide a general-purpose definition of what counts as a “work,” even though it defined a plethora of arguably much less important terms. When the parties in litigation explicitly or implicitly disagree about the fundamental issue of what the plaintiff’s or defendant’s work is, what is a court to do?

This is a big and important question. While Kaminski and Rub do not provide a full response, they do frame the problem and illustrate how it plays out in many contexts. They demonstrate that courts have considerable flexibility in how to define the relevant work. Sometimes, courts use this flexibility to “zoom in” on particular facets of, for example, a design on a carpet that mixes public domain and original elements. Other times they “zoom out” to consider a work’s total concept and feel. Courts rarely defend their framing of the relevant work, and when they do, they do not use a shared set of criteria to justify their choices.

Kaminski and Rub have three goals for this article. First, they review numerous cases that illustrate the range of legal issues for which a court’s framing choice is consequential. They show the inconsistency of judicial framing choices and the rarity of justifications of these choices. Second, they argue that the doctrinal flexibility judges have exercised is more a feature than a bug in copyright law because it supplements a court’s ability to tailor the scope of rights or remedies in particular cases to better achieve copyright’s goals. Third, they argue that courts should more self-consciously exercise this framing flexibility and provide a justification for their choices. The authors close with a set of criteria courts might rely on to provide such a justification.

In their survey of the choices that are available to courts in framing the definition of the “work,” the authors start with the cases involving copying that is not wholesale, so-called “non-literal infringement.” The legal test is whether the two works are “substantially similar,” and how this test is applied depends greatly on how the plaintiff’s and defendant’s works are framed. One of the more difficult issues in this analysis is whether a character is a work independent of the larger narrative in which the character appears and, if it is, how that work is defined when the character, say, Sherlock Holmes or the Batmobile, evolves. The article briefly discusses how the Ninth Circuit “zoomed out” from the many iterations of the Batmobile to focus on its general characteristics in the Batman series to define the work in DC Comics v. Towle.1 As a reader, I wanted the authors to dive a little deeper into this issue to compare and contrast some of the other character cases.

The article then turns to questions of authorship, showing that courts have decided in motion picture cases to zoom out and treat the dominant author of the motion picture as the author, rejecting the theory that an actor creates a distinct work in a scene for which the actor is primarily responsible. The article then dives deeper into the useful article doctrine, arguing that the Supreme Court, in its enigmatic recent decision in Star Athletica LLC v. Varsity Brands, Inc.,2 sent conflicting signals about how to frame the design features and the utilitarian aspects by zooming in on the chevrons on a cheerleading uniform in parts of the discussion and zooming out to look at the uniform as a whole in others.

Turning to the framing tests the courts have used, or could use, Kaminski and Rub argue that most courts use no test, showing the inconsistent framing in cases involving photographs copied from collections of photographs as an example. Courts sometimes ask whether something is a work or a constituent element by testing it for copyrightability, but the authors persuasively argue that this test does too little work. A market-based approach has some appeal, and some courts ask whether the putatively independent works can “live their own copyright life [sic].”3 A variant on this test is whether the putative work has independent economic value. The authors acknowledge the appeal and some utility of this approach, but they rightly caution that with increasing variety in bundling and unbundling of works in digital form, market signals are less likely to provide stable guidance.

The closing section of the article argues that judicial flexibility in framing the work is beneficial because it allows courts to tailor protection and remedies with respect to policy considerations such as the incentives-access tradeoff, managing transaction costs, and channeling creativity. As a result, permitting inconsistent framing depending on the doctrinal context provides some benefit. For example, small units may be copyrightable, and therefore works, for purposes of copyrightability, but if each copyrightable unit were eligible for a separate award of statutory damages, existing problems with proportionality in statutory damages would become greatly magnified.

The authors set an ambitious task for themselves, and their ability to range across the copyright caselaw is impressive. Two caveats: As a reader, I would have liked to see greater engagement with Justin Hughes’s article on microworks,4 which the authors acknowledge that they are building on, particularly when considering the available framing tests. I also think that the framing choices they discuss in the subsections on fair use, statutory damages, and Section 201(c) are qualitatively related but distinct, because the decisions in the cases discussed there turned more on choosing which work – the collective work or the contribution to the collective work – to focus on, rather than on whether to zoom in or out on aspects of the work, as in the substantial similarity or useful article cases. That said, Kaminski and Rub make a persuasive case that courts should more explicitly identify and justify their framing choices about the relevant work(s) when applying the wide range of doctrines tied to the “work.”

  1. 802 F.3d 1012 (9th Cir. 2015).
  2. 137 S. Ct. 1002 (2017).
  3. Gamma Audio & Video, Inc. v. Ean-Chea, 11 F.3d 1106, 1116 (1st Cir. 1993).
  4. Justin Hughes, Size Matters (or Should) in Copyright Law, 74 Fordham L. Rev. 575 (2005).
Cite as: Michael W. Carroll, Does Copyright Have a Framing Problem?, JOTWELL (May 29, 2018) (reviewing Margot E. Kaminski & Guy A. Rub, Copyright’s Framing Problem, 64 UCLA L. Rev. 1102 (2017)), https://ip.jotwell.com/does-copyright-have-a-framing-problem/.

When Social Norms for Innovation Don’t Maximize Social Welfare

Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018).

For more than a decade, intellectual property scholars have demonstrated how informal social norms can efficiently substitute for formal legal protections to promote creativity and innovation. Now come Stephanie Bair and Laura Pedraza-Fariña to show that social norms can have a less happy effect. In some instances, they argue, informal social norms may impede innovation. Their forthcoming article, Anti-Innovation Norms, explores a selection of these and what might be done about them.

According to the standard economic story, IP rights exist to solve a public goods problem. In the absence of IP rights, rampant copying would reduce the ability of creators to recoup their investments, and society would face underinvestment in new works and inventions. IP rights solve this problem by enabling creators to charge supramarginal prices for those works and inventions. Although IP rights are costly, they produce net benefits by encouraging creativity that we wouldn’t otherwise have.

Yet a still-expanding list of studies has shown that in many industries, including fashion design, cooking, stand-up comedy, and tattooing, the lack of formal IP protection has not killed creativity. In these and other fields, robust social norms that promote creativity and attribution and that shun certain kinds of copying can replace formal legal protection more flexibly and at lower social cost. Norms can make free riding costly, and they support those engaged in creative work.

Professors Bair and Pedraza-Fariña do not gainsay the importance of social norms to limit the effects of free riding in many low-IP fields, but they suggest that scholars’ focus on free riding has caused them to miss other, less salutary, social norms that affect innovation. In particular, Professors Bair and Pedraza-Fariña demonstrate the existence of a group of norms that undermine an especially valuable source of innovation—disciplinary boundary crossing. While many of the most important innovations in science, technology, and the arts come from the merging of insights from previously unrelated fields, creators can face substantial disincentives to engage in boundary-crossing work.

Professors Bair and Pedraza-Fariña have, themselves, not been dissuaded from crossing disciplinary boundaries to substantiate their account. They turn first to sociological studies of science and the arts in order to identify three anti-innovation norms: research priority norms, methodology norms, and evaluation norms. These norms tell people within a given field (1) to work on projects that the field deems valuable, (2) to apply methods deemed acceptable to those projects, and (3) to judge the value of contributions to the field on terms that the field prefers. Drawing on examples from physics, medicine, and contemporary visual art, the authors demonstrate how anti-innovation norms create field-specific professional and reputational costs that discourage creators from pursuing potentially more valuable opportunities at or beyond the borders of their fields.

Continuing their own boundary crossing, Professors Bair and Pedraza-Fariña next turn to social psychology to explain possible sources of anti-innovation norms. They focus on two aspects of individual decision-making, the status quo bias and the conformity bias, as the likely culprits. For a variety of reasons, people tend to be strongly attached to the ways things are now, and it can be surprisingly difficult to get people to switch away from their present status. In addition, people generally like to conform their behavior to those in their social groups. Conformity aids group membership and solidarity and the benefits they confer. The authors speculate that these two tendencies can make it harder for individual creators to break away from accepted paths and procedures and to risk forging their own identities.

Having established the existence of anti-innovation norms, Professors Bair and Pedraza-Fariña return to their disciplinary home to consider whether law can do anything to solve the problems they’ve articulated. In large part, they are skeptical of IP law’s ability to particularly encourage boundary-crossing innovation, because copyright and patent tend to reward innovations without reference to whether they combine diverse insights or not. They argue that IP’s reliance on decentralized market signals, which is normally considered a feature, can also be a bug when those signals are filtered through a norm-inflected view of the technological or artistic environment. Although the authors suggest some tweaks to IP doctrine, their policy proposals focus instead on using grant funding and tax incentives to encourage and reward boundary-crossing creativity.

Professors Bair and Pedraza-Fariña have helpfully reminded IP scholars that social norms are not always a good thing for creativity. In addition to the norms against boundary crossing, other norms may discourage certain kinds of creators or certain kinds of creativity from emerging. As we develop a richer view of the sociology and psychology of innovation, we should pay attention to the costs of informal ordering as well as to its benefits.

Cite as: Christopher J. Buccafusco, When Social Norms for Innovation Don’t Maximize Social Welfare, JOTWELL (April 30, 2018) (reviewing Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018)), https://ip.jotwell.com/when-social-norms-for-innovation-dont-maximize-social-welfare/.

Designing Design Patent Subject Matter

Sarah Burstein, The Article of Manufacture in 1877, 32 Berkeley Tech. L.J. 1 (2017).

Designs for articles of manufacture are eligible for design patent protection under section 171 of the Patent Act if they are “new, original and ornamental.” As is clear on the face of this provision, design patent subject matter is limited to designs for “articles of manufacture”—other types of design, including architectural design, would seem to be excluded. But what exactly is included? What is a “design for an article of manufacture”? In an important new article, The Article of Manufacture in 1877, Sarah Burstein provides critical historical context on that question.

Courts and the Patent Office have in recent years given “article of manufacture” a very broad interpretation. They have, for instance, accepted as design patent subject matter the design of a spray pattern for a water fountain, the design of the shank portion of a drill bit, and graphical user interface (GUI) designs (so long as they are depicted in a little rectangle to indicate—nominally—that they show up on a screen). As a subject matter threshold, then, the article of manufacture requirement has in recent years excluded very little from design patent protection.

But the Supreme Court’s decision in Apple v. Samsung made the relevant article of manufacture central to determining damages, putting new pressure on design patent law’s understanding of that concept. The issue in Apple v. Samsung was the appropriate way to calculate damages in a case involving a “partial design”—that is, a case in which the design patent claimed only part of the article to which it was applied. Design patent has a special damages rule under which an infringer is “liable to the owner to the extent of his total profit, but not less than $250.” 35 U.S.C. § 289. Congress added that rule to the Patent Act after the Supreme Court’s decision in Dobson v. Hartford Carpet Co., 114 U.S. 439 (1885), in which the Court refused to allow the owner of a patented carpet design to recover the full profits from the sale of infringing rugs. According to the Court, profits attributable to the design had to be distinguished from those attributable to the “intrinsic merits of quality and structure” of the article. Thus, in order to recover the entire profits from the sale of articles bearing the patented design, those articles must have sold for higher prices than those without the design. Dobson, 114 U.S. at 444.

Since there is little dispute that Congress intended § 289 as a rejection of the result in Dobson, courts have long understood that section to prohibit apportionment of damages. In the view of the Federal Circuit, that meant that damages in a design patent case were calculated based on sales of the entire commercial product, even if the design patent at issue claimed only a small part of the design of that product (say, the rounded corners of a device, to pick a not-so-random example). But as the Supreme Court seemed to understand, that approach implicitly attributes zero value to the function of the device or any of its other features, a consequence that seems indefensible, particularly in the context of something so complex as a smartphone. Indeed, applying its “no apportionment” rule, the Federal Circuit upheld a jury award of Samsung’s entire profit on the sale of the infringing smartphones, even though those devices employed hundreds or thousands of other patented and unpatented components.

The Supreme Court purported to solve this problem by focusing on the meaning of “article of manufacture.” While the statute requires courts to award the defendant’s “total profits,” those profits must be determined in relation to the relevant article of manufacture to which the patented design was applied. Crucially, and contrary to the Federal Circuit’s assumption, the article of manufacture need not be the entire commercial product sold to consumers; it might be something less than that. So far, so good. Unfortunately, the Court gave no guidance as to when courts should treat the relevant article of manufacture as the commercial product or as some smaller unit, nor did it explain how one should identify the relevant article if it is less than the product as a whole.

It seems likely, however, that the claim will prove important here: Because a design must be for an article of manufacture, the article of manufacture will be defined in relation to the claimed aspects of the broader article shown in the drawings. So understood, “partial design” is a misnomer—patents that claim only a component of the article are not partial designs of an article; instead the article is, at least in some cases, defined by reference to the claim.

If it can be operationalized, the Court’s reconceptualization of the article of manufacture may prove an elegant solution to the remedies problem. But as Burstein shows in her article, if history is our guide to the proper definition of “article of manufacture,” there is more the Court could have done here. For while it’s true that not all articles of manufacture are complete commercial products, neither should all components be considered articles of manufacture.

Surveying early case law, Burstein concludes that “article of manufacture” was a term of art that did not simply refer to any article that was manufactured. (P. 42.) “Instead, it referred to a tangible item—other than a machine or a composition of matter—with a unitary structure made by humans and complete in itself for use or for sale.” (P. 43.) That definition didn’t exclude all components; indeed, several of the cases Burstein cites involve design patents on parts. But only those components that were manufactured separately and identifiable as discrete parts (even if ultimately combined into a bigger commercial product) counted.

That’s obviously not a self-applying rule. Among other things, its workability depends on courts’ ability to identify the claimed design as a relevant part of a product and not just as a feature of the product. That may sometimes be quite difficult. Still, conceiving of the article of manufacture in this way frames the inquiry very differently, and the implications go far beyond remedies. As Burstein highlights, the remedies problem is illustrative of a much bigger issue: Because the PTO and the courts have paid so little attention to what should count as a relevant article of manufacture, they have extended design patent protection to a number of designs that are subject to the Apple v. Samsung problem. Rather than solving it at the remedies stage, perhaps the better approach is to reconsider the scope of design patent subject matter.

Rethinking subject matter would go a long way toward addressing a number of other problems in design patent law. Accepting that one can patent small portions of the design of an article has made claiming more mysterious. That, in turn, has made it difficult for courts to compare claimed designs to the prior art to assess novelty and obviousness. It has exacerbated the difficulties courts face in giving a valid design its proper scope. Burstein acknowledges that there may be reasons to depart to some degree from the 1877 approach, particularly in light of the range of designs now impacted by design patent. But we can do so, she argues, without surrendering the idea that the article of manufacture has limiting content—the “article of manufacture” cannot just be whatever the claimant suggests it is. That’s a lesson that could profoundly affect all of our thinking about design patents for years to come.

Cite as: Mark McKenna, Designing Design Patent Subject Matter, JOTWELL (March 22, 2018) (reviewing Sarah Burstein, The Article of Manufacture in 1877, 32 Berkeley Tech. L.J. 1 (2017)), https://ip.jotwell.com/designing-design-patent-subject-matter/.

What Is the Payoff from Public R&D Investments?

Pierre Azoulay, Joshua S. Graff Zivin, Danielle Li & Bhaven N. Sampat, Public R&D Investments and Private-Sector Patenting: Evidence from NIH Funding Rules (revised 2017), available at NBER.

Intellectual property scholars have increasingly recognized that IP is only one of many legal tools for incentivizing innovation. In addition to facilitating transfers from consumers to innovators through a “shadow tax” on IP-protected goods, the U.S. government also directly rewards innovators with public funds through R&D tax incentives, innovation prizes, and—most significantly—well over $100 billion per year in direct federal and state R&D support. This direct public science funding, which primarily goes to grants and national laboratories, has long been viewed as important to U.S. economic growth. But federal R&D spending has been declining and is the subject of an increasingly partisan divide, making this a key moment to ask: What is the public’s return on this investment?

In an outstanding empirical analysis, corresponding author Danielle Li at MIT Sloan and her three coauthors—Pierre Azoulay at Sloan, Joshua Graff Zivin at UC San Diego Economics and Public Policy, and Bhaven Sampat at Columbia Public Health (collectively, “AGL&S”)—have tackled this question for grant funding by the U.S. National Institutes of Health (NIH). With a budget of over $30 billion per year for biomedical research, the NIH is the single largest research funder in the world. But assessing the causal impact of this investment is difficult, even when focusing only on its effect on private-sector patents. How can one measure information flows in fields full of serendipity and spillovers? Are grants merely correlated with private-sector advances? Does public funding “crowd out” private investment? AGL&S’s empirical design makes progress on each of these issues, and they conclude that each $10 million in NIH funding in fact generates 2.7 additional private-sector patents.

Prior work has studied outcomes in specific areas of NIH grant funding (which misses advances in different fields) or citations to university patents (which misses grant-funded academic results that are not patented). Instead, AGL&S look to publications resulting from these grants, and then to patents that cite those publications; such patent-to-paper citations have been validated against survey results as a better measure of the influence of public-sector research than patent-to-patent citations. Out of the 153,076 NIH grants funded between 1980 and 2005, about 40% led to a publication cited by a patent; and out of the 232,276 private-sector life-science patents issued by the USPTO between 1980 and 2012, about 40% directly cited an NIH-funded publication.
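
To make the measurement strategy concrete, here is a minimal Python sketch of the grant-to-publication-to-patent linkage, using toy tables and hypothetical identifiers rather than the authors’ actual data; it simply shows how the two 40% shares reported above would be computed once the links are in hand.

```python
import pandas as pd

# Toy stand-ins for the real NIH, PubMed, and USPTO linkages (all IDs hypothetical).
grants = pd.DataFrame({"grant_id": ["G1", "G2", "G3"]})
pubs = pd.DataFrame({"pub_id": ["P1", "P2"], "grant_id": ["G1", "G2"]})    # papers acknowledging a grant
patents = pd.DataFrame({"patent_id": ["U1", "U2", "U3", "U4"]})            # private-sector life-science patents
cites = pd.DataFrame({"patent_id": ["U1", "U2"], "pub_id": ["P1", "P1"]})  # patent-to-paper citations

# Link grants -> acknowledged publications -> patents citing those publications.
linked = grants.merge(pubs, on="grant_id").merge(cites, on="pub_id")

# Share of grants with at least one publication cited by a patent.
share_grants = linked["grant_id"].nunique() / len(grants)

# Share of patents that directly cite an NIH-funded publication.
share_patents = cites["patent_id"].nunique() / len(patents)

print(share_grants, share_patents)  # one-third and one-half in this toy example
```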

A second empirical challenge is that public funding may target the most promising research areas, so increased grant funding might simply be correlated with increased patenting in that area rather than causing it. AGL&S take on this problem in two ways. First, they classify grants by disease (e.g., cancer), scientific approach (e.g., cell signaling), and time (e.g., 1990), allowing them to include fixed effects to account for these potential sources of endogeneity in funding. Second, they find effectively random variation in funding at the disease/science/time level based on grant applications whose review scores were just above or just below the NIH’s funding cutoffs, and they use the “windfall” funding received by a disease/science/time category above the cutoffs as an instrument for total funding. Under both approaches, they find that additional funding increases the number of patents that cite NIH-funded publications.
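
For readers curious about the shape of this identification strategy, the following Python sketch runs a two-stage least squares regression on simulated data, with disease/science/year fixed effects and windfall funding as the instrument. All variable names and numbers are invented; this illustrates the design described above, not the authors’ actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel at the disease/science/time level (all values hypothetical).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "disease": rng.choice(["cancer", "cardio", "neuro"], n),
    "science": rng.choice(["cell_signaling", "genomics"], n),
    "year": rng.choice([1990, 1991, 1992, 1993, 1994], n),
    "windfall": rng.normal(size=n),  # funding driven by applications just above the cutoff
})
df["funding"] = 1.0 + 0.8 * df["windfall"] + rng.normal(size=n)   # total funding in the cell
df["patents"] = 2.0 + 0.27 * df["funding"] + rng.normal(size=n)   # linked private-sector patents

# First stage: total funding on windfall funding plus fixed effects.
first = smf.ols("funding ~ windfall + C(disease) + C(science) + C(year)", df).fit()
df["funding_hat"] = first.fittedvalues

# Second stage: patents on instrumented funding plus the same fixed effects.
# (Manual two-stage OLS recovers the IV point estimate; a dedicated 2SLS routine
# would be needed for correct standard errors.)
second = smf.ols("patents ~ funding_hat + C(disease) + C(science) + C(year)", df).fit()
print(second.params["funding_hat"])  # should land near the simulated coefficient of 0.27
```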

The third challenge noted above is that grant-funded research might crowd out private-sector funding, such that the increase in private-sector follow-on patenting is accompanied by a decrease in patents that do not rely on NIH-funded results. To be clear, substitution of patents with grants is not necessarily problematic from an economic perspective: both are costly, and neither is optimal in all circumstances. But AGL&S show that grant funding spurs rather than substitutes for private-sector R&D. To study this effect, they use a separate dataset of patents that cite publications related to NIH-funded publications (using a keyword-based similarity measure). The authors argue that if NIH funding merely crowded out private research, one would not expect the award of an NIH grant to increase the number of patents in a research area; instead, they find an increase here as well.
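
The “related publications” step depends on some measure of textual similarity. As a purely illustrative sketch (it is not the authors’ measure, just the general idea), one could flag two papers as related when their keyword sets overlap heavily:

```python
# Toy keyword-overlap measure; the keywords and the threshold are invented.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

nih_paper = {"angiogenesis", "tumor", "vegf"}          # keywords of an NIH-funded publication
candidate = {"vegf", "tumor", "antibody", "therapy"}   # keywords of another publication

similarity = jaccard(nih_paper, candidate)
related = similarity >= 0.3                            # arbitrary cutoff for calling papers "related"
print(similarity, related)                             # 0.4 True
```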

These results come with a number of caveats, the most important of which is that AGL&S focus only on private-sector patenting as an outcome, which ignores the many other benefits of public biomedical research. In the life sciences, more innovations are probably captured by the patent system than in other fields, but there are certainly medical innovations, such as ICU hygiene checklists, with enormous social benefits that cannot easily be captured through patent exclusivity. This problem would be exacerbated in other areas of research, which is one reason that it would be difficult to replicate AGL&S’s analysis for grants from other funding agencies.

Of course, even within the life sciences, turning this result into a rate of return is difficult. AGL&S’s back-of-the-envelope calculations lead to tentative estimates on the order of a $20–$30 million increase in firm market value for each $10 million NIH grant. The true social rate of return might be significantly higher (e.g., if social value is significantly higher than firm market value, or if firms often build on uncited knowledge) or lower (e.g., if patents frequently cite to papers that are not real intellectual influences). But even if one focuses only on the results that can be determined with more precision—the positive effect on private-sector patenting and the lack of measurable crowding out—the results in this paper are important data points for the increasingly polarized discussion of direct government R&D funding.
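
For a rough sense of the implied magnitudes, one can divide the quoted firm-value figures by the patent-count estimate; the arithmetic below is purely illustrative and does not appear in the paper.

```python
# Illustrative arithmetic from the figures quoted above (not a calculation from the paper).
patents_per_10m = 2.7                 # additional private-sector patents per $10M of NIH funding
value_low, value_high = 20e6, 30e6    # back-of-envelope firm-value gain per $10M grant

per_patent_low = value_low / patents_per_10m
per_patent_high = value_high / patents_per_10m
print(f"${per_patent_low/1e6:.1f}M to ${per_patent_high/1e6:.1f}M of firm value per induced patent")
# -> roughly $7.4M to $11.1M, before any of the adjustments discussed above
```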

Cite as: Lisa Larrimore Ouellette, What Is the Payoff from Public R&D Investments?, JOTWELL (February 28, 2018) (reviewing Pierre Azoulay, Joshua S. Graff Zivin, Danielle Li & Bhaven N. Sampat, Public R&D Investments and Private-Sector Patenting: Evidence from NIH Funding Rules (revised 2017), available at NBER), https://ip.jotwell.com/payoff-public-rd-investments/.

Will Algorithms Take the Fairness Out of Fair Use?

Dan L. Burk, Algorithmic Fair Use, U. Chi. L. Rev. (forthcoming), available at SSRN.

As part of an invited symposium, organized by The University of Chicago Law Review, on whether artificial intelligence will spell the end of one-size-fits-all laws, Dan Burk has written a terrific essay explaining why he’s skeptical that AI or machine learning will lead to better copyright fair use decisions. In the essay, Algorithmic Fair Use, Professor Burk identifies three main bases for his concerns.

First, Professor Burk is skeptical that American fair use law, which is articulated as a relatively open-textured standard (as compared with U.K.-style “fair dealing” provisions that set out a laundry list of tightly specified circumstances in which portions of a copyrighted work may be used without permission), could ever be reproduced with much fidelity as a set of software rules. The resistance of American fair use to ruleification, and therefore to automation, runs deep – not least because the content of any fair use rule depends upon antecedent considerations that are themselves resistant to distillation into rules:

Determining the impact of the unauthorized use of a work on the actual or potential market for the underlying work requires a model of the market and decisions about the data that properly populate that model. The amount of the work used can be mapped to the percentage of lines or words or pixels or bits taken for a given use, but some weight or significance must be accorded that number, whether defined by explicit programming values or by algorithmically learned data patterns. The type of work used and the use to which the protected taking is put require some categorization of works and uses. These and a multitude of other design choices made in advance would determine the allowance or disallowance of uses for protected content; algorithms do not make judgments; they are rather the products of human judgment.
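
To see how many antecedent design choices such a system would embed, consider a deliberately crude sketch of a “ruleified” fair use function. Nothing in it comes from Burk’s essay or from copyright doctrine; every feature, weight, and threshold is an invented assumption, and choosing them is exactly the kind of human judgment the passage above describes.

```python
# Toy "ruleified" fair use scorer; all weights and the cutoff are arbitrary design choices.
def fair_use_score(fraction_taken: float, transformative: bool,
                   published: bool, estimated_market_harm: float) -> bool:
    score = 0.0
    score += 0.4 if transformative else 0.0        # purpose and character of the use
    score += 0.1 if published else 0.0             # nature of the copyrighted work
    score += 0.3 * (1.0 - fraction_taken)          # amount and substantiality taken
    score += 0.2 * (1.0 - estimated_market_harm)   # market effect (itself a modeled guess)
    return score >= 0.5                            # arbitrary threshold for calling a use "fair"

print(fair_use_score(fraction_taken=0.05, transformative=True,
                     published=True, estimated_market_harm=0.1))  # True under these invented weights
```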

Second, and perhaps more importantly given the direction of technology at the moment, Professor Burk is skeptical of fair use automation through machine learning. Machine learning does not depend on ruleification but would instead seek to determine whether a use was fair by comparing it to patterns that correlate with uses judged to be fair within a large dataset of unauthorized uses. But a machine’s ability to produce relevant and reliable results through pattern matching presumes that the machine has been fed a dataset that is representative of the fair use determinations, and the facts underlying them, as they exist out in the world.
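
A machine learning version of the same task, in the spirit of the pattern matching Burk describes, might look like the toy sketch below: a classifier trained on past uses labeled fair or unfair, then asked to predict a new case. The features and labels are invented; Burk’s point is precisely that assembling a representative dataset of real determinations, and judging whether the learned correlations are meaningful, is the hard part.

```python
# Toy pattern-matching "fair use" classifier; features, labels, and data are all invented.
from sklearn.linear_model import LogisticRegression

# Each row: [fraction_taken, transformative (0/1), estimated_market_harm]
X_train = [[0.05, 1, 0.1], [0.90, 0, 0.8], [0.20, 1, 0.2], [0.75, 0, 0.9]]
y_train = [1, 0, 1, 0]  # 1 = previously judged fair, 0 = previously judged unfair

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[0.10, 1, 0.3]]))  # the model's guess for a new, unlabeled use
```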

Getting the dataset right, Professor Burk argues, is likely to be expensive and difficult. But the problem runs deeper than just cost or the technical difficulties of assembling a reliable dataset. The fundamental conceptual difficulty is that the output of a machine learning algorithm is just a correlation. It isn’t a judgment about whether that correlation is meaningful. In an entertaining but important aside, Professor Burk refers to a famous instance where data mining showed a strong correlation between movements in the S&P 500 stock index and the production of butter in Bangladesh. In that case, he notes, “a human decisionmaker is required to designate the trend as spurious rather than meaningful.” The same would be true of fair use determinations made by a machine learning algorithm – human intervention would be required to check that the machine’s output makes any sense outside the confines of the machine’s dataset.

Third, and finally, Professor Burk is wary of proposals to automate fair use because he fears that encoding fair use into the operation of machines will shape human expectations and behavior in ways that are difficult both to predict in advance and to contest ex post. He outlines this reservation in part by quoting from a video creator’s writings describing how Google’s Content ID system has shaped creativity on YouTube:

“You could make a video that meets the criteria for fair use, but YouTube could still take it down because of their internal system (Copyright ID) [sic] which analyzes and detects copyrighted material. So I learned to edit my way around that system. Nearly every stylistic decision you see about the channel — the length of the clips, the number of examples, which studios’ films we chose, the way narration and clip audio weave together, the reordering and flipping of shots, the remixing of 5.1 audio, the rhythm and pacing of the overall video — all of that was reverse engineered from YouTube’s Copyright ID. I spent about a week doing brute force trial-and-error. I would privately upload several different essay clips, then see which got flagged and which didn’t. This gave me a rough idea what the system could detect, and I edited the videos to avoid those potholes.”1

Of course, machines are not the only mechanism for shaping behavior. That’s what law does too; indeed, that is the very point of having laws. An advantage of the conventional legal system is that when laws and legal reasoning are more readily accessible and comprehensible, they are more easily contested. The inscrutable outputs of trade-secret-protected algorithms or invisible data sets, by contrast, are likely to obscure the ways in which law shapes behavior. In the end, Professor Burk is profoundly pessimistic: “[I]mplementation of algorithmic fair use,” he says, “will inevitably, and probably detrimentally, change the nature of fair use.”

I am not so sure that we know enough yet to judge whether Professor Burk’s intuition is right. It does seem likely that automation will create pressure to “ruleify” fair use, that is, to turn it into a more elaborated version of U.K.-style fair dealing. But what is our normative takeaway if that happens? Is ruleified fair use, where enforcement is done cheaply by machines, necessarily worse than our current fair use standard?

Current American fair use law is more flexible than any set of imaginable fair use rules, yet (in part because of that flexibility) enforcement is expensive and undertaken only on the comparatively rare occasions when a user has both the incentive and the means to engage in federal court litigation. Thus, fair use as we know it in the U.S. is flexible, but inaccessible.

Ruleified fair use administered by machines promises to solve the accessibility problem. But will that gain come only at the expense of a bowdlerized set of fair use rules? That depends in part on who would be making the rules that automation demands, and what the process looks like for creating new rules. Would the rule-maker be open to input from users as well as content owners? And would the rule-maker be obliged to periodically revisit the rules to make sure that new exceptions could be added as needed, and exceptions that had proved ill-advised removed?

These are among the important questions that Professor Burk’s provocative essay raises, and they should command the attention of the copyright academy in the years to come.

  1. Tony Zhou, Postmortem: Every Frame a Painting, Medium (Dec. 2, 2017), https://medium.com/@tonyszhou/postmortem-1b338537fabc.
Cite as: Christopher J. Sprigman, Will Algorithms Take the Fairness Out of Fair Use?, JOTWELL (January 15, 2018) (reviewing Dan L. Burk, Algorithmic Fair Use, U. Chi. L. Rev. (forthcoming), available at SSRN), https://ip.jotwell.com/will-algorithms-take-fairness-fair-use/.