The Journal of Things We Like (Lots)

A Classical Perspective on Information Ownership

Jeremy N. Sheff, Jefferson’s Taper (Feb. 11, 2019), available at SSRN.

It’s not news that normatively fraught debates in legal academia tend to become polarized and then stuck. Scholarship often tends to cohere around preexisting camps, causing debate to focus on which camp (and who within each camp) is right and to ignore the possibility that the available framings may have missed something important. In light of this, one of the most valuable and refreshing moves an article can make is to throw a bomb into the long-accepted binary of a given academic debate by suggesting an entirely new way of thinking about an issue. This is precisely what Jeremy Sheff does to the debate over foundational concepts of information ownership in his fascinating and provocative draft, Jefferson’s Taper.

Here’s the backstory: Some scholars favor a limited vision of information owners’ rights and tend to embrace what has become known as the utilitarian theory1 of copyright and patent. According to this view, property in creative expression or inventions is not rooted in any notion of “right” other than the state’s positive law. Rather, the state grants monopolies in information only because (and to the extent that) doing so is necessary to incentivize the creation of things that would earn no profits for their owners absent law’s imposition of exclusive rights. Other scholars prefer a more expansive vision of owners’ rights; these scholars tend to advocate an alternative view of copyright and patent rooted in the writings of John Locke. This approach locates a pre-political right to ideas in the labor expended in creating them and rejects the notion that copyright and patent are nothing more than state-created monopolies designed to calibrate the optimal level of creative and inventive production.

Adherents of each side in this debate have marshaled a wide variety of authorities, but none has been as influential as Thomas Jefferson’s letter to Isaac McPherson. In that letter, Jefferson uses the metaphor of lighting another’s candle to make a point about the non-rivalrous nature of intangible property: “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light, without darkening mine.” (P. 5.) He deployed this example to argue that “inventions … cannot, in nature, be a subject of property.” (P. 5.) The Parable of the Taper has become a shibboleth for those who prefer the low-protectionist “utilitarian” view of copyright and patent.

But what if Jefferson meant something else entirely by the Parable of the Taper? This is the tantalizing question that Sheff investigates. He begins by pointing out that in all likelihood Jefferson did not come up with the Parable on his own, but rather borrowed it from Roman statesman and philosopher Cicero, who used the same story with slightly different phrasing in his philosophical tract De Officiis. Did Jefferson’s uncited reference to De Officiis suggest that he shared Cicero’s basic approach to property? If so, that may reframe entirely the meaning of the Parable of the Taper, and indeed the common understanding of Jefferson’s position on how to regulate ideas.

We commonly assume that Jefferson was a scion of the Enlightenment and its modern assumption that all persons are created equal. (A proposition that Jefferson adhered to at best in theory, not in practice, given that he was a slaveowner.) But the pre-Enlightenment Classical tradition of property—which has its roots in Cicero, among other ancients—assumed instead that people are necessarily born unequal. Consequently, the challenge of law generally, and of property law in particular, is how to allocate rights among people in light of their inequality. Cicero’s view of property in particular was elitist and conservative. It accepted without question preexisting distributions of property and offered arguments about how and why these distributions should be preserved.

Sheff is careful not to argue that his discovery proves that Jefferson adhered wholesale to a pre-modern, Ciceronian worldview on property and equality generally. But he does imagine what it would mean to think about copyright and patent through this lens with provocative results. For one thing, in the passage from which Jefferson borrowed the parable, Cicero discusses the obligation of property owners to engage in acts of beneficence, at least toward those who merit such acts. The point of Cicero’s relating the parable is that he regards the duty of beneficence to be at its zenith when acting generously costs owners little or nothing, as when someone asks to light their lamp with yours. Sheff suggests that this could be read to mean that Jefferson’s view of copyright and patent included the conviction that owners of copyrights and patents had obligations to share the fruits of their intellectual labors with the public. This reading translates the deeply conservative Ciceronian view of property into one that is—in this application, at least—generous and public-spirited.

Sheff’s article is enlightening well beyond the ambitious thesis he seeks to advance. For one thing, his eloquent writing makes the seventy-seven pages of historical and philosophical exegesis read like a pleasure, not a slog. For those of us who know little of the Classical tradition of philosophy, Sheff’s article is a fascinating and useful primer that moves from a deep dive into Cicero to a tour through Aristotelian, Thomist, and Grotian views on property. One criterion for what makes an article worth reading is that in so doing, you learn something new and important. In this sense, Sheff’s work succeeds masterfully.

But how important was Sheff’s discovery? He makes a very strong case that Jefferson borrowed the Parable of the Taper from Cicero, but extrapolating from use of that one metaphor that Jefferson more generally embraced the Ciceronian worldview on property represents a fairly large conceptual leap. Sheff does not, for example, substantiate this suggestion by citing any other passages from Jefferson’s writing that embrace the Classical approach to property. And while I am no Jefferson scholar, I am fairly confident that there are indications that he instead embraced (again, with astonishing lack of awareness given his ownership of slaves) the modern Enlightenment view that all people are born equal.2

Yet this does not detract from the success of Sheff’s article in light of its major ambition: His piece is best viewed not as a claim about Jefferson’s own beliefs, but as an attempt to breathe life into the sclerotic debate in copyright and patent between low-protectionist utilitarians and high-protectionist Lockeans. In Jefferson’s Taper, Jeremy Sheff invites us to think more broadly about the range of philosophical traditions that may illuminate our understanding of owning ideas, and more generally serves as a reminder of the importance of bringing fresh perspectives to scholarly debates with long-fixed battle lines.

  1. Sheff is right that legal scholars refer to this theory using the term “utilitarian,” but this usage is not quite right. This theory of copyright and patent is consequentialist because it looks to outcomes: How well does a given regime of exclusive rights maximize creation and invention? But it is not utilitarian because it does not (necessarily) adopt a Benthamite greatest-good-for-the-greatest-number framework for evaluating the normative appeal of that regime.
  2. For example, the familiar language from the preamble to the Declaration of Independence that it is a “self-evident” truth that “all men are created equal.”
Cite as: David Fagundes, A Classical Perspective on Information Ownership, JOTWELL (August 9, 2019) (reviewing Jeremy N. Sheff, Jefferson’s Taper (Feb. 11, 2019), available at SSRN).

How Elite Lawyers Shape the Law

  • Paul R. Gugliuzza, The Supreme Court at the Bar of Patents, 95 Notre Dame L. Rev. __ (forthcoming, 2020), available at SSRN.
  • Paul R. Gugliuzza, Elite Patent Law, 104 Iowa L. Rev. __ (forthcoming, 2019), available at SSRN.

Christopher Langdell’s “case” method of teaching the law has dominated the law school classroom for over a century. In this pedagogical approach, students typically read appellate opinions, and professors tease “rules” from the opinions—often in concert with the so-called Socratic method, which enlists students to aid in this abstractive process. This approach is said to make students “think like lawyers,” but what’s typically ignored in the process is the role lawyers actually play in the very cases under consideration. Instead, the working assumption is that judges are presented with arguments and facts from on high by anonymous sets of ideal lawyers, who never miss a key argument or forget a relevant fact.

Of course, the actual world of lawyering is much messier, and lawyers range from the glorious and gifted to the struggling and essentially incompetent. But exactly how does this variation in attorney quality affect case outcomes? This all-too-important question has scarcely been addressed, much less answered, by systematic academic study. In an outstanding duo of articles, Paul Gugliuzza shines newfound light on the issue by examining the role of “elite” advocates in the certiorari process at the U.S. Supreme Court.

Unlike actual case outcomes, which are often a poor test for attorney quality because of endogeneity concerns (the best attorneys often take the hardest cases), selection effects, and the lack of any “natural experiment” comparing a before-and-after “treatment,” certiorari in patent cases is in my view quite a worthy domain in which to suss out the effects of attorney quality.

As Gugliuzza recounts in exhaustive and well-researched detail, there has been a major shift in the participation of “elite” attorneys in patent appeals, particularly at the Supreme Court. (By “elites,” Gugliuzza refers to those attorneys who presented oral argument in at least five cases in that term and the ten previous terms combined.) Barring other explanations—which Gugliuzza does a thorough job of eliminating—this sets up enough of a natural experiment to assess the causal role of elite attorneys in the fate of patent appeals, especially the grant (or denial) of cert petitions.

Notably, Gugliuzza finds that “the Supreme Court is 3.3 times more likely to grant cert. when a petition in a Federal Circuit patent case is filed by an elite advocate as compared to a non-elite.” (Supr. Ct., P. 34.) Specifically, while non-elite petitions are granted at a 4.7% rate, elite petitions are granted at a much higher 15.6% rate. Exactly how and why this occurs is complex. Part of the reason is that in cases handled by elites, large numbers of amicus briefs are filed at the cert stage, and the presence of those briefs is even more strongly correlated with cert grants than the presence of elites.
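
The headline multiplier follows directly from the two grant rates quoted above. A quick sanity check (the rates are Gugliuzza’s; the arithmetic is mine):

```python
# Sanity check of the cert-grant multiplier reported in the text.
non_elite_rate = 0.047  # grant rate for non-elite petitions
elite_rate = 0.156      # grant rate for elite petitions

multiplier = elite_rate / non_elite_rate
print(f"Elite petitions are granted {multiplier:.1f}x as often")  # → 3.3x
```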

Of course, it could be that elites tend to work on more important cases, and it is precisely those cases that garner more amicus briefs. But as Gugliuzza explains—and this aligns with my own experience—it is the network and know-how of elites that drive the amicus filings, creating a causal link between elites and cert grants. Also, many elites are known to the justices and clerks. And elites know how to craft briefs to increase the odds of a cert grant. Thus, even more so than Gugliuzza, I think it’s fairly clear that elites are a substantial causal factor in the Supreme Court’s renewed interest in patent law issues.

What’s more remarkable about Gugliuzza’s findings is that, in my view, they substantially understate the role “elites” are playing in patent cases at the Supreme Court, because Gugliuzza’s definition excludes attorneys who regularly draft briefs in (but do not argue) Supreme Court cases and also excludes well-known academics (since none has argued five cases), who have increasingly played a role at the certiorari stage in patent cases over the past ten years.

Gugliuzza plans to tease out some of these additional influences in a follow-on study, which I have no doubt will strongly support a causal link between elites and cert grants in patent cases. But where does all this leave us?

First and foremost, Gugliuzza’s study reminds us as law professors that attorneys really do matter and that we need to teach students as much, including the nitty gritty of why—not just in “skills” and “clinical” courses, but in “doctrinal” courses, too. It also opens the door for further empirical study on the role of attorney quality in outcomes (outside of mere win rates—which, as I noted above, is a difficult way to measure the effects of attorney quality) in many other areas of law.

Second, it raises important normative issues regarding the development of the law. As Gugliuzza rightly notes, elite advocates tend to have little training in science and technology, and instead are typically generalists. When both the advocates and judges are generalists in patent cases, this can lead to a “blind leading the blind” problem. As Justice Scalia aptly recognized in his Myriad opinion, he could not join certain portions of the majority opinion, stating “I am unable to affirm those details on my own knowledge or even my own belief.”1 Personally, I find it hard to believe that any justice in the majority had any scientific knowledge substantially greater than Justice Scalia’s. Indeed, Gugliuzza documents cause for concern because most of the Supreme Court decisions have been in areas that are basic enough for the justices to understand, like procedure or statutory interpretation, rather than core substantive issues of patent law. Even the substantive cases, like KSR, Myriad, Mayo, Alice, Global-Tech, and the like, present relatively simple sets of facts, which in essence means the Court has eschewed many doctrinal areas in need of resolution, such as enablement, written description, and complex obviousness doctrines.

At the same time, the elites arguably have stronger skills when it comes to law and policy than the usual patent litigator. Elites may help to correct for the occasional tunnel vision of patent litigators and, more importantly, of “specialized” Federal Circuit judges. This may help avoid court capture and pro-patent biases, which tend to serve the economic aims of the patent bar.

As Gugliuzza perceptively notes, perhaps it’s too early to answer the normative question. There are decent arguments on both sides of the fence. My own instincts are that generalist elites—in concert with the elites that make up the Supreme Court—are mucking up patent doctrine to the point that the system isn’t working as it should. Most problematic are generalist opinions, which often don’t provide sufficient guidance for innovators and potential infringers alike to order their business affairs. More generally, the Supreme Court has produced many opinions that have weakened patents (e.g., KSR, Alice, Mayo, eBay, Global-Tech, and TC Heartland), which, although not always intentional, is in my view the wrong policy choice.

In sum, I thoroughly enjoyed Gugliuzza’s insights on these important questions, and the more general question of the role of lawyers on the law, and I believe Gugliuzza’s articles and follow-on studies will surely play a critical role in resolving these thorny debates as the empirics continue to unfold.

  1. Ass’n of Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576, 596 (2013) (Scalia, J., concurring).
Cite as: Ted Sichelman, How Elite Lawyers Shape the Law, JOTWELL (July 26, 2019) (reviewing Paul R. Gugliuzza, The Supreme Court at the Bar of Patents, 95 Notre Dame L. Rev. __ (forthcoming, 2020), available at SSRN; Paul R. Gugliuzza, Elite Patent Law, 104 Iowa L. Rev. __ (forthcoming, 2019), available at SSRN).

Using Fair Use to Reduce Algorithmic Bias

Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579 (2018).

What is the relationship between copyright law and artificial intelligence or machine learning systems that produce outputs biased by race, gender, national origin, and related aspects of being human? That is the question that Amanda Levendowski investigates and addresses in her refreshingly well-written, to-the-point article How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem. In a nutshell, she argues that: (1) these systems need large quantities of training data to be effective; (2) those building these systems rely on biased data in part because of their own biases but also because of potential risks of copyright infringement; and (3) more copyrighted works can legally be included as training data under the fair use doctrine and should be so used to selectively diversify the inputs to these systems to de-bias their outputs.

Levendowski starts with the problem in the form of Google’s natural language processing system word2vec. It is a form of neural word embedding that analyzes the context in which words appear in the source texts to produce “vectors,” which indicate word associations such as “Beijing” is to “China” as “Warsaw” is to “Poland.” Trained by analyzing the published news sources incorporated into Google News, to which Google has obtained a copyright license, word2vec ingests the biases in those sources and spits out results like “man” is to “computer programmer” as “woman” is to “homemaker.” Levendowski acknowledges that those in the machine learning research community agree that this is a problem and are in search of a solution (including Google’s own researchers),1 but she responds that it should not be left only to developers at large technology companies with access to the training data to de-bias their own systems.
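
The analogy results described above fall out of simple vector arithmetic on the learned embeddings. A toy sketch, with invented two-dimensional vectors standing in for real word2vec embeddings (which have hundreds of dimensions learned from word co-occurrence in a corpus):

```python
# Toy illustration of how word embeddings encode analogies as vector offsets.
# These 2-D vectors are invented for illustration only.
import numpy as np

vectors = {
    "Beijing": np.array([1.0, 0.0]),
    "China":   np.array([1.0, 1.0]),
    "Warsaw":  np.array([2.0, 0.0]),
    "Poland":  np.array([2.0, 1.0]),
    "river":   np.array([5.0, 5.0]),  # unrelated distractor word
}

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the offset b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    # Nearest remaining word by cosine similarity, excluding the inputs.
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("Beijing", "China", "Warsaw"))  # → Poland
```

Because the capital-to-country offset is the same direction in this toy space, the arithmetic recovers the analogy; in a real model, biased training text shapes those offsets, which is how “man is to programmer as woman is to homemaker” emerges.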

Levendowski further asserts that copyright law stands as a potential barrier, or at least a perceived barrier, to outside researchers’ ability to investigate and report on bias in these systems. Copyright reinforces incumbents’ advantages in three ways. First, while reverse engineering of the algorithms is protected by fair use, accessing those algorithms, if they are subject to technological protection measures under 17 U.S.C. §1201, is limited to the narrower § 1201(f) exception or the right to circumvent that the First Amendment may provide.2 Second, if a biased system’s underlying training data is copyrighted, journalists and other investigators who seek to expose the sources of algorithmic bias are likely to be chilled by the prospect of an infringement suit. Finally, the leading artificial intelligence developers have significant resource advantages that allow them to acquire enormous training datasets by building them (Facebook) or buying them (IBM).

This competitive advantage leads newcomers to rely on what Levendowski terms “biased, low-friction data,” or BLFD; that is, data that are accessible and that carry little legal risk. (P. 589.) Here, her example is the 1.6 million emails among Enron employees made accessible by the Federal Energy Regulatory Commission in 2003. This is one of the only publicly accessible large datasets of interlinked emails. Although these emails are technically works of authorship protected by copyright, the legal risk that any of their authors would sue an AI researcher for using them is close to nil. But this is hardly a representative sample of people to study if one were training a system to extract generalizable rules about how human beings communicate by email. Other examples of BLFD that carry other forms of bias include public domain works published prior to 1923, which do not reflect modern language usage, and Wikipedia, which is legally low-risk because of its Creative Commons license but is a biased source of facts about the world because of the large gender imbalance among contributors. Levendowski argues that this imbalance biases the language used to describe women in many Wikipedia entries, and that the substance of those entries reflects male bias in terms of the subject matter covered and the subject matter omitted, such as key facts about women in biographical entries.

The article then argues that enlarging any of these datasets, specifically with diverse, copyrighted sources that are likely to mitigate or erase bias, is desirable and is legal as a fair use. Recognizing that access to these sources remains a challenge, Levendowski argues that the use of such sources, at least, is cleared by the fair use doctrine.

Here, I should disclose my bias. I have a forthcoming article that makes a related argument that copyright law permits the use of large sets of copyrighted works for text and data mining, so I am sympathetic to this article’s argument. Nonetheless, I think most readers will find that although the fair use analysis in this article is brief, perhaps too brief, it is supported by the case law and copyright policy.

The analysis argues that using copyrighted works as training data is a transformative use, and there is now substantial case law and scholarship that support this assertion. The use is for a different purpose than that for which the works were published, and the use adds something new through the system’s operation. The article then argues the second factor also favors the use because even creative works are being used for their “factual” nature; i.e., as examples of creative works by humans. Under the third factor, using the entirety of these works is necessary and appropriate for this purpose and has been approved in a number of cases involving computational processing of copyrighted works. Finally, under the fourth factor, even if some of the training data has been licensed in by current developers, the transformative purpose under the first factor overrides any negative impact that fair use may have on this market.

While this analysis is generally persuasive, I found this part of the article a little thin. I agree that a court would almost certainly characterize this use as transformative for the reasons stated. But, the second factor has traditionally been focused on how much expressive material is in the work being borrowed from rather than the borrower’s purpose. This move felt like giving the transformative purpose a second bite at the apple. While the second fair use factor does little work on its own, I think it is appropriate to consider as part of the balance how much original expression is at stake.

I will note that I wanted more discussion of the third and fourth factors. While it is easy to agree that use of entire works is likely to be permissible, the harder question is how much of that training data can be made publicly available under fair use by those seeking algorithmic accountability. I would have liked to know more about how and where Levendowski would draw this line. Similarly, the evidence of some licensing for this use needs more elaborate discussion. I agree that the transformative purpose is likely to insulate this use, and that this licensing market is really one for access to, rather than use of, the training data, which diminishes the impact under the fourth factor.3

With that said, I want to acknowledge the creativity of Levendowski’s thesis, and show appreciation for her clear, succinct presentation of the three stages of her analysis. This piece is a welcome contribution by an early-career researcher, and I look forward to reading her future work.

  1. Ben Packer et al., Text Embedding Models Contain Bias. Here’s Why That Matters, Google Developers Blog (Apr. 13, 2018).
  2. See Universal City Studios, Inc. v. Corley, 273 F.3d 429 (2d Cir. 2001) (recognizing that § 1201 can be applied in a way that burdens speech and is subject to intermediate scrutiny when it does so).
  3. Here I want to recognize the argument advanced by Ben Sobel on the fourth fair use factor. He argues that, at least when creative works are used to train systems designed to create competing creative works, the fourth fair use factor should weigh against such use. See Benjamin L.W. Sobel, Artificial Intelligence’s Fair Use Crisis, 41 Colum. J.L. & Arts 45, 75-79 (2017). It is a creative argument by which I am not persuaded because fair use should not be the policy finger in the dike holding back automated creation in my view. But, I found his arguments about the ways in which machine learning systems may require more nuanced fair use analysis to be well made.
Cite as: Michael W. Carroll, Using Fair Use to Reduce Algorithmic Bias, JOTWELL (June 28, 2019) (reviewing Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579 (2018)).

Personalizing Copyright Law Using Consumer Demographics

Adi Libson & Gideon Parchomovsky, Toward the Personalization of Copyright Law, 86 U. Chi. L. Rev. 527 (2019).

Most people assume, if implicitly, that there is a substantial element of uniformity in our IP system. At first blush, our copyright and patent laws extend a (presumably) uniform set of rights to (presumably) uniform authors and inventors, who can then sue (presumably) uniform unauthorized users. Scholarship has for some time noted that the bundle of rights is not actually uniform, and has theorized about the optimal tailoring of rights to particular industries and subject matters. More recently, the literature has started to unpack the implicit assumption of creator uniformity using data on the demographics of authors and inventors. Statistically speaking, the data show that creators of different races, genders, and ages diverge in the rate and direction of their creative efforts. In this new and exciting article, Libson and Parchomovsky begin to unpack the assumption of user uniformity using user demographics.

Legal enforcement of copyrights entails benefits and costs. On the benefit side, it provides authors with an incentive to create by securing to them the exclusive exploitation of their works. On the cost side, it reduces access to creative works by endowing the author with monopoly-like power. Optimally, copyrights would be enforced only against high-value consumers (thus serving the incentive rationale), but not against those with valuations lower than the market price (thus serving the access rationale). In theory, allowing free access to those who cannot afford the market price would be efficient, as it would give them access without sacrificing the author’s incentive. In practice, however, this cannot be done, because many who are willing and able to pay would masquerade as those who are not, and authors have no crystal ball to reveal consumer valuation. Copyright enforcement thus ensures that those who can pay do, with the access cost borne as a necessary evil.

Not necessarily so anymore, say Libson and Parchomovsky. Using data on the demographics of consumers of audio and video content, they show that certain cross-sections of users never enter the market. With regard to these users, it does not make much sense to enforce copyright law harshly. Rather, treating infringement by these users leniently would increase access to content without sacrificing incentives for the author, namely without the risk that otherwise-paying users would masquerade as low-value ones.

To illustrate how this can be done, Libson and Parchomovsky use two data sets. First, they use data from the Consumer Expenditure Survey of the Bureau of Labor Statistics, which gives a general view of household consumption patterns. For example, they note that average household spending on online audio and video consumption varies considerably with household demographics, including income, age, race, education, marital status, and geographical location. Second, they use panel data on the online music and video purchases of over 80,000 households. Various household demographics correlate with purchase decisions, most prominently race and age. They report that about 1,500 of the 80,000 households did not buy music online and about 4,500 did not buy video.

Together, these datasets give a sense of certain user cross-sections that are highly unlikely ever to purchase copyrighted content. For example, none of the 176 households that are southern, without a college degree, aged 24 or younger, with income below $100,000, and not African American purchased copyrighted audio content online in 2016. Likewise, none of the 72 households that are southern, without a college degree, aged 29 or younger, with income below $100,000, and not African American purchased copyrighted video content online in 2016. Accordingly, under certain assumptions and caveats, the authors maintain that it would make sense to reduce the copyright liability of such households, or even exempt them from liability, because doing so would not disincentivize authors but would increase household—and so social—welfare.
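
Cross-sections of this kind can be found mechanically by grouping households into coarse demographic cells and flagging cells with zero purchases. A minimal sketch on invented records (the field names, thresholds, and data are hypothetical, not the authors’ dataset):

```python
# Hypothetical sketch of the cross-section query behind the authors' claim:
# find demographic cells in which no sampled household bought content online.
from collections import defaultdict

households = [
    {"region": "south", "college": False, "age": 23, "income": 45_000, "bought_audio": False},
    {"region": "south", "college": False, "age": 22, "income": 60_000, "bought_audio": False},
    {"region": "west",  "college": True,  "age": 31, "income": 90_000, "bought_audio": True},
    {"region": "south", "college": True,  "age": 40, "income": 120_000, "bought_audio": True},
]

def cell(h):
    # Discretize each household into a coarse demographic cell.
    return (h["region"], h["college"], h["age"] <= 24, h["income"] < 100_000)

purchases = defaultdict(list)
for h in households:
    purchases[cell(h)].append(h["bought_audio"])

# Cells where no sampled household purchased: candidates for lenient treatment.
zero_purchase = [c for c, buys in purchases.items() if not any(buys)]
print(zero_purchase)  # → [('south', False, True, True)]
```

With real panel data, the same grouping over finer cells (and with the sample sizes the authors report) yields the zero-purchase cross-sections described above.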

Libson and Parchomovsky present their data as a proof of concept and suggest that much more could be done to optimize copyright policy if and when better data become available. But even with their data, the authors spell out three policy implications: personalized consumption data can reduce the deadweight loss associated with copyright protection; copyright enforcement should be limited for consumer demographics that are unlikely to purchase content; and sanctions can be varied based on user characteristics. This paper thus makes a novel contribution on its own and opens the way for further empirical investigation of users in IP.

Cite as: Dotan Oliar, Personalizing Copyright Law Using Consumer Demographics, JOTWELL (May 30, 2019) (reviewing Adi Libson & Gideon Parchomovsky, Toward the Personalization of Copyright Law, 86 U. Chi. L. Rev. 527 (2019)).

Discovery, Cannibalization, or Both: How Digitization Affects Demand for Physical Copies

Abhishek Nagaraj & Imke Reimers, Digitization and the Demand for Physical Works: Evidence from the Google Books Project (2019), available at SSRN.

From 2004 until 2009, the Google Books Project (GBP) digitized thousands of books from the collection of Harvard University’s library and made them available online. According to Google and proponents of the GBP, digitization would introduce readers to books that they otherwise couldn’t find or obtain, increasing access to and interest in the digitized works. But according to some authors and publishers, the creation of free digital copies would usurp the demand for print copies, undermining an important industry. This dispute was at the heart of a decade of litigation over GBP’s legality. After all of that, who was right?

According to a recent empirical study by economists Abhishek Nagaraj and Imke Reimers, the answer is: both of them. The paper, Digitization and the Demand for Physical Works: Evidence from the Google Books Project, combines data from several sources to reveal some key features about the effects of digitization on dead-tree versions of books. The story they tell suggests that neither of the simple narratives is entirely correct.

Google worked with Harvard to scan books from 2004 to 2009, proceeding in a largely random fashion. The only limitation was that Google only scanned books that had been published prior to 1923, because these works were in the public domain and, thus, could be freely copied. Works published in 1923 or later might still be covered by copyright, so Google chose not to scan those initially. Nagaraj and Reimers obtained from Harvard the approximate dates on which the pre-1923 books were scanned.

Harvard also provided them with the number of times between 2003 and 2011 that a book was checked out of the library. Book loans serve as one of the ways in which consumer demand for books is supplied, so these data enabled the researchers to test whether digitization affected demand for printed versions of works. The researchers also obtained sales data for a sample of approximately 9,000 books that Google digitized, as well as data on the number of new editions of each of these books. With these data, Nagaraj and Reimers employ a difference-in-differences method to compare loans and sales of digitized books to those of non-digitized books, before and after the year in which books were digitized.
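
The difference-in-differences logic is simple enough to show in a few lines. A minimal sketch on invented numbers (not the authors’ data): the estimator nets out any market-wide trend by subtracting the control group’s change from the treated group’s change.

```python
# Difference-in-differences on invented numbers: the effect of digitization
# is the treated (digitized) books' change minus the control (non-digitized)
# books' change, which nets out trends affecting all books alike.
treated_before, treated_after = 10.0, 9.0    # e.g., mean annual loans, digitized books
control_before, control_after = 10.0, 10.5   # non-digitized books, same years

treated_change = treated_after - treated_before   # -1.0
control_change = control_after - control_before   # +0.5
did_estimate = treated_change - control_change    # -1.5

print(f"DiD estimate of digitization's effect on loans: {did_estimate:+.1f}")
```

Here the raw before-and-after drop for digitized books (-1.0) understates the effect, because loans of all books were drifting upward; the DiD estimate (-1.5) corrects for that drift.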

If the GBP’s opponents are correct, then digitization should lead to a decrease in loans and sales, as cheaper and more easily accessed digital versions substitute for physical copies of books, especially for consumers who prefer digital to physical copies. According to the substitution theory, consumers basically know which books they want, and if they can get them for free, they will. If GBP’s proponents are correct, by contrast, consumers do not always know which books they want or need, and finding those books can entail substantial search costs. Digitization reduces the costs of discovering books and will lead some consumers to demand physical copies of those books.

Nagaraj and Reimers find that digitization reduces the probability that a book will be borrowed from the library by 6.3%, reducing total library loans for digitized books by about 36%. Thus, some consumers who can get free and easy digital access choose it over physical access. The figures for market-wide book sales are, however, reversed. Digitization increases market-wide sales by about 35% and the probability of a book making at least one sale by 7.8%. Accordingly, some consumers are finding books they otherwise wouldn’t have and are purchasing physical copies of them.

To further explore these effects, Nagaraj and Reimers disaggregate the data into popular and less popular books, and here the effects are starker. For little-known works, digitization drastically decreases the costs of discovering new titles, and consumers purchase them at a 40% higher rate than non-digitized books. Discovery benefits trump substitution costs. But for popular works, where digitization does little to increase discovery of new works, sales drop by about 10%, suggesting substantial cannibalization.1

What do these findings mean for copyright law and policy? One implication is that substitution effects may not be that great for many works even when the whole work is available. Thus, the substitutionary effect of Google’s “snippet view,” which shows only about 20% of a work, should be much smaller still. Also, it’s important to realize that these data help prove that otherwise forgotten or “orphan” works still have substantial value, if only people can find them. Consumers were willing to pay for less popular works, once they discovered their existence.

Ultimately, however, because the data do not tell a simple story, they may not be able to move the legal debate much. The study confirms both publishers’ fears about the works they care the most about (popular works) and GBP’s proponents’ hopes about the works they care the most about (orphan works). One possibility, however, is that we may see a more sophisticated approach to licensing works for digitization. Publishers may be more willing to allow Google or others to digitize unpopular works cheaply or for free, while choosing to release popular titles only in full-price editions. This could provide the access that many people want to see while enabling publishers to stay in business.

  1. I find one of the authors’ robustness checks uncompelling. They consider the effect of digitization using the 1923 public domain date as a discontinuity to look for differences in loans and sales. The periods they consider for loans and sales are 2003-04 and 2010-11. They find that relative to 2003-04, post-1923 books were loaned more often in 2010-11, and that relative to the earlier period, post-1923 books were sold less often in the later period. I doubt this explanation, because the recession of 2008 occurred between the two periods, substantially decreasing consumers’ willingness to pay for works and, thus, increasing their willingness to borrow them. Because post-1923 books are more expensive than pre-1923 books, a change in consumer willingness to pay would produce exactly the results that the authors demonstrate.
Cite as: Christopher J. Buccafusco, Discovery, Cannibalization, or Both: How Digitization Affects Demand for Physical Copies, JOTWELL (April 16, 2019) (reviewing Abhishek Nagaraj & Imke Reimers, Digitization and the Demand for Physical Works: Evidence from the Google Books Project (2019), available at SSRN),

Should Patent Examiners Get More Time?

Michael D. Frakes & Melissa F. Wasserman, Irrational Ignorance at the Patent Office, 72 Vand. L. Rev. __ (forthcoming 2019), available at SSRN.

How much time should the U.S. Patent & Trademark Office (USPTO) spend evaluating a patent application? Patent examination is a massive business: the USPTO employs about 8,000 utility patent examiners who receive around 600,000 patent applications and approve around 300,000 patents each year. Examiners spend on average only 19 total hours throughout the prosecution of each application, including reading voluminous materials submitted by the applicant, searching for relevant prior art, writing rejections, and responding to multiple rounds of arguments from the applicant. Why not give examiners enough time for a more careful review with less likelihood of making a mistake?

In a highly cited 2001 article, Rational Ignorance at the Patent Office, Mark Lemley argued that it doesn’t make sense to invest more resources in examination: since only a minority of patents are licensed or litigated, thorough scrutiny should be saved for only those patents that turn out to be valuable. Lemley identified the key tradeoffs, but had only rough guesses for some of the relevant parameters. A fascinating new article suggests that some of those approximations were wrong. In Irrational Ignorance at the Patent Office, Michael Frakes and Melissa Wasserman draw on their extensive empirical research with application-level USPTO data to conclude that giving examiners more time likely would be cost-justified. To allow comparison with Lemley, they focused on doubling examination time. They estimated that this extra effort would cost $660 million per year (paid for by user fees), but would save over $900 million just from reduced patent prosecution and litigation costs.

Litigation savings depend on Frakes and Wasserman’s prior finding that time-crunched patent examiners make mistakes, and that they are more likely to erroneously allow an invalid patent than to reject a valid one. When examiners are promoted up a step on the USPTO pay scale, they suddenly receive less time per application. Frakes and Wasserman found that they manage the increased workload by spending less time searching prior art and granting more patents. Based on both subsequent U.S. challenges and comparisons with parallel applications at foreign patent offices, these extra patents seem to involve more mistakes. Patents rejected by time-crunched examiners, on the other hand, are no more likely to be appealed within the USPTO. Extrapolating from these results, Frakes and Wasserman estimate that doubling examination times would lead to roughly 80,000 fewer patents granted and 2,400 fewer patent/lawsuit pairs each year, translating to litigation savings above $490 million. Similar calculations suggest about 270 fewer instituted PTAB challenges, for an annual savings above $110 million.

These savings alone might not quite justify the $660 million price tag. But Frakes and Wasserman also suggest that giving examiners more time may lead to decreased prosecution costs for applicants. In a different earlier paper, they found that examiners often make rushed, low-quality rejections under time pressure near deadlines, which increases the number of rounds of review and the time the application is pending at the USPTO. Here, they predict that doubling examination time would be associated with 0.56 fewer office actions per application, translating to around $300 million per year in additional savings. (If this is right, should applicants be allowed to pay the USPTO for a more thorough initial examination?)
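A back-of-the-envelope restatement of the figures reported above (this is my own arithmetic summary, not the authors’ full model) shows how the pieces add up:

```python
# Approximate annual figures from Frakes and Wasserman, in millions of
# dollars, as reported in the review above.
cost_of_doubling = 660       # added examination cost, funded by user fees
litigation_savings = 490     # from ~2,400 fewer patent/lawsuit pairs
ptab_savings = 110           # from ~270 fewer instituted PTAB challenges
prosecution_savings = 300    # from ~0.56 fewer office actions per application

# Litigation and PTAB savings alone fall short of the cost...
ex_post_savings = litigation_savings + ptab_savings
# ...but adding prosecution savings pushes total benefits past it.
total_savings = ex_post_savings + prosecution_savings
net_benefit = total_savings - cost_of_doubling

print(f"Ex post savings alone: ${ex_post_savings}M vs. ${cost_of_doubling}M cost")
print(f"Total savings: ${total_savings}M; net benefit: ${net_benefit}M")
```

The intermediate sum ($600M) is why the litigation and PTAB savings alone "might not quite justify" the cost; the prosecution savings are what tip the estimate positive.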

As Frakes and Wasserman note, increasing examination time is even more likely to be justified under a correct application of cost-benefit analysis that accounts for the broader social costs of erroneously issued patents. Through the supracompetitive pricing they enable, patents impose costs on both end users and follow-on innovators. Patents that do not satisfy the legal standards of patent validity are less likely to have innovation incentive benefits that outweigh these costs. These costs are difficult to quantify (and are the subject of active study) but that does not mean the USPTO should ignore them.

To be clear, this doesn’t mean the USPTO should immediately double its workforce. There are a lot of assumptions built into Frakes and Wasserman’s estimates, including that the effects they observed from examiners before and after promotion are generalizable. Could the agency hire additional examiners of similar quality? How will recent changes in patent law and litigation practice affect the benefits of increasing examination time? Is it really true that increasing examination time leads to fewer office actions? On the cost side, the $660 million price tag for doubling examination time seems plausible based on examiner salaries and overhead expenses, but is significantly less than the nearly $3 billion the USPTO currently budgets for patent programs. Could greater efficiency be achieved without raising user fees, or is $660 million too low? Empiricists will surely quibble with many details of their methodological choices.

But an immediate doubling of the examiner corps isn’t Frakes and Wasserman’s goal. Despite remaining empirical uncertainties, they have produced the most evidence-based estimates to date of the tradeoffs between ex ante administrative screening and ex post review during litigation. The USPTO should take notice. Examination effort can be increased gradually: Frakes and Wasserman argue that increasing examination time is even more likely to be cost-justified if one focuses just on a marginal dollar for more examination. And there are open questions on the best way to spend this marginal dollar. Which examiners should get more time? Does investing more time up front on “compact prosecution” help? Could errors be reduced more through internal peer review? Peer review from outside experts? Technical experts within the agency to help with difficult cases?

Most importantly, any of these interventions should be implemented in a way that aids robust empirical evaluation. The USPTO has shown an encouraging willingness to experiment with pilot programs that might improve examination, but has not implemented them in ways that make it easy to evaluate their effectiveness, such as by randomizing over applicants who want to opt in to the programs. Rigorous pilot programs may be both financially and politically costly, but how much effort to spend on examination is a core question of patent policy with tremendous financial implications. And I’m sure the USPTO could easily find free help from academics—perhaps including Frakes and Wasserman—excited to help design and evaluate these initiatives.

Cite as: Lisa Larrimore Ouellette, Should Patent Examiners Get More Time?, JOTWELL (March 13, 2019) (reviewing Michael D. Frakes & Melissa F. Wasserman, Irrational Ignorance at the Patent Office, 72 Vand. L. Rev. __ (forthcoming 2019), available at SSRN),

What Copyright Might Teach Trade Secrecy

Joseph P. Fishman & Deepa Varadarajan, Similar Secrets, 167 U. Pa. L. Rev. __ (forthcoming 2019), available at SSRN.

When an employee has had lawful access to her firm’s trade secrets and later uses them when working for a new employer or when starting her own firm, the former employer may well sue her for trade secret misappropriation. Disputes in such cases routinely focus on identifying the secrets at issue, examining the process by which the alleged misappropriation occurred, and assessing what advantages the employee may have gotten from use of those secrets.

Should courts also consider how much similarity exists between the plaintiff’s and the defendant’s products, processes, or services? And should courts also consider whether the defendant’s new firm and the old firm directly compete or operate in different and arguably unforeseen markets? Similar Secrets says the answer to both questions should be yes. Its thesis is that defendants should not be liable for misappropriation of lawfully acquired trade secrets unless later-developed products or methods incorporate material elements from those secrets and use those elements in the same market in which the plaintiff firm competes, or in an adjacent market into which it is reasonably foreseeable that the plaintiff firm might enter.1

Two considerations seem to underlie the authors’ recommendations. The first is employee mobility: no one, the authors argue, should have to get a frontal lobotomy when they change jobs, especially in technical fields. Employees should be able to continue to use knowledge they acquired on the job when they move on. The second is that society will benefit if experienced employees can build on the knowledge they acquired in their previous jobs by developing new products in different market sectors.

Trade secrecy law, Fishman and Varadarajan argue, currently gives rightsholders more control over adaptive uses of their intellectual property than either copyright or patent law would do. Courts in copyright cases, for example, take into account how much similarity exists between the plaintiff’s work and the alleged infringer’s work, how transformative the second use was, and whether the adaptation operates in the same or reasonably foreseeable markets. The less similar the two works, the more transformative, and the more unforeseen or remote the second comer’s market segment, the more likely the adaptive use will be found noninfringing.

Patent law also allows more adaptive uses of earlier innovations than trade secrecy law does. Infringement cannot be found, for instance, unless all elements of the patent claims “read on” the alleged infringer’s product. Second comers are thus free to use some elements of the invention, though not all. Moreover, a second comer’s innovation that builds on a patented invention can itself be patented, and that later patent will prevent the owner of the underlying patent from exploiting the improvement without the later patentee’s permission. In addition, the reverse doctrine of equivalents means that a product that seemingly literally infringes a patent will be held noninfringing if it operates on a sufficiently different principle.

Fishman and Varadarajan would have trade secret adjudicators learn several other lessons from copyright. They recommend that courts in trade secret cases use a copyright-like methodology for judging misappropriation: first, a factual inquiry into whether the defendant used the plaintiff’s secret (akin to the probative similarity step in copyright to decide whether the defendant copied something from the plaintiff’s work), and second, a normative inquiry about whether the taking was sufficiently substantial to justify liability.

The authors would also borrow from copyright law the concept of filtering out unprotectable elements, as the Second Circuit directed in Computer Associates Int’l, Inc. v. Altai. A creative combination of public domain elements may infringe a copyright or constitute a protectable trade secret, but the larger the quantum of unprotectable elements, the less likely a court should find misappropriation, the authors argue. It is useful to recognize that trade secrets, like copyrights, may sometimes provide “thick” protection (e.g., the recipe for Coca-Cola) and sometimes “thin” protection (e.g., customer lists). A higher degree of similarity should be required if trade secrets, like some copyrights, are eligible for only “thin” protection.

Trade secrecy law might also usefully borrow from copyright the idea that a defendant’s intermediate uses of protected subject matter should be given little or no weight if the product the defendant ultimately ships does not make substantially similar use of the copyrighted work or trade secret.

As for foreseeability, Fishman and Varadarajan argue that migrating a secret from the plaintiff’s market to unforeseen or unforeseeable markets cannot harm the trade secret claimant. A plaintiff’s decision to invest in developing particular secrets was presumably based on its plans to exploit the secrets in foreseen markets. Use of the secrets (again, only those that were lawfully acquired) in unforeseen or remote markets won’t undermine the initial developer’s incentives to invest. Moreover, second comers also need incentives to invest in developing products in different markets that draw upon the secret but use it in different ways. The analogy is to copyright’s breathing space for transformative uses that operate in unforeseen markets. Similar Secrets offers examples of secrets that were reused in unforeseen markets that the authors think should be free from trade secrecy liability.

Fishman and Varadarajan suggest that the best way to implement their recommended adaptations of trade secrecy law would be for courts to consider the similarity and foreseeability factors in deliberations about whether misappropriation had occurred. But similarity and unforeseeability could also be affirmative defenses or affect remedies that should be imposed in trade secret cases. Legislation would not be needed to effectuate these changes.

While I will not hold my breath waiting for any of these changes to be adopted by courts in trade secret cases, the analysis in Similar Secrets was surprisingly persuasive. At least as a thought experiment, its thesis that trade secrecy law should not be in a bell jar, but open to learning useful lessons from other forms of IP law, struck me as sound. The adaptations recommended may not impact trade secrecy law as much as some practitioners might fear, as a great deal of misappropriation arises from wrongful acquisition or disclosure of trade secrets or from former employees or licensees who wrongfully use the secrets in direct competition with the trade secret claimant. But for the cases in which the adaptations the authors recommend might apply, why not inject more innovation policy considerations in trade secrecy misappropriation cases?

  1. Fishman and Varadarajan do not recommend consideration of these factors in cases involving unlawful acquisition or disclosure of trade secrets. But the recommendations may apply to former licensees as well as former employees.
Cite as: Pamela Samuelson, What Copyright Might Teach Trade Secrecy, JOTWELL (February 5, 2019) (reviewing Joseph P. Fishman & Deepa Varadarajan, Similar Secrets, 167 U. Pa. L. Rev. __ (forthcoming 2019), available at SSRN),

Lunney’s Paradox: More Copyright May Lead to Less Creativity

The title of Glynn Lunney’s new book, Copyright’s Excess, presents a puzzle for those of us who have reflected on the nature and function of copyright law. Copyright is typically justified as a system of incentives. By giving authors (and by contract, publishers) exclusive control over commercially significant reproductions and distributions of their artistic and literary works, copyright helps to ensure that the copyright owner, and not copyists, will capture any profits made from exploiting the work. Copyright, in short, is supposed to help authors make more money. And in a world that works as economists expect, more money for authors should lead to more creative output. The equation seems simple: more copyright = more art and culture. But is this true? Is it possible that copyright protections might backfire, and result in less creative output rather than more?

Lunney’s book offers considerable evidence that, at least in the music industry, it is at least an uncomfortably plausible possibility that more copyright has led to less creativity. Lunney, a lawyer-economist, takes a painstaking look at music sales data dating back to the early 1960s, including data tracking the rise of recorded-music revenues up to Napster’s debut in 1999, followed by their precipitous decline. The shock to music industry revenues caused by Napster (and what followed) provided the makings for a terrific natural experiment: did increased revenue from the early 1960s to 1999 lead to more new music? And did plummeting revenues after 1999 lead to a dearth of new music?

Assessing this natural experiment correctly presents Lunney with some serious complexities. One is that he must account for changes in the quality of recorded music produced over this long period, a task which requires an appraisal of how quality has varied over time. Lunney handles this task with some creative empirical and analytic strategies, the details of which I’ll omit here. Suffice to say that the quality adjustment methodology Lunney employs is, at least to my eye, the most contestable element of his project. But if the reader accepts what he’s done on that point, then his conclusions follow nicely, and they are beyond startling.

Despite a post-Napster decline in revenue approaching 80% (adjusted for inflation), the number of new albums released in the U.S. nearly doubled between 1999 (38,857) and 2012 (76,882). There is, moreover, no evidence of a decline in the quality of new music during that period. None of the various metrics Lunney uses for measuring quality show a positive correlation with revenue, and indeed for some of the measures, the correlation was statistically significant but negative – i.e., more money appears to have led to lower quality music.

What’s going on here? Lunney’s explanation is straightforward. Beyond a certain level of income, paying musicians more doesn’t make them work harder; it makes them work less. In other words, the most successful artists – at least in the music industry – are prone to satisfice as they get rich. Lunney presents evidence for this proposition. When revenues were high for the recording industry, as they were in the 1990s, top artists produced fewer studio albums and fewer Hot 100 hits in the first ten years of their career, compared with periods when music industry revenues were low (both in the 1960s before the advent of the federal sound recording copyright and in the online piracy era of the 2000s). During these low-revenue periods, top artists produced more studio albums and more Hot 100 hits.

Copyright may succeed in securing more money for top artists. But more money for those artists is not associated with more music. Nor does more money lead to better quality music. Indeed, there is evidence that the association, if there is one, is negative. This directly challenges the conventional incentive story that is said to justify strong copyright protection.

If Lunney is right, then it appears that strong copyright protections may get us less of precisely the music that fans find most attractive. Lunney admits that, at the same time, strong copyright laws that direct more money to artists are likely to attract more new musicians to enter the music industry. But Lunney argues, based again on data, that the benefits of this new entry are overwhelmed by the loss of productivity on the part of those satisficing popular artists.

The net result is that strong copyright makes a few star musicians richer, but results in a less creative music industry overall. And that last bit is crucial. Copyright is supposed to encourage more creativity. But what we may be getting – at least in the music industry – is exactly the opposite.

Copyright’s Excess is foundational work in the economics of intellectual property. It is also accessible to non-economists, and to those who don’t work in IP law and who haven’t thought much beyond the conventional wisdom. Lunney’s explication of the economic theory undergirding his insights is elegant, and persuasive. And the potential impact of the book is seismic. Lunney shows that copyright’s incentive effect – at least in the music industry – is complicated. Copyright does, in fact, draw new artists into the market. But it also suppresses the production of new works by established and popular artists. The result, at least by Lunney’s calculation, is a net negative for social welfare.

Lunney’s book should be widely read, especially by music industry insiders. But so far, there is little evidence that the industry is interested in grappling with Lunney’s work. That should not stop academics from closely examining what Lunney has done, both to critique it and, perhaps, to extend it.

The most obvious question that Lunney leaves unaddressed is whether the superstar-satisficing dynamic that he’s identified in music extends to other creative industries. Might we find a similar story of overpaid superstars in the movie industry? In publishing? In software? Those papers, and others, are waiting to be written.

Cite as: Christopher J. Sprigman, Lunney’s Paradox: More Copyright May Lead to Less Creativity, JOTWELL (January 18, 2019) (reviewing Glynn Lunney, Copyright's Excess: Money and Music in the US Recording Industry (2018)),

Copyright and the Single Work

Amy Adler, Why Art Does Not Need Copyright, 86 Geo. Wash. L. Rev. 313 (2018).

A photograph taken by Walker Evans is worth more—both in the market and in the eyes of critics—than a virtually identical photograph taken by an unknown artist, just as a handbag bearing the name Prada is worth more—both in the market and in the eyes of those who place social value on such goods—than a work of similar craftsmanship by an unknown manufacturer. In both cases, audiences are potentially responding to a number of factors: aesthetics, critical reception, reputational value, and the status conferred by scarcity.

Part of what enables audiences to evaluate these factors is a belief that the object has the genealogy it purports to have. We term this information state “authenticity,” a characterization that is capacious enough to embrace a number of modes of production. An authentic Warhol might have involved the work of several assistants, all making visible the concept formerly existing only in Warhol’s mind; an authentic designer bag might involve a certain amount of outsourcing to overseas fabricators. A Sol LeWitt wall drawing is the physical manifestation of a two-part work: the directions to accomplish LeWitt’s conception in each new space and a certificate of authenticity. The certificate of authenticity is key to the work’s existence; without it, as a lawsuit once claimed, the work essentially ceases to exist. Authenticity can even, in some instances, certify falseness, as when the U.S. Postal Inspection Service seized and then auctioned off 12,000 fake Salvador Dali works in 1995. When asked why a buyer would pay $4,000 for a known fake, the curator of the Salvador Dali Museum mused that the notoriety that the works had garnered as “authorized fake Dalis” upped the price buyers were willing to pay.

Given the role that authenticity plays in the art market in particular, Amy Adler contends that copyright law is beside the point.

The traditional story undergirding U.S. copyright law is that many creators are motivated to create by the expectation of revenue from selling copies of their work; without a right that allows them to restrict copying by others, this expectation will be considerably diminished. A number of scholars, Rebecca Tushnet and Jessica Silbey among them, have challenged the explanatory power of this narrative by highlighting the myriad ways in which creation occurs for reasons other than economic ones, including the simple joy of creative activity. Such literature often suggests that to the extent that copyright law is meant to incentivize creation, it does little work for these creators.

In her article Why Art Does Not Need Copyright, Professor Adler takes another approach to this argument, suggesting that while the typical economic story may be true for some creators—best-selling novelists or movie studios—it is decidedly not true for creators of visual or fine art. Such artists, Professor Adler notes, have little to no desire to sell multiple copies of their work, and the number of artists who can count on licensing their work for derivative uses (museum postcards or calendars, for example) is few. With no copies to be made, copyright law enters the picture only to the extent it is used as a tool to vindicate reputational and related interests (perhaps not quite as surprising a reason as the article suggests); to the extent unauthorized copies are a concern, the market’s desire for authenticity serves a policing role, thus obviating the need for copyright law. And because the presence of copyright law complicates and makes uncertain other beneficial uses of visual art in scholarship and educational texts, copyright law, “rather than being essential for [visual] art’s flourishing, actually impedes it.” (P. 322.) Professor Adler’s article thus joins a growing literature (including works by Chris Sprigman, Kal Raustiala, and Kevin Collins) that questions copyright law’s relevance not by highlighting the ways in which creators are motivated by noneconomic incentives but by bringing the audience into the equation. For audiences who desire scarce, unique, or customized works, the creator can, in at least some instances, build that desire into the purchase price, rendering copies (and thus copyright law) both unnecessary and undesirable.

Professor Adler acknowledges that her theory may not have completely squared-off edges—her definition of “visual art,” borrowed from the Visual Artists Rights Act, inherently contemplates artists producing works in single or limited editions and thus does not include the many artists who might well anticipate revenue from the sale of multiple copies. The photographer who offers multiple copies of photographs for sale at a gallery and the artist who segments the market by selling a watercolor at a higher price and prints of the work at a lower price may well need copyright to preserve their income. It is only a select few artists, moreover, who can attract the attention needed to have their reputation guarded by the art world; the Matthew effect can be as entrenched in that world as in any other environment. So it may not always be the case, as Professor Adler writes, that “the small minority of artists lucky enough to have a market for copies or derivative works are artists for whom the price of their original, unique works is so substantial that the value of their income from copyright will be trivial in comparison”—at least, if we take a broad view of the term “artist.” (P. 337.) But it is almost certainly true that for some artists-as-brand, “[a]rt’s market value, increasingly divorced from aesthetics, resides to a large extent in the identity and reputation of the artist to whom it is attributed.” (P. 347.) If that is the case, we might wonder whether the search for fakes aligns—as Professor Adler is suggesting—less with copyright law and more with trademark owners’ attempts to preserve authenticity (and, in the case of luxury goods, scarcity) on behalf of their customers.

We may never resolve the contested concept of authenticity in a world of Andy Warhol, Richard Prince, Sherrie Levine, and the Dafen Oil Painting Village; any concept that ultimately depends on the views of the audience is subject to continual reevaluation. A federal district court’s 1993 decision that a Calder mobile was authentic, to take but one example, was essentially overturned by the contrary conclusion of the market’s recognized expert, leaving the mobile without a buyer. This is not to say, however, that authenticity’s importance is thereby diminished. When, as Professor Adler notes, a Sherrie Levine photograph is aesthetically indistinguishable from the Walker Evans photograph on which it is based, the “work” in each case has been reduced to the statement of authenticity itself. What that says for the proper scope of copyright law, and the role that audiences should play in defining it, is the provocative question that Professor Adler asks us to ponder.

Cite as: Laura A. Heymann, Copyright and the Single Work, JOTWELL (November 28, 2018) (reviewing Amy Adler, Why Art Does Not Need Copyright, 86 Geo. Wash. L. Rev. 313 (2018)).

The Costs of Trademarking Dolls

Rebecca Curtin, Zombie Cinderella and the Undead Public Domain, 86 Tenn. L. Rev. __ (forthcoming 2018), available at SSRN.

Professor Curtin’s article, Zombie Cinderella and the Undead Public Domain, takes a recent case from the Trademark Trial and Appeal Board (TTAB) as the basis for an argument that trademark doctrine needs stronger protection against the exclusive commercial appropriation of characters that are in the public domain. In that case, a doll manufacturer sought to register the term “Zombie Cinderella” for a doll that was zombie-ish and princess-like. The examiner refused registration because the term “Zombie Cinderella” for this kind of doll was confusingly similar to the mark for Walt Disney’s Cinderella doll. Although the TTAB overturned the examiner’s “refusal to register” determination, it did so because it said Disney’s mark is a conceptually weak source indicator of “Disney” for dolls. This leaves open the possibility that Disney could build a stronger association between its mark and its dolls and eventually monopolize the term “Cinderella” as a mark for princess dolls. Professor Curtin’s article argues that leaving this opportunity open would be bad policy and should be precluded under a proper application of trademark law.

There are several aspects of this article that make it worth reading. First, it is a deep dive into a single case at the PTO, teaching readers about trademark registration and appeals from adverse rulings. The article reads as a compact case history from beginning to end. It appropriately balances the technical aspects of trademark practice with conceptual dimensions of trademark theory, such as aesthetic functionality (a doctrine that bars trademark protection for design and word elements that confer a non-reputation-related advantage on the trademark holder). Second, drawing from her scholarly expertise in literary history, Professor Curtin provides rich and appropriately expansive details about Cinderella’s provenance as a character dating from 1558. The cultural history alone makes this article a joy to read, as Professor Curtin traces the Cinderella character forward to the 1899 Georges Méliès film Cinderella and backward to a ninth-century Chinese folk tale. This sets up the issue of what Disney can properly claim to own of the character’s name (as a trademark for its own princess doll toy) after the 1950 release of its film Cinderella.

The central problems this article addresses are “overreaching trademark claims that quash expressive uses of public domain characters” and “the competitive needs of multiple producers to make reference to public domain characters in the names and designs of their products.” (P. 3.) Overreaching trademark claims undermine the law’s goal of promoting competition through the use of distinctive marks that designate the source of goods. Trademarks that monopolize valuable public domain elements and undermine the competitive ability of other manufacturers to name and describe their goods injure both consumers and competition. The argument that underlies Curtin’s description of this problem is that there is, and should be allowed to be, more than one “Cinderella doll.” Disney may make such a doll and brand it as Disney’s Cinderella doll, but other manufacturers may also have Cinderella dolls and call them, for example, Zombie Cinderella or Dolly Pockets Cinderella Doll.

Trademark law does not permit restriction of the making of the goods themselves. It restricts only the use of confusingly similar marks on goods. Indeed, trademark law intends to enable copying of goods – e.g., there are and should be many kinds of bottled waters with their negligibly perceptible differences in ingredients and taste. And yet on the shelves we can distinguish between Dasani, Poland Spring, and Fiji. Likewise, Curtin argues, there are and should be many kinds of Cinderella dolls. Just as the bottled-water product is the water itself, whether or not it is purified the same way or contains any (or the same) additives, the product here is the doll with characteristics from the Cinderella fairy tale: rags-to-riches details, princess-like dress, fairy godmother, glass slippers, etc. But if Disney owns the mark “Disney Cinderella” for dolls that refer to and look like the fairy tale Cinderella, and other manufacturers can make dolls that look like Cinderella but cannot call them Cinderella dolls because of the broad scope of Disney’s trademark, competitors are inhibited from competing in the marketplace for Cinderella-like dolls.

This central problem of assessing the plausible scope of Disney’s Cinderella mark for dolls leads to other challenges related to the technical aspects of trademark practice – e.g., how do you prove competitive need or injury to limit a trademark holder’s claim? This is a question of evidence and proof at the registration (and appeal) stage of trademark practice and also informs the complex (and confused) federal case law on aesthetic functionality.1 Professor Curtin criticizes the TTAB’s guidance that the commercial weakness of Disney’s Cinderella mark was all that stood in the way of successful opposition to Zombie Cinderella’s registration because it “sends the message that it is possible to appropriate the public domain character like Cinderella, even for products like dolls, if only your marketing and enforcement strategies are aggressive enough.” (P. 25.) Instead, the article urges movement in the other direction: application of the aesthetic functionality doctrine at the registration stage to limit the appropriation of public domain characters and features when their depiction is a useful product feature conferring a non-reputationally-related advantage (e.g., a non-trademark benefit) on the trademark holder. As Curtin writes, “If a doll can’t be called ‘Cinderella’ or incorporate a glass slipper into its packaging, it can’t compete effectively with Disney’s Cinderella dolls, because resonance with that fairy tale is a source of the attraction to the product, a source that Disney was free to exploit and that second (or third or fourth) comers should be equally free to exploit.” (P. 43.)

Allowing trademark holders to appropriate public domain characters poses problems beyond restricting competition. Curtin’s description is irresistible: “a trademark holder who is successful in altering the meaning of a character so its primary significance is source identification impoverishes the public domain of the character’s living meaning, rendering it an inanimate corpse. No one, not even the potential trademark holder really wants to fully replace the cultural meaning in a public domain character with the reputational meaning or source identification of the trademark holder.” (P. 34.) Once appropriated, the public domain character dies because it is rendered inert by the trademark, whose meaning must remain constant for it to function as the predictable source identifier of the product.

Professor Curtin carefully stays within trademark doctrine for most of the article, mentioning the rule against generic trademarks as a cousin to aesthetic functionality that some have argued can be helpfully applied to characters to limit trademark scope.2 In addition to aesthetic functionality, she might consider arguing more forcefully for trademark law’s disclaimer practice, which, as with marks that contain generic words, requires applicants to disclaim exclusive rights to the generic word except as used in a specific combination. But she might also draw helpful support from well-established copyright doctrines for the principle that the public domain must remain available to all comers. The scènes à faire doctrine requires stock characters and plots (e.g., a wicked witch and star-crossed lovers) to stay in the public domain as critical building blocks of cultural expression. And the separability doctrine applied to useful articles requires separating the useful features of the copyrighted work from the expressive ones (e.g., the useful cut of a dress from the design elements applied to dress fabric) for the purposes of protecting only that which is expressive and not useful to the article’s function.3 For an article about the difficulty of persuading courts of the commercial and cultural values of aesthetics, it seems worth emphasizing through comparative analysis that trademark law resembles other intellectual property regimes by relying on, and therefore protecting as a resource, the vibrancy of literary culture in the public domain. In other words, the doctrine of aesthetic functionality is more familiar, conventional, and fundamental than the scholarly and case-law debates over the doctrine would have us believe.

Recently, Professor Curtin has had the opportunity to test her analysis of aesthetic functionality as applied to literary characters in the public domain. For the sequel to Zombie Cinderella and the Undead Public Domain, see her opposition (filed along with legal clinic students at Suffolk University Law School) to United Trademark Holdings’ trademark application for Rapunzel for a line of dolls that depict the Rapunzel character.4

  1. See Justin Hughes, Cognitive and Aesthetic Functionality, 36 Cardozo L. Rev. 1227 (2015); Robert Bone, Trademark Functionality Reexamined, 7 J. Legal Analysis 183 (2015); Mark McKenna, (Dys)Functionality, 48 Houston L. Rev. 823 (2011-2012).
  2. Betsy Rosenblatt, Adventure of the Shrinking Public Domain, 86 U. Colo. L. Rev. 561 (2015).
  3. But see Star Athletica v. Varsity Brands. As the debate between the majority and dissenting opinions shows, separability is not always a self-evident exercise. Compare id., slip op. at 13-14, with Breyer dissent, slip op. at 3-5, 10-12.
  4. Maria Cramer, Rapunzel, Rapunzel, Let Down Your Trademark Restrictions, Boston Globe (June 5, 2018); Karen Katz, Loletta Darden & Rebecca Curtin, Rescuing Rapunzel, IPWatchdog (June 16, 2018).
Cite as: Jessica Silbey, The Costs of Trademarking Dolls, JOTWELL (November 6, 2018) (reviewing Rebecca Curtin, Zombie Cinderella and the Undead Public Domain, 86 Tenn. L. Rev. __ (forthcoming 2018), available at SSRN).