The Journal of Things We Like (Lots)

Should Patent Examiners Get More Time?

Michael D. Frakes & Melissa F. Wasserman, Irrational Ignorance at the Patent Office, 72 Vand. L. Rev. __ (forthcoming 2019), available at SSRN.

How much time should the U.S. Patent & Trademark Office (USPTO) spend evaluating a patent application? Patent examination is a massive business: the USPTO employs about 8,000 utility patent examiners who receive around 600,000 patent applications and approve around 300,000 patents each year. Examiners spend on average only 19 total hours throughout the prosecution of each application, including reading voluminous materials submitted by the applicant, searching for relevant prior art, writing rejections, and responding to multiple rounds of arguments from the applicant. Why not give examiners enough time for a more careful review with less likelihood of making a mistake?

In a highly cited 2001 article, Rational Ignorance at the Patent Office, Mark Lemley argued that it doesn’t make sense to invest more resources in examination: since only a minority of patents are licensed or litigated, thorough scrutiny should be saved for only those patents that turn out to be valuable. Lemley identified the key tradeoffs, but had only rough guesses for some of the relevant parameters. A fascinating new article suggests that some of those approximations were wrong. In Irrational Ignorance at the Patent Office, Michael Frakes and Melissa Wasserman draw on their extensive empirical research with application-level USPTO data to conclude that giving examiners more time likely would be cost-justified. To allow comparison with Lemley, they focused on doubling examination time. They estimated that this extra effort would cost $660 million per year (paid for by user fees), but would save over $900 million just from reduced patent prosecution and litigation costs.

Litigation savings depend on Frakes and Wasserman’s prior finding that time-crunched patent examiners make mistakes, and that they are more likely to erroneously allow an invalid patent than to reject a valid one. When examiners are promoted up a step on the USPTO pay scale, they suddenly receive less time per application. Frakes and Wasserman found that they manage the increased workload by spending less time searching for prior art and granting more patents. Based on both subsequent U.S. challenges and comparisons with parallel applications at foreign patent offices, these extra patents seem to involve more mistakes. Patents rejected by time-crunched examiners, on the other hand, are no more likely to be appealed within the USPTO. Extrapolating from these results, Frakes and Wasserman estimate that doubling examination times would lead to roughly 80,000 fewer patents granted and 2,400 fewer patent/lawsuit pairs each year, translating to litigation savings above $490 million. Similar calculations suggest about 270 fewer instituted PTAB challenges, for an annual savings above $110 million.

These savings alone might not quite justify the $660 million price tag. But Frakes and Wasserman also suggest that giving examiners more time may lead to decreased prosecution costs for applicants. In a different earlier paper, they found that examiners often make rushed, low-quality rejections under time pressure near deadlines, which increases the number of rounds of review and the time the application is pending at the USPTO. Here, they predict that doubling examination time would be associated with 0.56 fewer office actions per application, translating to around $300 million per year in additional savings. (If this is right, should applicants be allowed to pay the USPTO for a more thorough initial examination?)
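For readers who want to see how these annual estimates net out, here is a minimal back-of-envelope tally. The figures are the approximations reported above; the category labels and variable names are my own shorthand, not the authors’, and the totals are simple sums rather than the article’s precise calculations.

```python
# Back-of-envelope tally of Frakes and Wasserman's annual estimates,
# as reported above (all figures in millions of dollars; approximate).
cost_of_doubling_examination_time = 660  # examiner salaries and overhead, paid by user fees

estimated_savings = {
    "litigation (~2,400 fewer patent/lawsuit pairs)": 490,
    "PTAB (~270 fewer instituted challenges)": 110,
    "prosecution (~0.56 fewer office actions per application)": 300,
}

total_savings = sum(estimated_savings.values())  # ~900
net_benefit = total_savings - cost_of_doubling_examination_time  # ~240

print(f"Total estimated savings: ${total_savings}M per year")
print(f"Net benefit (before broader social costs): ${net_benefit}M per year")
```

Note that this tally excludes the broader social costs of erroneously issued patents discussed next, which would only strengthen the case.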

As Frakes and Wasserman note, increasing examination time is even more likely to be justified under a correct application of cost-benefit analysis that accounts for the broader social costs of erroneously issued patents. Through the supracompetitive pricing they enable, patents impose costs on both end users and follow-on innovators. Patents that do not satisfy the legal standards of patent validity are less likely to have innovation incentive benefits that outweigh these costs. These costs are difficult to quantify (and are the subject of active study) but that does not mean the USPTO should ignore them.

To be clear, this doesn’t mean the USPTO should immediately double its workforce. There are a lot of assumptions built into Frakes and Wasserman’s estimates, including that the effects they observed from examiners before and after promotion are generalizable. Could the agency hire additional examiners of similar quality? How will recent changes in patent law and litigation practice affect the benefits of increasing examination time? Is it really true that increasing examination time leads to fewer office actions? On the cost side, the $660 million price tag for doubling examination time seems plausible based on examiner salaries and overhead expenses, but is significantly less than the nearly $3 billion the USPTO currently budgets for patent programs. Could greater efficiency be achieved without raising user fees, or is $660 million too low? Empiricists will surely quibble with many details of their methodological choices.

But an immediate doubling of the examiner corps isn’t Frakes and Wasserman’s goal. Despite remaining empirical uncertainties, they have produced the most evidence-based estimates to date of the tradeoffs between ex ante administrative screening and ex post review during litigation. The USPTO should take notice. Examination effort can be increased gradually: Frakes and Wasserman argue that increasing examination time is even more likely to be cost-justified if one focuses just on a marginal dollar for more examination. And there are open questions on the best way to spend this marginal dollar. Which examiners should get more time? Does investing more time up front on “compact prosecution” help? Could errors be reduced more through internal peer review? Peer review from outside experts? Technical experts within the agency to help with difficult cases?

Most importantly, any of these interventions should be implemented in a way that aids robust empirical evaluation. The USPTO has shown an encouraging willingness to experiment with pilot programs that might improve examination, but has not implemented them in ways that make it easy to evaluate their effectiveness, such as by randomizing over applicants who want to opt in to the programs. Rigorous pilot programs may be both financially and politically costly, but how much effort to spend on examination is a core question of patent policy with tremendous financial implications. And I’m sure the USPTO could easily find free help from academics—perhaps including Frakes and Wasserman—excited to help design and evaluate these initiatives.

Cite as: Lisa Larrimore Ouellette, Should Patent Examiners Get More Time?, JOTWELL (March 13, 2019) (reviewing Michael D. Frakes & Melissa F. Wasserman, Irrational Ignorance at the Patent Office, 72 Vand. L. Rev. __ (forthcoming 2019), available at SSRN), https://ip.jotwell.com/should-patent-examiners-get-more-time/.

What Copyright Might Teach Trade Secrecy

Joseph P. Fishman & Deepa Varadarajan, Similar Secrets, 167 U. Pa. L. Rev. __ (forthcoming 2019), available at SSRN.

When an employee has had lawful access to her firm’s trade secrets and later uses them when working for a new employer or when starting her own firm, the former employer may well sue her for trade secret misappropriation. Disputes in such cases routinely focus on identifying the secrets at issue, examining the process by which the alleged misappropriation occurred, and assessing what advantages the employee may have gotten from use of those secrets.

Should courts also consider how much similarity exists between the plaintiff’s and the defendant’s products, processes, or services? And should courts also consider whether the defendant’s new firm and the old firm directly compete or operate in different and arguably unforeseen markets? Similar Secrets says the answer to both questions should be yes. Its thesis is that defendants should not be liable for misappropriation of lawfully acquired trade secrets unless later-developed products or methods incorporate material elements from those secrets and use those elements in the same market in which the plaintiff firm competes, or in an adjacent market into which it is reasonably foreseeable that the plaintiff firm might enter.1

Two considerations seem to underlie the authors’ recommendations. The first is employee mobility: no one, the authors argue, should have to get a frontal lobotomy when they change jobs, especially in technical fields. Employees should be able to continue to use knowledge they acquired on the job when they move on. The second is that society will benefit if experienced employees can build on the knowledge they acquired in their previous jobs by developing new products in different market sectors.

Trade secrecy law, Fishman and Varadarajan argue, currently gives rightsholders more control over adaptive uses of their intellectual property than either copyright or patent law does. Courts in copyright cases, for example, take into account how much similarity exists between the plaintiff’s work and the alleged infringer’s work, how transformative the second use was, and whether the adaptation operates in the same or reasonably foreseeable markets. The less similar the two works, the more transformative the use, and the more unforeseen or remote the second comer’s market segment, the more likely the adaptive use will be found noninfringing.

Patent law also allows more adaptive uses of earlier innovations than trade secrecy law does. Infringement cannot be found, for instance, unless all elements of the patent claims “read on” the alleged infringer’s product. Second comers are thus free to use some elements of the invention, though not all. Moreover, a second comer’s innovation that builds on a patented invention can itself be patented, and that later patent will prevent the owner of the underlying patent from exploiting the improvement without the later patentee’s permission. In addition, under the reverse doctrine of equivalents, a product that literally infringes a patent will nonetheless be held noninfringing if it operates on a sufficiently different principle.

Fishman and Varadarajan would have trade secret adjudicators learn several other lessons from copyright. They recommend that courts in trade secret cases use a copyright-like methodology for judging misappropriation: first, a factual inquiry into whether the defendant used the plaintiff’s secret (akin to the probative similarity step in copyright to decide whether the defendant copied something from the plaintiff’s work), and second, a normative inquiry about whether the taking was sufficiently substantial to justify liability.

The authors would also borrow from copyright law the concept of filtering out unprotectable elements, as the Second Circuit directed in Computer Associates Int’l, Inc. v. Altai. A creative combination of public domain elements may be protected by copyright or constitute a protectable trade secret, but the larger the quantum of unprotectable elements, the less likely a court should find misappropriation, the authors argue. It is useful to recognize that trade secrets, like copyrights, may sometimes provide “thick” protection (e.g., the recipe for Coca-Cola) and sometimes “thin” protection (e.g., customer lists). A higher degree of similarity should be required if trade secrets, like some copyrights, are eligible for only “thin” protection.

Trade secrecy law might also usefully borrow from copyright the idea that a defendant’s intermediate uses of protected subject matter should be given little or no weight if the product the defendant ships is not substantially similar in its use of the protected material.

As for foreseeability, Fishman and Varadarajan argue that migrating a secret from the plaintiff’s market to unforeseen or unforeseeable markets cannot harm the trade secret claimant. A plaintiff’s decision to invest in developing particular secrets was presumably based on its plans to exploit the secrets in foreseen markets. Use of the secrets (again, only those that were lawfully acquired) in unforeseen or remote markets won’t undermine the initial developer’s incentives to invest. Moreover, second comers also need incentives to invest in developing products in different markets that draw upon the secret but use it in different ways. The analogy is to copyright’s breathing space for transformative uses that operate in unforeseen markets. Similar Secrets offers examples of secrets reused in unforeseen markets, uses the authors think should be free from trade secrecy liability.

Fishman and Varadarajan suggest that the best way to implement their recommended adaptations of trade secrecy law would be for courts to consider the similarity and foreseeability factors in deliberations about whether misappropriation has occurred. But similarity and unforeseeability could also serve as affirmative defenses or affect the remedies imposed in trade secret cases. Legislation would not be needed to effectuate these changes.

While I will not hold my breath waiting for any of these changes to be adopted by courts in trade secret cases, the analysis in Similar Secrets was surprisingly persuasive. At least as a thought experiment, its thesis that trade secrecy law should not be in a bell jar, but open to learning useful lessons from other forms of IP law, struck me as sound. The adaptations recommended may not impact trade secrecy law as much as some practitioners might fear, as a great deal of misappropriation arises from wrongful acquisition or disclosure of trade secrets or from former employees or licensees who wrongfully use the secrets in direct competition with the trade secret claimant. But for the cases in which the adaptations the authors recommend might apply, why not inject more innovation policy considerations in trade secrecy misappropriation cases?

  1. Fishman and Varadarajan do not recommend consideration of these factors in cases involving unlawful acquisition or disclosure of trade secrets. But the recommendations may apply to former licensees as well as former employees.
Cite as: Pamela Samuelson, What Copyright Might Teach Trade Secrecy, JOTWELL (February 5, 2019) (reviewing Joseph P. Fishman & Deepa Varadarajan, Similar Secrets, 167 U. Pa. L. Rev. __ (forthcoming 2019), available at SSRN), https://ip.jotwell.com/what-copyright-might-teach-trade-secrecy/.

Lunney’s Paradox: More Copyright May Lead to Less Creativity

The title of Glynn Lunney’s new book, Copyright’s Excess, presents a puzzle for those of us who have reflected on the nature and function of copyright law. Copyright is typically justified as a system of incentives. By giving authors (and by contract, publishers) exclusive control over commercially significant reproductions and distributions of their artistic and literary works, copyright helps to ensure that the copyright owner, and not copyists, will capture any profits made from exploiting the work. Copyright, in short, is supposed to help authors make more money. And in a world that works as economists expect, more money for authors should lead to more creative output. The equation seems simple: more copyright = more art and culture. But is this true? Is it possible that copyright protections might backfire, and result in less creative output rather than more?

Lunney’s book offers considerable evidence that, in the music industry at least, it is an uncomfortably plausible possibility that more copyright has led to less creativity. Lunney, a lawyer-economist, takes a painstaking look at music sales data dating back to the early 1960s, including data tracking the rise of recorded music revenues up to Napster’s debut in 1999, followed by their precipitous decline. The shock to music industry revenues caused by Napster (and what followed) provided the makings for a terrific natural experiment: did increased revenue from the early 1960s to 1999 lead to more new music? And did plummeting revenues after 1999 lead to a dearth of new music?

Assessing this natural experiment correctly presents Lunney with some serious complexities. One is that he must account for changes in the quality of recorded music produced over this long period, a task which requires an appraisal of how quality has varied over time. Lunney handles this task with some creative empirical and analytic strategies, the details of which I’ll omit here. Suffice it to say that the quality-adjustment methodology Lunney employs is, at least to my eye, the most contestable element of his project. But if the reader accepts what he’s done on that point, then his conclusions follow nicely, and they are beyond startling.

Despite a post-Napster decline in revenue approaching 80% (adjusted for inflation), the number of new albums released in the U.S. nearly doubled between 1999 (38,857) and 2012 (76,882). There is, moreover, no evidence of a decline in the quality of new music during that period. None of the various metrics Lunney uses for measuring quality show a positive correlation with revenue, and indeed for some of the measures, the correlation was statistically significant but negative – i.e., more money appears to have led to lower quality music.

What’s going on here? Lunney’s explanation is straightforward. Beyond a certain level of income, paying musicians more doesn’t make them work harder; it makes them work less. In other words, the most successful artists – at least in the music industry – are prone to satisfice as they get rich. Lunney presents evidence for this proposition. When revenues were high for the recording industry, as they were in the 1990s, top artists produced fewer studio albums and fewer Hot 100 hits in the first ten years of their career, compared with periods when music industry revenues were low (both in the 1960s before the advent of the federal sound recording copyright and in the online piracy era of the 2000s). During these low-revenue periods, top artists produced more studio albums and more Hot 100 hits.

Copyright may succeed in securing more money for top artists. But more money for those artists is not associated with more music. Nor does more money lead to better quality music. Indeed, there is evidence that the association, if there is one, is negative. This directly challenges the conventional incentive story that is said to justify strong copyright protection.

If Lunney is right, then it appears that strong copyright protections may get us less of precisely the music that fans find most attractive. Lunney admits that, at the same time, strong copyright laws that direct more money to artists are likely to attract more new musicians to enter the music industry. But Lunney argues, based again on data, that the benefits of this new entry are overwhelmed by the loss of productivity on the part of those satisficing popular artists.

The net result is that strong copyright makes a few star musicians richer, but results in a less creative music industry overall. And that last bit is crucial. Copyright is supposed to encourage more creativity. But what we may be getting – at least in the music industry – is exactly the opposite.

Copyright’s Excess is foundational work in the economics of intellectual property. It is also accessible to non-economists, and to those who don’t work in IP law and who haven’t thought much beyond the conventional wisdom. Lunney’s explication of the economic theory undergirding his insights is elegant and persuasive. And the potential impact of the book is seismic. Lunney shows that copyright’s incentive effect – at least in the music industry – is complicated. Copyright does, in fact, draw new artists into the market. But it also suppresses the production of new works by established and popular artists. The result, at least by Lunney’s calculation, is a net negative for social welfare.

Lunney’s book should be widely read, especially by music industry insiders. But so far, there is little evidence that the industry is interested in grappling with Lunney’s work. That should not stop academics from closely examining what Lunney has done, both to critique it and, perhaps, to extend it.

The most obvious question that Lunney leaves unaddressed is whether the superstar-satisficing dynamic that he’s identified in music extends to other creative industries. Might we find a similar story of overpaid superstars in the movie industry? In publishing? In software? Those papers, and others, are waiting to be written.

Cite as: Christopher J. Sprigman, Lunney’s Paradox: More Copyright May Lead to Less Creativity, JOTWELL (January 18, 2019) (reviewing Glynn Lunney, Copyright's Excess: Money and Music in the US Recording Industry (2018)), https://ip.jotwell.com/lunneys-paradox-more-copyright-may-lead-to-less-creativity/.

Copyright and the Single Work

Amy Adler, Why Art Does Not Need Copyright, 86 Geo. Wash. L. Rev. 313 (2018).

A photograph taken by Walker Evans is worth more—both in the market and in the eyes of critics—than a virtually identical photograph taken by an unknown artist, just as a handbag bearing the name Prada is worth more—both in the market and in the eyes of those who place social value on such goods—than a work of similar craftsmanship by an unknown manufacturer. In both cases, audiences are potentially responding to a number of factors: aesthetics, critical reception, reputational value, and the status conferred by scarcity.

Part of what enables audiences to evaluate these factors is a belief that the object has the genealogy it purports to have. We term this information state “authenticity,” a characterization that is capacious enough to embrace a number of modes of production. An authentic Warhol might have involved the work of several assistants, all making visible the concept formerly existing only in Warhol’s mind; an authentic designer bag might involve a certain amount of outsourcing to overseas fabricators. A Sol LeWitt wall is the physical manifestation of a two-part work: the directions to accomplish LeWitt’s conception in each new space and a certificate of authenticity. The certificate of authenticity is key to the work’s existence; without it, as a lawsuit once claimed, the work essentially ceases to exist. Authenticity can even, in some instances, certify falseness, as when the U.S. Postal Inspection Service seized and then auctioned off 12,000 fake Salvador Dali works in 1995. When asked why a buyer would pay $4,000 for a known fake, the curator of the Salvador Dali Museum mused that the notoriety that the works had garnered as “authorized fake Dalis” upped the price buyers were willing to pay.

Given the role that authenticity plays in the art market in particular, Amy Adler contends that copyright law is beside the point.

The traditional story undergirding U.S. copyright law is that many creators are motivated to create by the expectation of revenue from selling copies of their work; without a right that allows them to restrict copying by others, this expectation will be considerably diminished. A number of scholars, Rebecca Tushnet and Jessica Silbey among them, have challenged the explanatory power of this narrative by highlighting the myriad ways in which creation occurs for reasons other than economic ones, including the simple joy of creative activity. Such literature often suggests that to the extent that copyright law is meant to incentivize creation, it does little work for these creators.

In her article Why Art Does Not Need Copyright, Professor Adler takes another approach to this argument, suggesting that while the typical economic story may be true for some creators—best-selling novelists or movie studios—it is decidedly not true for creators of visual or fine art. Such artists, Professor Adler notes, have little to no desire to sell multiple copies of their work, and the number of artists who can count on licensing their work for derivative uses (museum postcards or calendars, for example) is few. With no copies to be made, copyright law enters the picture only to the extent it is used as a tool to vindicate reputational and related interests (perhaps not quite as surprising a reason as the article suggests); to the extent unauthorized copies are a concern, the market’s desire for authenticity serves a policing role, thus obviating the need for copyright law. And because the presence of copyright law complicates and makes uncertain other beneficial uses of visual art in scholarship and educational texts, copyright law, “rather than being essential for [visual] art’s flourishing, actually impedes it.” (P. 322.) Professor Adler’s article thus joins a growing literature (including works by Chris Sprigman, Kal Raustiala, and Kevin Collins) that questions copyright law’s relevance not by highlighting the ways in which creators are motivated by noneconomic incentives but by bringing the audience into the equation. For audiences who desire scarce, unique, or customized works, the creator can, in at least some instances, build that desire into the purchase price, rendering copies (and thus copyright law) both unnecessary and undesirable.

Professor Adler acknowledges that her theory may not have completely squared-off edges—her definition of “visual art,” borrowed from the Visual Artists Rights Act, inherently contemplates artists producing works in single or limited editions and thus does not include the many artists who might well anticipate revenue from the sale of multiple copies. The photographer who offers multiple copies of photographs for sale at a gallery and the artist who segments the market by selling a watercolor at a higher price and prints of the work at a lower price may well need copyright to preserve their income. It is only a select few artists, moreover, who can attract the attention needed to have their reputation guarded by the art world; the Matthew effect can be as entrenched in that world as in any other environment. So it may not always be the case, as Professor Adler writes, that “the small minority of artists lucky enough to have a market for copies or derivative works are artists for whom the price of their original, unique works is so substantial that the value of their income from copyright will be trivial in comparison”—at least, if we take a broad view of the term “artist.” (P. 337.) But it is almost certainly true that, for some artists-as-brand, “[a]rt’s market value, increasingly divorced from aesthetics, resides to a large extent in the identity and reputation of the artist to whom it is attributed.” (P. 347.) If that is the case, we might wonder whether the search for fakes aligns—as Professor Adler is suggesting—less with copyright law and more with trademark owners’ attempts to preserve authenticity (and, in the case of luxury goods, scarcity) on behalf of their customers.

We may never resolve the contested concept of authenticity in a world of Andy Warhol, Richard Prince, Sherrie Levine, and the Dafen Oil Painting Village; any concept that ultimately depends on the views of the audience is subject to continual reevaluation. A federal district court’s 1993 decision that a Calder mobile was authentic, to take but one example, was essentially overturned by the contrary conclusion of the market’s recognized expert, leaving the mobile without a buyer. This is not to say, however, that authenticity’s importance is thereby diminished. When, as Professor Adler notes, a Sherrie Levine photograph is aesthetically indistinguishable from the Walker Evans photograph on which it is based, the “work” in each case has been reduced to the statement of authenticity itself. What that says for the proper scope of copyright law, and the role that audiences should play in defining it, is the provocative question that Professor Adler asks us to ponder.

Cite as: Laura A. Heymann, Copyright and the Single Work, JOTWELL (November 28, 2018) (reviewing Amy Adler, Why Art Does Not Need Copyright, 86 Geo. Wash. L. Rev. 313 (2018)), https://ip.jotwell.com/copyright-and-the-single-work/.

The Costs of Trademarking Dolls

Rebecca Curtin, Zombie Cinderella and the Undead Public Domain, 86 Tenn. L. Rev. __ (forthcoming 2018), available at SSRN.

Professor Curtin’s article, Zombie Cinderella and the Undead Public Domain, takes a recent case from the Trademark Trial and Appeal Board (TTAB) as the basis for an argument that trademark doctrine needs stronger protection against the exclusive commercial appropriation of characters that are in the public domain. In that case, a doll manufacturer sought to register the term “Zombie Cinderella” for a doll that was zombie-ish and princess-like. The examiner refused registration because the term “Zombie Cinderella” for this kind of doll was confusingly similar to the mark for Walt Disney’s Cinderella doll. Although the TTAB overturned the examiner’s “refusal to register” determination, it did so because it said Disney’s mark is a conceptually weak source indicator of “Disney” for dolls. This leaves open the possibility that Disney could build a stronger association between its mark and its dolls and eventually monopolize the term “Cinderella” as a mark for princess dolls. Professor Curtin’s article argues that leaving this opportunity open would be bad policy and should be precluded under a proper application of trademark law.

There are several aspects of this article that make it worth reading. First, it is a deep dive into a single case at the PTO, teaching readers about trademark registration and appeals from adverse rulings. The article reads as a compact case history from beginning to end. It appropriately balances the technical aspects of trademark practice with conceptual dimensions of trademark theory, such as aesthetic functionality (a doctrine that bars trademark protection for design and word elements that confer a non-reputationally-related advantage on the trademark holder). Second, drawing from her scholarly expertise in literary history, Professor Curtin provides rich and appropriately expansive details about Cinderella’s provenance as a character dating from 1558. The cultural history alone makes this article a joy to read, as Professor Curtin traces the Cinderella character forward to Georges Méliès’s 1899 film Cinderella and backward to a ninth-century Chinese folk tale. This sets up the issue of what Disney can properly claim to own of the character’s name (as a trademark for its own princess doll toy) after the 1950 release of its film Cinderella.

The central problems this article addresses are “overreaching trademark claims that quash expressive uses of public domain characters” and “the competitive needs of multiple producers to make reference to public domain characters in the names and designs of their products.” (P. 3.) Overreaching trademark claims undermine the law’s goal of promoting competition through the use of distinctive marks that designate the source of those goods. Trademarks that monopolize valuable public domain elements and undermine the competitive advantage of other manufacturers to name and describe their goods injure both consumers and competition. The argument that underlies Curtin’s description of this problem is that there are and should be allowed to be more than one “Cinderella doll.” Disney may make such a doll and brand it as Disney’s Cinderella doll, but other manufacturers may also have Cinderella dolls and call them, for example, Zombie Cinderella or Dolly Pockets Cinderella Doll.

Trademark law does not permit restriction of the making of the goods themselves. It restricts only the use of confusingly similar marks on goods. Indeed, trademark law intends to enable copying of goods – e.g., there are and should be many kinds of bottled waters with their negligibly perceptible differences in ingredients and taste. And yet on the shelves we can distinguish between Dasani, Poland Spring, and Fiji. Likewise, Curtin argues, there are and should be many kinds of Cinderella dolls. As with bottled water, where the product is water regardless of how it was purified or which (if any) additives it contains, the product here is the doll with characteristics from the Cinderella fairy tale: rags-to-riches details, princess-like dress, fairy godmother, glass slippers, etc. But if Disney owns the mark “Disney Cinderella” for dolls that refer to and look like the fairy tale Cinderella, and other manufacturers can make dolls that look like Cinderella but cannot call them Cinderella dolls because of the broad scope of Disney’s trademark, competitors are inhibited from competing in the marketplace for Cinderella-like dolls.

This central problem of assessing the plausible scope of Disney’s Cinderella mark for dolls leads to other challenges related to the technical aspects of trademark practice – e.g., how do you prove competitive need or injury to limit a trademark holder’s claim? This is a question of evidence and proof at the registration (and appeal) stage of trademark practice and also informs the complex (and confused) federal case law on aesthetic functionality.1 Professor Curtin criticizes the TTAB’s guidance that the commercial weakness of Disney’s Cinderella mark was all that stood in the way of successful opposition to Zombie Cinderella’s registration because it “sends the message that it is possible to appropriate the public domain character like Cinderella, even for products like dolls, if only your marketing and enforcement strategies are aggressive enough.” (P. 25.) Instead, the article urges movement in the other direction: application of the aesthetic functionality doctrine at the registration stage to limit the appropriation of public domain characters and features when their depiction is a useful product feature conferring a non-reputationally-related advantage (e.g., a non-trademark benefit) on the trademark holder. As Curtin writes, “If a doll can’t be called ‘Cinderella’ or incorporate a glass slipper into its packaging, it can’t compete effectively with Disney’s Cinderella dolls, because resonance with that fairy tale is a source of the attraction to the product, a source that Disney was free to exploit and that second (or third or fourth) comers should be equally free to exploit.” (P. 43.)

Allowing trademark holders to appropriate public domain characters poses problems beyond restricting competition. Curtin’s description is irresistible: “a trademark holder who is successful in altering the meaning of a character so its primary significance is source identification impoverishes the public domain of the character’s living meaning, rendering it an inanimate corpse. No one, not even the potential trademark holder really wants to fully replace the cultural meaning in a public domain character with the reputational meaning or source identification of the trademark holder.” (P. 34.) Once appropriated, the public domain character dies because it is rendered inert by the trademark, whose meaning must remain constant for it to function as the predictable source identifier of the product.

Professor Curtin carefully stays within trademark doctrine for most of the article, mentioning the rule against generic trademarks as a cousin to aesthetic functionality that some have argued can be helpfully applied to characters to limit trademark scope.2 In addition to aesthetic functionality, she might consider more forcefully arguing for disclaimer practice in trademark law, which, as with marks that contain generic words, requires applicants to limit their mark to that which excludes the generic word or uses it only in a specific combination. But she might also draw helpful support from well-established copyright doctrines for the principle that the public domain must remain available to all comers. The scènes à faire doctrine requires stock characters and plots (e.g., a wicked witch and star-crossed lovers) to stay in the public domain as critical building blocks of cultural expression. And the separability doctrine applied to useful articles requires separating the useful features of the copyrighted work from the expressive ones (e.g., the useful cut of a dress from the design elements applied to dress fabric) for the purposes of protecting only that which is expressive and not useful to the article’s function.3 As an article about the difficulty of persuading courts of the commercial and cultural values of aesthetics, it seems worth emphasizing through comparative analysis that trademark law resembles other intellectual property regimes by relying on, and therefore protecting as a resource, the vibrancy of literary culture in the public domain. In other words, the doctrine of aesthetic functionality is more familiar, conventional, and fundamental than the scholarly and case-law debates over the doctrine would have us believe.

Recently, Professor Curtin has had the opportunity to test her analysis of aesthetic functionality as applied to literary characters in the public domain. For the sequel to Zombie Cinderella and the Undead Public Domain, see her opposition (filed along with legal clinic students at Suffolk University Law School) to United Trademark Holdings’ trademark application for Rapunzel for a line of dolls that depict the Rapunzel character.4

  1. See Justin Hughes, Cognitive and Aesthetic Functionality, 36 Cardozo L. Rev. 1227 (2015); Robert Bone, Trademark Functionality Reexamined, 7 J. Legal Analysis 183 (2015); Mark McKenna, (Dys)Functionality, 48 Houston L. Rev. 823 (2011-2012).
  2. Betsy Rosenblatt, Adventure of the Shrinking Public Domain, 86 U. Colo. L. Rev. 561 (2015).
  3. But see Star Athletica v. Varsity Brands. As the debate between the majority and dissenting opinions explain, separability is not always a self-evident exercise. Compare Id., slip op. at 13-14 with Breyer dissent, slip op. at 3-5, 10-12.
  4. Maria Cramer, Rapunzel, Rapunzel, Let Down Your Trademark Restrictions, Boston Globe (June 5, 2018); Karen Katz, Loletta Darden & Rebecca Curtin, Rescuing Rapunzel, IPWatchdog (June 16, 2018).
Cite as: Jessica Silbey, The Costs of Trademarking Dolls, JOTWELL (November 6, 2018) (reviewing Rebecca Curtin, Zombie Cinderella and the Undead Public Domain, 86 Tenn. L. Rev. __ (forthcoming 2018), available at SSRN), https://ip.jotwell.com/the-costs-of-trademarking-dolls/.

Creative Vigilantism

Amy Adler & Jeanne C. Fromer, Taking Intellectual Property into Their Own Hands, 107 Cal. L. Rev. __ (forthcoming 2019), available at SSRN.

It’s no longer news that a major proportion of property regulation happens outside the bounds of the law thanks to social norms and their extralegal enforcement. Yet legal scholars continue to find new and fascinating ways to advance this insight. The latest installment in the conversation about the norm-based regulation of intangible property is Amy Adler and Jeanne Fromer’s Taking Intellectual Property into Their Own Hands.

This sparkling article1 adds a novel perspective to the dialogue that has been developing for more than a decade about the extralegal regulation of creative production. Most of this work considers how a given group regulates its distinctive works via norms, without recourse to copyright or trademark law. This move has been made with respect to recipes developed by French chefs, roller derby skaters’ nicknames, clowns’ face makeup, tattoo artists’ ink designs, and many others.

Adler and Fromer add an important new dimension to the creative norms literature by focusing on extralegal enforcement rather than the substantive rules governing a particular category of work. As they show, many owners who could state plausible claims for copyright or trademark infringement increasingly choose not to file suit, but instead to deploy one of two surprisingly effective means of striking back at the purported infringer: shaming and retaking the copy.

It’s hard to choose, but in my view the best part of this article is Adler and Fromer’s luminous catalogue of these two novel forms of non-legal enforcement. Most informal norms are enforced to some extent by shaming sanctions, but no one seems better at deploying shame online than the operators of the website Diet Prada, who inflict humorless wrath on any fashion designer they perceive to have copied another without credit.

In terms of retaking the copy, the Suicide Girls finally figured out a way to outflank notorious reappropriation artist Richard Prince. They appropriated his unauthorized use of their Instagram posts, added the words “true art” in a comment, and sold the resulting near-exact copies of Prince canvases for a mere $90, undercutting the market for Prince’s works and giving all profits to charity. Adler and Fromer illustrate all of this with photographs, tweets, and other visuals that make their story all the more vivid.

One of the major insights that Adler and Fromer extract from this fascinating story of creative vigilantes is that this extralegal enforcement operates “without the backdrop of a single close-knit community.” This feature is important because, they point out, “legal scholars tend to see” such close-knit communities “as prerequisite to enforcing extralegal norms.”

While Adler and Fromer’s work is not the first to explore norm enforcement outside the context of close-knit communities,2 the question is indeed a fascinating one. The answer to this puzzle may not be that norm enforcement does not require a close-knit community, but that in the internet age what makes a community has changed. For the Shasta County ranchers Ellickson studied in Order Without Law, geographic proximity and relative isolation meant that shaming sanctions by one’s neighbors could be socially isolating.

But in the internet age, community is less a function of place than of shared interests and values. So if the people in your online community of fashionistas decide to shame you on Twitter or Instagram, you may not even know the identity behind the online persona responsible, but their message of viral opprobrium among those who share your professional tastes and ambitions could spell ostracism nonetheless.

Adler and Fromer also do admirable work to frame this article about extralegal activity in terms of law. They argue that shaming and retaking the copy can advance the same aims as copyright and trademark law: remuneration and attribution (as well as avoiding misattribution). This account fits with some of their examples. 1980s designer Dapper Dan enjoyed a renaissance of popularity—and, presumably, income—when people shamed Gucci for failing to acknowledge its debt to him in a recent fashion show. And James Turrell’s surprisingly hip rebuke of Drake’s rip-off of his installations in the “Hotline Bling” video subtly but unmistakably made clear that Drake’s use of Turrell’s work was unlicensed and unauthorized.

Other examples, though, fit less cleanly with Adler and Fromer’s claim that extralegal copyright and trademark enforcement approximates the goals of those doctrines—namely, remuneration and attribution. Putting aside for a moment due skepticism of the claim that either copyright or trademark chiefly seeks to give creators attribution,3 consider the Suicide Girls. Their retaking of Richard Prince’s unauthorized copies got them zero money since all proceeds for sales of their reappropriated works went to charity. Nor did attribution seem to be an issue since Prince’s uses did not erase the owners’ Instagram usernames.

What is really going on may be bigger than Adler and Fromer’s legalist framing suggests. What the shamers and copy retakers seem to want is not just their own version of what copyright and trademark law promise, but something simpler and less admirable: good old-fashioned revenge. The Suicide Girls’ delicious judo move got them neither money nor much else other than sticking it to Prince so poetically their scheme could have supplied the plot of a Quentin Tarantino film. This story seems less about creative incentives or avoiding consumer confusion, and much more about righting a wrong felt on a deeply visceral level, showing that moral intuitions about wrongful transgression animate owners of intangible property just as much as owners of physical property.

And to Adler and Fromer’s credit, their article evinces due care about the dark side of extralegal copyright and trademark enforcement. They acknowledge that both copyright and trademark law are calibrated to balance owners’ rights with the public interest in free access to information, and that aggrieved owners are unlikely to take the latter consideration into account when striking back at an infringer. Leaving enforcement to creative vigilantes threatens to enforce their interests more broadly than law would allow.

There’s another downside to self-help that Adler and Fromer don’t feature as prominently but merits mention: it often gets things wrong. This is familiar in the crime context, as when online sleuths misidentified a purported terrorist, leading to his suicide. But creators who seek to call out unauthorized copying can also err. The Rubinoos, for example, accused Avril Lavigne of ripping off their musical work in her song “Girlfriend” (and got a decent settlement from her). But closer analysis of the central chord progressions in the purportedly infringed Rubinoos song “Boyfriend” showed that it bore an eerie similarity to the sound of the Rolling Stones’ “Get Off of My Cloud.”

The authors explain that self-help often misfires because owners have a “folk sense of law,” not an accurate sense of what law actually is. My observations of norm-based systems, however, suggest that the concern is a bit different, and perhaps more problematic. Such extralegal regulation of creative production does not seek to approximate law or substitute for it. Rather, it runs entirely independently of law and its entitlements. I’d wager that the Suicide Girls’ outrage at Prince was not rooted in any sense of law at all, but rather in a reaction to an affront at property transgression that is entirely unmoored from, and prior to, federal law’s entitlements to works or marks.

The remoteness of law to social norms systems is one of the major contributions that Ellickson in particular emphasized. Lawyers, of course, tend to be legal centralists. We read the world through the lens of the law and assume that others do as well. But most work in this area reveals that law is marginal at best to individuals’ decisions to regulate their property via norms. Norms arise out of strongly felt moral intuitions about right and wrong that hold true for people independently of what rights law promises them.

So when the authors criticize Diet Prada because on that site “copying is almost always assumed to be wrong,” this conflates morality and legality. Diet Prada seems to seek only to vindicate a gut-level sense of what’s right and wrong. The law’s carefully crafted copyright and trademark doctrines, by contrast, are not meant to model moral intuitions but to achieve socially desirable allocations of private rights in information. So while the authors seek to emphasize the overlap between the agendas of creative vigilantes and the remedies supplied by copyright and trademark law, I wonder whether the differences swamp the similarities.

My thoughts and feelings about this article are many, so I’ll rein them in here with one final thought. Among the valuable insights of Adler and Fromer’s article is the distinction it draws between different kinds of creative self-help, chiefly shaming the infringer and reappropriating the copy. The authors further show that shaming strategies tend to be costlier and riskier given the downsides of self-help, while reappropriating the copy tends to be more constructive as well as consonant with the goals of copyright and trademark law.

The puzzle this leaves the reader with is: What can law do about this? If it’s right that copy retaking is a better extralegal enforcement strategy than shaming, is there a way to change copyright and trademark to encourage the former and/or discourage the latter? The answer to that question may merit a separate article, but the suggestion is that law may have some role to play even in this world that operates outside legal bounds.

  1. The authors term their piece an essay. I have never quite understood what separates an article from an essay but the latter is supposed to be shorter and less formal. Adler and Fromer’s piece warrants all kinds of superlatives, but at 77 heavily footnoted pages, short and breezy are not among them. It’s an article.
  2. Lior Strahilevitz called the effectiveness of norms outside close-knit communities “the puzzle crying out for an explanation” in a 2003 article about cooperation on file-sharing networks. See also Dave Fagundes, Queues.
  3. In some ways copyright infringement causes attribution to happen, but that’s not the same as showing that the aim of the doctrine is to provide authors with proper attribution.
Cite as: David Fagundes, Creative Vigilantism, JOTWELL (October 3, 2018) (reviewing Amy Adler & Jeanne C. Fromer, Taking Intellectual Property into Their Own Hands, 107 Cal. L. Rev. __ (forthcoming 2019), available at SSRN), https://ip.jotwell.com/creative-vigilantism/.

Decoupling Intellectual Property’s Incentive and Allocation Functions

Daniel J. Hemel & Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. __ (forthcoming 2018), available at SSRN.

It’s rare for two JOTWELL editors to choose the same article to review. When that happens, it’s surely a sign of an “instant classic.” So even if you’ve read Kevin Collins’s laudatory jot of Daniel J. Hemel and Lisa Larrimore Ouellette’s superb piece a few months ago, you should read this one, too. And, if you didn’t read that review, you should definitely read this one.

As if double coverage weren’t enough, three years ago my jot reviewed Hemel and Ouellette’s brilliant article, Beyond the Patents-Prizes Debate. Besides explaining the importance of considering the full panoply of tools to incentivize innovation—such as patents, prizes, grants, and tax credits—Hemel and Ouellette showed that these tools could be decoupled and refashioned to create effectively new, mutant-like rights with effects potentially superior to those of their “pure” forms.

In this follow-up article, Hemel and Ouellette insightfully discern the broad theoretical ramifications of their previous IP reconstructions. Because Kevin Collins’s jot lucidly summarizes the expanse of the article’s exposition, I focus on the article’s most salient insight—namely, that IP’s “incentive” function is separable from its “allocation” function. Specifically, the “incentive” function refers to the market-based financial reward provided to innovators for producing an innovation (and here I elide the distinction between R&D-based “inventions” and commercialized “innovations”). The “allocation” function concerns the payment of a proprietary price by consumers (and intermediaries) to access innovations covered by IP rights.

Traditionally, IP scholars have recognized these two functions, but have assumed that they are essentially coupled. Indeed, the oft-stated benefit of IP rights over prizes and grants is that IP offers market-based rewards generated (if the rights are useful) by supracompetitive payments from the market to the innovator. Yet, as Hemel and Ouellette show, IP’s incentive and allocation functions are wholly separable.

Critically, Hemel and Ouellette rightly contend that many criticisms (and proposed reforms) of the IP system concern either the incentive or allocation function, but not both. For instance, consider the criticism that intellectual property rights punish the less well-off by allowing supracompetitive pricing that prices out consumers who otherwise would have bought the protected goods in a competitive market. This is a criticism of IP’s allocation function, but not its incentive function.

For example, once an innovator obtains a patent from the Patent Office, the government could acquire the patent (via mandate) at fair market value and then make the invention available to commercializers for free. To the extent that the fair market value roughly represents the profits or licensing fees the innovator would have earned by retaining the patent, the reward would be essentially the same as in a market system. Yet the allocation of the product for free to intermediaries and end consumers substantially differs from IP’s traditional market-based allocation system. Namely, in place of market users paying supernormal prices for the patented product, the government would collect the funds for payment to the innovator by other means, such as general taxation.

Because taxation may offer efficiency and distributive benefits relative to a patent system, such an approach may be preferable to traditional allocation schemes. Although my own view is that governments cannot easily determine fair market value—and, indeed, there is a rich literature on the pros and cons of “patent buyouts” of this sort—Hemel and Ouellette are the first to recognize the core theoretical and policy implications of this sort of “pluralistic” “mixing” and “matching” of innovation incentives.

Hemel and Ouellette’s framework allows IP scholars to better theorize the function and role of IP, and to identify reasons why different combinations of IP and non-IP mechanisms may be optimal in specific circumstances. For example, market-based allocation mechanisms may be preferable when the value of the innovation is difficult to ascertain ex ante. Yet, because inventors are not usually the best commercializers of their inventions, it may sometimes be more sensible to provide the inventor a fixed payout from a patent auction—in turn offering a limited-term “commercialization” patent to the winning bidder simply to commercialize and sell the invention, which may reduce overall deadweight losses (see my piece Commercializing Patents for a similar, though less fully theorized, proposal).

The meshing of IP and non-IP tools, which the authors term innovation policy “pluralism” (and Collins aptly terms “hybridism”), is not merely a theoretical refinement designed for forward-looking policymaking; it also provides a more nuanced account of today’s innovation landscape. In addition to the Bayh-Dole Act, prizes, grants, and patent auctions, the theory helps one to understand such diverse areas as the Hatch-Waxman Act, price caps on patented products, research tax credits, standards essential patents, compulsory licensing for digital audio streaming, and academic publishing. Given its broad theoretical and practical import, and immediate recognition among the community of IP scholars, this not-even-yet-published article will surely be a canonical work for many years to come.

Cite as: Ted Sichelman, Decoupling Intellectual Property’s Incentive and Allocation Functions, JOTWELL (September 3, 2018) (reviewing Daniel J. Hemel & Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. __ (forthcoming 2018), available at SSRN), https://ip.jotwell.com/decoupling-intellectual-propertys-incentive-and-allocation-functions/.

Innovation Policy Pluralism, or Innovation Policy Hybridism?

Daniel J. Hemel and Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. (forthcoming), available at SSRN.

In previous work, Daniel J. Hemel and Lisa Larrimore Ouellette explored the range of tools available to regulators interested in promoting innovation. (See Ted Sichelman’s jot.) While legal scholars addressing innovation policy frequently focus solely on patent law—in fact, the term “intellectual property” is often employed as a synecdoche to refer to the broader scholarly field of innovation policy—Hemel and Ouellette argued that viewing patents, prizes, grants, and tax credits as imperfect substitutes allows the public goods problem that underlies innovation policy to be solved in a variety of different ways, each with its own advantages and disadvantages.

In their most recent work, Innovation Policy Pluralism, Hemel and Ouellette push their earlier argument one step further. They again increase the number of tools in the innovation-policy toolkit by developing a divide-and-recombine approach to intellectual property and its quasi-substitutes. They argue that any given tool for promoting innovation has two “separate and separable” components. First, it has an innovation incentive or a “payoff structure for the producers of knowledge goods.” Second, it has an allocation mechanism that “establish[es] the conditions under which consumers can use knowledge goods.” Hemel and Ouellette provide a thorough, clearly argued, and convincing analysis of the combinatorial possibilities that arise from this finer-grained analysis of the components of innovation-policy regimes.

More specifically, Hemel and Ouellette identify three conceptually distinct types of combinatorial possibilities. First, matching combines components of different types: it allows the innovation-incentive component from one conventional innovation-policy tool to be coupled to the allocation-mechanism component from another. For example, if the government were to buy patents from innovation producers upon issuance at the market price and make the innovation freely available to the public, the innovation incentive would resemble the one created by patent law (because it rewards producers with a fraction of the social value that their innovation generates, as measured by consumers’ willingness to pay), but the allocation mechanism would resemble the open-access regime conventionally associated with prizes and grants. Second, mixing treats each of the components as a continuous rather than binary variable and allows amalgams of distinct components of the same type. For example, focusing on the innovation-incentive component, an upfront grant that does not cover the full cost of research and development may be combined with patent exclusivity that is weaker than conventional patent law provides. Third, layering recognizes that different types of innovation incentives and allocation mechanisms can be deployed at the national and international levels.

While they do offer an array of specific proposals and analyze their pros and cons, Hemel and Ouellette view their primary contribution as a conceptual framework for talking about innovation policy in a more precise and nuanced manner. The framework helps us both to conceive of new innovation-policy proposals and to evaluate their normative merits. Just as importantly, it also helps us to understand the jumble of laws that currently promote innovation in the United States. To prove this latter point, Hemel and Ouellette use their framework in a final section to describe the innovation-policy regime that already exists in the pharmaceutical industry more succinctly and accurately than would be possible if taxes, grants, and patents were conceived of as siloed innovation-policy regimes.

To my mind, the article’s title is a bit misleading. (This is akin to praising with faint damnation, I know.) Hemel and Ouellette’s earlier work was about innovation-policy pluralism. There is no one-size-fits-all optimal solution to a public goods problem; different circumstances may call for different tools from the toolkit; there is value in recognizing the diversity of tools available for remedying an innovation-policy problem. Taking the argument to the next level, Innovation Policy Pluralism embraces what I think could more tellingly be labeled innovation-policy “hybridism.” Here, the very notion that there is a finite set of discrete tools in the toolkit becomes a contingent artifact of the conceptual framework conventionally used to think about innovation policy. By breaking these tools down into the components that they all share, the conceptual framework developed by Hemel and Ouellette reveals that we can match, mix, and layer our way to hybrid tools that are more perfectly crafted to the specifications of any given innovation-policy problem.

Cite as: Kevin E. Collins, Innovation Policy Pluralism, or Innovation Policy Hybridism?, JOTWELL (July 17, 2018) (reviewing Daniel J. Hemel and Lisa Larrimore Ouellette, Innovation Policy Pluralism, 128 Yale L. J. (forthcoming), available at SSRN), https://ip.jotwell.com/innovation-policy-pluralism-or-innovation-policy-hybridism/.

IP Norms’ Dark Side

Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018).

The early legal literature on law and social norms tended to paint a rosy picture. Social norms were generally depicted as an optimal set of organically developed rules, informed by the experience of a close-knit community, and thus superior to formal law as a way of regulating behavior. Later scholars came to realize that nothing guarantees the optimality of social norms: they may perpetuate practices that no longer make sense, or they may advance the interests of certain groups but not social welfare. In such cases, formal law holds the promise of overriding suboptimal norms and moving society to a better place.

A few years back, I wrote, together with Christopher Sprigman, one of the first papers on social norms in intellectual property law, in which we explored how standup comedians informally regulate the ownership and transfer of rights in jokes and comedic routines. While we thought that in that particular case the system of social norms largely produced good outcomes in terms of inducing creativity, that conclusion came at the end of reviewing the norms’ advantages relative to copyright law but, importantly, also their relative disadvantages. For example, we noted aspects of the norms system that we thought were unattractive for creativity relative to formal copyright law: the standups’ norms system does not recognize a term limit (so it discourages dissemination and reuse of old material), does not harbor a concept of fair use (so it discourages derivative and sequential creativity), and grants a scope of protection that exceeds what is available under copyright’s idea/expression doctrine (so it discourages borrowing high-level concepts and themes from others).

Since we wrote our paper, the law and social norms field of IP has grown considerably, and it is, I believe, one of the most important developments in IP theory over the last decade, not to mention a fun and interesting one. Others have explored social norms that are copyright-like, patent-like, and trademark-like. More recently, the literature has moved toward more general claims and observations based on a rich array of norms systems and case studies. One of the latest and most important additions to this literature is the paper by Stephanie Bair and Laura Pedraza-Fariña, which makes a general claim about the dark side of social norms, namely that they can be counterproductive for creativity. In doing so, it builds on a large body of social science and psychology literature.

Bair and Pedraza-Fariña emphasize that anti-innovation norms are harmful because they tend to hinder breakthrough innovation, namely innovation that recombines knowledge across the boundaries of (often distant) creative communities and industries. Such norms create barriers between bodies of knowledge, reinforcing the use of specialized, field-specific terminology rather than terminology shared across fields. They also tend to be over-enforced.

Anti-innovation norms come in three types: research priority norms, methodology norms, and evaluation norms. Research priority norms determine which problems are worth artistic or technological attention. Their dark side is that they implicitly suggest which problems are not worth members’ time and attention. This is particularly costly when norms discourage members from exploring intersectional problems.

Methodology norms dictate the proper ways, tools, and routines appropriate for solving the problems that community members have identified as important. Such norms can have negative consequences to the extent that they teach against the use of alternative, non-conforming methodologies. Intersectional problems pose a particular challenge, as they are often best addressed through a combination of methodologies.

Lastly, evaluation norms dictate how a creative community assesses the contributions of its members. Good work may be that which is patented, or published in certain fora, or meets some other community standard. Their dark side lies in reinforcing research priority and methodology norms and in discouraging cross-disciplinary innovation.

Two psychological biases explain why anti-innovation norms emerge and persist. First, status quo bias describes a preference for the current state of affairs over change. Second, conformity bias describes individuals’ tendency to forgo exercising their own judgment in favor of following social norms.

Having illuminated social norms’ dark side, Bair and Pedraza-Fariña propose solutions. They believe that IP law, which affects the appropriability of the fruits of one’s labor, is limited in its ability to incentivize cross-boundary innovation. Instead, they emphasize the need to turn to other mechanisms that affect innovation, such as government grants and tax credits.

As the first general treatment of the topic, this article is important not only for drawing attention to, and advancing the understanding of, social norms’ failures, their causes and effects, and for suggesting possible solutions thereto, but also because the analysis opens the door to many potential follow-on research paths, of which I would like to suggest a few. First, the article reinforces the understanding that law and social norms are just two ways to foster innovation, and that each comes with relative, rather than absolute, advantages and disadvantages. Law and social norms affect not only how much is created, but also the type of innovation that takes place. For example, in standup comedy, early 20th-century norms suggested that copying and reuse were legitimate, and humor tended to be generic; late 20th-century norms suggested that originality was key, and the humor created tended to be personal, observational, and point-of-view driven. The law and social norms literature suggests that in some cases law and social norms may work in tandem and reinforce each other, but that in others they may crowd each other out. Having recognized that social norms have their disadvantages too, an additional step forward would be to develop a theory of the optimal mix of innovation law (IP and non-IP) and social norms.

Second, the authors focus on social norms’ dark side in hindering cross-boundary research. It is at least conceivable that social norms also have negative effects within disciplinary boundaries (as the last comedy example may suggest). Examining norms’ failures in other communities could show whether this is a more general point, and perhaps help enrich the general theory of norms’ failure.

Third, and relatedly, since the article comes at a stage where a considerable number of industry-specific case studies already exist, it presents an opportunity to further contextualize and concretize the discussion. For example, for all or some of the communities explored in the literature, scholars could identify particular anti-innovation norms, map the categories they fall into, highlight the psychological biases that reinforce them, and propose the particular IP and non-IP reforms that would be most effective in countering them. Indeed, it seems that there is much to be gained from paying attention to anti-innovation norms in IP.

Cite as: Dotan Oliar, IP Norms’ Dark Side, JOTWELL (June 26, 2018) (reviewing Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018)), https://ip.jotwell.com/ip-norms-dark-side/.

Does Copyright Have a Framing Problem?

Margot E. Kaminski & Guy A. Rub, Copyright’s Framing Problem, 64 UCLA L. Rev. 1102 (2017).

Numerous provisions of the Copyright Act of 1976 (“1976 Act”) use the term “work” as a key referent for determining copyrightability, ownership, scope of rights, limitations on scope, and remedies. Yet, Congress did not provide a general-purpose definition of what counts as a “work,” even though it defined a plethora of arguably much less important terms. When the parties in litigation explicitly or implicitly disagree about the fundamental issue of what the plaintiff’s or defendant’s work is, what is a court to do?

This is a big and important question. While Kaminski and Rub do not provide a full response, they do frame the problem and illustrate how it plays out in many contexts. They demonstrate that courts have considerable flexibility in how to define the relevant work. Sometimes, courts use this flexibility to “zoom in” on particular facets of, for example, a design on a carpet that mixes public domain and original elements. Other times they “zoom out” to consider a work’s total concept and feel. Courts rarely defend their framing of the relevant work, and when they do, they do not use a shared set of criteria to justify their choices.

Kaminski and Rub have three goals for this article. First, they review numerous cases that illustrate the range of legal issues for which a court’s framing choice is consequential. They show the inconsistency of judicial framing choices and the rarity of justifications of these choices. Second, they argue that the doctrinal flexibility judges have exercised is more a feature than a bug in copyright law because it supplements a court’s ability to tailor the scope of rights or remedies in particular cases to better achieve copyright’s goals. Third, they argue that courts should more self-consciously exercise this framing flexibility and provide a justification for their choices. The authors close with a set of criteria courts might rely on to provide such a justification.

In their survey of the choices that are available to courts in framing the definition of the “work,” the authors start with the cases involving copying that is not wholesale, so-called “non-literal infringement.” The legal test is whether the two works are “substantially similar,” and how this test is applied depends greatly on how the plaintiff’s and defendant’s works are framed. One of the more difficult issues in this analysis is whether a character is a work independent of the larger narrative in which the character appears and, if it is, how that work should be defined when the character, say Sherlock Holmes or the Batmobile, evolves over time. The article briefly discusses how the Ninth Circuit “zoomed out” from the many iterations of the Batmobile to focus on its general characteristics in the Batman series to define the work in DC Comics v. Towle.1 As a reader, I wanted the authors to dive a little deeper into this issue to compare and contrast some of the other character cases.

The article then turns to questions of authorship, showing that courts have decided in motion picture cases to zoom out and treat the dominant author of the motion picture as the author, rejecting the theory that an actor creates a distinct work in a scene for which the actor is primarily responsible. The article then dives deeper into the useful article doctrine, arguing that the Supreme Court, in its enigmatic recent decision in Star Athletica LLC v. Varsity Brands, Inc.,2 sent conflicting signals about how to frame the design features and the utilitarian aspects by zooming in on the chevrons on a cheerleading uniform in parts of the discussion and zooming out to look at the uniform as a whole in others.

Turning to the framing tests the courts have used, or could use, Kaminski and Rub argue that most courts use no test, showing the inconsistent framing in cases involving photographs copied from collections of photographs as an example. Courts sometimes ask whether something is a work or a constituent element by testing it for copyrightability, but the authors persuasively argue that this test does too little work. A market-based approach has some appeal, and some courts ask whether the putatively independent works can “live their own copyright life [sic].”3 A variant on this test is whether the putative work has independent economic value. The authors acknowledge the appeal and some utility of this approach, but they rightly caution that with increasing variety in bundling and unbundling of works in digital form, market signals are less likely to provide stable guidance.

The closing section of the article argues that judicial flexibility in framing the work is beneficial because it allows courts to tailor protection and remedies in light of policy considerations such as the incentives-access tradeoff, managing transaction costs, and channeling creativity. As a result, permitting inconsistent framing depending on the doctrinal context provides some benefit. For example, small units may be copyrightable, and therefore works, for purposes of copyrightability; but if each copyrightable unit were eligible for a separate award of statutory damages, existing problems with proportionality in statutory damages would be greatly magnified.

The authors set an ambitious task for themselves, and their ability to range across the copyright caselaw is impressive. Two caveats: As a reader, I would have liked to see greater engagement with Justin Hughes’s article on microworks,4 which the authors acknowledge they are building on, particularly when considering the available framing tests. I also think that the framing choices they discuss in the subsections on fair use, statutory damages, and Section 201(c) are qualitatively related but distinct, because the decisions in the cases discussed there turned more on choosing which work (the collective work or the contribution to the collective work) to analyze than on whether to zoom in or out on aspects of a single work, as in the substantial similarity or useful article cases. That said, Kaminski and Rub make a persuasive case that courts should more explicitly identify and justify their framing choices about the relevant work(s) when applying the wide range of doctrines tied to the “work.”

  1. 802 F.3d 1012 (9th Cir. 2015).
  2. 137 S. Ct. 1002 (2017).
  3. Gamma Audio & Video, Inc. v. Ean-Chea, 11 F.3d 1106, 1116 (1st Cir. 1993).
  4. Justin Hughes, Size Matters (or Should) in Copyright Law, 74 Fordham L. Rev. 575 (2005).
Cite as: Michael W. Carroll, Does Copyright Have a Framing Problem?, JOTWELL (May 29, 2018) (reviewing Margot E. Kaminski & Guy A. Rub, Copyright’s Framing Problem, 64 UCLA L. Rev. 1102 (2017)), https://ip.jotwell.com/does-copyright-have-a-framing-problem/.