Oct 13, 2015 Jessica Silbey
In December 2011, the UK Intellectual Property Office commissioned the Centre for Intellectual Property Policy and Management at Bournemouth University to research the effects of parody on copyrighted works. Do parodies harm the market for the underlying work? How might we measure the economic effects of parody, as incentive depressors or engines?
UK copyright law does not contain an exception specifically covering parodies. The authors of the study perceive UK copyright law as among the most restrictive of the seven jurisdictions surveyed (US, Canada, Australia, France, Germany, Netherlands, UK) with regard to parodies. By commissioning this research, the UK appears to be considering reform. The study concludes that changes loosening the rein of copyright over parodies would further copyright’s underlying purposes of creation and dissemination.
This research is one of the first of its kind to measure empirically the economic effects of parody on the underlying copyrighted work. It was conducted in three parts, and this paper is the synthesis of the full research. The focus of the study was music videos on YouTube and whether “commercial exploitation of original works is affected by the presence of parodies where they can be considered to be part of the same market.” (p. 5.) It also investigates the potential for parodies to be a source of growth for the UK creative economy. The study compares the treatment of parodies in the seven jurisdictions listed above to discern the core concerns that parody raises for copyright. At the end, the synthesis applies the distilled principles of parody exceptions to the empirical sample to discern potential effects on creative and distribution incentives.
To be sure, there are many limitations to the generalizability of this study beyond music videos. The authors recognize that all on-line media may not adhere to the same audience behavior or market patterns. And offline parody markets (traditional print media or some visual arts) are likely to present “unique dynamics.” (p. 6.)
Nonetheless, the findings from this study are worth serious consideration by those jurisdictions in which parody exceptions are weak. Further, this study provides fodder for the argument that complementary creative works – be they parodic, satirical, or merely derivative – are unlikely to undermine the incentive to create and disseminate the original work, because parodies rarely substitute for the original in the marketplace. Indeed, as the authors describe, complementary works, or works in conversation with original works, do not intend to substitute for the original work but to draw attention to both the original work and the new work, lifting both up to a larger audience for inspection and enjoyment.
The main conclusion from the study is that parody is a significant consumer activity in the music video marketplace (on average, 24 parody videos for each original video of a hit single); and there is no evidence of economic damage to music video rights holders through substitution. In fact, the presence of parody videos correlates with larger audiences for the original.
The nine criteria the authors distilled from the seven jurisdictions applying a parody (or parody-like) exception do not easily align with the empirical conclusions of the study. (The primary data set consisted of 343 original music videos drawn from the top-100 charting music tracks of 2011.) For example, the first two criteria – that the parody be non-commercial and not have an adverse effect on the market for the original – are often at odds in the empirical data. The study concludes that an enforced “non-commercial” criterion would prohibit 31.1% of the sampled videos and 91% of the audience for all the music videos. That is a lot of content that could be enjoined were “non-commerciality” required for a parody exception! And yet, if the predominant factor for the exception were that the parody must not adversely affect the market for the original, only 0.5% of the sample and 0.2% of the audience would be prohibited. That leaves a lot of content freely distributable. (The authors concede drawing the line of “commerciality” broadly, and still these numbers are startling.)
Similarly, if one enforced the factor that the parody “must not use more of the original than necessary,” 81% of the sample would be prohibited along with 76.7% of the audience. And if it was important legally that the parody be “directed at the work” (a “target” parody rather than a “weapon” parody), 65% of the sample would be barred along with 39% of the audience. By contrast, the factor that the parody “add some significant new creation” (be modestly transformative or evidence independent creativity) would prohibit only 20% of the sample and 18% of the audience.
These criteria and their application to the sample of music videos indicate that the jurisdictions applying them tend to overprotect copyrighted works in relation to whatever market harm the original works might suffer and threaten with injunction large amounts of significantly creative work that engages in cultural conversation with and before very large audiences. There appears to be little reason to prohibit parodies if these conclusions carry over to other genres of copyrighted works.
The last criteria concern the personality or moral rights of authors, and they, surprisingly, align with the sample’s conclusions that would keep the parodies circulating. The criterion that the parody not harm the personality rights of the original author would leave freely distributable 98.5% of the sample and 77.2% of the audience. And the criterion that the parody acknowledge the source of the original covers 100% of the sample. Jurisdictions applying these criteria – which are not about market harms but about moral or emotional concerns related to authorship – should have no problem with the hundreds of thousands of parodies of copyrighted music videos on YouTube.
And yet the regulation of parody under copyright law remains an open question, especially in the EU and in the UK in particular. What this important study shows – and sets the stage for more such analysis – is that copyright holders and the creative economy generally need not be afraid of parodies. In fact, parodies should be embraced as good for authorship, good for audiences and good for the economy.
Cite as: Jessica Silbey,
Parody and Fair Markets, JOTWELL
(October 13, 2015) (reviewing Kris Erickson, Martin Kretschmer, and Dinusha Mendis,
Copyright and the Economic Effects of Parody: An Empirical Study of Music Videos on the YouTube Platform and an Assessment of the Regulatory Options (2013).
),
https://ip.jotwell.com/parody-and-fair-markets/.
Sep 4, 2015 David Fagundes
Of the many things that may cause us to admire an article, one is the author’s identification of a meaningful relationship between fields that had otherwise seemed entirely disparate. In the past year, two pieces—Tony Reese’s Be Careful Where You Die and Brad Greenberg’s DOMA’s Ghost and Copyright Reversionary Interests—identified just such a non-obvious nexus between a popular issue of great social importance (marriage equality) and a relatively obscure topic of great statutory technicality (termination of copyright transfers).
Both of these articles explore issues raised by two of copyright’s distinctive future interest provisions. Descendants of authors whose copyrights vested prior to 1978 are statutorily entitled, under certain conditions, to reversionary interests in those copyrights. And heirs of any authors stand to inherit the inalienable right to terminate transfers established by the Copyright Act of 1976. The trick, in each case, is that the heirs who enjoy these potential future interests—typically, the surviving spouse and children—are determined by statute, regardless of an author’s estate plan or preferences to the contrary.
The marriage equality implication of these provisions that both Greenberg and Reese note is that in a country where some states recognize same-sex marriage and others do not, same-sex partners of authors suffer a relative disadvantage. Since copyright’s future interest provisions tend to defer to state definitions of marriage, a couple’s celebration of a valid union in one state could still mean that the surviving partner would not be recognized as a “spouse,” depending on the state of the other’s death.
Of course, after the Supreme Court’s decision in Obergefell v. Hodges, we no longer live in such a country. The Court’s decision eliminates many of the immediately pressing social concerns raised by both authors. (Though it does not, as Reese observes, eliminate them all. Same-sex marriage remains unrecognized in many foreign jurisdictions, so should a couple be domiciliaries of a nation that does not recognize their union, they may remain subject to unequal treatment.)
And while one might think that Obergefell brings down the curtain on the problems identified by Reese and Greenberg, a deeper look shows just the opposite: The marriage equality issue is just one valence of copyright’s largely unexplored family law. The Copyright Act’s future interest provisions privilege a particular view of traditional marital and family structures that operate to the detriment of any author, regardless of sexual orientation, who does not adhere to those traditional norms.
Copyright’s familial structures work well for those who are in happy, traditional nuclear families. If you have an opposite-sex spouse and some kids, and you want them and only them to get your copyright-related future interests (e.g., termination rights or reversionary interests), the statute locks in that preference. But if you are not a member of a traditional nuclear family, securing these interests for those you care about is more complicated. These difficulties may no longer disproportionately burden same-sex couples, but they still work to the detriment of, say, authors who are in committed opposite-sex relationships that have not been memorialized by state-sanctioned marriage. Authors who simply choose not to pursue romantic relationships at all would similarly find themselves at a relative disadvantage, while copyright’s familial assumptions are simply incoherent as applied to the polyamorous (the Act’s use of “surviving spouse” is conspicuously singular). Copyright’s presumptions about family and author preferences may also work to the detriment of authors’ interests even when the author does fit into social norms about matrimony and reproduction. If you have a miscreant kid whom you don’t trust to take care of his part of your literary estate, or a beloved niece whom you want to take charge of it, you’re out of luck: The Copyright Act entitles all of an author’s children to a share of her reversionary or termination rights (and excludes more distant blood relations), regardless of whether that descendant is included in or disinherited by the author’s will.
Yet as Reese notes in the normative section of Be Careful Where You Die, it’s easier to point out these problems than to remedy them. The whole point of including family-friendly statutory overrides of an author’s testamentary intent was to make sure that the purpose of copyright termination and reversion was not undermined by authors simply devising their future interests to assignees and licensees. This is the bigger issue that is signaled in, but not resolved by, both articles: Does rendering an author’s posthumous rights inalienable require picking some individuals who will benefit from those rights? Or is there some other statutory structure that would maintain inalienability without disadvantaging authors who do not adhere to a standard hetero- and repro-normative lifestyle?
Like all good work, Anthony Reese’s Be Careful Where You Die and Brad Greenberg’s DOMA’s Ghost push readers to ask hard questions that lack easy answers. I was first drawn to these pieces by the novelty of their core insights about the rarely examined nexus between marriage equality and termination of copyright transfers. But Reese’s and Greenberg’s thoughtful analyses do more than just call attention to a doctrinal issue that has been largely (if not entirely) resolved by Obergefell. By shining a light on the underexplored assumptions animating the Copyright Act’s provisions instantiating and privileging a particular vision of the family, both articles open the door to investigations of the entanglement of copyright and family law beyond marriage equality.
David Fagundes,
Copyright’s Family Law, JOTWELL (August 11, 2015) (reviewing Anthony Reese,
Be Careful Where You Die: Termination of Copyright Transfers and the Road to Marriage Equality,
IP Theory (forthcoming, 2015) available at SSRN and Brad Greenberg,
DOMA’s Ghost and Copyright Reversionary Interests, 108
Nw. U. L. Rev. 1 (2014). ),
https://ip.jotwell.com/?p=748.
Jul 20, 2015 Ted Sichelman
In the 19th century, legal scholarship focused on legal doctrine. In the 20th century, legal scholars began to examine the policy effects of legal doctrine, paying particular attention to how changes in doctrine could yield better policies. Now, such policy-oriented approaches are cemented into nearly every U.S. law review article. Although this shift has in my view generally been beneficial, it still suffers from a doctrinal myopia: legal scholars usually write about only the swaths of law they know well, often overlooking other strands of law that are quite pertinent to the policy issues being addressed.
For example, although patent law scholars frequently opine about the nuances of patent doctrine and how changes in those nuances may affect innovation incentives, they have often ignored how other available policy tools—such as grants and government prizes—affect innovation. Although there is certainly law that deals with grants and prizes, it is rarely the subject of litigation and is fairly specialized (hence, occupying the minds of a small number of lawyers). None of it is taught in law schools. As such, law professors tend to know (and write) little about it.
On the other hand, economists who write about innovation tend to consider all of the available policy tools. Thus, there is a substantial literature in economics on grants and prizes. Yet, from a legal scholar’s perspective, most of this literature is too broad-brush, as it tends to abstract away from important legal nuances. Generally, economists either don’t examine in sufficient detail how changes in the law can impact policy, or else (at least in the absence of a law professor co-author), offer overly simplistic legal analysis.
Recently, legal scholars, drawing on discussions in the economics literature, have applied more rigorous analysis to various policy tools not typically examined in the legal literature. In the field of patent law, several articles have explored the interaction between patents, prizes, and grants.
Oddly, none have explored in any detail the tax deductions and tax credits available for R&D. Hemel and Ouellette make an impressive contribution by considering this important and otherwise overlooked driver of innovation from a rigorous legal perspective. The lacuna in the literature is all the more striking because, as the authors point out, the U.S. government spends tens of billions of dollars each year on R&D tax credits alone, plus surely tens of billions more on general deductions for R&D expenditures.
What is most valuable about Hemel and Ouellette’s paper is their “compare-and-contrast” analysis of the four main policy tools—patents, prizes, grants, and tax credits—used to promote innovation. Importantly, and in contrast with some recent works on prizes by others in the field, they properly recognize that there is generally “no free lunch” in the innovation game. As such, all of the policy alternatives tend to be costly.
My favorite of their examples on this score concerns an innovative drug for baldness. With some basic assumptions, they show that the number of potential consumers priced out of purchasing the drug is exactly the same under a patent regime and a prize regime, as long as the drug’s users are taxed in order to pay for the prize. One can extend this example. Even if the prize amount is raised from taxpayers as a whole, this produces a mandatory insurance system of sorts for innovation: the marginal additional tax paid by each person represents the premium charged for the right to purchase a “needed” innovation of interest at marginal cost. Of course, such a regime—like one premised on grants, tax credits, or patents—leads to similar “deadweight losses,” at least from the ideal perspective in which innovation appears out of thin air.
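The patent/prize equivalence can be illustrated with a toy linear-demand model. All of the numbers below are hypothetical, chosen only to make the arithmetic concrete; Hemel and Ouellette’s own assumptions may differ, so this is a sketch of the intuition rather than a reproduction of their analysis:

```python
# Toy model: linear demand Q(p) = a - b*p for the drug, constant marginal cost c.
# Parameter values are illustrative, not drawn from the paper.
a, b, c = 100.0, 1.0, 10.0

def quantity(p):
    """Units demanded at price p under linear demand."""
    return max(a - b * p, 0.0)

# Patent regime: the monopolist picks the profit-maximizing price.
p_monopoly = (a / b + c) / 2          # standard linear-demand monopoly price
profit = (p_monopoly - c) * quantity(p_monopoly)

# Prize regime: the drug sells at marginal cost plus a per-unit tax t on its
# buyers, with the tax revenue funding a prize equal to the monopoly profit.
# Solving t * Q(c + t) = profit under linear demand gives t = p_monopoly - c,
# so buyers face exactly the monopoly price again.
t = p_monopoly - c
p_prize = c + t
assert abs(t * quantity(p_prize) - profit) < 1e-9   # tax revenue covers the prize

# Consumers priced out (willingness to pay below the effective price)
# are identical under the two regimes.
priced_out_patent = a - quantity(p_monopoly)
priced_out_prize = a - quantity(p_prize)
print(priced_out_patent, priced_out_prize)   # identical in both regimes
```

The key step is that funding the prize from the drug’s own users pushes the effective consumer price back up to the monopoly level, so the same buyers are excluded either way; only when the prize is funded by general taxation does the exclusion shift, becoming the “insurance premium” described above.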
Importantly, Hemel and Ouellette also contrast these policy tools, engaging in a sophisticated analysis of ex ante vs. ex post trade-offs, cross-subsidization, racing, coordination, risk, and administrative costs. Of course, this sort of discussion could occupy several lengthy books, and one cannot fault the authors for not considering every nook and cranny of these important topics. Instead, they have displayed in consummate fashion how such analysis should and could occur on a more detailed level.
On a broader level, Hemel and Ouellette’s analysis shows that legal scholarship—and to some degree, legal pedagogy—ought not to concern itself solely with a narrow set of legal doctrines as the means to policy ends. Rather, it should strive to consider the full range of legal and non-legal options that serve a common goal—for instance, innovation.
In other words, merely considering one’s own realm of expertise—at least in a broad, policy-driven article—will tend to be myopic, resulting in an incomplete analysis. Of course, such a view implies that co-authoring (e.g., one expert in patent law, another in antitrust, another in tax, another in economics, etc.) will often be essential. Writing in larger and larger groups will certainly be a tough transition, individually and institutionally. Nonetheless, the sciences learned this valuable lesson many years ago. As Hemel and Ouellette have shown us, now it’s our turn.
Jun 17, 2015 Christopher J. Buccafusco
By now, most Jotwell readers will be familiar with the terrific empirical research that Paul Heald has been doing on the public domain. Now, Paul has teamed up with Kristopher Erickson and Martin Kretschmer, scholars at the University of Glasgow and the CREATe centre (which stands for Creativity, Regulation, Enterprise, and Technology). CREATe is a publicly funded multi-disciplinary program that provides research support to produce evidence-based assessments of IP policies—something I think we can all agree that we like lots.
Heald, Erickson, and Kretschmer (HEK) have recently posted a new paper that presents a section of CREATe’s larger empirical project on copyright and the value of the public domain. I strongly recommend the entire report, which includes two separate empirical studies, but I will focus my comments on the shorter paper.
The authors begin by noting that copyright owners have become adept at offering quantitative assessments of the economic value that copyright industries produce. Although there are numerous estimates of the value of copyright law, there are very few attempts to measure the economic value of the public domain. HEK’s paper begins to balance the ledger by estimating the value of a robust public domain for creative reuse.
To do so, the authors modify and extend a technique recently introduced by Abhishek Nagaraj at MIT. The basic idea is to analyze Wikipedia pages for the use of photographs where the availability of photographs is affected by the public domain. HEK study the use of photographs of successful literary authors on those authors’ Wikipedia pages.
The sample includes 362 authors who had at least one New York Times bestseller from 1895 to 1969. The authors were born between 1829 and 1942 and, thus, span the 1923 public domain/copyright divide. Authors who were born and died before 1923 can only be represented by public domain images; authors born after 1923 can only be represented by copyright-eligible images; and authors whose lives span the divide can be represented by both. HEK hypothesize that, despite the many fewer images that exist of earlier authors, those authors’ pages will be more likely to include an image than later-born authors’ pages. This is because the public domain images can be freely used, but the copyrighted images likely have to be licensed.
Their results support this hypothesis. While authors born after 1920 have about an even chance of being represented by a photo on their Wikipedia pages, authors born between 1850 and 1880 have about a 90% chance of being represented by a photograph. The difference, according to the authors, is the much larger set of freely available images for the older authors.
This finding alone would provide significant evidence of the value of a robust public domain. But HEK want to go further and estimate the extent to which the addition of photographs on Wikipedia pages adds to social welfare. First, they consider what it would have cost Wikipedia to license the public domain images. The same or similar images could have been licensed from Corbis or Getty for about $120 each, so HEK estimate that the Wikipedia page builders saved $77,400 over a five-year period. Extrapolating to Wikipedia as a whole, this would amount to savings of about a quarter of a billion dollars per year.
HEK also attempt to estimate the social value added by the public domain images by comparing the number of page views for pages with and without photographs. The inclusion of photographs increases traffic to webpages (although the precise mechanism isn’t spelled out clearly), and increased traffic means increased advertising revenue (at least for pages that accept ads). The authors measure changes in page views from 2009 to 2014 for those authors who had an image added to their pages after 2009 and for those who did not. Presumably, this should help isolate the draw of the image. HEK estimate that the addition of an image increased page views by 19% during their sample period. Each additional page view is worth about $0.005 in additional ad revenue, so again HEK assess the hypothetical revenue that Wikipedia could be earning based on its use of public domain images. They calculate that the increased traffic to Wikipedia from public domain images is worth about $38 million per year.
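The back-of-the-envelope arithmetic behind these estimates is easy to reproduce from the reported figures. The dollar figures below come from the paper; the image count is inferred from them, and the example page-view input is purely hypothetical:

```python
# Reproducing HEK's back-of-the-envelope estimates from the reported figures.
LICENSE_FEE = 120.0        # reported: cost to license a comparable Corbis/Getty image
SAVINGS_5YR = 77_400.0     # reported: licensing fees avoided over five years
images_used = SAVINGS_5YR / LICENSE_FEE
print(images_used)         # -> 645.0 public domain images implied by the savings

REVENUE_PER_VIEW = 0.005   # reported: ad revenue per additional page view
UPLIFT = 0.19              # reported: increase in page views when an image is added

def extra_ad_revenue(baseline_views):
    """Hypothetical annual ad revenue attributable to adding an image to a
    page with `baseline_views` yearly views (the input is illustrative)."""
    return baseline_views * UPLIFT * REVENUE_PER_VIEW

# e.g., a page drawing 1,000,000 views per year:
print(extra_ad_revenue(1_000_000))   # -> 950.0 dollars per year
```

Summed over Wikipedia’s actual traffic to pages carrying public domain images (figures not reproducible from this review alone), this per-page calculation is what yields HEK’s $38 million annual estimate.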
HEK conclude by offering policy recommendations regarding the harm of copyright term extension and the value of orphan works legislation. While these suggestions are important, the greatest value from this project, and the others that CREATe is producing, is the richer picture of the copyright landscape that they provide. When the next round of copyright legislation begins, both sides will be armed with quantitative figures about costs and benefits.
Finally, in the spirit of further encouragement and next steps, I would like to see more sophisticated analysis of the data, including regression analysis of the initial data set. HEK’s claims about the data would be bolstered with a fuller impression of the effects of each of their variables. Additionally, to deal with endogeneity problems associated with the existence of photographs and other variables, the authors could consider approaching the problem experimentally by randomly assigning different Wikipedia pages to receive a photograph. This might provide greater explanatory power about the relationship between images and page views.
Cite as: Christopher J. Buccafusco,
Estimating the Value of the Public Domain, JOTWELL
(June 17, 2015) (reviewing Paul J. Heald, Kris Erickson and Martin Kretschmer,
The Valuation of Unprotected Works: A Case Study of Public Domain Photographs on Wikipedia, 28
Harvard J.L. & Tech. (forthcoming, 2015), available at SSRN),
https://ip.jotwell.com/estimating-the-value-of-the-public-domain/.
May 20, 2015 Michael W. Carroll
Jacob H. Rooksby,
Defining Domain: Higher Education’s Battles for Cyberspace, 80
Brooklyn L. Rev. __ (forthcoming, 2015), available at
SSRN.
Professor Rooksby studies the way in which higher education institutions manage their brands online by enforcing their trademarks through domain name disputes. His article sits at the intersection of two lines of scholarly discourse. The first is literature about trademark law in cyberspace and the way that it has been privatized. ICANN’s Uniform Domain Name Dispute Resolution policy (UDRP) is the source of law for the majority of domain name-based trademark disputes, and platforms such as eBay and Google have incorporated their private trademark laws into their Terms of Use. The second is the “Internet governance” literature about the role of the domain name system and ICANN in cyberspace and the way in which ICANN is and is not a private government.
Because UDRP disputes are resolved through online proceedings that are formally non-binding and non-precedential, scholars in the United States tend to leave these decisions in the shadows, focusing attention instead on the work of the federal courts. Taking a different tack, Professor Rooksby set out to find out how frequently U.S. institutions of higher education initiated UDRP proceedings and why, with a particular emphasis on whether their enforcement strategies aligned with the free speech values upon which the modern academy is founded.
He found that from 2000 to 2013, 100 U.S. institutions initiated at least one UDRP proceeding. Some were repeat players, so a total of 233 complaints were filed concerning 373 domain names. These institutions were more than 90% successful in obtaining awards of transfer or cancellation (P. 36). Baylor turns out to have been far and away the most active consumer of UDRP proceedings, having initiated 62. The University of Texas system comes in second at 19, and then a handful of schools, including my employer, have initiated between 5 and 7 proceedings. Eighty-seven percent of the 100 schools have filed only one UDRP complaint.
While many of these proceedings involve domain names designed to confuse, such as [school name].biz, Baylor also pursued actions against ihatebaylor.com and baylorsucks.com, both of which resolved to parked pages at the time the actions were brought. Using these examples and others, Professor Rooksby highlights the tension between these enforcement decisions and the university’s responsibility to encourage critical discourse. He sets forth a useful set of guidelines that he proposes should guide university counsel’s decisions, or oversight of decisions, to initiate UDRP proceedings in the future.
There’s the seed of a different line of inquiry in the article that I hope Professor Rooksby or other trademark scholars pursue. Under what circumstances does a top-level domain name become a certification mark? The .edu top-level domain (“TLD”) is “governed” differently than many of the other generic TLDs. Its registry, EDUCAUSE, has a policy that limits registration rights to accredited institutions of higher education. Initially, these were only four-year institutions, but now community colleges and a range of other schools also are eligible. EDUCAUSE has not been that vigilant in enforcing this policy, and a number of .edu second-level domains were registered to non-academic registrants who successfully confused consumers. Are these cases of confusion actionable against only the registrant, or may the registry of a restricted top-level domain also be held liable under the Lanham Act?
As Professor Rooksby notes, the top-level name space is expanding dramatically. Even with so-called sunrise policies aimed at allowing for preemptive registrations by trademark owners, most observers expect the number of run-of-the-mill trademark disputes to increase. More interesting, however, is that many of these registries are likely to “govern” their respective domains with restrictive policies, for the purpose of making the top-level domain name either a source identifier or a representation about at least one attribute of the second-level domain name registrant. The experience with the .edu domain will be instructive about the certification mark and shared liability issues that may well follow.
May 6, 2015 Lisa Larrimore Ouellette
- Tejas N. Narechania, Patent Conflicts, 103 Geo. L.J. (forthcoming 2015), available at SSRN.
- Jacob S. Sherkow, Administrating Patent Litigation, 90 Wash. L. Rev. (forthcoming 2015), available at SSRN.
In these new articles, Tejas Narechania and Jake Sherkow push the contextualizing trend in IP scholarship in a novel direction. As noted by Rob Merges, scholars are increasingly recognizing that formal IP laws are embedded in a broader economic context, and this wave of scholarship includes case studies of fields in which innovation is supported by norms and market incentives (like fashion, cuisine, roller derby names, and tattooing) and increased analysis of non-IP mechanisms like tax credits and direct transfers through which the state provides significant financial support for innovators. But in addition, I think this contextualizing move involves recognition that the innovation ecosystem is shaped not only by non-IP laws and norms, but also by a broad array of institutions.
Most discussions of institutional actors in patent law have analyzed interactions among the Federal Circuit, the Supreme Court, Congress, and the PTO. But these two new articles by Narechania and Sherkow focus on administrative agencies beyond the PTO. Building on terrific work by scholars such as Arti Rai, Sapna Kumar, and Kali Murray, Narechania and Sherkow provide detailed examples of the ways in which agencies such as the FTC, FCC, ITC, NIH, and FDA have played key roles in influencing patent policy.
Sherkow’s Administrating Patent Litigation focuses on agencies’ role in directing and managing patent litigation, while Narechania’s Patent Conflicts takes a broader look at how non-PTO agencies deal with patent policies that interfere with their regulatory aims. Narechania provides a helpful descriptive taxonomy of the range of agencies’ legal responses to these patent conflicts:
- Inaction: The FCC concluded it cannot require licensing of patents implicated by new 911 standards (though Sherkow argues that the FCC’s report on these patents has been crucial in ongoing patent disputes). Similarly, the EPA declined to regulate certain chemical emissions due to patents on emissions control devices (despite explicit authority to grant compulsory licenses).
- Indirect Action—Supreme Court: The FTC’s views on reverse payments in pharmaceutical settlements and the NIH’s views on gene patents (as advocated by the DOJ) won at the Supreme Court in Actavis and Myriad, respectively. (Sherkow also notes that the FTC has been involved as an amicus in high-profile patent disputes like Apple v. Motorola.)
- Indirect Action—Congress: The IRS convinced Congress to effectively ban tax strategy patents in the America Invents Act; the Department of the Navy forced the early airplane industry into a cross-licensing agreement by getting Congress to pass a bill providing for confiscation of key patents.
- Direct Action: The FCC required mandatory licensing, at reasonable and nondiscriminatory rates, of telecommunications network element patents.
Sherkow discusses some of these same examples, though he places greater emphasis on agencies’ more informal influence on adjudication through patent-related whitepapers, such as the FCC’s 911 report or the ITC’s reports on non-practicing entities, which have been cited by litigants seeking to restrict the ITC’s patent jurisdiction. The authors also both discuss the FDA, which plays a limited gatekeeping role in the pharmaceutical patent context but claims to have only “ministerial” authority to record patents related to approved drugs in its Orange Book.
Both articles advocate greater agency involvement in patent policymaking, although they focus on different problems. Sherkow argues that “agencies’ myopic view of their own powers” creates procedural issues such as regulatory gamesmanship, industry and political capture, and inconsistent judgments. He contends that the FDA can and should weed out improperly listed patents. And he suggests that agencies that lack the FTC’s independent litigation authority should push the DOJ to allow them to intervene in patent litigation or should publicize their patent-related views more often through agency whitepapers.
Narechania is more concerned about substantive conflicts between patent law and other regulatory goals, and he suggests that the FCC’s direct regulation of telecommunications patents offers “a path forward.” Even where agencies lack express authority to issue patent-related regulations, he argues that they may do so under the theory of “ancillary authority” from Southwestern Cable. Under this authority, the FCC could require licensing of patents necessary for implementation of its 911 standards, and the FDA could implement substantive review of Orange Book listings. Narechania also argues for greater executive coordination, perhaps through the White House’s Office of Science and Technology Policy. And finally, he suggests that agencies might use post-grant review proceedings to challenge PTO decisions involving “unsettled legal question[s] that [are] important to other patents,” which might be a useful procedure for issues such as tax strategy patents.
Distinct from the question of whether agencies can address patent issues is whether they should. It is hard to argue with interventions to limit the litigation abuses that Sherkow focuses on, but substantive conflicts between patents and other policies pose a problem that is both more significant and thornier. Where the conflict is confined within the domain of innovation—as in the competing visions of biotech innovation at issue in Myriad—Narechania suggests that non-PTO agencies can help “craft a more context-sensitive (and less formalistic) regime.” Indeed, as I’ve argued, patent uniformity has significant costs, and one argument for greater involvement of non-PTO agencies in patent policy is that they might be better at balancing patents with non-patent mechanisms for facilitating financial transfers to innovators. Where the conflict is between patents and other interests—which often boils down to the age-old tension between innovation and access—Narechania notes that self-interested agencies may not be the best actors to balance the conflict, and that an Executive Branch arbiter or an impartial court might be better at prioritizing policies.
Of course, figuring out how to balance patents with competing concerns remains daunting. The standard tool of cost-benefit analysis is, as Narechania aptly puts it, “like comparing apple seeds to orange seeds,” with the need to make impossibly difficult predictions of the future value of innovation and competing policies. He notes that agencies can look for areas in which holdup problems and the transaction costs of dealing with fragmented rights are likely to be significant, but correctly identifying even these more limited problems is challenging. But one article can’t solve everything, and I think the main contribution of this work is the novel take on who should undertake this balancing, and when.
In short, these articles provide valuable descriptive contributions to help expand scholars’ understanding of the relevant institutional actors in patent law and the doctrinal limits on their powers. And they also add thoughtful normative analyses of how patent law and other legal fields can best benefit from these agencies’ involvement. While some readers might object to their specific policy suggestions, I think that after reading these articles, it is hard to argue that non-PTO agencies can be ignored.
Cite as: Lisa Larrimore Ouellette, The PTO Is Not the Only Patent Agency, JOTWELL (May 6, 2015) (reviewing Tejas N. Narechania, Patent Conflicts, 103 Geo. L. Rev. (forthcoming 2015) and Jacob S. Sherkow, Administrating Patent Litigation, 90 Wash. L. Rev. (forthcoming 2015)), https://ip.jotwell.com/the-pto-is-not-the-only-patent-agency/.
Apr 22, 2015 Pamela Samuelson
Michael C. Donaldson, Refuge From The Storm: A Fair Use Safe Harbor For Non-Fiction Works, 59 J. Copyright Soc’y U.S.A. 477 (2012), available at SSRN.
So much has been written on the fair use case law in the U.S. that it would seem nigh impossible to find something new to say about it. But new things are indeed possible. Michael Donaldson, who practices entertainment law in Los Angeles and has represented many clients in copyright disputes in the film industry, has made four significant contributions to the fair use literature in his article “Refuge From the Storm: A Fair Use Safe Harbor for Non-Fiction Works,” which was published in the Journal of the Copyright Society in 2012.
One contribution is the concept of fair use safe harbors. Copyright professionals are used to speaking of safe harbors when it comes to the statutory limits on liability of Internet service providers for the infringing acts of others. Some of us also use that term when discussing the judge-made limit on secondary liability for developers of technologies having substantial non-infringing uses. But we have shied away from the safe harbor concept in fair use cases, perhaps because the Supreme Court was unwilling to endorse presumptions of fairness for parodies in its Campbell v. Acuff-Rose decision. Donaldson’s article makes a persuasive argument that a fair use safe harbor does exist for certain uses of pre-existing materials in non-fiction works, and it opened my mind to the possibility that other fair use safe harbors might also exist.
A second contribution is the set of three questions that, Donaldson asserts, will qualify a particular use as within a fair use safe harbor if answered in the affirmative: 1) does the material the non-fiction author wants to use illustrate or support a point that author is trying to make?, 2) does the non-fiction author use only as much as is reasonably appropriate to illustrate or support that point?, and 3) will the connection between the point being made and the material being used to illustrate or support that point be reasonably clear to the ordinary person who is the intended audience for the non-fiction work? Notice that these questions build off of the statutory fair use factors, but recast them in a way that makes analysis of the issues much more precise and that yields answers that are more likely to predict the outcome if the use is challenged.
A third contribution is Donaldson’s discussion of 82 non-fiction fair use cases he studied to illustrate the kinds of litigated cases that, he believes, easily qualify for this safe harbor, and those that may or may not be fair uses because the answers to one or more of the three questions above are more equivocal. This leads Donaldson to make another useful point: fair use, he argues, is best perceived as a spectrum, in which some cases are easily fair use, some easily not fair use, and some in a gray zone. Donaldson gives an example of working with a filmmaker who wanted to use a short clip from a John Lennon song in a documentary film. Because it was quite likely that Yoko Ono would sue for infringement, Donaldson worked with the filmmaker to craft the use of the song in a manner that would likely satisfy courts that the use was fair.
Finally, Donaldson discusses a dozen or so arguments that copyright owners frequently make to defeat fair use defenses that Donaldson thinks are “distractions”; i.e., that, in Donaldson’s view, are not convincing counters to fair use defenses, at least in the non-fiction fair use work context. They include the failure of the defendant to ask permission, the use of the material after permission was refused, the existence of negotiations that failed to yield a license, the availability of alternatives that could have been used as substitutes, and a claim that the material used was the “heart” of the work. Donaldson argues that as long as the three questions he’s sketched out can convincingly be answered in the affirmative, these “distractions” should not derail the fair use defense.
Donaldson’s article builds on the works of a number of scholars who have argued that fair use is more predictable than some commentators claim. But Donaldson’s experience in the practice of law has yielded insights into the predictability of fair use in the important sector of non-fiction works which makes his article even more persuasive than the works that he builds on.
Cite as: Pamela Samuelson, Fair Use Safe Harbors?, JOTWELL (April 22, 2015) (reviewing Michael C. Donaldson, Refuge From The Storm: A Fair Use Safe Harbor For Non-Fiction Works, 59 J. Copyright Soc’y U.S.A. 477 (2012), available at SSRN), https://ip.jotwell.com/fair-use-safe-harbors/.
Mar 25, 2015 Stacey L. Dogan
Annemarie Bridy, Internet Payment Blockades, Fla. L. Rev. (forthcoming), available at SSRN.
The law of intermediary liability in intellectual property reflects a constant struggle for balance. On the one hand, rights owners frustrated by the game of whack-a-mole have good reason to look for more efficient ways to stanch the flow of infringement. While this concern is not a new one, the global reach and decentralization of the Internet have exacerbated it. On the flipside, consumers, technology developers, and others fret about the impact of broad liability: it can impede speech, limit competition, and impose a drag on economic sectors with only a peripheral relationship to infringement. As the Supreme Court put it thirty years ago in the seminal Sony case, the law must seek a “balance between a [rights] holder’s legitimate demand for effective – not merely symbolic – protection of the statutory monopoly, and the rights of others freely to engage in substantially unrelated areas of commerce.”
For the most part, the battle of these competing interests has played out in litigation, legislation, and deals involving online intermediaries whose services are used to infringe. The Digital Millennium Copyright Act’s notice-and-takedown procedures, the peer-to-peer copyright battles, keyword advertising suits, and lawsuits against websites like eBay are giving shape to the relative rights and responsibilities of IP owners and intermediaries.
While few would defend the existing structure as perfect, it has a number of features – such as the (usual) requirement of actual knowledge – that are designed to minimize IP enforcement’s collateral effects. More generally, except in the case of inducement, the obligation to stop infringement has been limited to those with a close technological relationship to it – parties that may not have initiated the infringing act, but have the tools to stop it surgically, with minimal collateral effects.
These standards of secondary liability, however, have not satisfied rights-holders. In addition to substantive objections to the rules, IP owners point out that many online intermediaries lie beyond the reach of United States courts. In the last several years, they have tried to widen the net of responsibility to include not just technological intermediaries, but also payment intermediaries that process Internet transactions.
The efforts began in the courts, which held that absent evidence of collusion or inducement, financial intermediaries were too far removed from infringement to justify liability against them. Attention then turned to the legislature, which showed initial enthusiasm but backed down in response to a wave of protest that reflected both substantive and procedural objections to PIPA, SOPA, and related legislation.
In this article, Annemarie Bridy (who has written insightfully about the lead-up to, and fallout from, the PIPA and SOPA debacle) explores the latest front in this battle over responsibility for online enforcement: “voluntary” Internet payment blockades.
The picture she paints is not pretty. If SOPA and PIPA raised transparency concerns, and if the prospect of broad liability cast a pall over financial intermediaries’ engagement in “substantially unrelated areas of commerce,” the latest chapter will make the winners of earlier battles wonder what they fought for. As Bridy explains, in the wake of the SOPA and PIPA collapse, rights-holders enlisted the executive branch to pressure payment intermediaries to “voluntarily” take on obligations that mirrored those in the failed legislation. Under the agreement, financial intermediaries commit to a “notice-and-terminate protocol,” in which merchants selling infringing products are cut off from access to payment services.
Even if the agreement were truly voluntary, it would raise troubling questions of substance and process. The notice-and-terminate system does not appear to require any inquiry into the relative volume of infringing and non-infringing sales by the offending merchant. The agreement, moreover, arose out of a closed and non-transparent process, as evidenced by the government’s failure to produce a single document in response to a FOIA request about negotiations. Equally distressing, the “voluntary,” non-mandated and non-contractual nature of the agreement means that no court will adjudicate its meaning and scope.
Bridy makes a persuasive case that this form of “private ordering” is really regulation in disguise. After years on the defensive, and in the shadow of prospective regulation, the payment intermediaries acceded to pressure and joined the ranks of intellectual property enforcers. Without transparency in either design or implementation, the public has no way to know whether this quasi-regulatory system reflects any of the balance that’s so critical in the Internet context – between freedom and responsibility, and between infringing and non-infringing use.
I found Bridy’s article engaging, informative, and disturbing. She does a terrific job of introducing readers to the past and present of payment blockades. She lays the historical foundation, from initial case law through PIPA and SOPA, and offers a clear, simple, and informative explanation of how the best practices protocol works. And her critique is powerful. For those of us focused primarily on developments in litigation, legislation, and formal administrative law, the article offers a sobering reminder of the limitations of these formal fora, and the doggedness and resourcefulness of rights-holder advocates.
The article left me eager for more details about the agreement’s content and effects – will it impede access to websites offering plenty of non-infringing stuff? Where are the incentives for financial intermediaries in that scenario? Market forces may well substitute for legal doctrine and lead payment intermediaries to avoid cutting off legitimate trade. But investigation is expensive, and it may prove expedient to terminate in the questionable case.
Ultimately, whether this system will cause more harm than good is an empirical – and probably untestable – question. It may well be that, with a narrowly tailored notice-and-terminate system, payment intermediaries offer an efficient way to reduce rampant counterfeiting and infringement. But Bridy’s article gives us reason to question the jump to that conclusion, especially in a process as muddy as this one.
Feb 23, 2015 Laura A. Heymann
Trademark surveys have traditionally been seen as a core element of any trademark infringement or dilution dispute. How else would we discover, the theory goes, whether the typical consumer is confused about the source of a particular product, believes the prestige of a famous mark to have been diluted, or considers a once valid mark to have become generic?
Recent empirical work, focusing on published judicial opinions, has debated whether surveys have indeed played as significant a role as some have asserted or whether they are generally disregarded by courts, perhaps in favor of judges’ own intuitions. In a recent symposium contribution published in the Texas Law Review, Shari Seidman Diamond and David J. Franklyn help to expand the field. Because published opinions tell only part of the story, Profs. Diamond and Franklyn surveyed trademark practitioners in an attempt to discover how surveys are used in early stages of legal disputes. The results provide some useful food for thought both for trademark practitioners and for empirical legal scholars.
Some background for readers who are unfamiliar with trademark doctrine: Trademarks function, in the now prevalent description, as source identifiers — they tell consumers who is responsible for (or, perhaps, who sponsors or authorizes) the good or service at hand. Unlawful uses of trademarks disrupt this function in impermissible ways, either by confusing consumers into thinking that two parties are related when they are not or by diluting the strength of a famous trademark by using it on unrelated goods or services. When a trademark no longer performs its source-identifying function — that is, when it becomes the common word for a good or service rather than identifying a particular producer of that good or service — the mark is said to have become generic and so can no longer be claimed by anyone as a mark for that good or service.
The conventional wisdom was that courts trying to determine how consumers interpreted a particular trademark would want to see evidence of that interpretation in the form of a survey. Consumers would be intercepted in a mall, or contacted via the telephone, and asked certain questions about the mark. There are, of course, limits to survey evidence, since any results must be assessed with the knowledge that the artificial nature of the survey environment is likely to encourage focus on small similarities or differences that might otherwise go unnoticed. (Likewise, there are equally troublesome instances of judges believing that their own views, arrived at in the quiet of chambers, are equivalent to those of the reasonable consumer.)
In a careful and influential 2006 study of the multifactor tests for trademark infringement, Prof. Barton Beebe concluded that courts did not ultimately place as much weight on surveys as litigants might have assumed. Given that only 20 percent of the 331 opinions he studied discussed survey evidence, and only 10 percent credited the survey evidence, Prof. Beebe concluded that survey evidence was “in practice of little importance.”
Prof. Beebe’s conclusions were challenged to some extent by Profs. Sarel and Marmorstein in their 2009 study. Profs. Sarel and Marmorstein narrowed the field to 126 opinions in which the plaintiff’s trademark rights were undisputed, in an attempt to focus their assessment on surveys regarding likelihood of confusion. They found that in about one-third of their cases, the plaintiff offered a likelihood-of-confusion survey, with a significant impact on the results of decided cases in which the survey was admitted.
A third study was published by Profs. Bird and Steckel in 2012. Like the previous two studies, the Bird/Steckel study focused on published opinions, expanding Prof. Beebe’s data set with an additional 202 later cases. With this larger dataset, Profs. Bird and Steckel found that only 16.6 percent of their 533 cases discussed survey evidence, and so their results supported Prof. Beebe’s finding that surveys were not as broadly influential as assumed. They also found, however, consistent with Profs. Sarel and Marmorstein, that surveys could be influential in particular kinds of cases (such as where the marks were similar but the products dissimilar).
In their contribution to this line of research, Profs. Diamond and Franklyn note — as the previous authors acknowledge — that all of these studies focus on published opinions, and so tend to show us what judges and juries think about trademark surveys. To learn more about how litigants assess the value of surveys, Profs. Diamond and Franklyn obtained the permission of INTA (the International Trademark Association) to survey its members to determine when and under what circumstances attorneys chose to commission a survey in a trademark case and what effect the survey had over the course of the litigation.
The survey included questions assessing the respondents’ experience with surveys, the factors they considered in deciding whether to commission a survey, and the effect the survey had in the respondents’ most recent case, such as leading one party to drop or settle a claim. The survey also asked respondents to assess the effect of any surveys conducted by the opposing party in the case. (It should be noted that, inevitably, the dataset for this study had its own limitations, in that it asked respondents about surveys in trademark and deceptive advertising litigation, which would not include the use of surveys in proceedings before the Trademark Trial and Appeal Board, where they may well play a different role.)
Although, as they acknowledge, the overall response rate to the survey was fairly low (333 practicing attorneys), Profs. Diamond and Franklyn conclude that “not only are surveys widely used in pretrial stages, but that the attorneys who commission them generally perceive their impact as quite influential on the outcome of the case.” (P. 2052.) Indeed, although only 19.2 percent of the surveys that respondents reported were eventually presented at trial, more than half of the 333 attorneys reported having commissioned at least one survey.
Profs. Diamond and Franklyn also asked about the factors used in deciding to commission a survey, and here, the results are not particularly surprising: the cost to the client; an assessment of how close the case was; how high-stakes the litigation was; and the likelihood that the survey would return a favorable result. Finally, the responses indicated that surveys played a substantial role in early stages of litigation: 54 percent reported that the survey persuaded one of the parties to drop the claim or settle, while 36 percent reported that the survey was presented at a preliminary injunction hearing and/or at trial (which, as Profs. Diamond and Franklyn note, could also have motivated settlement of the case).
The authors do not purport to offer a normative assessment on when and whether trademark surveys should be conducted or proffered, but their results can’t be fully interpreted without taking such questions into account. The effect of a survey in a trademark case is as much about which party has the resources to fully commit to the survey process as it is about a search for the truth about consumer perception. A party with the resources to commission a carefully designed survey, conduct pilots or pretests that yield useful information about predicted results (perhaps via a nontestifying expert in an attempt to shield the results of the pilot from discovery), and have an expert on hand to question the methodology of the opposing survey will be in a better position in settlement negotiations than the party without such resources. (When six percent of survey respondents believe that a parody advertisement featuring “Michelob Oily” was an actual Anheuser-Busch advertisement, one can appreciate the benefit of an expert survey team.) So, relatedly, a survey may well be a type of signaling device that communicates a belief in the strength of one’s case quite separate from the results of the survey itself. We should not, therefore, draw broad conclusions about the value of surveys in litigation without considering how the survey came to be in the first place.
All of this, however, is fodder for future inquiry. For now, this article provides a very helpful reminder to all scholars embarking on empirical projects that when we focus only on published opinions, we can only say so much about the world.
Jan 20, 2015 Mark McKenna
Bhaven N. Sampat, Serendipity (Mar. 8, 2014), available at SSRN.
“Serendipity, the notion that research in one area often leads to advances in another, has been a central idea in the economics of innovation and science and technology policy.” Yet, Bhaven Sampat argues, “serendipity is a hypothesis, not a fact,” and it therefore needs to be tested. So Sampat set out to do just that, designing an impressive study to measure serendipity in pharmaceutical research. In this context, Sampat defines serendipity in terms of cross-disease spillover: results are serendipitous when research that was originally funded for one purpose (to target a certain disease) turns out to be useful for another purpose (in the treatment of different diseases).
Sampat tests the serendipity hypothesis by comparing the disease foci of NIH grants to the research output from those grants. Because most of the NIH Institutes and Centers are explicitly disease-oriented, Sampat is able to compare the disease foci of the granting Institutes to the foci of publications that result from the grants, to citations to these publications in patents, and then to marketed drugs associated with those patents. Finally, Sampat focuses on the subset of drugs held by NIH researchers themselves (the Bayh Dole drugs). Publications, patents, and drugs with different disease foci than the granting Institute are deemed evidence in support of the serendipity hypothesis.
This approach to measuring serendipity is useful because, despite good evidence that spillovers in the innovation context have significant overall economic benefits, evidence of the kind of serendipity to which Sampat refers has been mostly anecdotal. Viagra, for example, is a well-known example of a drug originally developed as a blood pressure medication but later discovered to be useful for treating erectile dysfunction.
Sampat ultimately finds strong evidence of serendipity by all of his measures. In particular, 50–60% of the publications resulting from grants were outside the main disease area, and nearly 50% of drugs link back to a grant in a different disease area. The story is similar for drugs based on patents owned by NIH researchers. About 30% of those drugs link back to “basic” grants with no disease focus, and almost a quarter (23%) of the drugs that can be traced back to a disease-focused grant relate to a different disease area than the drug’s first indication.
It’s worth noting that, as Sampat acknowledges, this kind of cross-disease spillover isn’t necessarily a result of serendipity, which obviously implies an unintended outcome. Some of the NIH Institutes explicitly fund basic research, which makes it harder to know that research funded by those Institutes was actually intended for any specific purpose, let alone one that differed from the outcome. Some Institutes also fund research across several diseases, so it should not be surprising to find more diverse research outputs from those Institutes. It seems unlikely, however, that these factors explain the magnitude of cross-disease spillover that Sampat finds. Some significant amount of that spillover is very likely serendipitous.
Sampat’s research is quite timely, as there recently has been a renewed interest in innovation institutions beyond intellectual property, and government grants play an important role in the overall innovation ecosystem. But grants can be structured in a variety of ways, and differently structured grants will interact with other innovation institutions (including patent law) in different ways. Certain combinations of institutions will take us down one innovation path; other combinations will take us down a different path. One important consideration in choosing the mix of institutions is the extent to which we believe we can determine, ex ante, which specific research outputs we want, or whether instead we are better off choosing policies that leave open the greatest number of potential outcomes.
One way to read Sampat’s research is as evidence that policymakers simply can’t fully direct innovation because the research process is inherently unpredictable. On that view, less may be at stake in structuring innovation institutions than previously assumed – even when government grants are intended for particular ends, we can still expect to see a broader range of outputs simply because the research process is not linear. In this respect, those concerned about innovation policy being managed in an excessively top-down manner might have less to worry about than they feared.
But whether that is right depends a lot on whether these research outputs were in fact serendipitous and don’t simply look that way because researchers had to find a way to get their projects funded. To put it somewhat crudely, some applications for disease-focused grants might simply have been directed to particular Institutes because that is where the money was. The extent to which we should still worry about focusing research grants also depends on the range of serendipity that might have resulted under some less restrictive institutional arrangements – whether, for example, more research would have been undertaken that did not have a clear objective at all, or whether the research would have generated an even wider range of outputs simply because the researchers were aiming at a more diverse range of targets. In this sense, the range of targets might anchor the outputs such that, even if the results turn out to be broader than the focus of the grants, they are still more limited than they might otherwise have been.
Of course, none of this answers the question of how much we should want to focus innovation. For while it might be true that less restrictive institutional arrangements would allow for more serendipity, it might also mean that we are less likely to get specific research outputs that we can be confident we want. We might, for example, get a wider range of cancer treatments, but less effective treatments for particular types of cancer. Which kind of system we should prefer is an important normative question that Sampat doesn’t try to answer. But we need a meaningful and empirically grounded sense of the tradeoffs involved in order to think clearly about our priorities, and Sampat has done us a great service by beginning to fill the empirical vacuum.