In the past sixteen years, copyright law has undergone important changes. Courts have issued major decisions, such as Skidmore v. Led Zeppelin, which clarified the Ninth Circuit’s substantial similarity test and rejected the inverse ratio rule, and Capitol Records, LLC v. Vimeo, LLC, in which the Second Circuit elucidated a more concrete red flag knowledge standard for purposes of the Digital Millennium Copyright Act. Significant new copyright legislation, in the form of the Music Modernization Act, has also been enacted. And during this period, fair use jurisprudence has also continued to grow apace. Many of the cases that are now considered copyright law canon for students, academics, and practitioners alike were decided during this period, including Bill Graham Archives v. Dorling Kindersley, Ltd., Perfect 10, Inc. v. Amazon.com, Inc., Cariou v. Prince, and Authors Guild, Inc. v. Google, Inc. Barton Beebe’s recent article analyzing fair use opinions from 1978 to 2019 thus provides a welcome update to his earlier work that covered fair use cases from 1978 through 2005.
Both Beebe’s original article and this update use statistical analyses of all the fair use opinions issued during the period to draw conclusions about how judges have applied the four fair use factors and their subparts. Beebe’s earlier work provided an important statistical analysis baseline for anyone wanting to understand, modify, or improve fair use. This long-awaited update will no doubt prove useful in providing the most recent data on fair use determinations to those in the copyright space.
The updated article, in addition to those opinions issued during 1978-2005, analyzes a further 273 fair use opinions from 220 cases. Perhaps surprisingly given the number of fair use opinions issued over the past decade and a half, fair use analyses largely remained the same during the 2006-2019 period. For example, the vast majority of courts have continued to primarily apply only the four factors listed in Section 107, even though the factors are explicitly meant to be nonexclusive. Courts also tend to apply them mechanically, moving through each factor to see which party it favors. The Second and Ninth Circuits, as well as the Southern District of New York, also continue to exert the most influence on fair use cases, although the Ninth Circuit is growing in importance.
However, Beebe discovered several important trends during this period. On average, the number of opinions addressing fair use is on the rise. Many more have arisen in opinions addressing motions to dismiss, which Beebe—no doubt correctly—chalks up, at least in part, to the Supreme Court’s stricter motion to dismiss standard from Bell Atlantic Corp. v. Twombly and Ashcroft v. Iqbal, both of which were decided after the initial study. The fair use defense has also been increasingly adjudicated at the summary judgment stage.
In addition, Beebe found that, as in his earlier study, lower courts continue to cite overturned precedent and dicta. For example, in Sony Corp. of America v. Universal City Studios, Inc., the Supreme Court established the presumptions that commercial uses are unfair, noncommercial uses are fair, and commercial uses harm the plaintiff’s market. But in Campbell v. Acuff-Rose Music, Inc., the Supreme Court limited these standards, reducing commercial use from a near-per se rule against fair use to just one consideration among others. Yet district courts have continued to cite Sony unabashedly for these rules, and such citations have even increased since 2005. Similarly, courts continue to cite the Supreme Court’s dicta in Harper & Row v. Nation Enterprises that factor four is “undoubtedly the single most important element of fair use,” even though the Supreme Court superseded this statement in Campbell by stating that all factors should be considered and that the transformativeness inquiry lies at the heart of fair use.
The core of Beebe’s article, however, is how he uses data on the fair use factors to determine both the impact of a factor on the overall outcome and its correlation with the other factors. The first and fourth factors—the purpose and character of the work (including transformativeness) and market effect—continue to predominate, with the fourth factor correlating the most strongly with the overall fair use determination. The first and fourth factors also strongly correlate with each other.
The determinativeness of the fourth factor may, at first blush, surprise many commentators who have argued that the transformativeness inquiry drives the fair use analysis. Beebe found that as compared to 2005, when it appeared that the importance of transformativeness was waning, courts now consider whether a use is transformative in the vast majority of cases. Indeed, transformativeness, taken alone, was the single most determinative subfactor for the overall fair use outcome, even more so than market effect. Despite this influence on the overall outcome, Beebe found that transformativeness has not yet eaten the entire fair use inquiry.
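The kind of factor-outcome analysis Beebe performs lends itself to a simple illustration. The sketch below is hypothetical: the data and the binary coding scheme are invented for illustration and only loosely mirror Beebe's far richer dataset. It shows how opinions coded as per-factor outcomes can be correlated with the overall fair use finding.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented coding of five opinions (not Beebe's actual data):
# 1 = the factor (or overall outcome) favors fair use, 0 = disfavors.
factor_one = [1, 1, 0, 0, 1]   # purpose and character of the use
factor_four = [1, 1, 0, 1, 1]  # effect on the market
outcome = [1, 1, 0, 0, 1]      # overall fair use finding

print(f"factor 1 vs. outcome: {pearson(factor_one, outcome):.2f}")   # 1.00
print(f"factor 4 vs. outcome: {pearson(factor_four, outcome):.2f}")  # 0.61
```

A study like Beebe's asks, at scale, which of these factor-level correlations best tracks the overall determination, and how the factors correlate with one another.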
Beebe notes that statistics cannot be a replacement for traditional doctrinal analysis, but the data he has gathered does provide a valuable high-level understanding of the trends in fair use jurisprudence and opens the way for further research on fair use. Hopefully, Beebe continues this long-running project. The Supreme Court’s decision in Google LLC v. Oracle America, Inc., is the first Supreme Court decision to address fair use since Campbell in 1994. How courts decide to interpret Google v. Oracle could prove significant for fair use decisions in the coming years, especially those involving computer programs and other technological innovations.
Significant new technologies have often been invented nearly simultaneously, and some scholars have worried that patent law’s rewards for the first to file create incentives to race to the patent office and do less to refine the invention. Similar concerns have been voiced about competition for academic priority leading to rushed, low-quality publications. But measuring whether competition for IP or academic credit actually decreases quality has proven difficult, and this difficulty limits the usefulness of models of innovation races.
In a creative and important new working paper, Race to the Bottom: Competition and Quality in Science, economists Ryan Hill and Carolyn Stein tackle this empirical challenge. They focus on structural biologists, whose research deciphering protein structures has advanced drug and vaccine development (including for COVID-19) and led to over a dozen Nobel Prizes. Journals and funding agencies generally require structural biologists to deposit their structures for proteins and other biological macromolecules in a worldwide repository, the Protein Data Bank (PDB). Using this rich dataset, Hill and Stein have documented that structures with higher expected reputational rewards induce more competition and are completed faster—but at lower scientific quality. Recognizing and navigating this tradeoff is important for scholars and policymakers concerned with allocating awards among competing innovators through policy instruments ranging from academic credit to intellectual property.
Three key features of the PDB make it a viable setting for this research. First, it has objective measures of project quality. The quality of a PDB structure is based on how well it fits the experimental data, resulting in quantitative, unbiased quality metrics. Second, it provides measures of project timelines. The authors could observe both the time between collecting experimental data and depositing a structure (as a measure of project speed) and the time between a first deposit and the deposit of similar structures (as a measure of competition). Third, it enables estimates of the expected reputational reward from winning the priority race to deposit a given protein structure. The detailed descriptive data in the PDB allow a structure’s potential to be estimated based on information that would have been known to researchers before they began working, including the protein type, organism, and prior related papers.
If scientists can choose whether to invest in a research project and how long to refine their work before publishing, then the projects with the highest potential reputation rewards should induce the most entry—but entrants concerned about being scooped may also rush to publish their work prematurely. And this is exactly what Hill and Stein find. Structures in the 90th versus the 10th percentile of the potential distribution induce more competition (30% more deposits), are completed faster (by 2 months), and have lower scientific quality (by 0.7 standard deviations). The fact that high-potential projects are completed more quickly suggests these results aren’t driven by high-potential projects being more complex. Additionally, the authors show that these correlations are smaller for scientists who receive lower reputational rewards from publication and priority: researchers at government-funded structural genomics consortia, who are focused on achieving a comprehensive protein catalog rather than publishing individual results.
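The comparison Hill and Stein draw can be illustrated with a toy sketch. All numbers below are invented (chosen to echo the magnitudes the paper reports, not drawn from its dataset): group hypothetical projects by expected potential and compare average completion time and quality across groups.

```python
# Hypothetical projects, invented for illustration:
# (potential_percentile, months_to_deposit, quality_z_score)
projects = [
    (10, 14, 0.4), (12, 13, 0.4),   # low-potential structures
    (88, 12, -0.3), (92, 11, -0.3), # high-potential structures
]

low = [p for p in projects if p[0] <= 50]
high = [p for p in projects if p[0] > 50]

def mean(xs):
    return sum(xs) / len(xs)

# High-potential projects finish sooner but at lower quality.
speed_gap = mean([p[1] for p in low]) - mean([p[1] for p in high])
quality_gap = mean([p[2] for p in low]) - mean([p[2] for p in high])
print(f"high-potential projects finish {speed_gap:.1f} months faster")  # 2.0
print(f"...but score {quality_gap:.1f} SD lower in quality")            # 0.7
```

The authors' actual analysis, of course, uses regression on the full PDB rather than a two-group comparison, but the logic is the same: speed and quality move in opposite directions as expected rewards rise.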
The welfare implications of rushed, low-quality protein structures appear significant. Improving a structure generally requires inefficient reinvestment of the same costs expended by the original research team. But optimizing existing incentives is challenging. Hill and Stein consider increasing the share of credit allocated to the second-place team—such as through recent journal policies that treat scooped papers on equal footing with novel papers—and conclude that if the total rewards are fixed (as seems plausible with scientific credit), the quality improvement might be outweighed by decreased investment. As another option, they argue that both investment and quality could be improved by barring entry by competitors once one team has started working on a protein structure—a sort of academic prospect theory, as was the norm in the early days of structural biology, before the size of the field made the norm too difficult to enforce. Importantly, this result depends on the specific nature of their model, with quality differences driven more by rushed work to avoid being scooped than by the skill of the research team. Reintroducing this kind of entry barrier for academic research would be challenging (and problematic under antitrust laws), but this result may inform debates over the optimal timing of awarding patent rights.
Hill and Stein’s rigorous empirical evidence that innovation races can lead to decreased quality scientific work is a welcome addition to the innovation racing literature, not least because many racing models omit this consideration altogether. And their paper is also well worth reading for their thoughtful discussion of key factors for allocating rewards among competing innovators. First, how easy is it to build on incomplete work, both scientifically and legally? Unlike in structural biology, follow-on work is not always particularly costly; for example, if an ornithologist releases an incomplete dataset of bird species, a subsequent team can pick up the project relatively seamlessly, increasing the value of early disclosure. Second, how important are differences in research skill relative to the decline in quality caused by rushing? Ending innovation races early may be effective in structural biology, but in many cases, giving the first team time to complete work well may not be worth the cost of preventing a better team from stepping in. Third, are rewards fixed? Creating additional academic credit may be difficult, but financial rewards—including through government prizes and subsidies—can be used to increase the second team’s payoff without reducing the first’s.
Before reading this paper, I had thought about the problem of rewards for incomplete research primarily in terms of quality thresholds such as patentability criteria, but choosing a threshold that applies across projects of varying difficulty is challenging in practice. Hill and Stein have given me a richer understanding of the relevant variables and policy instruments for tackling this challenge, and I look forward to seeing the impact this work has on the innovation law community.
Professor Carroll is not the first copyright scholar to have asserted that text and data mining (TDM) is and should be lawful as a matter of copyright law (and he probably won’t be the last). The hook that pulled me through the 72 pages of his excellent article was the introduction’s announced intention to explain why use of TDM tools to run searches on digital repositories of infringing copies of copyrighted works does not infringe, at least as a matter of U.S. copyright law.
Text and data mining is a multi-stage technical process by which researchers compile and refine large quantities of text and other data so that it can be processed with statistical software to detect patterns that would be difficult or impossible for a human to perceive without the aid of the machine. The article considers the legality of TDM using SciHub as an exemplar. SciHub is a well-known repository of vast quantities of the scientific journal literature. Many scientists want to do TDM research using SciHub, but courts have held that that database is infringing. Although SciHub has more than once been forced to shut down, it has re-emerged every time and can still be found on the Internet.
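For readers unfamiliar with what the statistical stage of TDM looks like in practice, here is a minimal, hypothetical sketch (the corpus and terms are invented): it counts which term pairs co-occur across a toy set of abstracts, a crude stand-in for the pattern detection that real TDM pipelines perform across millions of articles.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for a mined article collection (illustrative only).
abstracts = [
    "turmeric extract accelerates wound healing in mice",
    "curcumin from turmeric promotes wound closure",
    "protein structure of curcumin binding target resolved",
]

# Stage 1: compile and refine the raw text into per-document token sets.
docs = [set(a.lower().split()) for a in abstracts]

# Stage 2: statistical pass. Count how often term pairs co-occur across
# documents, the sort of aggregate pattern no single close reading reveals.
pairs = Counter()
for terms in docs:
    pairs.update(combinations(sorted(terms), 2))

for pair, n in pairs.most_common(3):
    if n > 1:
        print(pair, n)  # prints ('turmeric', 'wound') 2
```

At the scale of a repository like SciHub, this kind of aggregation can surface, say, a recurring link between a compound and a therapeutic effect that is latent in the literature but visible to no individual reader.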
Well-documented in this article, as well as in the technical literature to which Carroll copiously cites, is the promise of myriad scientific insights that researchers’ use of TDM tools could unlock in a wide variety of fields. (For those not already conversant with TDM technologies, this article provides a very useful primer that is neither too nerdy nor too simplistic for lay readers to follow.) If promoting progress in science and useful arts continues to be copyright’s constitutional purpose, the logical conclusion follows, Carroll intimates, that copying of in-copyright works to enable TDM research is and should be lawful.
Thanks to the Supreme Court’s Eleventh Amendment jurisprudence and the audacity of Google and the University of Michigan when agreeing to allow Google to scan all eight million books in the university’s library in exchange for the library’s getting back a digital copy, and thanks also to the Authors Guild for its unsuccessful lawsuits charging Google, the University of Michigan and its HathiTrust repository with copyright infringement, we know that digitally scanning in-copyright books for TDM and other non-consumptive purposes is non-infringing.
Carroll methodically works through each type of copying that happens in the course of collecting, formatting, processing, and storing data for TDM purposes. The article works through the relevant copyright case law for each type of copying that TDM involves. The ground over which the article travels will be familiar to many readers, but it provides a useful recap of how the law of digital copying has evolved over the last two decades.
Copyright is not, of course, the only potential obstacle to TDM research. Numerous proprietary publishers of scientific journals offer institutional database subscriptions to universities and other research institutions. However, those digital repositories are not interoperable. Researchers consequently cannot run searches across various databases. Cross-publisher collaborations are rare, and the license terms on which databases are available may impair researchers’ ability to make full use of TDM tools. Publishers and the Copyright Clearance Center are promoting licensing of TDM as a value-added service, and some of these licenses are more restrictive than TDM researchers would want.
One can understand why scientific researchers, even at institutions with institutional database subscriptions, would be attracted to using SciHub for TDM research. It is easier to use than some of the publisher repositories; the SciHub database is far more comprehensive than any of the proprietary databases; and there are no license restrictions to limit researcher freedom to investigate with TDM tools to their hearts’ content.
Downloading the SciHub database seems a risky strategy for TDM researchers who do not want to be targets of copyright infringement lawsuits. Carroll argues that running TDM searches on the SciHub collection hosted elsewhere involves only the kind of transient copying that the Second Circuit found too evanescent to be an infringing “copy” of copyrighted television programming in the Cartoon Network case. The results of the TDM research would be unprotectable facts extracted from the SciHub collection.
This is a bold but well-documented assertion. Read the article for yourself to see if you agree.
Carys J. Craig, Transforming ‘Total Concept and Feel’: Dialogic Creativity and Copyright’s Substantial Similarity Doctrine, 38 Cardozo Arts & Ent. L. J. __ (Forthcoming), available at SSRN
Carys Craig is far from the first scholar to criticize copyright law’s vague “substantial similarity” test for infringement, especially when that test is based on the even vaguer “total concept and feel” standard, but the difference is that in her new article, Transforming “Total Concept and Feel”: Dialogic Creativity and Copyright’s Substantial Similarity Doctrine, Professor Craig advances an alternative approach that might get some traction.
Professor Craig centers her critique on a recent case that involves the two images below. A jury could look at these two photos and decide that an ordinary person could view the “total concept and feel” of the two images as the same. But Craig explains why that’s not the right outcome.
The image on the left is a photograph by Lynn Goldsmith. The subject, of course, is the late and much-lamented Prince Rogers Nelson: the musician the world knew and will remember as “Prince.” Goldsmith made the photograph in 1981, just as Prince was breaking into the public’s consciousness. The photograph was made during a shoot that was originally undertaken for a planned story on Prince in Newsweek—a story that never ran in that magazine. In 1984, Vanity Fair licensed the Goldsmith photo for $400, but they didn’t use it in the story they published on Prince. Instead, they gave it to pop art colossus Andy Warhol, who cropped the photograph down to Prince’s face and used it as the raw material for 16 iconic portraits in Warhol’s “Prince Series,” one of which, pictured below at right, was used to illustrate the Vanity Fair article.
After Vanity Fair used the Warhol portrait again as the cover illustration for a 2016 special issue commemorating Prince’s death, Goldsmith made her displeasure known. The Warhol Foundation responded by filing a complaint in federal district court in Manhattan seeking a declaratory judgment that Warhol’s Prince Series did not infringe the Goldsmith photograph. The Foundation argued both that the Warhol portraits were not substantially similar to the Goldsmith original, and that the Warhol works were protected as fair uses. In 2019 the district court ruled for the Warhol Foundation, finding that the works in the Prince Series were fair use without considering the Warhol Foundation’s substantial similarity arguments. The case is now on appeal to the Second Circuit.
Professor Craig’s article focuses on the substantial similarity issues that the litigation over Warhol’s Prince Series raises. It is perhaps an odd fact that on perhaps the single most important issue in copyright law—the degree of similarity between a plaintiff’s work and a defendant’s that is necessary to support liability—the copyright statute is conspicuously silent.
In the absence of any statutory command, the federal courts have developed a set of related tests for similarity that all boil down to the same ultimate inquiry: would an ordinary lay observer, reader, or listener (the final word used depends on the medium involved) consider the works to be impermissibly similar?
As Professor Craig notes, the reaction of an ordinary lay observer is certainly relevant, but it should not comprise the entirety of the test for what is referred to as “substantial similarity.” Section 102(b) of the Copyright Act directs that courts must not base infringement findings on facts, ideas, concepts, and other elements of creative works that are outside the scope of what copyright protects. And it’s true that before remitting an infringement lawsuit to the ordinary observer’s judgment, courts often perform an analysis in which they dissect the work into protectable and unprotectable elements, disregard or “filter” the latter, and compare the degree of similarity with respect only to the former.
But courts do this only to ensure that there is enough similarity in protected expression for the case to go to the jury, which will then apply the ordinary observer test. So if a reasonable jury could find infringement based on similarity in protected elements alone, the “dissection” phase of the test concludes and the case is given to the jury to apply the ordinary observer test. Often, courts treat the two phases of the analysis as disjoint. That is, juries often are not instructed, in performing their ordinary observer analysis, to disregard similarities that relate to elements of the plaintiff’s work that the court previously has found unprotected.
Consequently, the jury’s ordinary observer inquiry often is little more than the application of unguided intuition. As Professor Craig notes—and others have noted—nothing in the “ordinary observer” test directs that juries, or even judges in bench trials, confine their intuitions about impermissible similarity to the protectable elements of the plaintiff’s work.
Professor Craig argues that this problem is made worse by a formulation of the substantial similarity test that appears in decisions of both the Ninth and Second Circuits directing juries to assess similarity in terms of the “total concept and feel” of the two works at issue. That formulation is indeed a misfire. It directs juries to focus on an element, the work’s “concept,” that Section 102(b) of the Copyright Act identifies specifically and by name as unprotectable. The formulation also directs juries to focus on an element, the “feel” of a work, that may differ from observer to observer and is only dubiously copyrightable even when its content can be articulated with any measure of precision.
In short, copyright’s substantial similarity test is a doctrinal failure. But Professor Craig has a suggestion for how to salvage the test—one which uses elements of the current approach and therefore is in the nature of a course-correction that a court might actually entertain. Here, in a nutshell, is Professor Craig’s approach:
[T]he unlawful appropriation step should begin with the holistic comparison of the two works to determine if their overall aesthetic appeal is substantially similar. If the works are, perceived in their totality, substantially different, then the infringement inquiry should end there: The defendant has created a non-infringing new work that is, in its “total concept and feel,” more than merely a colorable imitation of the plaintiff’s work. If the works are substantially similar in their overall impression, the decision-maker should proceed (with the necessary expert evidence) to dissect the plaintiff’s work into protectable and public domain elements, and to filter out the latter from the infringement analysis. The defendant’s work can then be compared again to the protected elements in the plaintiff’s work. If they are not substantially similar after the unprotected elements have been appropriately filtered out, there is no infringement, notwithstanding the similarities in their “total concept and feel.” If, on the other hand, the defendant’s work is substantially similar to the protected expression in the plaintiff’s work, prima facie infringement is established, and the decision-maker can proceed to consider the availability of a fair use defense.
Professor Craig revises the substantial similarity test at two levels. First, she’s re-fashioned the “total concept and feel” test into a tool for identifying when a defendant’s work differs so substantially from a plaintiff’s that it should escape copyright liability altogether. In such instances, defendant’s copying was in the service of creating something substantially new—an outcome which fulfills copyright’s grounding purpose of encouraging the production of new works, and which therefore, Professor Craig argues, should be outside the scope of the copyright holder’s monopoly.
The Warhol Prince Series, Professor Craig suggests, should escape liability: Warhol’s works, which boldly outline Prince’s face against various brightly-colored backgrounds, portray Prince as an icon at the height of his power, as distinguished from the young, vulnerable artist captured in the Goldsmith photograph. It is unclear precisely how Warhol achieves this transformation; Warhol’s ineffability is entwined with his greatness. It is clear—at least to me—that Warhol does in fact produce work that is both recognizably based on the Goldsmith and yet indisputably new.
Professor Craig’s first move thus re-conceptualizes the “total concept and feel” formulation as a helpful way of framing the inquiry into whether the defendant’s work is new enough to slip the bonds of plaintiff’s copyright, rather than as a misleading way of inquiring into the presence of the degree of similarity required for liability.
Professor Craig’s second innovation is equally helpful. She re-positions the “dissection” part of the substantial similarity test to a place—after the revised “total concept and feel” inquiry rather than before—where it can actually do some good. After Professor Craig’s re-ordering, the two parts of the test are no longer disjoint. Rather, dissection must be undertaken only if the initial inquiry does not fall in favor of the defendant. In that case, the “total concept and feel” of defendant’s work is close enough to plaintiff’s that the factfinder, be it jury or judge, must inquire whether the similarities between the two works are due to defendant appropriating substantial protected expression, or re-using unprotected facts, ideas, concepts, stock elements properly treated as scènes à faire, or expression that merges with underlying ideas.
While I would have welcomed an explanation of how her revised substantial similarity test could be administered in a jury trial, the article merits a close reading, and I recommend it.
Martin Husovec, The Essence of Intellectual Property Rights under Art 17(2) of the EU Charter, 20 German L. J. 840 (2019), available at SSRN
The complex interface between intellectual property and fundamental rights is a fascinating field of research that has attracted considerable scholarly attention in recent decades. U.S. IP scholars are well aware of fundamental rights under the U.S. Constitution. The European Union has “constitutionalized” IP rights as well as fundamental freedoms in the Charter of Fundamental Rights of the EU, placing them at the very top of the hierarchy of norms.
In The Essence of Intellectual Property Rights under Art 17(2) of the EU Charter, Martin Husovec explores the constitutional notion of the “essence of rights”—according to which any fundamental right has an inviolable core that must remain untouched by legislative activity (or be touched only with very strong justifications)—in order to determine whether Art. 17(2) of the EU Charter includes such a notion of an essence of IP rights. If so, this would have profound consequences for legislators, as it could prevent, or at least greatly impede, legislative changes to the IP legal framework. The question is especially relevant where a legislator, after empirical analysis and assessment of the merits of a particular IP right, decides to legislate it away because it has failed to deliver on its promises.
This is important because there is a tendency in the EU to create a new IP right any time a new intellectual asset emerges; it should therefore be possible at some point to repeal such a right if the expected results do not materialize. A good example is the EU’s creation in the 1990s of a new sui generis IP right for database producers, which, after several evaluations by the European Commission, turned out not to have had the expected incentive effect of prompting European players to create more and better databases. In short, it failed to deliver its promised results.
This fascinating question, however, which the author develops at greater length in another publication, is only touched upon in this article, since it arises only once one accepts that IP rights have an ‘essence’ at all. The author therefore analyzes the two opposing constitutional theories of ‘essence’. As Husovec recalls, “the absolute theory of essence says that essence of rights cannot be interfered with or taken away, including by the legislator. The relative theory of essence, on the other hand, claims that an interference with essence is just a more serious interference which is still subject to the typical proportionality analysis”. This means that under the absolute theory, there is an untouchable core of rights that would render any legislative intervention unconstitutional, while under the relative theory, interference with the essence of an IP right would merely trigger heightened scrutiny of the competing interests and fundamental rights that justify the intervention.
In order to find out if such a perpetual core of rights is recognized in the field of IP, Husovec explores the case-law of the Court of Justice and the European Court of Human Rights as well as selected examples of national case-law. He then compares the way intellectual property protection at the constitutional level is interpreted by these courts with the wording of some international sources also applicable in the European context, such as Art. 15.1(c) of the International Covenant on Economic, Social and Cultural Rights (ICESCR), in which the signatory states recognize the right of everyone “to benefit from the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author”.
After careful analysis of several cases implementing Art. 17 (2) of the Charter, Husovec comes to the conclusion that this provision “is void of any inviolable core understood as a red line which cannot be bridged by any considerations of proportionality” and that “any reference to essence in the case-law of the CJEU only points towards a higher level of scrutiny, but not towards an untouchable core of rights that may not be abolished by the legislator”.
Even in Germany, the jurisdiction that most strongly recognizes the idea of a core of constitutional rights, the German Constitutional Court has held in the context of IP rights that they are not immune to legislative change. The author concludes that if core rights are ever to be recognized, it could be in light of the international obligations resulting from Art. 15 ICESCR, but then only to the benefit of creators as natural persons (not corporations). Furthermore, in this context, IP rights would have to be interpreted instrumentally, in the service of the society’s right to culture that the provision aims to protect. Husovec’s theory of legislative power to tailor IP rights to achieve social goods thus bears resemblance to the “promote the progress of science and useful arts” idea of the U.S. Constitution.
In short, Husovec’s article adds an important building block to the construction of a sound human rights framework for intellectual property rights: even despite their potentially “essential” character, balancing against competing rights and public interest rationales remains at the core of assessing the level and extent of IP protection in both legislatures and courts. This is very far from the absolutist understanding some advocated after the entry into force of Art. 17(2) of the Charter. Moreover, constitutional protection of IP rights will not stand in the way of legislating them away when it is established that they have not delivered on their promises.
Aman Gebru, Patents, Disclosure, and Biopiracy, 96 Denv. U.L. Rev. 535 (2019), available at SSRN
A new patent application claims a process for using turmeric to “augment the healing process of chronic and acute wounds.” Unbeknownst to the patent examiner, this spice has been used for this purpose—for centuries—in India. Because the process isn’t new, it shouldn’t be patentable. But what if the patent examiner doesn’t know about that longstanding prior use?
Because traditional knowledge (TK) isn’t typically found in the sources of information that patent examiners can easily access—such as other patents or printed publications—an applicant may be able to get a patent for something they didn’t invent. Or the patent they get may cover significantly more than whatever refinements or improvements the applicant actually did invent.
Indeed, in the real-life case alluded to above, a U.S. patent did issue. When the fact that turmeric had been long used to treat wounds in India was brought to the attention of the U.S. Patent & Trademark Office (USPTO), the patent was ruled invalid. But for a time, there was a U.S. patent covering this old technology.
If we don’t want patents like this to issue, we need to get better information to U.S. patent examiners. But how can we do that? In Patents, Disclosure, and Biopiracy, Aman Gebru argues that patent applicants should be required to disclose their use of genetic resources or traditional knowledge. This article is noteworthy for its detailed examination of how such a requirement could fit into U.S. patent law, even without legislation. Also noteworthy is its use of law-and-economics arguments for this position rather than the more conventional approaches that have relied on equity and distributive justice arguments.
Gebru argues that a TK disclosure requirement can be justified on efficiency and social welfare grounds for a number of reasons. Most persuasively, he argues that an information-forcing rule for TK could lead to better patents. In the turmeric case study mentioned above, the USPTO ultimately decided that the claims were not patentable.
Gebru points out that “patent or no patent” is not the only issue at stake. Disclosure of TK use or inspiration could also help the USPTO make sure patent claims match (or at least more closely track) whatever the applicant actually invented. If someone devises a legitimately new and useful improvement to a TK process, their patent should cover only that improvement—not the underlying process itself.
Of course, the problem of information asymmetry at the USPTO is not limited to TK. But Gebru argues that it may be particularly acute when it comes to TK. And case studies like this one give us a window into the human side of novelty.
When we talk about patents and novelty, it’s easy to get bogged down in the details of the statutory scheme. Gebru’s article is a good reminder that novelty is not just about wading through complicated statutory provisions and calculating global effective filing dates.
When we think about novelty, what types of human knowledge and innovation should “count” against patentability? And even if they count in theory, how can we make sure they count in practice? Are there other areas of human knowledge that might be falling through the cracks of the U.S. patent system?
All in all, I found this to be a timely and thought-provoking article. I keep recommending it to my students and now I am recommending it to all of you.
Dev Saif Gangjee, Trade Marks and Innovation?, in Trademark Law and Theory II: Reform of Trademark Law (Edward Elgar) (forthcoming), available at SSRN
In his soon-to-be-published chapter Trade Marks and Innovation?, Dev Gangjee brings needed critical analysis to a growing body of recent research that focuses on the relationship between trademarks and innovation. As Gangjee notes, the claims of this research challenge traditional understandings about trademark law. The “stable consensus,” as Gangjee describes it, has been that “trade mark law had nothing to do with innovation, the ‘ordinary trade mark [having] no necessary relation to invention or discovery.’” But what if it does?
Trademark law has “nothing to do with innovation” in the sense that eligibility for trademark protection doesn’t consider the mark’s relation to innovation. Marks are eligible for trademark protection, as the Supreme Court held in 1879, because of their source-indicating capacities, not because of any “novelty, invention, discovery, or any work of the brain.” Indeed, the fact that trademark law protects only source indication traditionally has been used as justification for the potentially perpetual rights that a trademark bestows.
In contrast, the patent term is limited precisely because patent law, unlike trademark law, offers protection to “real” innovation that we need others (eventually) to build upon. In recognition of this difference, trademark doctrine has long been hostile to attempts to use trademark or unfair competition as a substitute for patent.
Recently, however, a good deal of scholarship and policy advocacy, particularly in Europe, claims that trademarks do promote innovation. This research is clearly meant to support “stronger” or “more” trademark protection. It is useful to disentangle the conceptual and empirical dimensions underlying these claims.
Some early empirical work focused on trademark registrations. It emphasized that “innovative firms” (usually using number of patent applications filed or R&D expenditures as the measure of innovative activity) apply to register more trademarks. But the fact that trademark applicants also file a lot of patent applications tells us nothing about whether trademarks promote innovation. It probably only indicates that innovative firms are sophisticated IP claimants that create new products and file trademark applications for the names of those new products.
More recent research on trademarks and innovation, however, claims to demonstrate a causal relationship. This research is founded on a pair of conceptual claims about the role of trademarks in the innovation process. One is that branding increases market share and profitability, which allows parties to recoup their investments in innovation (presumably creating additional incentives to make those investments). The premise here is obviously questionable: at best it is true that certain winning brands have greater market share and profitability—at other brands’ expense. Even for the winning brands, however, this is not strong evidence that trademarks promote innovation, only that some trademarks help firms’ profitability, which enables but does not require them to engage in innovative activity.
The second conceptual claim is exemplified by the European Commission’s explanatory memorandum to the draft of the Trade Marks Directive in 2015. There, the EC claimed that “in order to retain [the customers] the ‘mark works … as an engine of innovation: the necessity to keep it relevant promotes investments’ in R&D, which leads in turn to a continuous process of product improvement and development.”
That claim bears some resemblance to standard-fare law and economics justifications of trademark protection in the US—namely that trademark protection, by fixing credit and blame, promotes consistent quality. But the EC’s claim is importantly different: traditional economic arguments focus on promotion of *consistent* quality, not necessarily high quality. Denny’s marks are perfectly good marks because we always know what a Moons Over My Hammy (a scrambled egg sandwich) will be like. Trademark law is indifferent to the fact that it will, by most accounts, be consistently bad. The more robust claim, exemplified in the EC memo, is that trademarks help attract customers, and to keep those customers, firms must innovate.
One striking aspect of this new conceptual framing is its frank admission that brands drive demand. Starting with Ralph Brown, trademark scholars have long been concerned that trademarks “sell the goods” by promoting (or reflecting) more product differentiation than a competitive market would otherwise supply, and that excess product differentiation is often inefficient. To that charge, law and economics scholars (and, overwhelmingly, courts) have responded that trademarks merely provide information about the source of goods, and that consumers value the goods for reasons intrinsic to the goods. The new research turns Brown’s concern upside down and promotes it as the mechanism through which trademarks promote innovation.
Gangjee’s primary attack on this new approach exposes the emptiness of its concept of “innovation.” For one thing, it’s unclear whether these accounts use “innovation” in the same way as patent law. As Gangjee reminds us, branding might promote certain kinds of innovation not captured by the patent system, and that innovation might actually substitute for other forms of innovation by signaling something new when all that is new is the messaging.
Gangjee also doubts the empirics, persuasively arguing that much of the evidence presented for the “trademarks are engines of innovation” claim is ambiguous at best. Proponents of that theory point to trademark filings for marks for new products as evidence of innovation. But new filings don’t necessarily reflect meaningfully new products. In some (many?) cases, new trademark filings might just reflect rebranding efforts. Or they might be the result of increasingly layered branding (multiple marks applied to the same goods). They might simply reflect extension of an existing brand into somewhat new product markets. In those cases, while the goods or services might be new to the brand, there’s no strong reason to believe they are new to the market.
From case studies, we know firms sometimes apply for new trade marks in cases of meaningless or nil differentiation, for exchange reasons …, to prolong other Intellectual Property Rights (IPRs) … to leverage brand equity … to [pre-emptively] avoid trade mark squatting … to pack product spaces …, to control franchisees … to support low risk entry in foreign markets … to enable ingredient marketing … to protect slogans, or for advertising purposes …
Gangjee helpfully reminds us that it is always in the trademark owner’s self-interest to invest in quality in the context of a competitive market, and that no rule of trademark law (nor any proposed rule) requires mark owners to improve product quality. Proponents of the “trademarks promote innovation” thesis offer no means of evaluating the nature of innovation supposedly reflected in trademark applications.
Importantly, if this evidence *did* suggest that trademarks promoted meaningful innovation, surely we would want to consider that fact in designing the patent system. Perhaps we have overstated the importance of patent law in providing incentives and understated the role of trademarks in enabling firms to capture that value. Or perhaps the incentives here are substitutionary: by enabling firms to capture the benefits of the types of innovation promoted by the trademark system, which may be cheaper and faster to develop, firms substitute their innovation dollars away from the types of innovation for which patent protection would be most natural.
These are all hypotheses, of course. But as Gangjee so well demonstrates, that is the state of the evidence. And that is clearly insufficient to displace the traditional understanding.
Cite as: Mark McKenna, Innovating Trademark Theory (November 20, 2020) (reviewing Dev Saif Gangjee, Trade Marks and Innovation?, in Trademark Law and Theory II: Reform of Trademark Law (Edward Elgar) (forthcoming), available at SSRN), https://ip.jotwell.com/innovating-trademark-theory/
Joseph Fishman, Originality’s Other Path, 109 Cal. L. Rev. __ (forthcoming, 2021), available at SSRN
It is something of a received dogma among intellectual property scholars that, despite their common goal of incentivizing creativity, the copyright and patent regimes achieve this goal through wildly divergent legal structures. Professor Joseph Fishman’s excellent piece, Originality’s Other Path, sets out to question this dogma.
As I often describe these twin systems to my Intellectual Property (IP) students, they are in many ways mirror images of each other: patent law demands much more from inventors up-front—a patentable invention must represent a significant creative leap from prior technology—while copyright grants protection more indiscriminately to any artistic work possessing a modicum of creativity. Once an inventor clears this higher patentability hurdle, however, she is in possession of a powerful sword against anyone who would seek to practice her invention. Copyright, on the other hand, makes up for its more indiscriminate grant of protection by creating many safety valves for would-be infringers: independent creators are not liable for copyright infringement, and (owing to the fair use doctrine) neither are those who, although borrowing from copyrighted works, manage to sufficiently transform those works. Thus, although the Supreme Court has spoken of a “historic kinship” between patent and copyright law, to the trained observer, this kinship has always seemed to begin and end with their shared goals. And, whether by design or by historical accident, these divergent structures are generally taken to be a good thing. After all, what could be more fittingly different than two legal regimes designed to incentivize two radically different domains—two cultures, even, as C.P. Snow famously put it—of human creativity?
In Originality’s Other Path, Professor Fishman unearths a line of copyright cases stretching back to the 1800s that reveals a now-forgotten common origins story of patentability and copyrightability thresholds. In the process, he pushes us to reconsider whether the boundaries between “art” and “technology” are as immutable as C.P. Snow would have us believe or whether, as scholars in science and technology studies have long argued, they are instead fragile and porous. In turn, recognizing hidden kinships between art and technology opens the door for a more fruitful—and more deliberate—cross-pollination between patent and copyright law.
Professor Fishman begins his article by taking us back to the 1800s IP jurisprudence of Justice Story, who saw no inherent difference between the patent and copyright systems, and who considered that neither patent nor copyright law required a creative leap for obtaining protection. To the contrary, in Justice Story’s opinions, the bar for both copyrightability and patentability was low, with patentability requiring only a new invention, and copyrightability some independent intellectual labor. This story is largely known. What is new in Professor Fishman’s narrative is his uncovering of another, later story, this time one that focuses on Justice Nelson’s efforts to increase the threshold of protection in both patent and copyright law. While Justice Nelson was successful in the field of patents—and his 1851 decision, Hotchkiss v. Greenwood, is often cited as the cornerstone for our modern non-obviousness doctrine requiring an inventive leap for patentability—his earlier decision advocating for a similarly high threshold of protection for copyright law has largely been lost to history.
In exploring Justice Nelson’s earlier opinion, Jollie v. Jaques, which preceded Hotchkiss, Professor Fishman reveals “a moment in jurisprudential time at the century’s midpoint when a heightened creativity threshold looked just as likely for copyright as it did for patent law.” (P. 18.) Jollie’s language describing copyright’s “originality” requirement was expansive. It required a patent-like showing that a particular artistic work “surpassed whatever the author’s peers ‘with experience and skill might readily make.’” (P. 23.)
What is particularly interesting about these two stories is that for both Justice Story and Justice Nelson, despite standing at opposite ends of the protectability threshold, patentability and copyrightability standards rose and fell together. In this sense, our modern-day understanding of copyright and patent doctrines as reflecting some essential difference between the domains of art and technology is not rooted in historical practice.
In excavating the history of this forgotten, heightened originality standard, Professor Fishman finds surprising traces of Jollie’s standard in modern-day musical arrangement cases. Professor Fishman deftly traces the contribution of particular individuals embedded in specific legal and musical communities to the development of our current doctrine. He reminds us that common-law evolution is often contingent on the social norms, beliefs, and practices of communities of lawyers, artists, and scientists. In Fishman’s account, it was through the work of a lawyer and musician, Joseph James, who advocated for a higher standard of creativity for musical arrangements, that Jollie began to influence modern-day musical arrangement cases. In a later case, Woods v. Bourne Co., it was again the background and experience of the presiding judge, Richard Owen, also a lawyer and musician, that likely influenced his reliance on Jollie, and his adoption of a heightened copyright standard for musical arrangements. This heightened standard could be seen as an effort to transpose a distinction made in the world of music between “cocktail-hour pianists” and “real composers” to the world of law, by denying copyright protection to the former, more “trivial” arrangements.
As intellectual property scholarship has become increasingly interested in understanding the social processes that underlie artistic and technological creativity, insights such as these, linking community membership to litigation strategies, are incredibly valuable to our understanding of doctrinal evolution. Indeed, in a seminal case in patent law, Diamond v. Chakrabarty, this type of historical “process tracing” similarly revealed the crucial role that community social norms played in litigation strategies. In Chakrabarty, it was General Motors lead attorney’s embeddedness in a community of engineers that allowed him to conceptualize patenting genetically engineered bacteria, long-thought unpatentable by the biological sciences community, as equivalent to patenting mechanical parts of a machine.
Finally, Professor Fishman’s article joins a growing literature in IP law that advocates tailoring IP doctrines to the specific needs and dynamics of technological or artistic communities. Viewed through a tailoring prism, Jollie’s heightened threshold for copyrightability in the narrow field of musical arrangement cases could represent an efficient doctrinal evolution to fit the needs of this particular community and foster creativity. Of course, there is nothing to guarantee that the social norms that prevail in any particular community are welfare maximizing.
Professor Fishman, I think correctly, doesn’t push this normative point in his article too far. Rather, he leaves the possibility of tailoring and cross-pollination between patent and copyright regimes open for future study. He suggests that some policy problems arising in copyright law have elements in common with problems in technological innovation, such as the problem of thickets of rights that can harm, rather than foster, cumulative creativity. This rediscovered historical and policy kinship between patents and copyrights also implies that copyright law can sometimes learn from solutions implemented by its patent law cousin. It is also intriguing to consider what a deep engagement by the federal bench with a heightened originality standard in discrete areas of copyright law might teach patent law. Reviving the historic kinship between copyrights and patents, and identifying cross-cutting similarities between areas of copyright and patent law, may infuse both domains with a jolt of legal creativity.
In 1984, the McDonald’s Corporation obtained a federal trademark registration for the mark MICKEY D’S for restaurant services, stating in its application that it had been using the mark since 1981. Research suggests, however, that the MICKEY D’S mark wasn’t the result of ad agency brainstorming but instead originated in the Black community in the United States as early as 1976. The McDonald’s Corporation’s subsequent adoption and use of the mark seems to have been part of a strategy to promote the restaurant chain back to the community from which the name emerged. (Specimens submitted in connection with maintenance activities include a national full-color ad that appeared in Jet and Ebony magazines in 1982.) David Green, senior vice president for marketing at the McDonald’s Corporation, implied this, glibly, in a 1996 article in AdAge, writing that the company’s work with the Black-run advertising firm Burrell Communications gave McDonald’s “the highest share of the African-American market. Talk to anyone in the ’hood, and they talk about Mickey D’s. We’ve become part of the vernacular.” Black consumers may have created “Mickey D’s,” but the federal trademark rights in the term now belong to McDonald’s.
The larger complicated relationship between McDonald’s and the Black community is the subject of Franchise: The Golden Arches in Black America, the excellent book by Marcia Chatelain, a professor of history and African American studies at Georgetown University. It’s a relationship that, in Chatelain’s telling, rests at the intersection of social change, economic development, and corporate profit, with seemingly contradictory valences. The company’s corporate practices were the subject of boycotts and protests at individual restaurants, but local franchises were also welcome sources of financial support for the communities in which they were located. The company took advantage of existing inequalities in various regions, including cheap land and a depressed labor force, and yet it also positioned itself as a source of employment and advancement for the Black community, particularly through its efforts to diversify its franchisee ranks in the wake of white franchisee flight in the 1960s. (Those franchise opportunities were not, however, often in economically prosperous neighborhoods.)
For trademark scholars, particularly those engaging in interdisciplinary work, Franchise’s exploration of these complications gives historical and sociological depth to the consideration of what trademarks mean. Modern trademark law’s embrace of the franchise structure assumes a certain level of consistency from location to location to give the mark meaning. Indeed, established procedures, supply chains, and marketing campaigns are ostensibly part of what makes franchising attractive to those new to business ownership. But reflecting and appealing to the communities in which a franchise is located require acknowledgment that the brand can have different meanings to different consumers. As Chatelain describes, McDonald’s and other chains attempted to shape their offerings and advertising in this way, often through naming choices, celebrity endorsements, advertising, or adapting the trade dress of stores to serve the communities in which they were located, all with varying success.
Chapter Five (“Black America, Brought to You by . . .”) in particular makes for rewarding reading for trademark scholars. Chatelain describes how the McDonald’s brand served as a reliable constant for consumers in underserved areas, as locations provided meals, financial donations, safe spaces, and other forms of support during local crises. As Chatelain writes, “When McDonald’s managers could be relied on more than school administrators or police officers, then the lines between where leadership and power rested in a city could become so blurred that a fast food restaurant could begin to look like a solution instead of a symptom.” (P. 168.)
The concept that “McDonald’s” connoted structural reliability was deliberately shaped by Burrell Communications, which promoted “the corrective possibility of advertising,” using images and language to convey Burrell’s normative vision of Black life in public spaces. (P. 173.) The artist Hank Willis Thomas’s work So Glad We Made It (featured in Franchise) highlights this dynamic by removing the branding language from a 1977 McDonald’s advertisement to, as the Museum of Modern Art puts it, “encourage[ ] viewers to look critically at how mass–produced and widely disseminated images in the media construct and reinforce stereotypes about African American life, and how the public participates in these narratives.”
But Burrell Communications also understood the ways in which advertising could reflect lived experience. Chatelain notes that Burrell “personally studied consumer reaction” to McDonald’s well-known national slogan of the 1970s and 1980s “You deserve a break today” and concluded that “black customers were not getting it. Black customers were confused. There were no breaks in their America.” (P. 169.) And while today’s viewers might see as outdated the advertisements that Hank Willis Thomas critiques, Chatelain suggests another interpretation: the advertisements tapped into concerns about equal access to restaurants by suggesting that McDonald’s could be a place of ease and relaxation. “Burrell and his creative team,” she writes, “knew about the fears that followed blacks when they took a seat in a booth or made a left turn into their favorite drive-thru.” (P. 176.) These observations remind us once again that brands — and the law that protects them — are in a constant process of both prescription and description.
“For most Americans,” Chatelain writes in her concluding chapter, “it may be hard to imagine a world without McDonald’s or Kentucky Fried Chicken or Taco Bell.” (P. 263.) The trademark applications for these famous brands, whether based on marks originating from the company or marks originating from the community, may refer simply to restaurant services, merchandise, and related goods. But “[w]hen resources that create steady infrastructure for well-paying jobs, a multitude of food options, and safe spaces for children and senior citizens to build community are absent, then fast food is able to present itself as capable of providing sustenance rather than simply feeding.” (P. 264.) In a year when companies are eager to communicate their commitment to social justice, Chatelain’s important and relevant work helps us to think more deeply about what a brand means and about the (in)ability of a company, relying on the strength of that brand among consumers, to address fundamental inequities.
Amanda Levendowski, Trademarks as Surveillance Transparency, 36 Berkeley Tech. L. J. __ (forthcoming 2021), available at SSRN
I call this paper a “Levendowski special.” It follows the signature format of much of Professor Levendowski’s prior work which, as in the latest article, recruits a legal tool typically aimed at one set of problems for the purpose of cleverly addressing a different set of problems. Her past articles harnessed copyright law to “fix artificial intelligence’s implicit bias” (2018) and to “combat revenge porn” (2014). This paper draws on Professor Levendowski’s expertise working in private practice as a trademark attorney to address the problem of surveillance technology opacity. It is a primer on how to investigate trademark filings for hard-to-access information about surveillance technologies.
Levendowski laments the informational asymmetry between law enforcement and the public about surveillance technologies and hopes to provide a partial solution. Private companies create surveillance technology – doorbell cameras, facial recognition tools, license plate readers – which is frequently used by law enforcement and embedded in communities. Community members are often unaware of these networks of surveillance until years later. Professor Levendowski explains that journalists and regulators often have difficulty investigating or tracking these surveillance tools because of weak or misaligned disclosure regimes.
But trademark filings – part of a process that aims to promote information clarity for consumers and other trademark owners – require specific public disclosures about the use of the good or service. And these trademark filings, as Levendowski demonstrates, reveal surprising and sometimes secret information about what surveillance technologies do, how they function, and sometimes who is financing them. These filings are usually made when the product enters commerce, or sometimes before the product goes on the market. And so timely and revealing disclosure of information is possible if an investigator knows how and where to look.
Trademark filings disclose several useful pieces of information about surveillance technologies. For example, trademark filings declare when the product was first used in commerce (or when the trademark registrant intends that the product will be used in commerce). Trademark filings do this with a declared date of use in commerce and also a “specimen of use” filed with the proposed trademark. Sometimes this information, filed by a trademark attorney, unwittingly undermines the company’s desire for secrecy about its technology. For investment and fundraising purposes, trademark filings are sometimes made before there is a public announcement of the product or before a demonstration of a technology’s efficacy. Such a filing would thus show that the technology works and is in service before the company has announced as much publicly. In her search of the trademark registration system and attached specimens proving commercial use, Professor Levendowski uncovered: surveillance technology that was subject to non-disclosure agreements; the identities of surveillance targets; and the specific terms of financial arrangements between a company and law enforcement.
Levendowski’s paper has many such surprising disclosures—including what some who are familiar with trademark registration procedure would consider unforced errors. Trademark attorneys do not have to file specimens of use that disclose confidential contracts, financing arrangements, or client information. In fact, it is possible that once this paper circulates among the relevant readers, there may be fewer such revelations. Trademark attorneys can redact documents and choose more carefully how to demonstrate use in commerce. They could coordinate better with the company’s other legal and corporate counsel to make sure the right hand knows what the left hand is doing. Does Professor Levendowski mean her article to be a tool that is so short-lived in utility?
Levendowski responds by doubling down. First, working with Dyllan Brown-Bramble (a former student of the well-regarded and innovative course “Computer Programming for Lawyers” taught by Professor Paul Ohm), Levendowski built a “Trademark Watcher” tool to use with her clinic students at Georgetown Law Center. This tool will help uncover, as quickly and efficiently as possible, many more of the revelations that are already part of the PTO record. So we can expect that the case studies she describes in her article are just the tip of the iceberg. Second, she hopes that this search tool and the paper – while they may lead to more careful practice by some trademark lawyers in the future – will nonetheless immediately provide a critically important investigative tool for journalists and advocates seeking to right the balance of surveillance transparency. Third, as a former trademark law practitioner, Levendowski is uniquely positioned to propose reform of the Trademark Office’s requirements for specimens of use. Such reform could transform what was previously an accidental disclosure into a required one, e.g., financial arrangements or roll-out plans for next-phase technology. Doing so may require connecting the required disclosure with the purpose of trademark law, but since information clarity about sponsorship, affiliation, and use is a core trademark goal, the reform effort would align with long-standing legal principles.
I look forward to witnessing the benefits that flow from this article and hope they may be enduring. Surveillance transparency is hard to achieve for the reasons Professor Levendowski explains, which have to do with the entrenched features of capital systems and organizational infrastructures that promote hierarchies of control and minimize accessibility of information. Her cogent and incisive article may be just the effort to set us on a new and productive path. Perhaps it will succeed at aligning trademark’s consumer-oriented regime, whose goal is fair competition, with justice-enhancing disclosures about surveillance technologies which, when used en masse and in secret, undermine community well-being and consumer autonomy.