Oct 14, 2022 Graeme Dinwoodie
In recent years, the Trademark Trial and Appeal Board has increasingly relied on the so-called “failure to function” doctrine to reject applications to register marks suffering from a number of quite different deficiencies. But all these varied deficiencies are said to prevent the putative marks from identifying the source of the products to which they are affixed, that is to say, from functioning as marks. The doctrine’s flexibility (and perhaps its opaqueness) has caused many scholars (most notably, Alexandra Roberts) to embrace its capacity both to reconfigure what they perceive as unduly generous approaches to trademark registration and to ensure that validity determinations take account of the context in which marks are used.
In Failure to Function and Trademark Law’s Outermost Bound, an excellent law review note, Lucas Daniel Cuatrecasas carefully and critically catalogues the different strands of failure to function case law at the TTAB, demonstrates the inconsistencies in its application by the Board, and offers a creative reform that highlights the under-appreciated multivalence of trademark law. Cuatrecasas’ organization, and careful dissection, of TTAB decisions on the doctrine is invaluable. But the reform proposal that derives from this analysis is even more illuminating.
Cuatrecasas detects more complex normative underpinnings to the failure to function case law than conventional accounts suggest. Because most scholars situate the failure to function inquiry in the Board’s assessment of consumer perception, proposed tweaks of failure to function doctrine typically play with integrating the inquiry into assessments of distinctiveness.
But Cuatrecasas suggests that the TTAB sometimes also engages in competition-based reasoning in its failure to function cases. He argues that the commendable objectives of the failure to function doctrine would be better served by enlisting other, more explicitly competition-grounded doctrines of trademark law. In particular, he proposes that the aesthetic functionality inquiry (presently relevant only to non-word marks) should be extended to all marks. Thus, all marks could be rejected if their registration “would significantly hinder competition in the relevant market.” For example, an applicant would be unable to register the slogan mark “I ♥ DC” on a T-shirt; it would hinder competition in ways that would trigger the aesthetic functionality doctrine.
Cuatrecasas recognizes that the distinctiveness inquiry (perhaps aided by inquiries about “use”) already accommodates (or should accommodate) some of what is achieved by the failure to function doctrine. A determination that a term is descriptive will preclude its registration because such a mark will not identify the source of the product on which it is affixed. Thus, it fails to function as a mark for consumers. But we might simply say that, as a result, such a mark lacks distinctiveness.
And assessments of distinctiveness need not be confined to abstract interpretations of the meaning of words. The parallel doctrinal mechanisms for determining inherent distinctiveness of non-word marks make attention to context explicit. One such test, the Seabrook test, asks how a design mark would be understood by consumers in light of the designs already existing in the marketplace and whether the use of the design mark would make a separate commercial impression on consumers in the larger context (such as the overall product packaging) in which the mark appears.
Nothing should stop any of this from happening with assessments of words. This is especially true with use-based applications for registration or in an infringement action where the context of a plaintiff’s use is on the table. Use and distinctiveness have always been intertwined, as can be seen from the definitions that courts offer about “use” (intent-to-use applications based upon inherent distinctiveness are the biggest challenge, and where Roberts’ agenda-setting article was of greatest importance).
Cuatrecasas argues for more explicit assessment of competitiveness concerns in the analysis of a word mark’s validity. A failure to function doctrine that incorporated aesthetic functionality-like considerations might do just that, in a way that Cuatrecasas suggests would be more valuable—and, based on his analysis of TTAB caselaw, more reflective of concerns expressed by the Board.
I might take issue with the deployment of aesthetic functionality as a tool for greater certainty. That particular doctrine is notoriously enigmatic and difficult to apply. But Cuatrecasas’ proposal highlights the important point that trademark validity is not simply an empirical question about consumer association.
Although aspects of the distinctiveness doctrine historically recognized this, and incorporated competitiveness concerns, the Supreme Court’s recent 8-1 decision in Booking.com arguably casts such approaches in doubt. In that climate, for pragmatic reasons, we may need safety-valves that recognize the importance of competitiveness concerns in the judicial assessment of trademark validity. Cuatrecasas’ article intriguingly puts a revised failure to function doctrine in play as a candidate for such a safety-valve.
Sep 14, 2022 Michael W. Carroll
In a memorable skit on Saturday Night Live, Pete Davidson-as-Eminem inquires about what a non-fungible token (“NFT”) is and why they are selling for so much money. In this succinct article, The Treachery of Images: Non-Fungible Tokens and Copyright, Professor Guadamuz admirably answers the first question and explains why the second one is a puzzle. With respect to the market for NFTs, he explains why those using copyright law to capture the economic value in this emerging market often misunderstand how distinct the exclusive rights under copyright are from ownership of a non-fungible token on a blockchain.
Based in the United Kingdom, Professor Guadamuz applies U.K. copyright law to determine whether and when the process of creating and selling an NFT may (or may not) implicate copyright rights. For the most part, this analysis tracks with that under U.S. law, but to readers based in the United States, this aspect of the article also yields some comparative insights about when and why the differences between U.S. and U.K. copyright law may matter.
So What Is an NFT?: The question is really two: what is a “token” and what makes it “non-fungible”? The answer to both starts with a blockchain, that is, a cryptographically secured, distributed, and decentralized ledger that records transactions in an immutable record. The immutability of the ledger enables digital assets to be unique, and cryptography is supposed to protect against copyists who would undermine a token’s “non-fungible” character by distributing counterfeit copies. Various kinds of data can be recorded on such a ledger. A “token” is one type of data—a programmable digital unit recorded on the blockchain. Functionally, a token is encryptable code that can represent any digital or physical object, and this code can be unique.
While use of NFTs in connection with copyrightable works of art is a subject of considerable interest, minting NFTs in connection with art works is merely one use for an NFT. More particularly for copyright lawyers, Professor Guadamuz nicely explains how and why in most cases the NFT is metadata about a copy of a copyrighted work, not about the intangible work of authorship itself. In the case of digital art, a digital copy of the work frequently is used in the creation of the metadata, but the resulting metadata recorded on the blockchain as a token generally is not a copy of the work of authorship. This is the central point of confusion in much copyright discourse around NFTs.
To dispel this confusion, Professor Guadamuz carefully describes the process of creating, a.k.a. “minting,” an NFT of a digital image with a reader concerned about copyright law in mind. His example uses the Ethereum blockchain that supports, among other things, the Ether cryptocurrency, and which is currently the most popular blockchain for recording data about copies of works of art. The blockchain uses technical standards for recording both fungible and non-fungible tokens.
As he explains, the two key components of the relevant standard are the tokenID and the contract address. The first is a unique value generated upon creation of the token. The contract address is the source of some copyright confusion because a copy of a digital work can be used to generate the contract address, but the address itself is metadata about that copy. In the example of Professor Guadamuz’s image, the contract address is: 0x8c5aCF6dBD24c66e6FD44d4A4C3d7a2D955AAad2, which no one would argue is a reproduction of the original expression in the source work. When combined with the tokenID and any other optional data that the relevant standard permits, the token is unique—a unique piece of metadata about a copy of a copyrighted work.
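For readers who want to see concretely how little of a work’s expression survives in such an identifier, the point can be made with a toy sketch (this is illustrative Python, not the Ethereum address-derivation scheme or any real minting code; the variable names are hypothetical):

```python
# Illustrative sketch only: derive a hex identifier from a digital
# image's bytes. The result is metadata about that copy of the work,
# not a reproduction of its expression.
import hashlib

image_bytes = b"...stand-in for the pixel data of a digital artwork..."
digest = "0x" + hashlib.sha256(image_bytes).hexdigest()

# The digest is a fixed-length string of hex characters; nothing of
# the image's visual expression can be perceived in (or casually
# recovered from) it.
print(digest)
```

A SHA-256 digest is not how Ethereum actually derives contract addresses, but it makes the same point: the resulting string is a fingerprint of a particular copy, not a copy of the work.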
I encourage readers to work through Professor Guadamuz’s full description of how NFTs work at a technical level. For purposes of this review, the key takeaway about the facts is that, although the metadata file that comprises the NFT is unique, there is no technical constraint on the owner of a copyrighted work (or any other referent of the metadata) from minting additional tokens. Each token will be unique, but an unlimited number of these unique tokens can represent the same copy of a copyrighted work.
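That takeaway can be sketched in a few lines of illustrative Python (the `Token` type and `mint` function here are hypothetical stand-ins for this review, not any real blockchain API):

```python
# Illustrative sketch: an unlimited number of unique tokens can all
# reference the same copy of a work. Uniqueness lives in the tokenID,
# not in the referenced content.
import hashlib
from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class Token:
    token_id: int       # unique per token
    content_hash: str   # metadata derived from a copy of the work

_ids = count(1)

def mint(work_bytes: bytes) -> Token:
    """Mint a new token whose metadata points at the given copy."""
    return Token(next(_ids), hashlib.sha256(work_bytes).hexdigest())

song = b"the same recorded song file"
t1, t2, t3 = (mint(song) for _ in range(3))

# Each token is unique...
assert len({t1.token_id, t2.token_id, t3.token_id}) == 3
# ...yet all three reference the identical copy of the work.
assert t1.content_hash == t2.content_hash == t3.content_hash
```

Nothing in the data structure itself limits how many tokens can be minted against one file; any scarcity beyond the metadata must come from legal, not technical, constraints.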
Since scarcity is the source of much of the economic value or speculation in markets for NFTs, the scarcity is, at a technical level, only in the metadata file. Any other constraints that would provide scarcity in relation to a copy of the copyrighted work, or in the work itself, must be legal constraints. If you record a song and then mint an NFT with that file and transfer the NFT to me, what do I own? Presumably I own title to the NFT. You could also represent that ownership of title to the NFT entitles the owner to also hold title to a particular copy of the song. In that case, it is the law that gives effect to your intent to transfer title in a copy of the work. Without that additional legal element, transactions in NFTs are transactions in title to metadata.
Professor Guadamuz identifies examples that show how the press and, likely, many participants in the NFT marketplace do not appear to understand this. After describing some high-profile NFT transactions—such as Jack Dorsey’s $2.9M sale of an NFT of his first tweet – Professor Guadamuz very rightly points out: “It seems difficult to believe that all these buyers are spending such amount[s] of money for what amounts to a short metadata file and a string of numbers and letters of dubious artistic value, but in fact, this is what most NFTs actually are.” (P. 1372.)
What’s Copyright Got to Do With It?: A reader knowledgeable about copyright would likely answer, “nothing,” and Professor Guadamuz mostly agrees. He discusses some transactional uses of NFTs, including a “tick box” option to indicate that transfer of the NFT also transfers ownership of the copyright in the NFT’s referent work. The law in the U.K. is similar enough to U.S. law that this issue is primarily whether ticking that box is sufficient to be a signed writing by the author. He demonstrates that using NFTs for licensing purposes is difficult but that the NFT can be programmed to act as a royalty collection vehicle, which potentially could create an automated droit de suite or otherwise serve as a payment mechanism.
With respect to rights enforcement, he first works through any possible claims that the act of minting an NFT, even without authorization of the copyright owner, could directly infringe the rights to reproduce, adapt or communicate the copyrighted work to the public. The only plausible claim would involve an NFT that includes a hyperlink to a copy of the source work, as many NFTs do. Under the complex jurisprudence in the European Union, one could treat the minting of an NFT that includes a hyperlink to a copy of the source as a communication to the public, but only under narrow circumstances. The analysis under U.S. law would be similar, except that inclusion of the link would not be a direct infringement of any exclusive rights and could at most give rise to secondary liability for infringement of the rights to display or perform the work publicly. This result could change if the appellate courts agree with some district courts that have held that an embedded hyperlink exercises the right of public display.
In this article, Professor Guadamuz performs an important public service by patiently walking the reader through the technical complexities of how NFTs work and then, with similar care, by considering the potential copyright implications for the creation and transfer of NFTs.
Aug 1, 2022 Christopher J. Buccafusco
In their new paper, From Trade Secrecy to Seclusion, Charles Tait Graves and Sonia Katyal have performed a monumental service for researchers interested in the ways that firms and governments have leveraged trade secret and confidentiality laws to prevent the disclosure of large swathes of information. I describe Graves and Katyal’s effort as monumental not just for its scholarly value, but also for its breadth (and length). They catalog an enormous variety of “increasingly aggressive attempts to use the law to shield information from the public eye that either does not fall within the traditional, market-competitive ambit of trade secrecy at all, or that faces a strong public interest for at least some degree of disclosure.”
Examples abound, from Merck’s attempts to shield data on the harms of its drug Vioxx, to the unwillingness of manufacturers of voting machines and criminal justice algorithms to share their source code, to employers’ claims that employee diversity and salary data are confidential trade secrets. In these and other examples, Graves and Katyal demonstrate how firms have pushed trade secrecy law beyond its traditional bounds of promoting innovation by preventing unfair competition. Coupled with the considerable deference that the government shows to corporations faced with FOIA requests, these efforts, they argue, are creating an “all-purpose seclusion doctrine.”
Now, if you’ve been following some of the scholarship in this area you may already recognize these issues from your reading of Chris Morten and Amy Kapczynski, David Levine, Rebecca Wexler, Hannah Bloch-Wehba, and Jamillah Bowman Williams. But this is part of the article’s immense value. Graves and Katyal have recognized and articulated a theme running through much recent scholarship. Importantly, they have brought it all together in one paper, and, admirably, they credit no fewer than twenty scholars with above-the-line mentions (including Hrdy, Varadarajan, Sandeen, Brauneis, Goodman, Menell, Feldman, Bone, Lemley, Samuelson, Lyndon, Calo, Citron, Chessman, and Vladeck). Sadly, neither Graves and Katyal’s cataloging efforts nor their humility in recognizing so many others’ contributions are incentivized by contemporary scholarly practices.
But Graves and Katyal haven’t simply produced an annotated bibliography. Their article taxonomizes the disparate practices they identify, helpfully organizing them into three distinct categories. Examples like Vioxx they label “investigative concerns,” where journalists, whistleblowers, or other researchers attempt to report unflattering information about firms. Separately, claims of secrecy for voting machines and criminal justice algorithms are categorized as “delegative concerns,” because governments or parties acting on their behalf claim secrecy for core governmental functions. Finally, “dignitary concerns” involve situations where firms try to shield information about their workplaces, including information about salaries, diversity, and workplace injuries.
The authors also begin to assign blame for trade secrecy’s expansion into a general seclusion law. Graves and Katyal fault trade secrecy law itself, for its uncertain theoretical underpinnings and its weak claiming doctrine. They also note that firms are both more motivated to claim confidentiality and can more easily do so when so much of their value is embodied in software. Finally, the Supreme Court’s recent opinion in Argus Leader Media has broadened FOIA exemptions to include information that would not traditionally count as a trade secret. Perhaps this is a matter for a later article, but I would have valued further analysis of the relative contributions of these and other potential causes.
The article concludes with normative proposals for reining in trade secret law’s expansion, including general rules like a more robust standing doctrine, a heightened claiming requirement, and the inclusion of defenses like trade secret fair use. Graves and Katyal also mention more targeted interventions for particular misuses of trade secrecy. Again, many of these suggestions have been floated elsewhere, and the authors have helpfully brought them together for the reader. Given the synthetic nature of the paper, the task of providing a more focused analysis of the various proposals’ respective costs and benefits remains for a later paper.
As someone just making my way into this field, I’m enormously grateful for the work that Graves and Katyal have done. They demonstrate the value of synthetic, taxonomic research, and they have done so in a way that appropriately credits others’ contributions. This is an excellent model for junior scholars across the legal academy.
Jul 1, 2022 Christopher J. Sprigman
Mark Bartholomew of the University at Buffalo School of Law recently published an article in the Notre Dame Law Review, Copyright and the Creative Process, which offers a fresh perspective on a central question in copyright law—what is “creativity”? Creativity is the thing that copyright law is meant to encourage. Copyright, in other words, is justified as a way of incentivizing creativity. But copyright law’s understanding of creativity is notably spare. The U.S. Copyright Act states that a work must be “original” in order to be protected. But the Act does not define originality, or situate it within the broader concept of “creativity.” The Supreme Court in its decision in Feist Publ’ns, Inc. v. Rural Tel. Serv. Co. was only a bit more forthcoming. Originality, the Feist Court made clear, does not require the sort of novelty that eligibility for patent protection does. Rather, what is required is only independent creation (i.e., that the work originate with the author, rather than being wholly copied from another), and that it “possess[es] some creative spark, no matter how crude, humble or obvious it might be.” Id. at 345 (internal quotations omitted).
Feist makes it clear that the standard is not demanding. It does not make clear, however, how to assess in borderline cases whether a work meets that low threshold and is creative enough to be protected. Copyright’s reticence on this point is, at minimum, a bit strange. Some have reacted by suggesting that we drop or at least de-emphasize creativity as an entry condition for copyright protection. Others have gone in the opposite direction, suggesting that the creativity standard be raised. But it’s difficult to know what to do with copyright’s creativity requirement, if anything, until we understand the concept better.
Mark Bartholomew’s new article attempts to harness recent advances in neuroscience to inform our understanding of what should count as “creative,” and how copyright should make that assessment. Bartholomew begins with a critique of some commonly-held beliefs about creativity’s ineffability; beliefs which, he argues, lead us to overlook scientific evidence about the nature of creativity that could better inform copyright law:
The main reason for the creativity criterion’s impoverishment is a belief—indeed, a faith—in the almost magical quality of the creative process. So conceived, the creative process is wholly and necessarily subjective, impervious to description or measurement by objective criteria. A corollary position warns of aesthetic prejudice. Because there are no objective benchmarks available to keep them honest, judges and juries will lend an undesirable bias to any attempt to rigorously evaluate artistic creativity, unfairly favoring some kinds of artworks over others. As a result, creativity is mostly presumed rather than proven in copyright cases.
This understanding of creativity, Bartholomew argues, is outdated. In particular, neuroscience, Bartholomew writes, is beginning to give us insights into what creativity is, and how it happens. For example, neuroscience investigations show us that creativity is usually the result of planning and focus, rather than serendipity or sudden inspiration. And yet, copyright law ignores the creative process and focuses on the work itself, inquiring whether creativity can be found within the four corners of the putatively creative product. Recent evidence also shows that much artistic creativity harnesses and indeed depends on a creator’s knowledge of prior work in a particular creative field. And yet copyright, unlike patent, does not inquire into the creativity of a particular work by comparing it to the prior art. Evidence suggests, moreover, that experts in a particular creative field can readily recognize important creative advances in their field. And yet copyright courts do not, for the most part, rely on expert testimony in determining whether copyright’s creativity standard has been met.
On all of these points and more, Bartholomew writes, copyright doctrine may be ripe for reappraisal, especially as the science (which the author readily acknowledges is still formative) advances. Bartholomew first deals with the originality “abolitionists.” He argues that copyright’s minimal creativity requirement is already too low, and that lowering it further would be likely to stifle creativity by burdening follow-on creators who would face even greater barriers to creative re-use than they do presently. The author favors a more robust creativity requirement. But the question is how to execute that.
On that question, Bartholomew identifies several ways that the science could alter how we analyze whether copyright’s creativity requirement has been met. In particular, we should be more willing, Bartholomew argues, to investigate artistic motivation as part of assessing whether a work meets copyright’s creativity criterion. The author points to Justice (then Judge) Gorsuch’s opinion in Meshwerks, Inc. v. Toyota Motor Sales U.S.A., Inc., as a useful example. In determining whether a mathematically-precise computer model of a Toyota car body was “creative,” Gorsuch noted that the model was not motivated by artistic considerations or ambitions but rather represented “an attempt accurately to depict real-world, three-dimensional objects as digital images viewable on a computer screen.” Id. at 1269. Bartholomew approves, stating that “Given what we now know about the centrality of authorial intent to creative output, [the inquiry into intent] should be deployed in all copyright cases where creativity is at issue.”
Bartholomew argues, moreover, that copyright courts should inquire not only into authorial intent, but should also situate the creativity assessment in the prior art, as is done in patent law. The science suggests, Bartholomew writes, that creativity in the copyright area depends as intimately as in the patent area on knowledge about and use of relevant prior art—and this is true even in instances where a creative work departs substantially from the field’s prior output. The Supreme Court’s rejection, in Bleistein v. Donaldson Lithographing Co., of “aesthetic discrimination” is, Bartholomew argues, a big part of what blocks consideration of whether a putatively copyrighted work advances the prior art. Bartholomew counsels that we tack away from Bleistein, which has transmuted into the equivalent of a flat ban on qualitative assessment of creativity:
Even if taste is relative, agreement can coalesce over such topics as what is the appropriate definition of a particular genre of visual art or what are the conventions of a specific musical domain. Research shows that those with expertise in a domain tend to independently agree on their assessment of the creativity of new works in that domain. Even if one thinks that a layperson’s judgment of an artwork’s beauty is a subjective practice [that] would normally be anathema to the ideal of objective legal standards, elements of evaluation of aesthetic worth can submit to reasoned interrogation, particularly by those with experience and training in the domain.
There is a lot more in Bartholomew’s article, which is a lively, informative read. The article’s major service to the reader is in the questions it raises about how copyright law should respond to our increasing understanding of the workings of the human mind in general, and of human creativity in particular. Our current knowledge about how human creativity works is nowhere near complete. But already the science calls into question what lawyers think they know about how to encourage new creative work. Those questions will only deepen as the science advances.
May 31, 2022 Pamela Samuelson
Clark Asay, An Empirical Study of Copyright’s Substantial Similarity Test, 13 UC Irvine L. Rev. __ (forthcoming, 2022), available at SSRN.
Far fewer empirical studies in the intellectual property field have focused on copyrights than on patents. Each new entrant to this small field is, however, welcome. The latest offering by Clark Asay reports on his study of a random sample of 1005 judicial opinions assessing substantial similarity infringement analyses in 974 cases decided between 1978 and 2020. He coded for subject matters, rights in dispute, procedural postures, subtests, expert evidence, whether courts discussed copyright limitations, sources of authority, and outcomes.
Those of us who teach copyright law have long had the impression that substantial similarity analyses in the opinions we teach are a hot mess. Asay has now provided empirical evidence that this impression is mostly quite accurate.
Each circuit, he reports, “has its own particular way of applying [the substantial similarity test], with different circuits employing a multitude of subtests within the larger substantial similarity inquiry.” Asay concludes that substantial similarity analysis is “notorious for its lack of uniformity, both within circuits and across them, despite commentators’ attempts to provide sanitized versions of what different circuits do.” Treatises “seek to provide clarity around the law,” he notes, but this “sometimes [comes] at the expense of accuracy.”
So what surprises does Asay uncover with this empirical study? For one thing, he reports that the first step of the conventional substantial similarity test—which focuses on whether the challenged work is similar enough to the plaintiff’s work to prove that the defendant copied something (even if not protected expression) from the plaintiff’s work—is “mostly moribund.” Although commonly cited infringement cases have endorsed using experts for this first step, Asay says that courts rarely rely on them. When courts do consider this step, however, plaintiffs prevail more often than defendants.
To me, the biggest surprise was that defendants prevailed in more than two-thirds of the cases when courts reached the second step—whether the defendant has improperly appropriated protected expression from the plaintiff’s work based on substantial similarities in the two works’ expression—and discussed at least some of the limiting doctrines of copyright law, such as the merger and scenes a faire doctrines. Asay reports that sixty-three percent of the opinions did indeed discuss some of these doctrines. By contrast, plaintiffs prevailed in two-thirds of the cases in which judges made no mention of any limiting doctrines.
Another surprise was that although courts in the Second Circuit between 1978 and 1999 produced well over twice as many published substantial similarity analysis opinions as the Ninth Circuit, these two courts have produced roughly the same number of such opinions since 2000. And if one counts unpublished substantial similarity analysis opinions dating from 2000 on, Ninth Circuit courts have produced nearly three times as many opinions per year as Second Circuit courts.
Also surprising was that late-twentieth-century courts rarely decided copyright substantial similarity cases on motions to dismiss, but starting in 2006, there was an upward trend of cases decided at that stage. In 2018 and 2019, the last two years Asay studied, more substantial similarity cases were decided on motions to dismiss than on summary judgment. Still, almost half of the opinions in his study overall were decided on summary judgment motions, with motions to dismiss the second most common procedural posture.
Asay kept track of the subtests that courts used in substantial similarity opinions. He found several variations among them: ordinary observer, more discerning observer, average lay person, audience, intended audience, reasonable observer, and reasonable person. It is, of course, unclear how much difference (if any) the choice among these characterizations made.
Twenty-eight percent of the substantial similarity cases applied the ordinary observer test, while 24 percent applied the extrinsic/intrinsic test, 21 percent the total concept and feel test, and 11 percent the abstraction-filtration-comparison (AFC) test. Asay reports that courts in the Ninth Circuit have applied the AFC test more often than courts in the Second Circuit, where the AFC test originated.
One last surprising result before this jot concludes: Nearly 98 percent of the 1005 substantial similarity opinions Asay studied cited at least some authorities. Fifty-six percent cited to Second Circuit precedents. Next most influential was the Supreme Court whose precedents (usually the Feist decision) were cited in just over half of the 1005 opinions. Ninth Circuit opinions were the third most influential source of authority. The least influential authority was the Federal Circuit, which accounted for only 2 percent of the authorities cited in Asay’s sample of cases.
Many other useful insights can be found in Asay’s study, so I highly recommend it to Jotwell’s readers.
Apr 29, 2022 Christophe Geiger
“Do you believe in the human heart? I don’t mean simply the organ, obviously. I’m speaking in the poetic sense. The human heart. Do you think there is such a thing? Something that makes each of us special and individual?” This is a question put to Klara, the narrator of Kazuo Ishiguro’s novel Klara and the Sun, who is an “artificial friend”—an artificial intelligence (AI)-operated android that, in a not-too-distant future imagined by the Nobel Prize winner for literature, is meant to serve as a companion for children.
This philosophical question also lies at the heart of the question of the protectability by copyright of AI-generated outputs: Is there something in the human creative process that makes it unique and different from any output generated by a machine? And is the copyright system apt to incentivize and reward these moments of genius that generate new creative works and induce cultural enrichment?
In their article Copyright and Artificial Creation: Does EU Copyright Law Protect AI-Assisted Output?, Bernt Hugenholtz and Joao Pedro Quintais of the Institute for Information Law at the University of Amsterdam try to address this crucial issue from the perspective of the legal framework for copyright protection in the European Union (EU).
Their approach is not a philosophical one, but a legal, more technical one: can an AI-assisted output satisfy the criteria of copyright protection? Of course, the underlying policy question is not far behind, for if copyright protection is denied, this might induce policy makers to reflect on a possible legal instrument to be introduced to this effect. Also, the big normative questions, such as the desirability of protecting AI outputs by copyright, are deliberately not addressed. Nevertheless, their analysis is absolutely fascinating.
Indeed, their doctrinal approach towards advancing the AI protectability discussion provides an excellent examination of the various legal conditions developed over time by the Court of Justice of the European Union (CJEU) for an intangible output to benefit from copyright protection, which they put to the test regarding AI-generated outputs. By doing so, they undertake a categorization of the standard for protectability of copyrighted works in the EU that has hardly been attempted before in the literature, at least in such a clear manner.
It might come as a surprise to many but, as the authors rightly recall, “the EU currently is mostly silent on questions of copyright subject matter and authorship. Despite extensive copyright harmonization, no single directive harmonizes the concept of the work of authorship in general terms”. The conditions have been developed incrementally by the case law of the CJEU, and the authors scrutinize each condition to see if AI-generated outputs can qualify. As they underline, in order to be protected, AI-generated outputs need to be a “work” that is “original” in the sense that it reflects “creative freedom”. But since the CJEU has so far never analyzed an AI-generated output, the authors also reflect on possible unwritten aspects of the law as might be encountered when dealing with AI-generated works.
For instance, the authors examine the general idea that a work, in order to be protected, and further in line with the international copyright framework, needs to be the result of a “human creative effort”. As they put it, “although EU copyright law nowhere expressly states that copyright requires a human creator, its ‘anthropocentric’ focus (on human authorship) is self-evident in many aspects of the law”. Indeed, the notion of originality is often defined as the expression of the personality of the creator, his creative choices, his “heart” or “soul” in the poetic sense of Kazuo Ishiguro mentioned at the start of this review. This presupposes a human being, one capable of possessing a “heart” or “soul” in the first place. Thus, the authors conclude that according to EU conceptions of copyright “an output wholly generated by an AI system without human intellectual effort is excluded from copyright protection”. This conclusion also seems in line with the position of the US Copyright Office which, in a decision of February 14, 2022, refused to register an AI-generated work, finding that “human authorship is a prerequisite to copyright protection in the United States and that the Work therefore cannot be registered”.
However, the authors’ analysis, unlike many other recent writings on the subject, does not stop there, as they underline that the necessity of human intellectual effort “does not rule out creations by human authors made by the aid of machine”, provided that the other criteria of copyright protection are met, such as the need to be original and to reflect creative choices.
At this point, they develop the main argument of the paper, which is that human creativity in AI-assisted production may occur in different phases of the creative process, which they propose to label “conception” (design and specifications), “execution” (producing a draft version) and “redaction” (selecting, editing, refinement, finalization). They find that, “while AI systems play a dominant role in the execution phase, the role of human authors at the conception stage often remains essential. Moreover, in many instances, human beings will also oversee the redaction stage”, for example through editing the generated output before publication.
Accordingly, the authors conclude that when human-made creative choices “are expressed in the final AI-assisted output, the output will then qualify as a copyright-protected work”. The analogy can be made with other creative areas where creators use machines as tools in their creative processes, such as computer program-operated digital art or even more classical creations such as photography, where the photographer uses an automated camera, or painting, where the painter uses tools to spread the colors on the canvas without fully controlling the end result. As with the recognition of ready-mades in contemporary art, what remains essential for potential copyrightability is the creative choice of the human creator, not necessarily the execution of the artwork (the object need not be made or produced by the artist, only chosen).
Of course, from a more normative perspective, the findings of Hugenholtz and Quintais that many AI-assisted outputs can benefit from copyright protection do not solve the (pending) question of the social desirability of such conclusions and, ultimately, of the consequences for follow-on creativity if the gates of copyright are opened too widely for these types of works. These questions are of course essential to resolve for the future of creativity, and maybe more broadly for the future of our society. For now, the interesting analysis drawn by Hugenholtz and Quintais offers us a new perspective through which we can begin to view a complex societal issue more clearly, showing that while new technologies often raise groundbreaking questions, the answers from a strictly legal point of view are often not so new, and that the copyright system can potentially adapt. But should it adapt, or should it rather be adapted? For sure, it opens fascinating paths for future research.
Mar 28, 2022 David Fagundes
Property scholars have long noted a gulf between the way laypeople think about ownership and how property law actually works. This gulf has widened in the digital age, as our beliefs about what we own and what ownership means have not kept pace with our complicated relationships with digital goods. Consider, for example, the furor that arose a decade ago when it emerged that Apple insisted that iTunes users did not own purchased music files outright but had only a lifetime license to use them.
In a sparkling essay, Aaron Perzanowski explores the distance between the perception and reality of ownership in the context of the emergent right to repair. Traditionally, we expected to be freely able to repair physical items we own, whether a book with a torn cover or a car with a leaking fuel line. But how does this expectation translate in the digital setting? The question, Professor Perzanowski shows, is an important and complicated one.
This issue is important not least because of the financial stakes. If fixing my iPhone’s screen requires only a visit to a third-party repair shop, that saves me money, but it costs Apple a lost purchase of an entirely new unit. And freedom to repair has other consequences. Users who can fix damaged digital goods extend the lives of those goods, saving them from becoming the kind of slow-decaying waste that is especially damaging to the environment.
This issue is complicated because in a world where technology increasingly pervades the objects we own, industries often design technological means to undermine users’ ability to repair their goods. Perzanowski catalogues a surprisingly vast array of anti-repair strategies: Some are relatively innocuous, like lowering the price of replacement goods to make repair relatively less appealing. Others are more sinister, like embedding devices with software that makes them less effective over time or making physical repair prohibitively costly due to the need to use expensive tools. Firms also leverage copyright and patent laws to intimidate users with dubious but chilling threats of litigation for engaging in unauthorized repair.
As Perzanowski carefully details, this issue has not gone unnoticed by lawmakers. Legislators have advanced several bills designed to preserve consumers’ ability to repair electronic devices, including the Promoting Automotive Repair, Trade, and Sales (PARTS) Act of 2017. However, the PARTS Act stalled in committee, and state legislatures have had spotty success with similar initiatives. Perzanowski readily identifies the source of these failures. Well-heeled lobbyists representing a range of companies from Apple to John Deere to GE Healthcare have slowed reform efforts by voicing both public and closed-door objections to these laws.
Perzanowski points out that right to repair laws would succeed only if consumers actually want to repair electronic devices. Only then would a law lowering barriers to repair actually cause consumers to engage in more repair. Perzanowski thus undertakes the only study to date that asks whether consumers believe they should have a right to repair devices they own. And, if so, how do they view manufacturer efforts to restrict repairs, and are attempts to restrict repair material to their purchase decisions?
Perzanowski’s article reports that consumers have a strong preference for freedom to repair. The vast majority of device purchasers agreed or strongly agreed that they should have a right to repair those devices. Nearly as many expressed strong negative affect toward manufacturer attempts to restrict the right to repair. And equally significant majorities stated that their willingness to pay for a device would drop substantially if the device were burdened with repair restrictions.
Perzanowski is careful not to overstate the practical implications of these results. He notes that while passage of right to repair laws would create consumer surpluses and reduce environmental stressors, they would not be panaceas. Manufacturers remain free to complicate repair in ways that evade such laws, such as by simply declining to produce replacement parts or phasing out support for goods after a short time frame. That being said, Perzanowski makes a powerful empirical case that passage of these laws would produce major gains in overall social welfare.
Perzanowski’s work also raises important additional questions. For instance, even if consumers like the idea of repair, what’s to say they would effectuate it well? Manufacturers’ warnings about the risks of off-brand repair shops are not entirely baseless. Can these concerns be ameliorated by regulation or other means? Also, how can laws be crafted to more effectively counter manufacturers’ end-runs around right to repair statutes? It may at first seem a stretch for governments to mandate that industry must provide support and replacement parts, but some states—e.g., California—have adopted such laws. And as a practical matter, how can legislatures overcome the overwhelming pressure of industry lobbyists and pass right to repair laws?
Perzanowski is about to release a book-length treatment of this topic, The Right to Repair: Reclaiming the Things We Own. This monograph promises to address these questions immanent in Perzanowski’s essay.
Feb 17, 2022 Lisa Larrimore Ouellette
Joseph P. Fishman & Kristelia García, Authoring Prior Art, 75 Vand. L. Rev. __ (forthcoming 2022), available at SSRN.
Patent and copyright law share what the Supreme Court calls a “historic kinship”: they both grant exclusive rights under the IP Clause to incentivize production of new knowledge goods. But they implement this goal through very different doctrinal frameworks, including different roles for “prior art”—all the information that existed before the innovation at issue. Patent validity crucially depends on whether an invention is new and nonobvious compared to the prior art. In contrast, it doesn’t matter if a copyrighted work is similar to earlier works as long as it wasn’t actually copied from them. Copyright treatises unquestioningly assert that the formal prior art analysis of patent law has had no place in blackletter copyright doctrine.
The doctrinal trend identified in a new article by Joseph Fishman and Kristelia García, Authoring Prior Art, is thus interesting and surprising: in at least some recent copyright music cases, judges are looking at prior art. One judge even dismissed an infringement claim for “failing to consider prior art,” rendering the comparison between the original and infringing works “legally deficient.” Another dismissed a claim because the plaintiff’s expert report lacked “any information about prior art,” whereas the defendant’s expert cited three earlier songs with similar expressive qualities. Identifying this trend would be valuable on its own, but Fishman and García go further in tracing the source of this development to the influence of a small set of musicologist expert witnesses.
Judges rely on experts in music cases more than in any copyright field except software, perhaps because they are less comfortable assessing the similarity of musical works than works like novels, films, and paintings. And the pool of music copyright experts is tiny: Fishman and García analyzed fifty reports and found that they were produced by only ten experts, with one expert producing 23 out of 29 defense-side reports. These defense-side reports generally identified prior art to argue that any similarities between the allegedly infringing work and the copyrighted work were also present in the prior art, and plaintiffs’ experts generally argued not that prior art is irrelevant, but that any prior art wasn’t that similar to the copyrighted work at issue. Fishman and García also interviewed six of these experts, who described a perceived duty to assess the prior art not only to determine whether the defendant’s work was actually copied from the plaintiff’s, but also to evaluate the creative importance of any similarities between the works.
To understand exactly how these cases are using prior art in the infringement analysis, it’s worth recapping the relevant blackletter doctrine. Assessing whether a defendant copied enough from a plaintiff’s work to constitute copyright infringement first requires filtering out any unprotectable elements, such as abstract ideas and “scènes à faire”—stock elements that are ubiquitous in the genre. The works are then compared for “substantial similarity” from the perspective of the ordinary observer of the works in question. Both steps of this analysis are frustratingly vague, especially for those trained in the comparatively formulaic rules of patent law: it isn’t easy to draw a line between unprotectable ideas and protectable expression or to determine if any remaining similarity is “substantial.”
Prior art isn’t a formal part of this doctrine, but it is implicit in the analysis. Judges know that “star-crossed lovers” is an abstract idea that should be filtered out because this idea recurs in prior art ranging from Romeo and Juliet to Buffy the Vampire Slayer; they know that shootouts in Westerns and dead phones in horror films are scènes à faire because of prior art in these genres. (Patent readers might analogize to the amorphous category of “abstract ideas” like “longstanding practices” under recent patentable subject matter caselaw.) The musicologist expert reports—and cases that follow them—surveyed by Fishman and García make this reliance on prior art explicit, and they expand the relevant prior art from ubiquitous elements that judges are personally familiar with to a more patent-like universe of all prior works identified through specialized search tools. Experts frequently argue that elements drawn from the prior art should be discounted in the substantial similarity analysis as “musicologically insignificant,” so that a work cannot be infringing if it is closer to the prior art than to the plaintiff’s work.
This approach is similar to that suggested a decade ago by Rebecca Tushnet, who asked whether “substantial similarity doctrine could be improved by, in essence, placing the plaintiff’s work alongside an array of prior art and trying to place the accused work in the ‘space’ defined by the expressive universe,” and then framing the legal question as “whether the similarity between the plaintiff’s work and the accused work is any greater than the similarity between the accused work and [the prior art].” For these reasons, the recent adoption of prior art analysis in music copyright cases may be a beneficial development. As Fishman and García explain, “prior art allows judges to understand a work in its creative context rather than in a vacuum.”
Fishman and García don’t offer a prediction on whether this use of prior art will expand into non-music cases, or whether such an expansion would be a good thing. These seem like important questions to pursue in future work, both by scholars and by practitioners and experts in other copyright cases. Prior art is certainly not a panacea—facts and broad ideas can’t be copyrighted even if far removed from the prior art, just as natural laws and abstract ideas can’t be patented no matter how novel. But if prior art could make the elusive line between ideas and expression somewhat more predictable, that would be a welcome development for both potential copyright litigants and future students of IP law.
Cite as: Lisa Larrimore Ouellette, Prior Art in Copyright, JOTWELL (February 17, 2022) (reviewing Joseph P. Fishman & Kristelia García, Authoring Prior Art, 75 Vand. L. Rev. __ (forthcoming 2022), available at SSRN), https://ip.jotwell.com/prior-art-in-copyright/.
Jan 18, 2022 Sarah Fackrell
According to conventional wisdom, fair use factor two plays a minor role, if any, in copyright cases. But, as Cathay Smith shows in Political Fair Use, this factor plays a major role—and may even be dispositive—in cases involving “political uses” of copyright-protected works. Through a series of case studies, Smith “identifies a pattern in political fair use decisions: in disputes arising from the unauthorized political uses of copyrighted works, courts appear to implicitly modify their analyses and balancing of the fair use factors under section 107 of the Copyright Act.”
Fair use is a fact-intensive, case-by-case inquiry. The Copyright Act sets forth four factors that courts should weigh in determining whether a use of parts or all of another author’s work is fair:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
When considering factor two, courts often consider the extent to which a work embodies creative expression or is more informational or functional in character. Some types of works are deemed closer to the “heart” of copyright. Accordingly, the use of a creative work is less likely to be fair than the use of an informational one. In the standard fair use case, factor two tends to play a minor (if any) role; factors one and four are generally the most important.
Smith argues that the “political use” cases are different. She defines a “political use” as “the use of original copyrighted works by politicians or about politicians.” This “occurs when a politician or political candidate uses another’s creative expression to express political speech or when a person or party uses another’s creative expression to speak about a politician or candidate.” So defined, use of the “Success Kid” meme by Rep. Steve King’s campaign would be a political use, but Infowars’ use of the character Pepe the Frog (a use not connected to any particular politician or candidate) would not.
Smith observes that in political use cases, factor two “seems to exert an outsized influence on the determination of all four fair use factors.” Specifically, when the underlying works are political—e.g., in cases dealing with photographs of a candidate or music created for a political ad—courts deem those works to be “informational” and thus the use is more likely to be fair. And where the underlying work is political and the defendant’s use is also political, the defendant is “overwhelmingly” likely to win on the ultimate question of fair use. Consider, for example, a situation in which a photo taken of a political candidate at a political rally is used in an adversary’s political advertisement. These types of political uses (we might perhaps think of them as “nominative political uses”) seem to be treated as almost presumptively fair.
Not only does Smith’s article challenge the conventional wisdom about fair use factor two, it also delves into larger questions about the purpose of copyright. Smith posits that, when it comes to other political uses (we might think of them as “distasteful political uses”), courts adapt the fair use analysis “in order to both accommodate the import of political speech and to respect copyright owners’ dignity and rights to control objectionable uses of their expressive works.” For example, in the “Success Kid” case, the court ruled for the copyright owner—and against fair use—where the original work was not political in nature and where the copyright owner (the child’s mother) was strongly opposed to Steve King’s political positions and didn’t want the meme associated with him. Of course, at least in the United States and as Smith notes, we don’t generally think of copyright as a regime that is meant to protect authorial dignity or allow authors to veto uses just because they don’t like them. If it’s true that courts are providing some kind of droit moral in political use cases, that is notable in and of itself.
Smith’s article also revisits the well-known tension between copyright and the First Amendment. In Eldred v. Ashcroft, the Supreme Court suggested that the fair use doctrine internalized First Amendment values in copyright law. See 537 U.S. 186, 221 (2003). The pattern of fair use cases Smith identifies may be seen as an encouraging sign that fair use is flexibly adapting to core political speech. But Smith expresses concern that, at least in some cases, courts may actually be applying the fair use framework too rigidly: “Political fair use appears overinclusive in certain scenarios when it fails to account for the privacy, dignity, or economic interests of creators of works that happen to embody a political nature and underinclusive in other scenarios in which concerns for the dignity rights of a creator seem to override common applications of parody, transformativeness, and fair use.” In either case, Smith’s article adds an important new facet to this discussion as well.
Dec 10, 2021 Julia Haines
Camilla Alexandra Hrdy, The Value in Secrecy (Sep. 16, 2021), available at SSRN.
What makes a secret a trade secret worth enforcing? Trade secrets have traveled a bumpy path from the basis for a common law gut feeling about unfair competition to a federally enforced intellectual property right. Because almost any kind of “information” can be a trade secret, trade secret doctrine has a reputation for being a fact-soaked free-for-all. This is especially true when compared to the contours of copyrights, trademarks, and patents, which are strict at least in theory.
In The Value in Secrecy, Professor Camilla A. Hrdy calls for legal rigor and, despite the reputation that trade secrets are an opportunity to air grievances about contract loopholes rather than valid assets, finds that rigor in the statutory text and in courts’ recent rulings. Her hot-off-the-presses article posits that “independent economic value” is, descriptively and normatively, a meaningful threshold for trade-secret eligibility. Although a reader unfamiliar with the ins and outs of trade secret law may get a bit lost in the nuances, Professor Hrdy’s article on this fast-developing doctrine also illuminates to the generalist both the divide between legal theory and legal practice and the importance of framing a narrative to fit a court’s expectations.
Centrally, Professor Hrdy argues that a trade secret must have commercial value that is caused by its secrecy. This idea naturally emerged from the history of trade secrets in American jurisprudence and modern statutory manifestations (including the UTSA, developed as a uniform proposal that many states have adopted as-is, and the federal DTSA, passed by Congress in 2016). But it challenges common wisdom that the “value” requirement is as lifeless as the “utility” requirement in patent law.
It also challenges revered scholarship, such as William Landes and Richard Posner’s landmark economics work, holding that a trade secret’s value is self-enforcing; that is, a secret is necessarily valuable if someone went to the trouble of litigating it. It’s like the old joke in which an economist won’t acknowledge a $100 bill lying on the ground because, if it were really there, someone would have already picked it up. Well, not in Professor Hrdy’s view. She believes that both the text of the law and practical concerns, which so often sour good theories, make independent economic value not just worth considering but necessary.
First, she explains that “independent economic value” should be understood as a derivation of “competitive advantage,” which has always undergirded the character of a trade secret. “Information” is otherwise too broad a definition for intellectual property, particularly in light of the finicky web around copyrights, trademarks, and of course, patents. Professor Hrdy suggests that trade secrets, too, must fit into a standard rather than simply stretching out into a residual category. The “competitive advantage” requirement means that a secret does not become a trade secret unless it provides a commercial edge over rivals. And independence requires that the secrecy of the information itself provides that competitive advantage. The secrecy of a password, while important, does not itself confer an advantage. But the allure of a secret sauce? Absolutely.
Second, she identifies the influence of this standard in case outcomes from the last five years. Courts with the opportunity to offer fresh interpretations of the DTSA have been scrutinizing the character of “information” to determine whether it qualifies as a trade secret as a matter of law. The myriad paths that courts have taken appear to lead to the same place. That is, courts have used a variety of circumstantial and direct methods to detect, whether they state it explicitly or not, independent economic value. Professor Hrdy’s review determined that courts probed the following: whether the value is commercial (rather than, for example, relational); whether it is valuable enough (albeit suggesting that “triviality” remains the rather low bar); whether the value was added because the information is secret (a masked form of causation); and whether the information has foreseeable or actual value (not a futuristic whim or outdated notes).
Professor Hrdy’s article is noteworthy for her success in imposing a framework on a doctrine that many commentators have dismissed as a scattershot cause of action—one that depends more on storytelling than doctrine. Her job wasn’t easy. She admits at least one type of information (religious scriptures) that was found to be a protected trade secret in one forum and unprotected in another. However, Professor Hrdy insists that the effort is worth it to prevent unreasonable lawsuits. She does not employ the term “trade secret troll,” but she cites a few articles that do. Trade secret litigation can and has been used to harass firms and former employees who find it too expensive to participate in a legal drama.
The article could do more to highlight that trade secrets, particularly, open the door to fishing expeditions into competitors’ business practices in ways that overwhelm the comfort of a discovery protective order. For former employees, naturally vulnerable defendants, the very threat of years-long litigation is the de facto enforcement for outrageous non-compete agreements. Even without these details, however, Professor Hrdy’s point is clear: a federal cause of action for trade secret protection opens the doors to colorable claims that can be cataclysmic for small actors, and judges should be able to close the courtroom doors to secrets that are asserted for this secondary value (i.e., leverage) rather than for independent economic value.