The Journal of Things We Like (Lots)

Taxonomizing Various Mechanisms for Terminating Intellectual Property Rights

Dave Fagundes & Aaron Perzanowski, How IP Ends, Emory Legal Studies Research Paper (forthcoming), available at SSRN (Jan. 24, 2025).

The literature on intellectual property (IP) rights has overwhelmingly focused on their birth, life, and enforcement. Rare indeed have been articles about how and why those rights cease to be enforceable.

In How IP Ends, Fagundes and Perzanowski have written the first comprehensive article about the different ways that IP rights can cease to exist under the four most common IP regimes (patents, copyrights, trademarks, and trade secrets) through what they call “terminal mechanisms”: expiration, abandonment, forfeiture, and two types of invalidation (distinguished below).

The article’s first section provides a taxonomy of these terminal mechanisms to enable IP scholars to overcome the “impoverished vocabulary” we have historically had to discuss various ways that IP rights can end. The taxonomy aims to sharpen distinctions among the mechanisms, such as abandonment and forfeiture, which IP scholars and judges sometimes fail to appreciate.

Abandonment, the article observes, requires (or at least should require) a subjective intent to relinquish IP rights, coupled with an overt act manifesting that intent. Forfeiture, by contrast, is an unintentional relinquishment of rights (e.g., as when an author of a work forgets to file for a renewal of her copyright).

IP scholars and courts sometimes loosely refer to these two concepts as if they are the same thing. Fagundes and Perzanowski maintain that their emphasis on sharpening such distinctions is “more than terminological scolding.” It is instead a way to enrich our understanding of the mechanisms and their role in the IP system, and improve our vocabulary about them.

Another important distinction they make is between ex post invalidation and ab initio invalidation. The authors define ex post invalidation as a type in which the IP right was at one time valid and enforceable, but ceases to be valid owing to changed circumstances (e.g., a once valid trademark becomes generic for a type of product). Ab initio invalidation, by contrast, covers IP rights that were granted but are subsequently determined to have been invalid from the outset (e.g., a patent that gets struck down on obviousness grounds).

Although ab initio invalidations are about IP rights that should never have been recognized, those rights may have been plausibly claimed and the claimants may have asserted them against others or persuaded third parties to license. These claimed rights have a practical significance that deserves to be recognized. After all, IP laws do not require claimants of ab initio invalid IP rights to pay back, after invalidation, any royalties they may have collected when asserting those rights against some users.

A second section of How IP Ends discusses various policy rationales for IP right terminations. Most familiar is enrichment of the public domain after patents and copyrights expire. Another familiar rationale is the role that IP terminations play in screening out low value IP rights (e.g., when a patentee decides not to pay a maintenance fee because it regards the patent as no longer sufficiently valuable to justify paying the fee). A third familiar rationale is correcting errors (e.g., PTAB reviewing patent claims based on prior art patent examiners missed).

The most surprising category of policy rationales described in the Article is the one that promotes the autonomy interests of IP rights holders. If I as a creator have rights in a particular IP asset and I decide I want to relinquish those rights, then my personal autonomy interests give me the power to do so. While I agree that there is something to be said for this, it seems an unlikely rationale for ending IP rights in the modern era.

The most stimulating and intellectually rich part of How IP Ends is its third section, which presents a series of thought experiments about how IP rights could end differently. The authors pose the question: what about creating a termination mechanism for patents or copyrights (especially the latter) if the right holder fails to exercise those rights after a period of time? After all, trademark rights can cease to exist after long periods of nonuse, so why not other forms of IP?

Reinstating formalities in copyright law is another mechanism the authors consider. It used to be so easy to dedicate original works of authorship to the public domain (e.g., publishing copies of the works without copyright notices). Now there are too many copyrights and no certain way to dedicate them to the public domain. Bringing back formalities at least for U.S. works is possible, even if unlikely, but it is a good thought experiment topic.

And why shouldn’t trade secrets and trademarks expire like patents and copyrights do? These and other questions will tease the minds of all thoughtful IP scholars. This is an article that really needed to be written.

Cite as: Pamela Samuelson, Taxonomizing Various Mechanisms for Terminating Intellectual Property Rights, JOTWELL (May 12, 2025) (reviewing Dave Fagundes & Aaron Perzanowski, How IP Ends, Emory Legal Studies Research Paper (forthcoming), available at SSRN (Jan. 24, 2025)), https://ip.jotwell.com/taxonomizing-various-mechanisms-for-terminating-intellectual-property-rights/.

McFarlin on “Infringing Uses” After Warhol

Timothy J. McFarlin, Infringing Uses, Not Works, 76 S.C. L. Rev. 103 (2024).

In Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith,1 the Supreme Court considered whether the licensing of one of Andy Warhol’s Prince Series works for the purpose of illustrating a magazine story about the musician Prince was a fair use of the Lynn Goldsmith photograph on which the Warhol work was based. In that opinion, the Court made two important interventions into the law of fair use.

First, the Court held that “[t]he fair-use provision, and the first statutory factor in particular, requires an analysis of the specific use of a copyrighted work that is alleged to be an infringement.”2 “The same copying may be fair,” the Court said, “when used for one purpose but not another.”3 Accordingly, the Court focused on the challenged use—the license to Condé Nast to illustrate a cover story about Prince’s life—and did not rule on whether the particular Warhol work licensed, Orange Prince, or any of the other Warhol Prince Series works, or any other conceivable use of Orange Prince, was or was not fair use.

As all copyright lawyers know, since the Supreme Court’s decision in Campbell v. Acuff-Rose Music, Inc.,4 the question of transformativeness has become a critical element of the fair use analysis, with works deemed transformative significantly more likely to be fair use. The Warhol Court’s second intervention was to make clear that a defendant’s use of a plaintiff’s work is not necessarily transformative just because that use contains a new meaning or message. Something more is required—the defendant’s use must have a different purpose than the plaintiff’s.5

That different purpose, the Court held, was lacking. Of particular significance to the majority, both Goldsmith’s photograph and Warhol’s Orange Prince artwork had been used to illustrate magazine stories about Prince. The particular challenged use—the Warhol Foundation’s licensing of Orange Prince to Condé Nast—was the same as a principal use of the Goldsmith photograph. The use was therefore a potential substitute for the licensing of Goldsmith’s work.6

Tim McFarlin’s new piece in the South Carolina Law Review, titled Infringing Uses, Not Works, explores the consequences of the Warhol Court’s first intervention—i.e., its focus on the particular use made by the defendant of the plaintiff’s copyrighted work. In McFarlin’s telling, that intervention will reshape how courts handle infringement lawsuits. In particular, McFarlin argues that Warhol’s use focus is likely to transform how courts fashion injunctions in copyright infringement cases. The reason to expect this transformation, McFarlin explains, is rooted in the logic of the Warhol opinion itself, which strongly suggested that while the Warhol Foundation’s particular use might compete with the plaintiff’s work, and therefore fail to qualify as a fair use, other uses that we might ordinarily expect the Foundation to make of the Prince Series works would be unlikely to compete, and therefore would be much more likely to qualify as fair:

First, though Justice Sotomayor could have refrained (due to Goldsmith’s narrowing of her claim) from discussing other uses of Orange Prince, she did discuss them. And she did so in a way that highlighted their likely noninfringing nature: the Court’s own reproduction, public display, and distribution of both Orange Prince and Goldsmith’s photo within its opinions was covered by fair use, Sotomayor noted, just as that doctrine would likely cover using Orange Prince for the purpose of teaching.

Second, Justice Sotomayor also discussed the difference between an infringing and noninfringing use of Warhol’s iconic Campbell’s Soup Cans. Reproducing, publicly displaying, and distributing the “Campbell’s Soup” logo in a series of paintings for collectors and museums was likely noninfringing, Sotomayor posited, while licensing the work to a competing soup business to be reproduced, publicly displayed, and distributed in grocery stores likely would be infringing. (P. 112 (internal citations omitted).)

Warhol’s “use focus,” McFarlin argues, means that fair use is no longer a categorical analysis—in the ordinary case, a work is likely to be neither wholly inside nor wholly outside the scope of fair use, but rather partly on both sides of the line, with different uses either qualifying or failing to qualify as fair. In a forthcoming article, I argue that the Warhol Court’s focus on the competitive consequences of the defendant’s particular use is more likely, over time, to expand the scope of fair use than to contract it.7 McFarlin focuses on copyright infringement remedies to make a related point—Warhol’s use focus should, if it is heeded, reduce the scope of injunctive relief as well as impoundment and destruction orders in many copyright infringement cases.

To launch his argument, McFarlin refers to the injunction granted in a recent case, Tolkien Tr. v. Polychron.8 There, Judge Stephen V. Wilson of the Central District of California issued both (1) a worldwide injunction against the further publication of the Fellowship of the King, a work of fanfiction written without the Tolkien Estate’s consent, as well as (2) an Order that the writer, Demetrious Polychron, “[p]ermanently destroy all physical and electronic copies of the Infringing Work.” (P. 114 (internal citations omitted).) McFarlin says that an injunction that respects Warhol’s use focus would have been much narrower:

[F]irst, whether a derivative work, like the Fellowship of the King, infringes copyright must be judged use by use, per Warhol. The sharing of the Fellowship of the King for free and within a fanfiction community would present a strong case for a fair (and therefore noninfringing) use. The retention of a copy for Polychron’s own private, personal joy would pose yet a stronger case. So the injunction should have targeted only the commercial sale of Fellowship of the King, the sole use litigated in the case.

Second, it appears that the infringed works may now be out of copyright in New Zealand, where the Lord of the Rings copyrights apparently expired on January 1, 2024. So the injunction, issued by a U.S. court and applying only U.S. copyright law, should not have been worldwide in scope.

Third, even in the U.S., the infringed works’ copyrights will expire: likely in 2046 for the Fellowship of the Ring, 2049 for The Two Towers, and 2050 for the Return of the King. So the court should not have ordered the destruction of all copies of the work, given that at some point all uses—even commercially competitive ones—will be noninfringing. (P. 116-17.)

I’ll leave the remainder of McFarlin’s valuable article for your reading—including the parts where he argues how best to reconcile Warhol’s use focus with 17 U.S.C. § 103(a)’s seemingly categorical rule regarding the uncopyrightability of infringing derivative works, as well as his predictions regarding the impact of Warhol’s use focus on generative AI—but suffice now to say that McFarlin’s article is a valuable early entry in what will be a lengthy period in which the implications of the Warhol Court’s two big holdings are digested (or not) by the law.

  1. 598 U.S. 508 (2023).
  2. 598 U.S. at 533 (internal quotation marks omitted) (emphasis added).
  3. Id.
  4. 510 U.S. 569 (1994).
  5. Warhol, 598 U.S. at 529.
  6. Id. at 556 (Gorsuch, J., concurring) (“[H]ere, the undisputed facts reveal that the Foundation sought to use its image as a commercial substitute for Ms. Goldsmith’s photograph”); see also id. at 578 (Kagan, J., dissenting) (“[T]he majority conducts a kind of market analysis: Warhol, the majority says, licensed his portrait of Prince to a magazine that Goldsmith could have licensed her photo to—and so may have caused her economic harm.”).
  7. Christopher J. Sprigman, Copyright, Meet Antitrust: The Supreme Court’s Warhol Decision and the Rise of Competition Analysis in Fair Use, 134 Yale L.J.F. 298 (2025).
  8. No. 2:23-cv-04300-SVW-E, 2023 U.S. Dist. LEXIS 226135, at *34 (C.D. Cal. Dec. 14, 2023).
Cite as: Christopher J. Sprigman, McFarlin on “Infringing Uses” After Warhol, JOTWELL (April 14, 2025) (reviewing Timothy J. McFarlin, Infringing Uses, Not Works, 76 S.C. L. Rev. 103 (2024)), https://ip.jotwell.com/mcfarlin-on-infringing-uses-after-warhol/.

Government Suppression of IP

Doni Bloomfield, Intellectual Antiproperty: Export Controls and the Transformation of IP (Jan. 13, 2025), available at SSRN.

Intellectual property laws are government policies to encourage the creation and dissemination of information. But there are also laws allowing the U.S. government to suppress IP-protected technical knowledge, and Doni Bloomfield’s insightful article argues that IP scholars should pay more attention to these forms of “intellectual antiproperty.” Just as intellectual property laws allow innovators to capture some positive externalities of their efforts, Bloomfield argues that intellectual antiproperty laws address some of the negative externalities—at least as they relate to national security. And with increasing global competition between the U.S. and Chinese governments, these laws are likely to grow in importance.

Bloomfield focuses on two legal channels of information suppression, which cover information protected under trade secret law and patent law. First, U.S. export-control laws allow the State and Commerce Departments to restrict transfers of thousands of categories of proprietary U.S. technologies. For example, these agencies can bar a U.S. firm from sharing covered information with non-U.S. employees within the United States. The U.S. government has even claimed authority over extraterritorial sales involving U.S. know-how, such as fining an Irish firm for selling disk drives to China. Second, the Invention Secrecy Act allows the U.S. Patent and Trademark Office to impose secrecy orders on patent applications that pose a national-security risk, including to restrict use to classified contexts or to bar disclosure altogether.

Government suppression of information isn’t new. Bloomfield engagingly documents the historical roots of these legal regimes, including the increasing use of industrial IP to define the limits of the government’s authority and to balance free-speech interests with national-security concerns. These regimes stem from WWI and WWII, when Congress passed laws allowing the government to restrict trade and to impose patent secrecy orders—and to more broadly censor other types of information. During the Cold War, the government dropped these broader speech controls and focused intellectual antiproperty laws on industrial IP. Enforcement varied over the following decades. A general liberalization of export controls during the Clinton, Bush, and Obama presidencies gave way to a spike in enforcement during the first Trump presidency due to rising concerns about China. President Biden’s administration tightened restrictions even further, and the second Trump administration seems likely to continue this course.

Bloomfield is right that these intellectual antiproperty laws deserve attention from IP scholars and teachers. For one thing, they are important limits on trade secrets and patents that are increasingly relevant for IP practice. As Bloomfield summarizes, these regimes “in effect added a new, enduring clause to the traditional IP bargain, requiring those seeking industrial IP rights to give the government the power to restrict knowledge about that invention.” They may help push innovators to use trade secrets rather than patents for inventions that implicate national security interests to avoid the broader scope of a patent-secrecy order. And they may have unintended consequences for IP boundaries; for example, the trend toward increasing use of confidentiality agreements (including to circumvent bans on noncompetes) may expand the government’s authority to restrict the covered information.

Bloomfield’s article also invites IP scholars to think more broadly about how and when the government should suppress IP. If there is a century-old tradition of government interference with IP rights to address national-security concerns, can this help justify government limits on IP to address other kinds of negative externalities? Could these export-control laws serve as a template for other limits—or is national security different? What are the best institutional designs for balancing the costs and benefits related to different externalities, and how well do U.S. institutions achieve that balance? Is restricting intellectual antiproperty to industrial IP sufficient to prevent First Amendment harms, or does this undervalue the speech interests that can arise in patent and trade secret cases?

As with most articles I like (lots), Bloomfield’s Intellectual Antiproperty raises as many questions as it answers, laying the foundation for an ambitious research agenda. It is recommended reading both for scholars focused on problems internal to IP, and for the growing number of scholars considering how IP laws intersect with other laws and policies for regulating information.

Cite as: Lisa Larrimore Ouellette, Government Suppression of IP, JOTWELL (March 14, 2025) (reviewing Doni Bloomfield, Intellectual Antiproperty: Export Controls and the Transformation of IP (Jan. 13, 2025), available at SSRN), https://ip.jotwell.com/government-suppression-of-ip/.

Patent Law: What, Like it’s Hard?

Amanda Levendowski, Hard Truths About “Soft IP”, 124 Colum. L. Rev. F. 102 (2024).

Imagine for a moment that you are a law student. You are interested in intellectual property law but you don’t qualify to sit for the patent bar. You know that means you can’t do patent prosecution work. You’re not too worried about that because you’re more interested in trademark or copyright work anyway. But still you wonder: How can you concisely convey your professional interests to your professors, peers, and potential employers? One common answer is to use the phrase “soft IP.”

As Amanda Levendowski points out, “soft IP” is frequently used to distinguish trademark and copyright law from patent law (despite longstanding criticism from Eric Goldman). But, she argues, this phrase “reflects implicit biases against copyright and trademark doctrine and practitioners.” (P. 102.) Levendowski demonstrates that the phrase “offers no conceptual, doctrinal, historical, operational, definitional, or practical clarity.” (P. 109.) So, she asks: “What’s so soft about ‘soft IP’?” (P. 102.) One answer, she suggests, is that the fields of copyright and trademark law tend to be associated with women:

Society has projected softness onto women for centuries. Softness can be powerful. It’s resilient. It’s flexible. It’s supportive. But I have never heard “soft IP” invoked to signify the strengths of softness, all of which happen to be indispensable to effective lawyering. Rather, fields and skills associated with women are routinely disregarded for being “soft” in ways that reflect a lack of respect for women and the rigor of their work. (P. 111.)

Thus the phrase “soft IP,” Levendowski writes, “suggest[s] that people who practice copyright and trademark law are, unlike patent practitioners, less up to a hard challenge.” (P. 111.) This, according to Levendowski, “is perhaps the most insidious aspect of ‘soft IP’: By implying that copyright and trademark work is not hard, the term erases that being a woman in any practice of law can be very hard.” (P. 112.)

And what’s so hard about “hard IP” anyway? As Levendowski notes, patent law isn’t intellectually “harder” than other forms of IP law or practice. For example, she notes that it’s not “harder” in the sense of being physically more tangible. And patent protection isn’t limited to inventions in the “hard sciences”—as Levendowski observes, utility patents can be used to protect a pool full of sprinkles, and design patents exist as well.

Indeed, the recent battle over the creation of the new design patent bar illustrates many of the points Levendowski makes in her article.

In 2022, the U.S. Patent & Trademark Office (USPTO) released a request for comments seeking input on, among other things, the question of “whether the creation of a separate design patent practitioner bar would be beneficial to the public and the Office.” In January 2024, it started allowing people who have certain art or design backgrounds (specifically, those whose backgrounds would qualify to be design patent examiners) to apply to join a new design patent bar.

No matter how many people end up applying to or joining this new design patent bar, this change constitutes an important recognition of the value that people with art and design backgrounds can bring to design patent law. It is also an important corrective to the rule that existed before, under which someone with an art degree was deemed competent to examine design patents but not to prosecute them. That rule made no sense.

Nonetheless, a number of patent attorneys filed public comments opposing any change to the existing gatekeeping system. Some of these comments reflected the kinds of biases Levendowski describes, suggesting that people who don’t have science and technical degrees aren’t as smart—and haven’t worked as hard—as people who qualify for the general patent bar. For example, one commenter who appears to be a patent attorney suggested that anyone without a science or technical degree would necessarily be a “second-rate practitioner.” Another anonymous commenter, who claimed to be a patent attorney, argued that letting people without science or technical degrees prosecute patents would constitute “lowering the bar” and even suggested that people with other degrees lacked “merit” and a “willingness to follow the rules.” The AIPLA suggested that attorneys without science or technical degrees are less ethical and more likely to fail to adequately advise and refer their clients. (For one response to this accusation, see my own public comments.)

In some ways, it’s astonishing to see attorneys express such open disdain for their fellow professionals and to publicly embrace these kinds of stereotypes. On the other hand, perhaps it’s not surprising in light of the culture and dynamics that Levendowski describes. If you have spent your career denigrating certain colleagues’ work as “soft” perhaps it’s not surprising that you see them as a “lower” level of practitioner. This may be especially true in a world where those colleagues have accepted—and perhaps even themselves adopted—the “soft IP” label.

Like Levendowski, I’ve used the term “soft IP” in the past. As a student and a young attorney, it seemed like a helpful shorthand to describe the kind of work I wanted to do. And yet, I was never fully comfortable with the phrase—especially when it was used by patent attorneys. Levendowski’s sharp, smart, and incisive essay helped crystallize, at least in my own mind, the sources of and reasons for that discomfort. In the end, she makes a compelling case for abandoning the phrase “soft IP.” This essay is well worth your time—and that of your students.

Cite as: Sarah Fackrell, Patent Law: What, Like it’s Hard?, JOTWELL (February 11, 2025) (reviewing Amanda Levendowski, Hard Truths About “Soft IP”, 124 Colum. L. Rev. F. 102 (2024)), https://ip.jotwell.com/patent-law-what-like-its-hard/.

A Stepwise Approach to Copyright and Generative Artificial Intelligence

Katherine Lee, A. Feder Cooper, & James Grimmelmann, Talkin’ ’Bout AI Generation: Copyright and the Generative-AI Supply Chain, __ J. Copyright Soc’y U.S.A. __ (forthcoming, 2024), available at SSRN (July 27, 2023).

In order to understand whether generative AI may infringe copyrights, one must first have a sound grounding in the technical complexities of the “generative AI supply chain.” This Article not only explains the technology in terms accessible to a legal audience, but also explores the doctrinal complexities of how generative AI maps onto existing copyright law. The authors do an admirable job in accomplishing both goals.

I. Understanding Generative AI

This jot highlights four key technical points made in the article about generative AI that a copyright-interested legal reader needs to understand.

1. It is all math. Even though the outputs of generative AI models may be expressed as text or images, the processes by which the models are trained and operate rely on converting the expressive inputs, including works of authorship, into structured numeric values that can be used in various mathematical operations to produce numeric outputs, which are then translated back into text and images. The math used in the operation of these models is fourth-grade math carried out by the machine equivalent of an army of billions or trillions of fourth graders.1
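The “it is all math” point can be made concrete with a toy sketch. This is purely my illustration, not drawn from the article: the vocabulary, embeddings, and weights below are invented stand-ins for what a real model would learn from training. The point is that every step between the input word and the output word is nothing but multiply-and-add arithmetic on numbers.

```python
import random

# Illustrative toy only: a real model learns its parameters; these are random.
vocab = ["the", "cat", "sat", "mat"]          # tiny hypothetical vocabulary
to_id = {w: i for i, w in enumerate(vocab)}   # text -> integer ids

random.seed(0)
DIM = 8
# Each word becomes a list of 8 numbers; the weight matrix stands in for
# learned parameters (here, just random numbers).
embeddings = [[random.gauss(0, 1) for _ in range(DIM)] for _ in vocab]
weights = [[random.gauss(0, 1) for _ in vocab] for _ in range(DIM)]

def next_word(word: str) -> str:
    x = embeddings[to_id[word]]                       # text converted to numbers
    scores = [sum(x[d] * weights[d][j] for d in range(DIM))
              for j in range(len(vocab))]             # multiply-and-add only
    return vocab[max(range(len(vocab)), key=lambda j: scores[j])]  # numbers -> text

print(next_word("cat"))  # prints some word from the vocabulary
```

The output is meaningless here because the parameters are random; the provenance is what matters—expressive input in, arithmetic throughout, expressive-looking output back out.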

2. Generative AI models are capable of discerning patterns in the entire corpus of works in the training data, not just in individual works on which the models were trained. Works of authorship used in training are treated as sources for pattern analysis of the constituent elements of textual or visual works, so “an item like a painting or a book is not itself data; rather it can be processed computationally to be converted into data to be used in machine-learning applications.” (P. 10.) If a generative AI model produces an output that looks like a copy of a work in the training data, that is not because the training process retained any sense of those patterns as part of an individual work.

3. Generative AI technologies are prediction engines, not knowledge bases. A generative AI model or system is generally designed to predict outputs that a user would find to be responsive to a prompt. The article explains that some relatively recent technical advances in transformer architecture and diffusion techniques have improved these systems’ predictive power. Once granular patterns in the training data set are in place, the model or system is trained to generate an output at a similar level of granularity.
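A minimal sketch of the “prediction engine” idea, again my own illustration rather than the article’s: the system assigns each candidate next token a score and converts those scores into a probability distribution; generation is just repeatedly predicting a likely next token. The candidate tokens and scores below are hypothetical.

```python
import math

def softmax(scores):
    # Convert raw scores into probabilities that sum to 1.
    m = max(scores)                          # subtract the max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["fair", "use", "doctrine"]     # hypothetical next tokens
scores = [2.0, 1.0, 0.1]                     # hypothetical model scores

probs = softmax(scores)
prediction = candidates[probs.index(max(probs))]
print(prediction)  # prints "fair", the highest-scored candidate
```

Nothing in this loop consults a store of facts; the system is choosing among candidates by predicted likelihood, which is why these technologies are prediction engines rather than knowledge bases.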

4. Scale matters. The amount of data required to train today’s most advanced generative AI models is unprecedented in scale. The greatly enlarged scale of training data is primarily responsible for the increasing utility of these technologies and their ability to produce results that surprise even those who built them.

II. Copyright and the Generative AI Supply Chain

The article breaks the stages of preparation and operation of a generative AI system into a “supply chain” comprising component elements: from creation of expressive works, to dataset collection and curation, to model (pre-)training (creation of a base model), to fine-tuning, to system deployment for output generation, which may be followed by model alignment. The value of this decomposition lies in showing that different actors can play different roles at different stages of the generative AI supply chain. This has implications for who might have copyright liability at which stages of that supply chain.

With respect to authorship and ownership, the article identifies a range of potential authors in the supply chain. Beyond the copyrightable source works themselves, compiling training datasets may sometimes exhibit sufficient originality to give rise to at least thin copyrights. In many cases, creating a base model may not be sufficiently original, but the article identifies some instances in which the model could be considered a work of authorship. Fine-tuning a model raises similar issues, but the work of fine-tuning lends itself to a potentially wider array of creative choices that could underpin copyrightability claims. Who, if anyone, is the author of generated outputs? The article works through four possibilities: (1) authors of works in the training data, (2) some entity in the generative-AI supply chain (e.g., the model trainer, model fine-tuner, or application developer), (3) the user who prompted the system, or (4) no one.

The article discusses potential liability by organizing the discussion around each exclusive right’s application to each stage of the supply chain. The liability analysis covers reproductions made in compiling and structuring the training datasets as well as in generating outputs.

After providing some interesting examples of “memorized” outputs (that is, outputs that are substantially similar to particular inputs) and techniques by which determined users can circumvent internal controls designed to prevent memorized outputs from being generated, the article usefully lays out the liability matrix for direct and indirect liability. The article also discusses why a provider of a deployed generative AI service may not be able to assert the Digital Millennium Copyright Act’s safe harbor protection.2

III. Concerns About the Fair Use Discussion

While the article generally does a good job of educating the reader about relevant copyright issues on which reasonable minds could disagree, it makes some assertions about fair use law that are reasonably contestable. The authors earlier acknowledge room for reasonable disagreement about whether models should be deemed to contain copies of individual works; that acknowledgment should carry over to the fair use discussion. The article should also remind the reader that fair use applies only if the elements of a prima facie infringement case have first been met.

In the fair use discussion, I regard the argument that using works to train generative AI models is a transformative use as much stronger than the article suggests.

In addition, the article should distinguish between existing or emerging licensing markets for access and those for uses. (See pp. 947-949 (elaborating on this point in the text and data mining context).) The market for access licenses, in my view, has no bearing on whether, under the fourth fair use factor, training on accessible works interferes with a market for use licenses.

IV. Conclusion

Despite these points of disagreement, this article makes a very valuable contribution to the literature. Its careful explanation of the technology and its readable application of relevant doctrine is impressive. As the law and technology continue to evolve, this article will stand out as a marker of where we were in the early days of copyright law’s application to generative artificial intelligence.

  1. See, e.g., pijipvideo, Copyright and Generative AI – Prof Michael Carroll and Assistant Prof Charles Duan, YouTube (Nov. 5, 2023).
  2. See 17 U.S.C. §§ 512(c)-(d).
Cite as: Michael W. Carroll, A Stepwise Approach to Copyright and Generative Artificial Intelligence, JOTWELL (January 14, 2025) (reviewing Katherine Lee, A. Feder Cooper, & James Grimmelmann, Talkin’ ’Bout AI Generation: Copyright and the Generative-AI Supply Chain, __ J. Copyright Soc’y U.S.A. __ (forthcoming, 2024), available at SSRN (July 27, 2023)), https://ip.jotwell.com/a-stepwise-approach-to-copyright-and-generative-artificial-intelligence/.

Fraud, Delays, and Nonsense (Marks) . . . Oh My!

Jeanne Fromer & Mark McKenna, Amazon's Quiet Overhaul of the Trademark System, 113 Cal. L. Rev. __ (forthcoming, 2025), available at SSRN (June 19, 2024).

“Yes! that’s it! that’s it!” Those were my words when I first saw the title of Jeanne Fromer & Mark McKenna’s paper, Amazon’s Quiet Overhaul of the Trademark System. I was sure that they had identified the culprit behind the seismic shifts in trademark law that I had sensed were taking place over the past several years. I was thrilled that they were going to put together the pieces, some of which I had personally encountered, of a bizarre and important puzzle. They did not disappoint.

Trademarks ostensibly exist to reduce consumer search costs and protect the goodwill brand owners generate in their source-identifying marks (as associated with particular goods or services). They do this by being distinctive, with fanciful (made-up words like “Exxon”) and arbitrary marks (e.g., “apple” for computers) receiving the strongest protection, and, along with suggestive marks (e.g., “Coppertone” for sunscreen), being deemed inherently distinctive, while descriptive marks (e.g., “Honey-baked Ham” for ham with a sweet glaze) may acquire distinctiveness. Generic terms, the common name for a product (e.g., “apple” for apples), are not protectible as trademarks, because other product sellers need to be able to use the common names of goods for their own products. This last restriction reflects trademark law’s concern with the preservation of competition in the marketplace.

Trademark rights in the U.S. are acquired by use, which can be quite helpful for small business owners who may not be able to afford federal registration when launching. However, federal registration with the United States Patent and Trademark Office (PTO) does provide a number of helpful benefits, including nationwide constructive use even in unentered markets.

As compellingly described by Fromer and McKenna, Amazon, through its “Brand Registry Program”, has singlehandedly turned many of these long-held understandings and goals on their head, transforming trademark law in profoundly disturbing ways. The Brand Registry Program rewards owners of federally registered trademarks with a cheap and efficient dispute resolution process, which allows them both to object to uses of their mark on the website and to gain a higher priority in search results than if their mark were unregistered.

Fromer and McKenna identify at least ten impacts of Amazon’s program on the trademark system: Amazon 1) created a shadow trademark system that incentivized applicants to file for federal trademark registrations, which 2) swamped the PTO, resulting in 3) significant delays for all applicants (plus a cluttered register) and opportunities for bad actors to 4) fraudulently register “in use” but unregistered marks and then 5) extort the legitimate owners (often small entities relying on traditional trademark priority rules), who would now risk losing the ability to sell goods on Amazon.

In addition, the authors detail how Amazon’s policies led to a change not just in the magnitude of registrations but also 6) in the kinds of marks being registered, nullifying the effect of historical limitations on the registration of 7) descriptive and generic terms (bad actors can register them with the USPTO as stylized marks or with disclaimers, but Amazon’s Registry only matches text, so those legal limitations are ignored) while also giving rise to a wide variety of 8) “nonsense marks” (if you’ve shopped on Amazon, you’ve seen them), strings of unpronounceable letters that have no meaning to consumers but qualify for Amazon’s registry, which then favors them in search results. As the authors explain, “when search and purchase are not necessarily done by people who need to remember a brand name, businesses just need something to make the algorithm prefer them. Nonsense will do.”

And if these externalities of Amazon’s Brand Registry Program were not bad enough, the company has simultaneously 9) elevated its own brands on its site so that one is more likely to see Amazon brands first in searches, even in searches specifically for other brands. This gives Amazon more power relative to third-party brands and 10) “decenters” branding by blunting the traditional source identification and search cost reduction benefits branding is designed to provide.

So, what to do? Because Amazon’s program (intentionally or not) is influencing the behavior of parties within the legal system, seemingly to the detriment of public policy goals for trademark law, the authors wisely propose actions for the PTO, Amazon, and the trademark system itself.

For the PTO, some self-reflection on its approach to dealing with these shifts in behavior is in order. This should include, inter alia, a reevaluation of its process of allowing registration of descriptive or generic terms with stylization or disclaimers, as well as the possibility of higher fees for such registrations in light of the higher costs imposed on the trademark system. Also advocated is a reconsideration of the registrability of nonsense marks which, as the authors note, are not like normal coined terms (as they can only be compared with other marks for sight, not sound or meaning) and do not actually fit anywhere on the Abercrombie spectrum of distinctiveness.

For Amazon, while it has already begun cooperating with the PTO on identifying fraudulent filings, it can and should do more. Fromer and McKenna’s suggestions that Amazon curtail its acceptance into the Brand Registry of marks based on a pending application (which may never register and which arguably further incentivizes fraudulent filings that rely on the PTO backlog) and only allow a mark in the registry when it is registered in standard text, seem both sensible and eminently doable.

Finally, for the trademark system, the success of Amazon’s search algorithm as a replacement for much of the search-cost-reducing, information-conveying function of trademarks suggests we may not need to accept the trademark baggage of artificial product differentiation and independent brand value generation (apart from product quality) to meet consumer search efficiency goals. This may call not only for a reevaluation of the role of trademarks in the competitive marketplace, but also for an assessment of whether the dominance that Amazon’s search algorithm effectuates for its own branded products is consistent with the public good or should be addressed, if not within the trademark system, then within another legal framework. There is no single solution, but Fromer and McKenna’s completion of this trademark puzzle is a wonderfully valuable, albeit troubling, gift to us all.

Cite as: Margo Bagley, Fraud, Delays, and Nonsense (Marks) . . . Oh My!, JOTWELL (October 29, 2024) (reviewing Jeanne Fromer & Mark McKenna, Amazon's Quiet Overhaul of the Trademark System, 113 Cal. L. Rev. __ (forthcoming, 2025), available at SSRN (June 19, 2024)), https://ip.jotwell.com/fraud-delays-and-nonsense-marks-oh-my/.

How Do You Like your Books?

Guy Rub, Reimagining Digital Libraries, 113 Geo. L.J. __ (forthcoming, 2024), available at SSRN (Feb. 19, 2024).

Do you ever wonder how it is that libraries can lend books repeatedly, while copyright owners (e.g., book authors) are granted the exclusive right to distribute their copyrighted works? Or how publishers make money selling books at retail prices when a person can resell books (or buy used books) for much less (hello Amazon Used Books for under a dollar!)? The reason is copyright’s “first sale” doctrine, 17 U.S.C. § 109, which codifies the common law’s exhaustion principle: owners of lawfully made copies are allowed to dispose of those copies without regard to copyright law. In other words: the right to distribute is “exhausted” with the first sale. This means we can resell, lend, or give away our books. What we can’t do is make copies of them.

And that is where the concept of “digital lending” runs headlong into copyright law. When libraries buy books, they can lend them without restrictions. When libraries lend e-books, those e-books come with contractual provisions limiting their lending in substantial ways because reading an e-book requires making a copy, which is not exhausted by the first sale doctrine. As the Supreme Court has said (in the patent context, which has a similar first sale principle): “exhaustion applies only to the particular item sold, and not to reproductions.”1

But libraries, which Professor Rub aptly describes as “among the most cherished institutions in our society,” are increasingly engaging in e-book collection and e-lending to accommodate patron preferences and needs. E-lending enhances libraries’ core mission of providing access to books and information and is especially effective when in-person patronage is challenging, such as in rural areas and in cases of disability or a pandemic. But e-book lending is substantially more expensive for libraries, and license restrictions allow publishers to retain much more control over uses, disrupting the first sale balance that has kept libraries flourishing for so long. Professor Rub’s article describes this imbalance and the legal puzzles it raises, and then proposes some novel solutions.

In the background and haunting the article’s discussion is the case of Hachette et al. v. Internet Archive, currently pending before the U.S. Court of Appeals for the Second Circuit. That case concerns a multi-publisher lawsuit against a different kind of library: the Internet Archive, a non-profit digital-only library, which describes itself this way: “Like a paper library, we provide free access to researchers, historians, scholars, people with print disabilities, and the general public. Our mission is to provide Universal Access to All Knowledge.” The Internet Archive engages in what it calls “controlled digital lending” (CDL), whereby instead of an in-person library patron borrowing a print book (thus making that book unavailable to all other patrons), the Internet Archive patron receives a digital copy of the book (which the Archive scanned for this purpose). While the Internet Archive patron has access to the digital scan, no other patron does. CDL aims to mimic the scarcity model that is the basis of print lending.

The problem, of course, is that CDL does not precisely mimic the model of print lending, as Professor Rub explains. First, book scans are copies, and making a copy is an exclusive right of the copyright holder that is not exhausted upon purchase of a lawful copy of the book. (Whether it is fair use under certain conditions is a good question that the article addresses at some length, as does this White Paper by David Hansen and Kyle Courtney.)

Second, CDL scans do not degrade the way print books do, and so libraries do not have to replenish their stock of digital collections as they do for print books, which are often replaced after significant use. Publishers claim to rely on library replacements to turn profits. Without such repurchases, publishers assert, their revenues fall, as does their support for authors. (On whether publishers’ complaints about their continued viability in the digital age can be fully credited, I urge readers to consider The Publisher’s Play Book: A Brief History of Publishing Industry’s Obstruction of the Library Mission.)

Third, CDL is instantaneous, whereas print lending has significant friction that slows access. Friction means libraries lend print copies less frequently than e-books (or scans), and thus, to serve demand, libraries must buy more print copies than they would otherwise need to license as e-books. These differences lead Professor Rub, and the many others he cites, to conclude that digital first sale does not exist. That conclusion also means that the Internet Archive’s CDL practice is at substantial risk of being held unlawful.

That doesn’t leave libraries without options, however. Professor Rub’s article is largely devoted to saving (if not empowering) libraries and CDL practices, because as he explains, libraries all over the country face a serious financial crisis. Public libraries – the model created by Benjamin Franklin in 1790 in my neck of the woods (Franklin, Massachusetts) – are unusual around the world because they are free and open to anyone. They serve many functions beyond lending books, including anchoring communities as places to convene and learn. But their budgets are threatened by diminishing tax revenue and exploitative pricing of e-book licenses (which Professor Rub describes in detail). Moreover, state laws, such as the Maryland Library Ebook Fairness Law, which would have required book publishers to offer public libraries reasonable licenses to ebooks, have been successfully challenged as unconstitutional.

So what are some of the solutions? Professor Rub provides many possible avenues. One suggestion is to “replicat[e] the physical world” by creating friction in digital lending so that CDL resembles print borrowing. Although CDL would still involve unauthorized copying, such conditions of friction ostensibly minimize market harm. Given the public benefits of distribution, the library practice would comfortably fall within fair use.

A second option is to adopt a “digital public lending right” (ePLR) akin to the Public Lending Rights in EU countries that collect royalties for lending books (because the EU does not have the equivalent of the first sale doctrine). An ePLR would allow libraries full digital exhaustion rights as they enjoy with print books, but publishers would be entitled to collect royalties determined by a public governance body to offset revenue loss.

A third way to achieve digital lending is to support market segmentation along variables of time, usage, and wealth. For example, digital lending might be unavailable for new titles (similar to home movie streaming, which is often delayed until after theaters show new releases). Or public libraries could offer preferential digital access to patrons whose income is below a certain threshold and who are less likely to allocate their scarce resources to purchasing books (or ebooks).

My favorites among his suggestions focus on state-run public libraries (including public university libraries). These suggestions encourage states (1) to use their market power to demand favorable licensing terms from publishers, and (2) to exercise sovereign immunity if sued for copyright infringement stemming from their CDL practices. Given how important libraries are for state government and communities, encouraging states to get in the fight to save them makes a lot of sense.

Each of these suggested solutions poses problems (some of which Professor Rub acknowledges), but they are worth considering to save our most “cherished institutions.” Some of my favorite quotes about libraries explain why: “Libraries are full of ideas – perhaps the most dangerous and powerful of all weapons.” (Sarah J. Maas, from Throne of Glass) And “the only thing that you absolutely have to know, is the location of the library.” (Albert Einstein) Whether you like your books in paper or electronically (and there is some interesting empirical research on just that question), libraries should provide both, and they need help doing so. Professor Rub’s article significantly advances the conversation and hopefully recruits more lawyers and legislators to the cause of saving the free public libraries.

  1. Bowman v. Monsanto, 569 U.S. 278 (2013).
Cite as: Jessica Silbey, How Do You Like your Books?, JOTWELL (September 27, 2024) (reviewing Guy Rub, Reimagining Digital Libraries, 113 Geo. L.J. __ (forthcoming, 2024), available at SSRN (Feb. 19, 2024)), https://ip.jotwell.com/how-do-you-like-your-books/.

Service Dress: Trademark Law’s Secret Third Thing

Dustin Marlan, Tertium Quid Unveiled: Trade Dress and Service Design, 58 U.C. Davis L. Rev. __ (forthcoming, 2024/2025), available at SSRN (March 11, 2024).

In an oft-quoted moment in the Supreme Court’s Wal-Mart v. Samara opinion, Justice Scalia articulated three types of trade dress: product packaging, which can be protectable from its earliest use if deemed inherently distinctive; product design, which is only ever protectable upon a showing of secondary meaning; and a third category, “some tertium quid that is akin to product packaging,” which is also capable of being inherently distinctive. As Professor Dustin Marlan sees it, Scalia coined the phrase to save face. He “needed a conceptual mechanism for distinguishing restaurant décor (previously held capable of inherent distinctiveness [in Two Pesos]) from product design (now considered incapable of such [in Wal-Mart]), without overruling the previous Two Pesos holding outright.” In so doing, Scalia conflated trade dress for services with product packaging in a way that has sowed confusion since 2000.

So what exactly is a tertium quid—Latin for “third thing”—and why should trade dress in that category be treated as capable of inherent distinctiveness? And how have courts perpetuated the vague, amorphous idea of tertium quid for more than 24 years without any real interrogation?

In his forthcoming article, Marlan takes a magnifying glass to the phrase, working backward to consider its derivation (alchemy; religion) and then forward from Wal-Mart to explicate every trade dress case that incorporates it. What he finds is fairly straightforward. Just about every case categorizing trade dress as tertium quid has at its heart what Marlan labels “service dress”: not product design, which is the shape of a tangible good; not product packaging, which is what a tangible good gets boxed inside; but rather experiential elements of branded services, such as restaurant décor, store layout, retail displays, employee uniforms, menu presentation, “look and feel,” and other aspects of design connected with services and the places in which they’re provided. Service dress is In-N-Out’s red and white diner-style setup, the layout of a Yankee candle display shelf, the cuffs and collars worn by Chippendale’s dancers, or the design of an Apple store from floor to ceiling. Mostly service dress is about protectability of spaces, rather than objects or symbols. The tertium quid is trade dress for services, and Marlan argues that it demands decidedly different treatment from trade dress for goods when it comes to assessing distinctiveness and protectability.

Why? The primary purpose of product packaging is usually a branding purpose—it’s designed to catch the eye, communicate source-indicating information, and improve commercial efficiency by distinguishing the brand from others and enabling consumers to find what they prefer quickly and consistently. It’s possible for product packaging, like word marks and logos, to do what a trademark does from its very first use. Product design, on the other hand, is far more likely to serve utilitarian and/or aesthetic purposes. It follows that product design faces more hurdles to registration: it may be deemed functional; it may fail to function as a mark; it may require substantial investment in the form of sales and look-for advertising before it comes to serve as a source indicator to consumers. Marlan argues factfinders should treat service dress with the same caution. It certainly isn’t akin to packaging, as Scalia claimed, because “a service…has no container, being an intangible concept without material exteriority.” Instead, like product design, service dress’ primary purpose is usually something other than source identification, such as user experience, efficiency, effectiveness, quality, cost, aesthetic appeal, or desirability. In marketing parlance, service dress takes the form of “touchpoints,” which consumers tend to value for non-source-indicating reasons. So service dress doesn’t deserve the free pass it has received since Wal-Mart.

Marlan advocates instead that trademark law treat service dress as incapable of inherent distinctiveness, requiring a user to establish “commercial significance prior to removing that touchpoint from the experiential commons.” The competitive interest in copying service dress, like the competitive interest in copying product design, is substantial and benefits consumers, new entrants, and the marketplace up to the point when the service dress becomes clearly source-indicating. Requiring secondary meaning also gives competitors better notice of the scope of rights that might be asserted against them, lowering transaction costs and improving predictability. In addition, Marlan observes, trademark law lacks a consistent doctrinal test for determining inherent distinctiveness of nontraditional marks, leaving factfinders to rely heavily on intuition, so those determinations tend to be erratic and unpredictable. Determinations of secondary meaning are based on more robust information and track more objective indicators than determinations of inherent distinctiveness for trade dress.

In Tertium Quid Unveiled: Trade Dress and Service Design, Marlan gleans guidance from the field of service design and analyzes the entire set of federal “tertium quid” cases. His proposal is well-supported, persuasively argued, and far from radical, although it does require a walking-back of some case law from the last quarter-century. Once he reveals tertium quid trade dress for what it is—not something ineffable and unknowable, but simply service-related design elements—it becomes readily apparent that requiring secondary meaning would better serve the interests of fairness and competition.

Cite as: Alexandra Roberts, Service Dress: Trademark Law’s Secret Third Thing, JOTWELL (September 4, 2024) (reviewing Dustin Marlan, Tertium Quid Unveiled: Trade Dress and Service Design, 58 U.C. Davis L. Rev. __ (forthcoming, 2024/2025), available at SSRN (March 11, 2024)), https://ip.jotwell.com/service-dress-trademark-laws-secret-third-thing/.

Samuel Johnson Was a Blockhead

Andres Sawicki, The Law of Creativity?, __ Cornell L. Rev. __ (forthcoming 2025), available at SSRN (Aug. 28, 2023).

If you jostle an IP scholar awake in the middle of the night and ask them, “What is the canonical justification for copyright and patent,” the first thing they’ll say is, “What the hell are you doing in my bedroom?” But the second thing they’ll say is, “Limited exclusive rights in works of authorship and inventions are necessary to preserve incentives to create them.” Or as Samuel Johnson more bluntly put it, “No man but a blockhead ever wrote, except for money.”

Everyone knows that this is the received wisdom. But strangely, at the same time, no one actually believes it. Intuitive counterexamples to Johnson’s “blockhead” comment are easy to conjure. Authors often produce content with zero expectation of remuneration simply because they have something to say and want to share it with others. Scientists engage in pure research because they want to contribute to how we understand the world (and also because they are paid a salary by universities, drug companies, and the like). Developers produce innovations not only to become the next Uber or Google but to improve others’ lives and for the love of making stuff.

The transparent incompleteness of the Johnsonian view that only financial incentives motivate creation raises a further question: If love of money doesn’t (or doesn’t solely) explain creativity, what does? Here is where Andres Sawicki enters the conversation with his fantastic recent article, The Law of Creativity?, which makes major advances in our understanding of how creativity works and what law can do to better encourage it.

Sawicki’s model of creativity begins with identifying three components. First, a domain of extant artifacts and ideas that promise to function as the building blocks of creative production, such as books, art, or scientific theories. Second, makers who combine their insights with content in those domains to produce new innovations and artifacts. And third, evaluators who assess and curate the content of those domains.

This domain-maker-evaluator model in turn yields a temporal model of how creativity emerges and declines, one that complicates traditional assumptions that creation is simply a product of individual effort inflected by state or private incentives. Sawicki identifies several historical instances illustrating that creativity tends to follow a standard pattern. At first, domains tend to be sparsely populated, making it difficult for makers to find material from which to develop new work. In some instances, though, evaluators intervene to organize and popularize the content of some domains to render them more accessible to makers. When this happens, domains may take off, producing the kind of bursts of collective production familiar from, for example, impressionism in the late 1800s or nuclear physics in the early 1900s. Such efflorescence, Sawicki cautions, can have downsides. As domains become overcrowded they tend to ossify, leaving little space for new content.

Sawicki’s descriptive account of creativity as a self-sustaining ecosystem with uniform, predictable features yields several substantive insights. For example, he argues that patent’s written description requirement should be revised to render it less technical and more accessible to a general audience. Such a move, Sawicki suggests, would enable more cross-discipline creativity by allowing a wider range of evaluators to understand and leverage the domain of patented inventions.

In terms of copyright, Sawicki argues that imposing an attribution requirement—and in particular, one that honors the contributions of all makers rather than just a few dominant authors—would facilitate a richer understanding of how creativity works, allowing evaluators to better understand how domains operate and calibrate their understanding of multiple makers’ contributions to works of authorship.

There is so much to enjoy about this article. For one thing, it reaches well beyond the closed universe of law review scholarship, inviting insights from scholars working in fields such as psychology and sociology (e.g., Amabile, Csikszentmihalyi) that have made major advances in how creativity works. It is also replete with fascinating stories that illustrate its author’s point. The Law of Creativity? doesn’t just articulate a compelling theory of human and social creativity. It also explodes the myth that Van Gogh was a solitary genius creating ex nihilo and explains why it took so long for someone to invent the bicycle.

Sawicki’s piece is as important as it is entertaining. He is certainly not the first author to propound alternative accounts of creativity that complicate the dry and no longer plausible Johnsonian model.1 The Law of Creativity? makes an important move beyond preexisting work, though, in two respects. First, it identifies the central importance of evaluators who, alongside creators and artifacts, play a formative role in creative ecosystems. Second, he translates insights about the psychology of creativity at an individual level into a claim about how creativity operates both temporally and socially.

And while Sawicki’s arguments about how his model should inflect IP doctrine are cogent, his work motivates other lines of inquiry as well. For example, if IP is not (or is not only) a public goods problem to be solved by state intervention, what posture should law take toward creative production more generally? Should law even seek to intervene in these processes, or should it let the domain-maker-evaluator dynamic develop without government interference?

If anything, Sawicki’s work invites much bigger questions than just how a few features of patent and copyright law could be tweaked. It should cause us to rethink basic premises of our IP system. Here’s just one: Sawicki observes that domains sparsely populated with content often languish absent evaluators who seek to intervene in them to make these domains more susceptible to innovation. This appears to be a shortfall that law could remedy, suggesting in turn that patent and copyright may do well to incentivize evaluators to engage with early-stage domains rather than focusing their incentive policies solely on creators themselves.

A host of other questions arise out of Sawicki’s model, further illustrating its generative potential. He collapses, for example, the very different kinds of creativity that lead to the subject matters of patent and copyright. It seems entirely plausible that the domain-maker-evaluator dynamic may play out differently with respect to works of authorship and inventions. And while Sawicki has convincingly shown (again) that creativity does not exclusively depend on financial incentives, it remains true that for some categories of high-investment works (most obviously large-scale motion pictures), exclusive rights are a prerequisite to protect creators’ investments. So rethinking how IP systems work may not mean choosing between competing models of creativity, but acknowledging their non-mutual-exclusivity and imagining how law can help optimize social creativity that takes into account multiple such models and motivations.

The foundational Johnsonian premises of our IP system are as old as the American Republic itself, so it’s hardly surprising that they could use some serious updating. Andres Sawicki’s The Law of Creativity? is a compelling and readable effort in this direction, one that invites us to rethink basic features of how creativity works and what that means about how law should engage with it.

  1. Sawicki rightly acknowledges other contributions in this vein, such as Jeanne Fromer, A Psychology of Intellectual Property, 104 NW. U. L. Rev. 1441 (2010) and Gregory Mandel, To Promote the Creative Process: Intellectual Property Law and the Psychology of Creativity, 86 Notre Dame L. Rev. 1999 (2011).
Cite as: David Fagundes, Samuel Johnson Was a Blockhead, JOTWELL (July 24, 2024) (reviewing Andres Sawicki, The Law of Creativity?, __ Cornell L. Rev. __ (forthcoming 2025), available at SSRN (Aug. 28, 2023)), https://ip.jotwell.com/samuel-johnson-was-a-blockhead/.

The Textualist Supreme Court Misreads the Copyright Act Once Again

Peter Karol, What's The Use? The Structural Flaw Undermining Warhol v. Goldsmith, __ J. Copyright Soc'y __ (forthcoming, 2024), available at SSRN (Dec. 13, 2023).

Virtually no one in the scholarly community is happy with the Supreme Court’s 2023 decision in Andy Warhol Foundation v. Goldsmith. The case produced aggressive majority and dissenting opinions by justices whom we normally think of as well aligned: Justice Sotomayor writing for the majority, and Justice Kagan writing in dissent. Peter Karol’s recent article shows that the opinions generated more heat than they did light, because the two Justices fundamentally misunderstood the case, the law, or both. This leaves the case’s precedential value deeply in doubt.

The case pitted the Andy Warhol Foundation (AWF), a nonprofit organization created after Warhol’s death to license uses of his artwork, against Lynn Goldsmith, a photographer who took a photo of the musician Prince in 1981. In 1984, Goldsmith’s licensing agency granted Vanity Fair a license to use the photograph as an artist reference for a story that would appear in the magazine. Vanity Fair then commissioned Warhol to create an image to illustrate a story about Prince. Warhol (and/or his team) created 16 images based on the Goldsmith photo (the Prince Series), one of which was used by Vanity Fair. Then, when Prince died, Condé Nast contacted AWF about licensing one of the Prince Series prints for a commemorative issue, which it published in 2016. Condé Nast chose a different Warhol work depicting Prince for the cover of that issue.

Litigation began when AWF filed a declaratory judgment action against Goldsmith, asking the court to find that “the Prince Series is a fair use.” As Karol notes, this was a strange first step, because AWF wasn’t asserting that any particular use (e.g., creation of the series) was fairly done, but rather that the works as a whole represent a fair use.

But the confusion quickly grew with Goldsmith’s counterclaim. Goldsmith alleged that AWF infringed her copyright in the photo by (1) reproducing it, (2) publicly displaying it, (3) commercially licensing it, (4) distributing it, and (5) incorporating it into derivative works. As Karol explains, however, while four of these activities (1, 2, 4, and 5) could constitute copyright infringement, there was no evidence that AWF actually did any of them. That is, Goldsmith did not show that AWF copied the photo or created the cover image. Those acts were performed by non-parties: Warhol and his team, or Condé Nast. Goldsmith did claim that AWF commercially licensed a work based on her photograph, but granting a license simply isn’t copyright infringement; it’s not one of the owner’s enumerated exclusive rights.

Building on recent work by Pam Samuelson, Karol next shows how this “licensing use” became the centerpiece for the Supreme Court’s judgment. Samuelson has argued that the Office of the Solicitor General was responsible for “hijacking” the case and setting it down the wrong path. Based on the SG’s brief, the Court only addressed the “licensing use” by AWF, because it thought that Goldsmith waived any claims she might have had about the creation of the series or against museums’ display of the works. Thus, Justice Sotomayor’s opinion focused almost exclusively on a behavior that is itself non-infringing and for which a fair use defense is unnecessary.

Karol helpfully explains why licensing a copyrighted work isn’t a copyright-relevant use of the work, but rather just a grant of permission and a contract not to sue. The owner of the licensed asset (here AWF) grants permission to someone else who actually uses the asset (here Condé Nast) in a copyright-relevant way by making and distributing copies of it to the public. The license, on its own, needn’t assert that the licensor actually owns any copyright in the work. It simply indicates that the licensor won’t sue the licensee. The best argument for how AWF violated one of Goldsmith’s exclusive rights is that AWF “authorized” others to reproduce the work when it was not entitled to do so. But as Karol argues, any claim in this direction would require pleading and proving that AWF was liable for some sort of secondary infringement rather than direct infringement. And, of course, Goldsmith never attempted to do that.

So what should we take from this case? It seems unlikely that the Supreme Court intended to create a new species of direct copyright infringement for unauthorized licensing. But if it did, many parties in art, film, and music licensing have new liabilities to fear. Karol, understandably, doesn’t like that reading, so he offers two other suggestions that “stay faithful to the Court’s arguments and intent while avoiding the potential hazards of its foundational error.”

The first is simply to treat Condé Nast as the true direct infringer and interpret the Court’s analysis in that light. As noted, this would require the challenging task of locating some secondary liability for AWF, but doing so is not implausible.

The other alternative is to assume—despite a lack of evidence—that AWF created a digital copy of the Warhol image that it distributed to Condé Nast. To me, this is probably the easier way to teach the case to students, but as Karol notes, many art licensing agencies clearly do not distribute images to their licensees, so it’s hard to know how the Court’s opinion affects them.

Once again, the Supreme Court has made life even more difficult for copyright law professors. And just as it did in Star Athletica, the Court’s misreading of the Copyright Act created the mess. Is it too much to hope that a Court committed to textualism could read the statute properly? I guess so.

Cite as: Christopher J. Buccafusco, The Textualist Supreme Court Misreads the Copyright Act Once Again, JOTWELL (June 18, 2024) (reviewing Peter Karol, What's The Use? The Structural Flaw Undermining Warhol v. Goldsmith, __ J. Copyright Soc'y __ (forthcoming 2024), available at SSRN (Dec. 13, 2023)), https://ip.jotwell.com/the-textualist-supreme-court-misreads-the-copyright-act-once-again/.