The Journal of Things We Like (Lots)

Impacts of Pharmaceutical Capture on Public Health Outcomes

Liza Vertinsky, Pharmaceutical (Re)Capture, 20 Yale J. Health Pol’y L. & Ethics __ (2021).

At the heart of Professor Liza Vertinsky’s excellent article, Pharmaceutical (Re)Capture, lies a persistent paradox: Although the U.S. innovation ecosystem is one of the most sophisticated and advanced in the world, its technological prowess has not resulted in broadly distributed public health benefits. On the contrary, the U.S. experiences some of the highest spending in biomedical innovation, but some of the poorest health outcomes as compared with other developed countries.

Historians of medicine call the belief that the societal path to better health lies in technological interventions a “biomedical approach to health.” This approach has profoundly influenced global and U.S. health care policy in the twenty-first century. An alternative, “sociomedical,” approach looks at expensive, high-technology innovations with a certain degree of skepticism, prioritizing instead broad access to low-cost, low-technology primary health care. Biomedical approaches, however, have eclipsed global and domestic sociomedical practices.1

Intellectual property scholars have paid little attention to this tension between high technology innovation and poor public health outcomes. Driven by its internal focus on designing interventions for incentivizing innovation, much of the intellectual property literature ignores the equally important question of why the impressive technological advances in our healthcare innovation ecosystem have not resulted in equally impressive public health benefits.

Liza Vertinsky’s article fills this important gap by providing a novel and useful way to understand this paradox from an institutional design perspective, which she has characterized as the “pharmaceutical capture of healthcare markets.”

We are all familiar with uses of the term “capture” in the context of regulatory capture of administrative agencies, in which agencies become dominated by the very firms they were created to regulate. In Vertinsky’s article, pharmaceutical capture refers to that industry’s wide-ranging and holistic strategy to systematically influence key players in innovation and healthcare markets at multiple points in the drug development, marketing, and enforcement life-cycle. She argues that traditional understandings of regulatory capture do not adequately reflect the pervasive web of influence that pharmaceutical companies exert over social and market structures. Rather, new models of regulatory capture are needed to reflect the complex set of interlocking influences that tend to further private pharmaceutical interests, often at the expense of public ones.

Vertinsky traces the evolution of various theories of regulation. She begins with the public interest theory of governmental regulation, which posits that government agencies can be trusted to regulate private interests in the service of the public good. She then discusses the post-New Deal pessimism about the ability of governments to avoid being captured by the firms they regulate, a pessimism that has come from both the left and the right. The next phase turned to public interest litigation to enforce public policies while accepting the now-widespread assertion that market imperfections seldom justify governmental intervention.

In pharmaceutical markets, Vertinsky’s key insight is that the structure of the market itself can be “captured” by pharmaceutical interests. This includes molding preferences of key actors such as doctors and patients, social definitions of disease, clinical data and understandings of disease, and the content of medical training.

What makes this type of capture possible are several unique features of pharmaceutical markets. Two are especially worth highlighting. One is an innovation ecosystem that is both fragmented and overlapping, and in which governmental actors lack the ability to coordinate policy with each other and with sophisticated corporate actors that have adopted systematic approaches to influencing policy. A second is a belief in the private sector as the primary engine of lifesaving biomedical innovation, which is a version of the biomedical view of public health.

Vertinsky documents this problem through a case study of the opioid epidemic. She documents the dizzying web of relationships among pharmaceutical companies and prescribers, academic institutions, physicians and patient advocacy groups. Mobilizing these relationships, pharmaceutical companies contributed to the social construction of “pain management” and “chronic pain” as a discrete disease category with specific guidelines for treatment. They also co-opted “patient-centered” language and funded advocacy groups to push the idea of the right to be free from pain—all in the service of increasing prescriptions of opioid medication. As Vertinsky explains, “the healthcare response was largely to increase the prescription of opioids for chronic pain,”2 a biomedical approach that sidelined other more time-consuming behavioral pain therapy approaches.

In short: Pharmaceutical capture has enormous social costs. This makes the claim that any governmental intervention would be worse than an imperfect market particularly suspect in the healthcare context.

To overcome the pharmaceutical capture that has made the U.S. healthcare system so dysfunctional, Vertinsky calls for a pharmaceutical recapture through regulation around public health goals. The first step would simply reframe the deregulation debate as “really a debate over alternative governance models.”3 So conceptualized, “the question of regulatory capture becomes one of how different governance models may favor different actors.”4

A comprehensive recapturing plan, in an area with so many moving pieces, would require several additional papers, but Vertinsky leaves us with an important guiding principle: Governments need to develop holistic, systemic, and flexible approaches to regulation to match the industry’s systemic strategies. One possible path forward would revitalize local public health departments as sites of coordination. If well-funded, these departments could balance the scales that now tip so heavily in favor of biomedical approaches.

  1. See, e.g., Randall M. Packard, A History of Global Health: Interventions into the Lives of Other Peoples (2016).
  2. At 45.
  3. At 7.
  4. Id.
Cite as: Laura Pedraza-Fariña, Impacts of Pharmaceutical Capture on Public Health Outcomes, JOTWELL (November 12, 2021) (reviewing Liza Vertinsky, Pharmaceutical (Re)Capture, 20 Yale J. Health Pol’y L. & Ethics __ (2021)), https://ip.jotwell.com/impacts-of-pharmaceutical-capture-on-public-health-outcomes/.

The Threat Value of Copyright Law

  • Cathay Y. N. Smith, Weaponizing Copyright (May 2, 2021), 35 Harv. J. L. & Tech. __ (forthcoming, 2021), available at SSRN.
  • Cathay Y. N. Smith, Copyright Silencing, 106 Cornell L. Rev. Online 71 (2021).

In two related pieces, Professor Cathay Y. N. Smith revisits the issue of plaintiffs using the threat value of copyright law to advance claims or interests other than protecting the value of original expression. As she documents, these threats appear to be on the rise in response to the growth of the internet and social media, the lack of coherent privacy law in the United States, and the comparatively powerful array of remedies copyright offers copyright owners.

This review focuses on the larger argument in Weaponizing Copyright, which generalizes from, and incorporates much of the argument from, Copyright Silencing. In it, Professor Smith has three overarching goals: (1) to expose the growing prevalence of “weaponizing” copyright; (2) to explain why copyright is more attractive than other bodies of law to achieve these non-traditional enforcement objectives; and (3) to argue that some non-traditional uses of copyright are justified while many others are not.

The first section identifies the enforcement objectives that she views as weaponizing copyright: (a) to suppress facts embedded in copyrighted works; (b) to suppress others’ speech, particularly criticism; (c) to punish or retaliate for non-copyright-related conduct; (d) to protect reputation and moral rights; and (e) to protect privacy. One background fact that undergirds these examples is the privatized copyright enforcement system that YouTube has established, through which three “copyright strikes” against an account holder result in deletion of the account. This system adds a new type of threat value to copyright law.

Smith begins with accounts of retaliatory uses of copyright. Copyright has been used to push back against online racism and sexism. When infamous YouTube celebrity PewDiePie shouted a racist slur while streaming himself playing one video game, the copyright owner of a different game filed a copyright strike to retaliate for this and other similar episodes by PewDiePie. In a similar vein, Alinity, a popular Twitch streamer, responded to a misogynistic comment PewDiePie made about her and other women gamers while commenting on her video by filing a copyright strike of her own.

Two of Smith’s other examples less cleanly fit her narrative because the parties were already in a legally adverse relationship. A subchapter in the well-known free-speech dispute between Rev. Jerry Falwell and Larry Flynt involved Hustler’s ultimately unsuccessful copyright infringement suit against Falwell for using unauthorized copies of the parody at the heart of their dispute for fundraising purposes. More recently, Sony Records is in a termination-of-transfer dispute with Southside Johnny and another artist. To deter the growing number of artists seeking termination, Sony sought to counterclaim against the artists for contributory copyright infringement because they authorized their attorney to use Sony-owned album art on the attorney’s website to recruit new artist clients.

Smith next turns to uses of copyright to suppress unfavorable facts. The most well-established form of this practice involves public figures or their estates, e.g., James Joyce or Howard Hughes, asserting or acquiring copyright to suppress unauthorized biographies. For an online version of this practice, Smith relays the story of television doctor Drew Pinsky (Dr. Drew), who dismissed the seriousness of Covid-19 in the early stages of the pandemic. When YouTuber (yes, it’s now a noun) Dr Droops posted a five-minute compilation of Dr. Drew’s statements, Drew filed a copyright strike and takedown notice on YouTube, which took down Dr Droops’s compilation. While the video was eventually made available again, the effort to suppress was temporarily successful.

Smith’s third category is related, but focuses more specifically on using copyright to censor criticism. Her examples include the Church of Scientology and Jehovah’s Witnesses sending DMCA takedown notices to suppress critical commentary; Netflix sending takedown notices to Twitter to suppress criticism of its controversial film Cuties; and talk radio host Michael Savage threatening a civil rights organization and a documentary film production company for using clips of Savage’s on-air anti-Muslim tirade from 2007 in connection with critical commentary. While Savage’s suit against the civil rights organization was unsuccessful on fair use grounds, his takedown notice against the film was more effective.

The fourth category—protecting reputational and associational interests—is one that Smith agrees is a more traditional use of copyright, since these interests are explicitly protected by moral rights outside the United States. Her examples here include musicians and meme creators who sought to stop political uses of their creations by politicians or groups with whom they want no association.

The final category is uses of copyright to protect personal privacy. Copyright is an imperfect tool to combat nonconsensual pornography unless an image is a selfie, in which case the author and subject of the image are the same person. But in the subset of cases involving selfies, copyright takedown notices can be an effective antidote. Sympathetic photographers can also use their copyrights in aid of the interests of their subjects, as was done by a photographer who sued the anti-marriage-equality group Public Advocate for using her wedding photos of a gay couple holding hands and kissing in its mailings. Last are celebrities who have asserted copyright against gossip publications that have obtained leaked photographs or videos.

The article’s second section explains why copyright law is a more attractive legal tool than other areas of law to advance the interests discussed above. In short, obtaining copyright and stating a prima facie claim for infringement are quite easy, enforcement costs are asymmetrical, copyright remedies enable speech suppression without traditional First Amendment review, copyright defenses are insufficient to deter questionable claims, and U.S. law does not provide robust privacy protection or moral rights that might more directly advance some of these interests. A few points that Smith discusses in this section are particularly worth noting. Enforcement cost asymmetry is exacerbated by YouTube’s copyright strike system because filing a strike is relatively cheap and easy, and the potential costs of losing a valuable social media account are quite high. The counter-notice option under the DMCA notice-and-takedown system is largely ineffective, and counterclaims that might raise the costs of aggressive enforcement, such as copyright misuse or state law anti-SLAPP actions, are unavailable. Finally, other sources of law are less effective tools because they are subject to First Amendment review and any attempt to enlist the aid of platforms will be unsuccessful because of Section 230.

After a section recognizing that mixed-motive cases will challenge any policy response to the uses of copyright she has discussed, Smith turns to three potential policy responses. The law can reject all uses of copyright that are not associated with protecting economic interests of the copyright owner, permit at least uses to protect personal privacy, or accept uses that advance dignitary and reputational interests so long as they do not involve suppressing criticism or disclosure of facts.

In engaging with the existing scholarship, Smith is sympathetic with uses of copyright to protect privacy interests, particularly in cases of nonconsensual pornography, but she expresses concern that if privacy is defined too broadly it could also support uses of copyright to suppress evidence of domestic violence or other abusive behaviors. She also sees merit in focusing reform efforts on developing areas of law that more directly protect the interests discussed in the Article.

In the end, Smith concludes that a general line-drawing approach to addressing all of the uses of copyright she discussed would be too difficult, but that the evidence of increased uses of copyright to censor criticism or suppress unfavorable facts should be addressed by making enforcement more symmetrical. Three ways to do that would be to make copyright misuse an affirmative claim, allow anti-SLAPP laws to apply to censorious copyright litigation, and strengthen the balance in the DMCA notice-and-takedown process to deter abusive takedown notices.

These recommendations receive more attention in Copyright Silencing, but I would have liked to see them further developed in this piece. With that said, I enjoyed reading this article, and I agree that Professor Smith has identified some troubling uses of copyright that are exacerbated by the current structure of social media’s privatized dispute resolution scheme, particularly YouTube’s strike system.

Cite as: Michael W. Carroll, The Threat Value of Copyright Law, JOTWELL (October 12, 2021) (reviewing Cathay Y. N. Smith, Weaponizing Copyright (May 2, 2021), 35 Harv. J. L. & Tech. __ (forthcoming, 2021), available at SSRN; Cathay Y. N. Smith, Copyright Silencing, 106 Cornell L. Rev. Online 71 (2021)), https://ip.jotwell.com/the-threat-value-of-copyright-law.

“Trademark, Labor Law, and Antitrust, Oh my!”

Hiba Hafiz, The Brand Defense, 43 Berkeley J. Emp. & Lab. L. __ (forthcoming, 2022), available at SSRN.

I am allergic to antitrust law, but after reading Hiba Hafiz’s recent article, I understand that my aversion is problematic. This paper combines an analysis of trademark law, labor law, and antitrust law to explain how employers exploit trademark law protections and defenses to control labor markets and underpay and under-protect workers. For most IP lawyers and professors, this article will open our minds to some collateral effects of trademark law’s consumer protection rationale on other areas of law with important consequences for economic and social policies.

The Brand Defense says it “takes a systemic view of intellectual property, antitrust and work law,” which means reading it demands keeping several balls in the air and following their interacting paths. It is worth the effort. Here are three paths the article’s argument follows.

First, Hafiz explains how broadened trademark protections for franchisors, like McDonalds, shift obligations from the franchisor to the franchisee. This means that individual restaurants or other franchisees must tightly monitor workers and products in service to “the brand.” This monitoring means that ingredients, components, machines, and processes are strictly regulated under the franchise agreement, leaving little leeway on profit margin for the franchisee except in the cost of labor.

Second, franchisors structured their relationships with franchisees as independent business entities to take advantage of developing antitrust law and functionally immunize their franchisor-franchisee relationships from antitrust liability. Vertical integration by contract or license (as opposed to through ownership) supposedly produces economic efficiencies for consumers, which is thought to alleviate the need for close antitrust scrutiny. But, as Hafiz demonstrates in her literature and doctrinal review of antitrust law, antitrust benefits are supposed to flow to both product markets and labor markets. Hafiz shows that when franchisee-franchisor agreements significantly constrain franchisee choice in the production of goods and services, franchisees are led to skimp on worker protections and wages, which is also an antitrust harm. Hafiz persuasively argues that antitrust court decisions mistakenly view brand protection (through trademark licensing agreements) as ultimately encouraging competition between brands to consumers’ benefit while ignoring the harm to labor markets.

The third path follows the development of lawful but distressing labor practices by which upstream employers can avoid responsibility towards downstream franchisee workers by arguing a combination of trademark protection (“the brand defense”) and vertical disintegration. Upstream franchisors impose obligations on downstream employer-franchisees through businesses contracts, which include trademark licenses. They use this to claim the absence of a joint-employer relationship despite stringent flow-through quality control requirements. Once again, product quality and labor policy are artificially disentangled. Concern over the latter is hidden or depressed in favor of the consumer welfare justification that anchors both trademark and antitrust law.

There is so much to commend this article: its succinct legal history of the three areas of law; the clarity of its doctrinal analysis in light of the complex and interacting legal regimes; and its unapologetic championing of worker power in an era of increased economic inequality and burgeoning threats to the democracy that ideally ensures accountability.

Different readers will draw different insights from it. The breadth of the terrain it covers makes it broadly appealing. When reading The Brand Defense, intellectual property lawyers and professors are likely to experience something familiar suddenly becoming strange. Hafiz describes how trademark law meant to promote consumer confidence and pro-consumer competition between goods and services is harnessed to justify anticompetitive vertical restraints and unfair labor practices.

Trademarks … confer broad[] value as legal trumps in antitrust and work law, immunizing lead firms’ legal exposure for anticompetitive conduct in labor markets and work law violations. Upstream firms have thus deployed a sophisticated set of legal strategies highlighting purported consumer benefits of branding in a way that has successfully obscured agency and court view of the effects of their market power, or wage-setting power, in downstream labor markets and over downstream employees’ terms and conditions of work. (P. 51.)

This is not the typical trademark framework, to say the least. And those writing and thinking about how the broader scope of trademark protection produces incumbency benefits, disadvantages small companies, and injures competition and communication should take note. The Brand Defense is a thoroughly devastating critique of contemporary trademark practice along related lines, but it enlists the adjacent legal fields of work law and antitrust to drive the points home. The doctrinal and regulatory reforms proposed at the end are straightforward, bold, and unfortunately (to me) unlikely to transpire given the current political climate. But the proposals derive from diverse legal mechanisms and thus provide various avenues of attack.

I cannot guess how readers from the antitrust or labor law fields will find The Brand Defense. If you are less allergic to trademark law than I am to antitrust law, Hafiz’s article is well worth your time. Even if you are allergic, Hafiz’s sophisticated ideas, delivered in systematic arguments, will bring you far enough along to learn a lot about the twenty-first century workplace and the doctrinal and regulatory framework inhibiting the fight against destabilizing economic inequality.

Cite as: Jessica Silbey, “Trademark, Labor Law, and Antitrust, Oh my!”, JOTWELL (September 10, 2021) (reviewing Hiba Hafiz, The Brand Defense, 43 Berkeley J. Emp. & Lab. L. __ (forthcoming, 2022), available at SSRN), https://ip.jotwell.com/trademark-labor-law-and-antitrust-oh-my/.

Profiting Off Infringement

Kristelia Garcia, Monetizing Infringement, 54 U.C. Davis L. Rev. 265 (2020).

It’s hard to imagine people tolerating intentional violations of their physical autonomy, never mind seeking to monetize such behaviors. But as Kristelia García argues in her new essay, Monetizing Infringement, many copyright owners find this strategy appealing.

According to copyright’s standard narrative, infringement reduces the returns to creative effort and, thus, undermines authors’ incentives to produce new works. Here, however, García “destabilizes long-held but problematic assumptions about the interplay between copyright law’s purported goals and its treatment of infringement by challenging the received wisdom that rightsholders are necessarily anti-infringement.” (P. 270.)

Building on work by Tim Wu, Dave Fagundes, and Rebecca Tushnet, among others, García catalogues three distinct forms of monetizing copyright infringement across a variety of creative domains: (1) profitable infringement, in which infringement results in income for the rightsholder; (2) remedial infringement, in which infringement mitigates a worse outcome for the rightsholder; and (3) promotional infringement, in which infringement amounts to valuable and cost-efficient promotion for the rightsholder’s content.

It is well known that owners of sound recording copyrights have found user-generated content on YouTube to be a profitable form of infringement, thanks to YouTube’s Content ID system. When musicians’ fans create and post videos to YouTube, record labels can reap the advertising revenue without having to generate their own content. But García also describes how video game developers rely on sales of extra downloadable content, like additional levels and characters, to benefit from pirated versions of their games. While users may be able to pirate a game for free, they are often willing to pay for added content that increases its appeal.

Game developers also engage in what García calls remedial infringement, encouraging piracy when it is a less significant problem than others that they face. For example, gray market resellers offer game “keys” that allow purchasers to access games and promotional content for lower prices than the developer is charging. Often, however, the keys don’t work, and the developers spend considerable time and money responding to complaints about fake and broken keys. In response, García notes, several developers have opted to encourage users to simply pirate their games instead of using gray market sites. Either way, the developers argue, they aren’t being paid. But at least they don’t have to deal with the additional headache.

Most interesting to me is García’s category of promotional infringement and her example of musicians encouraging fans to create videos that incorporate the musicians’ songs and post them online. In some cases, the original video will generate millions of views and promote fan interest in the song. In other cases, the video will inspire others to create their own versions. But in either case, the potentially infringing videos can generate new streams and new revenue for musicians. García and I elaborate on this phenomenon in our forthcoming article, “Pay-to-Playlist: The Commerce of Music Streaming.”

Having cataloged various forms of monetizing infringement, García then elaborates on potential reasons why copyright owners might engage in this behavior rather than simply suing (or threatening to sue) for infringement. She notes how copyright law covers a wide variety of content and actors with a fairly similar set of legal rights. This opens up the possibility that owners simply have very different preferences and norms with respect to uses of their works. García also suggests that monetization may be an effective strategy in situations where technology changes more rapidly than law. Although authors might not prefer this strategy in a perfect world, they may come to rely on it where industrial changes outpace legal ones.

Finally, although this article is largely descriptive rather than normative, García considers the potential costs and benefits of monetizing infringement. On the benefits side, she includes the efficiencies of private ordering, tailoring the law’s one-size-fits-most approach, and an effective shrinking of copyright’s scope and duration, at least for those who aren’t targeted with infringement actions. But monetizing infringement has costs as well. It is easier and safer for larger established players than it is for upstarts or independents. Selective copyright enforcement can also lead to confused norms and user uncertainty. If one gaming company allows or encourages infringement, that doesn’t mean that others will—or that this one will continue to do so in the future and for everyone.

The realities of how copyright law is wielded in the hands of owners often differ from the standard narratives that lobbyists and scholars articulate about incentives and access. García’s work joins a growing movement of scholars who are exploring the ways in which the law interacts with the particularities of actual creative industries. This is an important contribution for scholars who want to move beyond just-so stories and abstract theories.

Cite as: Christopher J. Buccafusco, Profiting Off Infringement, JOTWELL (July 29, 2021) (reviewing Kristelia Garcia, Monetizing Infringement, 54 U.C. Davis L. Rev. 265 (2020)), https://ip.jotwell.com/profiting-off-infringement/.

Dirty Hands, Dead Patent?

Sean Seymore, Unclean Patents, 102 B.U. L. Rev. __ (forthcoming, 2022), available at SSRN.

The 2018 Federal Circuit Gilead Sciences v. Merck & Co.1 decision is one of the rare patent cases in which a court has applied the unclean hands doctrine to withhold a remedy for infringement. Sean Seymore uses this case as a launching point for a deeper and more expansive reconception of the role of the unclean hands doctrine in patent law. He suggests that a range of pre-issuance malfeasance by the patentee, not just inequitable conduct before the USPTO, should preclude relief for the offending plaintiff against all defendants.

The doctrine of unclean hands is best known in patent law as the origin of the inequitable conduct defense, which renders patents obtained from the USPTO through materially deceptive behavior permanently unenforceable against anyone. Unclean hands, however, is both broader and narrower than inequitable conduct. It is not limited to misconduct in patent prosecution, but it only prevents the patentee from enforcing the patent against the particular defendant in the action involving the misconduct; other defendants are fair game.

So, while inequitable conduct results in permanent unenforceability, unclean hands only creates relative unenforceability. The rationale for this dichotomy is that if the patentee’s misconduct did not occur during the process of obtaining the patent, the underlying property right remains taint-free. Thus, only enforcement of the right in the proceeding to which the misconduct relates should be disallowed.

Many pundits remarked on the surprising revival of the standalone doctrine of unclean hands (untethered from inequitable conduct) in the Gilead Sciences decision. Seymore, however, goes deeper, using the case as an opportunity to propose a more robust, expansive, yet theoretically sound role for unclean hands in patent cases, a role that complements, without subsuming, its inequitable conduct progeny.

In his thought-provoking article, Seymore identifies a type of pre-issuance misconduct that raises the same misconduct-in-patent-acquisition concerns as inequitable conduct, but because it does not involve USPTO proceedings, gets treated as unclean hands with only relative unenforceability (as between the parties) and not permanent unenforceability with erga omnes effect.

This result, according to Seymore, makes no sense. He persuasively argues that a more equitable and symmetrical approach would be to treat all misconduct that taints the patent right ab initio the same: by imposing a remedy of permanent unenforceability.

The facts of the Gilead case exemplify Seymore’s scenario of concern. There, Gilead shared its Hepatitis C lead compound, sofosbuvir, with Merck as part of a technology collaboration subject to a confidential firewall agreement. Merck violated the agreement by allowing one of its in-house lawyers — prosecuting Merck’s own applications — to participate in a teleconference where he learned sofosbuvir’s structure. He later amended Merck’s pending applications to cover sofosbuvir. Moreover, when Merck later sued Gilead for patent infringement, the same attorney gave false testimony at trial.

Gilead’s successful assertion of an unclean hands defense was based on both the litigation and pre-litigation misconduct. In affirming the holding, the Federal Circuit noted that the pre-litigation business misconduct met the requirement for the unclean hands defense by potentially enhancing Merck’s legal position, possibly expediting patent issuance, and likely lowering invalidity risks in litigation. These were all directly connected to the patent enforcement relief sought.

Seymore employs a series of examples to distinguish actions triggering inequitable conduct, such as submitting fabricated data to the USPTO, from those with which his proposal is concerned. An example of the latter is falsifying information in a grant proposal that results in an award of funds later used to develop a patented invention. While there is no fraud on the USPTO, there is fraud on a federal agency and the patent is the fruit of that poisonous tree. As such, per Seymore, the patent should be rendered permanently unenforceable.

An intriguing example of “misconduct” in the article is poaching for the public good. In this scenario, a hypothetical COVID-19 vaccine manufacturer, seeking to speed up product development, poaches an employee from a competitor (which has already developed a vaccine) and uses the knowledge of what does not work, obtained from that employee, to accelerate its own product development and FDA approval.

While the public benefits from a second vaccine on the market, should the manufacturer be able to enforce its vaccine patent(s) against a different competitor? Is there a sufficient nexus between the possible trade secret misappropriation (poisonous tree) and acquisition and enforcement of the patent (fruit)? Should engaging in bad conduct for a good cause affect the taint? Or should we be less concerned about not enforcing patents (which could exclude other manufacturers from the market) in a public health situation? Such tensions are beyond the article’s direct focus but perhaps could fruitfully be explored in future work.

Considering the open-ended nature of the unclean hands determination, and the risk that it could devolve into a patent litigation “plague”2 like inequitable conduct pre-Therasense,3 Seymore wisely cabins application of his proposal with several constraints. These include a tort-based proximity requirement: misconduct that lacks a sufficient nexus to acquisition of the patent right (what he calls collateral misconduct) should be subject to the ordinary unclean hands remedy of relative unenforceability. He also articulates five discretion-limiting principles and aligns the proposal with normative justifications for the doctrine such as court integrity, public interest, and deterrence of wrongful conduct.

Seymore candidly notes that his proposal could result in overdeterrence: patentees taking inefficient precautions to avoid misconduct, or bypassing patents for trade secret protection. He further opines that bona fide purchasers for value without notice of the misconduct could be harmed (and patent rights made more uncertain) if his proposal is adopted. Nevertheless, he concludes, quite correctly, that this risk already exists for inequitable conduct, and that the high hurdle of clear and convincing evidence required for proving unclean hands provides a further critical limit. He also suggests ways for patentees to purge the “taint” before filing for patent protection and provocatively queries whether some types of “uncleanness” in patent law should be tolerated, citing to the largely defunct moral utility doctrine.

I probably appreciated Seymore’s paper more than most because he elegantly develops a wonderfully cogent theory that I wish I had been aware of in writing an article over a decade ago. At the time, I alluded to a kind of pre-litigation invention-creation misconduct possibly recognizable in equity, but my effort was under-theorized. Sean Seymore’s insightful recognition of the latent implications of the Gilead decision’s resurrection of the unclean hands defense in patent cases was a pleasure to read and an important evolution in thinking about equitable doctrines in patent law.

  1. 888 F.3d 1321 (Fed. Cir. 2018).
  2. “The habit of charging inequitable conduct in almost every major patent case has become an absolute plague.” Burlington Indus., Inc. v. Dayco Corp., 849 F.2d 1418, 1422 (Fed. Cir. 1988).
  3. Therasense, Inc. v. Becton, Dickinson & Co., 649 F.3d 1276 (Fed. Cir. 2011).
Cite as: Margo Bagley, Dirty Hands, Dead Patent?, JOTWELL (July 2, 2021) (reviewing Sean Seymore, Unclean Patents, 102 B.U. L. Rev. __ (forthcoming, 2022), available at SSRN), https://ip.jotwell.com/dirty-hands-dead-patent/.

Update of Jotwell Mailing Lists

Many Jotwell readers choose to subscribe to Jotwell either by RSS or by email.

For a long time Jotwell has run two parallel sets of email mailing lists, one of which serves only long-time subscribers. The provider of that legacy service is closing its email portal next week, so we are going to merge the lists. We hope and intend that this will be a seamless process, but if you find you are not receiving the Jotwell email updates you expect from the Intellectual Property section, then you may need to resubscribe via the subscribe to Jotwell portal. This change to email delivery should not affect subscribers to the RSS feed.

The links at the subscription portal already point to the new email delivery system. It is open to all readers whether or not they previously subscribed for email delivery. From there you can choose to subscribe to all Jotwell content, or only the sections that most interest you.

Tracking Change and Continuity in Twenty-First Century Copyright Fair Use

Barton Beebe, An Empirical Study of U.S. Copyright Fair Use Opinions Updated, 1978-2019, 10 N.Y.U. J. Intell. Prop. & Ent. L. 1 (2020).

In the past sixteen years, copyright law has undergone important changes. Courts have issued major decisions, such as Skidmore v. Led Zeppelin, which clarified the Ninth Circuit’s substantial similarity test and rejected the inverse ratio rule, and Capitol Records, LLC v. Vimeo, LLC, in which the Second Circuit elucidated a more concrete red flag knowledge standard for purposes of the Digital Millennium Copyright Act. Significant new copyright legislation, in the form of the Music Modernization Act, has also been enacted. And during this period, fair use jurisprudence has continued to grow apace. Many of the cases that are now considered copyright law canon for students, academics, and practitioners alike were decided during this period, including Bill Graham Archives v. Dorling Kindersley, Ltd., Perfect 10, Inc. v. Amazon.com, Inc., Cariou v. Prince, and Authors Guild, Inc. v. Google, Inc. Barton Beebe’s recent article analyzing fair use opinions from 1978 to 2019 thus provides a welcome update to his earlier work, which covered fair use cases from 1978 through 2005.

Both Beebe’s original article and this update use statistical analyses of all the fair use opinions issued during the period to draw conclusions about how judges have applied the four fair use factors and their subparts. Beebe’s earlier work provided an important statistical analysis baseline for anyone wanting to understand, modify, or improve fair use. This long-awaited update will no doubt prove useful in providing the most recent data on fair use determinations to those in the copyright space.

In addition to the opinions issued during 1978-2005, the updated article analyzes a further 273 fair use opinions from 220 cases. Perhaps surprisingly, given the number of fair use opinions issued over the past decade and a half, fair use analyses largely remained the same during the 2006-2019 period. For example, the vast majority of courts have continued to apply primarily the four factors listed in Section 107, even though those factors are explicitly meant to be nonexclusive. Courts also tend to apply them mechanically, moving through each factor to see which party it favors. The Second and Ninth Circuits, as well as the Southern District of New York, also continue to exert the most influence on fair use cases, although the Ninth Circuit is growing in importance.

However, Beebe discovered several important trends during this period. On average, the number of opinions addressing fair use is on the rise. Many more have arisen in opinions addressing motions to dismiss, which Beebe—no doubt correctly—chalks up, at least in part, to the Supreme Court’s stricter motion to dismiss standard from Bell Atlantic Corp. v. Twombly and Ashcroft v. Iqbal, both of which were decided after the initial study. The fair use defense has also been increasingly adjudicated at the summary judgment stage.

In addition, Beebe found that, like in his earlier study, lower courts continue to cite to overturned precedent and dicta. For example, in Sony Corp. of America v. Universal City Studios, Inc., the Supreme Court established the presumptions that commercial uses are unfair, noncommercial uses are fair, and commercial uses harm the plaintiff’s market. But in Campbell v. Acuff-Rose Music, Inc., the Supreme Court limited these standards by reducing the importance of commercial use to a considered factor rather than a per se fair use rule. Yet district courts have continued to cite to Sony unabashedly for these rules. This has even increased since 2005. Similarly, courts continue to cite the Supreme Court’s dicta in Harper & Row v. Nation Enterprises that factor four is “undoubtedly the single most important element of fair use,” even though the Supreme Court overrode this statement in Campbell by stating that all factors should be considered and that the transformativeness inquiry was at the heart of fair use.

The core of Beebe’s article, however, is how he uses data on the fair use factors to determine both the impact of a factor on the overall outcome and its correlation with the other factors. The first and fourth factors—the purpose and character of the work (including transformativeness) and market effect—continue to predominate, with the fourth factor correlating the most strongly with the overall fair use determination. The first and fourth factors also strongly correlate with each other.

The determinativeness of the fourth factor may, at first blush, surprise many commentators who have argued that the transformativeness inquiry drives the fair use analysis. Beebe found that as compared to 2005, when it appeared that the importance of transformativeness was waning, courts now consider whether a use is transformative in the vast majority of cases. Indeed, transformativeness, taken alone, was the single most determinative subfactor for the overall fair use outcome, even more so than market effect. Despite this influence on the overall outcome, Beebe found that transformativeness has not yet eaten the entire fair use inquiry.

Beebe notes that statistics cannot be a replacement for traditional doctrinal analysis, but the data he has gathered does provide a valuable high-level understanding of the trends in fair use jurisprudence and opens the way for further research on fair use. Hopefully, Beebe continues this long-running project. The Supreme Court’s decision in Google LLC v. Oracle America, Inc., is the first Supreme Court decision to address fair use since Campbell in 1994. How courts decide to interpret Google v. Oracle could prove significant for fair use decisions in the coming years, especially those involving computer programs and other technological innovations.

Cite as: Michael Goodyear, Tracking Change and Continuity in Twenty-First Century Copyright Fair Use, JOTWELL (June 2, 2021) (reviewing Barton Beebe, An Empirical Study of U.S. Copyright Fair Use Opinions Updated, 1978-2019, 10 N.Y.U. J. Intell. Prop. & Ent. L. 1 (2020)), https://ip.jotwell.com/tracking-change-and-continuity-in-twenty-first-century-copyright-fair-use/.

How Do Innovation Races Affect Research Quality?

Ryan Hill & Carolyn Stein, Race to the Bottom: Competition and Quality in Science (Jan. 5, 2021).

Significant new technologies have often been invented nearly simultaneously, and some scholars have worried that patent law’s rewards for the first to file create incentives to race to the patent office and do less to refine the invention. Similar concerns have been voiced about competition for academic priority leading to rushed, low-quality publications. But measuring whether competition for IP or academic credit actually decreases quality has proven difficult, and this difficulty limits the usefulness of models of innovation races.

In a creative and important new working paper, Race to the Bottom: Competition and Quality in Science, economists Ryan Hill and Carolyn Stein tackle this empirical challenge. They focus on structural biologists, whose research deciphering protein structures has advanced drug and vaccine development (including for COVID-19) and led to over a dozen Nobel Prizes. Journals and funding agencies generally require structural biologists to deposit their structures for proteins and other biological macromolecules in a worldwide repository, the Protein Data Bank (PDB). Using this rich dataset, Hill and Stein have documented that structures with higher expected reputational rewards induce more competition and are completed faster—but at lower scientific quality. Recognizing and navigating this tradeoff is important for scholars and policymakers concerned with allocating rewards among competing innovators through policy instruments ranging from academic credit to intellectual property.

Three key features of the PDB make it a viable setting for this research. First, it has objective measures of project quality. The quality of a PDB structure is based on how well it fits to experimental data, resulting in quantitative, unbiased quality metrics. Second, it provides measures of project timelines. The authors could observe both the time between collecting experimental data and depositing a structure (as a measure of project speed) and the time between a first deposit and the deposit of similar structures (as a measure of competition). Third, it enables estimates of the expected reputational reward from winning the priority race to deposit a given protein structure. The detailed descriptive data in the PDB allows a structure’s potential to be estimated based on information that would have been known to researchers before they began working, including the protein type, organism, and prior related papers.

If scientists can choose whether to invest in a research project and how long to refine their work before publishing, then the projects with the highest potential reputational rewards should induce the most entry—but entrants concerned about being scooped may also rush to publish their work prematurely. And this is exactly what Hill and Stein find. Structures in the 90th versus the 10th percentile of the potential distribution induce more competition (30% more deposits), are completed faster (by 2 months), and have lower scientific quality (by 0.7 standard deviations). The fact that high-potential projects are completed more quickly suggests these results aren’t driven by high-potential projects being more complex. Additionally, the authors show that these correlations are smaller for scientists who receive lower reputational rewards from publication and priority: researchers at government-funded structural genomics consortia, who are focused on achieving a comprehensive protein catalog rather than publishing individual results.

The welfare implications of rushed, low-quality protein structures appear significant. Improving a structure generally requires inefficient reinvestment of the same costs expended by the original research team. But optimizing existing incentives is challenging. Hill and Stein consider increasing the share of credit allocated to the second-place team—such as through recent journal policies that treat scooped papers on equal footing with novel papers—and conclude that if the total rewards are fixed (as seems plausible with scientific credit), the quality improvement might be outweighed by decreased investment. As another option, they argue that both investment and quality could be improved by barring entry by competitors once one team has started working on a protein structure—a sort of academic prospect theory, as was the norm in the early days of structural biology, before the size of the field made the norm too difficult to enforce. Importantly, this result depends on the specific nature of their model, with quality differences driven more by rushed work to avoid being scooped than by the skill of the research team. Reintroducing this kind of entry barrier for academic research would be challenging (and problematic under antitrust laws), but this result may inform debates over the optimal timing of awarding patent rights.

Hill and Stein’s rigorous empirical evidence that innovation races can lead to decreased-quality scientific work is a welcome addition to the innovation racing literature, not least because many racing models omit this consideration altogether. And their paper is also well worth reading for their thoughtful discussion of key factors for allocating rewards among competing innovators. First, how easy is it to build on incomplete work, both scientifically and legally? Unlike in structural biology, follow-on work is not always particularly costly; for example, if an ornithologist releases an incomplete dataset of bird species, a subsequent team can pick up the project relatively seamlessly, increasing the value of early disclosure. Second, how important are differences in research skill relative to the decline in quality caused by rushing? Ending innovation races early may be effective in structural biology, but in many cases, giving the first team time to complete work well may not be worth the cost of preventing a better team from stepping in. Third, are rewards fixed? Creating additional academic credit may be difficult, but financial rewards—including through government prizes and subsidies—can be used to increase the second team’s payoff without reducing the first’s.

Before reading this paper, I had thought about the problem of rewards for incomplete research primarily in terms of quality thresholds such as patentability criteria, but choosing a threshold that applies across projects of varying difficulty is challenging in practice. Hill and Stein have given me a richer understanding of the relevant variables and policy instruments for tackling this challenge, and I look forward to seeing the impact this work has on the innovation law community.

Cite as: Lisa Larrimore Ouellette, How Do Innovation Races Affect Research Quality?, JOTWELL (April 30, 2021) (reviewing Ryan Hill & Carolyn Stein, Race to the Bottom: Competition and Quality in Science (Jan. 5, 2021)), https://ip.jotwell.com/how-do-innovation-races-affect-research-quality/.

A Bold Take on Copyright Implications of Text & Data Mining

Michael W. Carroll, Copyright and the Progress of Science: Why Text and Data Mining Is Lawful, 53 UC Davis L. Rev. 893 (2020).

Professor Carroll is not the first copyright scholar to have asserted that text and data mining (TDM) is and should be lawful as a matter of copyright law (and he probably won’t be the last).1 The hook that pulled me through the 72 pages of his excellent article was the introduction’s announced intention to explain why use of TDM tools to run searches on digital repositories of infringing copies of copyrighted works does not infringe, at least as a matter of U.S. copyright law.

Text and data mining is a multi-stage technical process by which researchers compile and refine large quantities of text and other data so that it can be processed with statistical software to detect patterns that would be difficult or impossible for a human to perceive without the aid of the machine. The article considers the legality of TDM using SciHub as an exemplar. SciHub is a well-known repository of vast quantities of the scientific journal literature. Many scientists want to do TDM research using SciHub, but courts have held that that database is infringing. Although SciHub has more than once been forced to shut down, it has re-emerged every time and can still be found on the Internet.

Well-documented in this article, as well as in the technical literature to which Carroll copiously cites, is the promise of myriad scientific insights that researchers’ use of TDM tools could unlock in a wide variety of fields. (For those not already conversant with TDM technologies, this article provides a very useful primer that is neither too nerdy nor too simplistic for lay readers to follow.) If promoting progress in science and useful arts continues to be copyright’s constitutional purpose, the logical conclusion follows, Carroll intimates, that copying of in-copyright works to enable TDM research is and should be lawful.

Thanks to the Supreme Court’s Eleventh Amendment jurisprudence2 and the audacity of Google and the University of Michigan when agreeing to allow Google to scan all eight million books in the university’s library in exchange for the library’s getting back a digital copy, and thanks also to the Authors Guild for its unsuccessful lawsuits charging Google, the University of Michigan and its HathiTrust repository with copyright infringement, we know that digitally scanning in-copyright books for TDM and other non-consumptive purposes is non-infringing.

Carroll methodically works through each type of copying that happens in the course of collecting, formatting, processing, and storing data for TDM purposes. The article works through the relevant copyright case law for each type of copying that TDM involves. The ground over which the article travels will be familiar to many readers, but it provides a useful recap of how the law of digital copying has evolved over the last two decades.

Copyright is not, of course, the only potential obstacle to TDM research. Numerous proprietary publishers of scientific journals offer institutional database subscriptions to universities and other research institutions. However, those digital repositories are not interoperable. Researchers consequently cannot run searches across various databases. Cross-publisher collaborations are rare, and the license terms on which databases are available may impair researchers’ ability to make the full use of TDM tools. Publishers and the Copyright Clearance Center are promoting licensing of TDM as a value-added service and some of these licenses are more restrictive than TDM researchers would want.

One can understand why scientific researchers, even at institutions with institutional database subscriptions, would be attracted to using SciHub for TDM research. It is easier to use than some of the publisher repositories; the SciHub database is far more comprehensive than any of the proprietary databases; and there are no license restrictions to limit researcher freedom to investigate with TDM tools to their hearts’ content.

Downloading SciHub seems a risky strategy for TDM researchers who do not want to be targets of copyright infringement lawsuits. Carroll argues that running TDM searches on the SciHub collection hosted elsewhere involves only the kind of transient copying that the Second Circuit found too evanescent to be an infringing “copy” of copyrighted television programming in the Cartoon Networks case. The results of the TDM research would be unprotectable facts extracted from the SciHub collection.

This is a bold assertion, which is well-documented. Read it for yourself to see if you agree.

  1. See, e.g., Edward Lee, Technological Fair Use, 83 S. Cal. L. Rev. 797, 846 (2010); Jerome H. Reichman & Ruth L. Okediji, When Copyright Law and Science Collide: Empowering Digitally Integrated Research Methods on a Global Scale, 96 Minn. L. Rev. 1362, 1368-70 (2012); Matthew Sag, Copyright and Copy-Reliant Technology, 103 NW. U. L. Rev. 1607 (2009).
  2. The Supreme Court has concluded that the Eleventh Amendment bars damage awards against states or state-related institutions. See Allen v. Cooper, 140 S.Ct. 494 (2020). The University of Michigan had reason to think that its endowment was safe from any lawsuit that might challenge its deal with Google as copyright infringement.
Cite as: Pamela Samuelson, A Bold Take on Copyright Implications of Text & Data Mining, JOTWELL (April 1, 2021) (reviewing Michael W. Carroll, Copyright and the Progress of Science: Why Text and Data Mining Is Lawful, 53 UC Davis L. Rev. 893 (2020)), https://ip.jotwell.com/a-bold-take-on-copyright-implications-of-text-data-mining/.

Refashioning Copyright’s “Substantial Similarity” Infringement Test

Carys J. Craig, Transforming ‘Total Concept and Feel’: Dialogic Creativity and Copyright’s Substantial Similarity Doctrine, 38 Cardozo Arts & Ent. L. J. __ (Forthcoming), available at SSRN.

Carys Craig is far from the first scholar to criticize copyright law’s vague “substantial similarity” test for infringement, especially when that test is based on the even vaguer “total concept and feel” standard. The difference is that in her new article, Transforming “Total Concept and Feel”: Dialogic Creativity and Copyright’s Substantial Similarity Doctrine, Professor Craig advances an alternative approach that might get some traction.

Professor Craig centers her critique on a recent case that involves the two images below. A jury could look at these two photos and decide that an ordinary person could view the “total concept and feel” of the two images as the same. But Craig explains why that’s not the right outcome.

The image on the left is a photograph by Lynn Goldsmith. The subject, of course, is the late and much-lamented Prince Rogers Nelson: the musician the world knew and will remember as “Prince.” Goldsmith made the photograph in 1981, just as Prince was breaking into the public’s consciousness. The photograph was made during a shoot that was originally undertaken for a planned story on Prince in Newsweek—a story that never ran in that magazine. In 1984, Vanity Fair licensed the Goldsmith photo for $400, but they didn’t use it in the story they published on Prince. Instead, they gave it to pop art colossus Andy Warhol, who cropped the photograph down to Prince’s face and used it as the raw material for 16 iconic portraits in Warhol’s “Prince Series,” one of which, pictured below at right, was used to illustrate the Vanity Fair article.

After Vanity Fair used the Warhol portrait again as the cover illustration for a 2016 special issue commemorating Prince’s death, Goldsmith made her displeasure known. The Warhol Foundation responded by filing a complaint in federal district court in Manhattan seeking a declaratory judgment that Warhol’s Prince Series did not infringe the Goldsmith photograph. The Foundation argued both that the Warhol portraits were not substantially similar to the Goldsmith original, and that the Warhol works were protected as fair uses. In 2019 the district court ruled for the Warhol Foundation, finding that the works in the Prince Series were fair use without considering the Warhol Foundation’s substantial similarity arguments. The case is now on appeal to the Second Circuit.

Professor Craig’s article focuses on the substantial similarity issues that the litigation over Warhol’s Prince Series raises. It is perhaps an odd fact that on perhaps the single most important issue in copyright law—the degree of similarity between a plaintiff’s work and a defendant’s that is necessary to support liability—the copyright statute is conspicuously silent.

In the absence of any statutory command, the federal courts have developed a set of related tests for similarity that all boil down to the same ultimate inquiry: would an ordinary lay observer, reader, or listener (the final word used depends on the medium involved) consider the works to be impermissibly similar?

As Professor Craig notes, the reaction of an ordinary lay observer is certainly relevant, but it should not comprise the entirety of the test for what is referred to as “substantial similarity.” Section 102(b) of the Copyright Act directs that courts must not base infringement findings on facts, ideas, concepts, and other elements of creative works that are outside the scope of what copyright protects. And it’s true that before remitting an infringement lawsuit to the ordinary observer’s judgment, courts often perform an analysis in which they dissect the work into protectable and unprotectable elements, disregard or “filter” the latter, and compare the degree of similarity with respect only to the former.

But courts do this only to ensure that there is enough similarity in protected expression for the case to go to the jury, which will then apply the ordinary observer test. So if a reasonable jury could find infringement based on similarity in protected elements alone, the “dissection” phase of the test concludes and the case is given to the jury to apply the ordinary observer test. Often, courts treat the two phases of the analysis as disjoint. That is, juries often are not instructed, in performing their ordinary observer analysis, to disregard similarities that relate to elements of the plaintiff’s work that the court previously has found unprotected.

Consequently, the jury’s ordinary observer inquiry often is little more than the application of unguided intuition. As Professor Craig notes—and others have noted—nothing in the “ordinary observer” test directs that juries, or even judges in bench trials, confine their intuitions about impermissible similarity to the protectable elements of the plaintiff’s work.

Professor Craig argues that this problem is made worse by a formulation of the substantial similarity test that appears in decisions of both the Ninth and Second Circuits directing juries to assess similarity in terms of the “total concept and feel” of the two works at issue. That formulation is indeed a misfire. It directs juries to focus on an element, the work’s “concept,” that Section 102(b) of the Copyright Act identifies specifically and by name as unprotectable. The formulation also directs juries to focus on an element, the “feel” of a work, that may differ from observer to observer and is only dubiously copyrightable even when its content can be articulated with any measure of precision.

In short, copyright’s substantial similarity test is a doctrinal failure. But Professor Craig has a suggestion for how to salvage the test—one which uses elements of the current approach and therefore is in the nature of a course-correction that a court might actually entertain. Here, in a nutshell, is Professor Craig’s approach:

[T]he unlawful appropriation step should begin with the holistic comparison of the two works to determine if their overall aesthetic appeal is substantially similar. If the works are, perceived in their totality, substantially different, then the infringement inquiry should end there: The defendant has created a non-infringing new work that is, in its “total concept and feel,” more than merely a colorable imitation of the plaintiff’s work. If the works are substantially similar in their overall impression, the decision-maker should proceed (with the necessary expert evidence) to dissect the plaintiff’s work into protectable and public domain elements, and to filter out the latter from the infringement analysis. The defendant’s work can then be compared again to the protected elements in the plaintiff’s work. If they are not substantially similar after the unprotected elements have been appropriately filtered out, there is no infringement, notwithstanding the similarities in their “total concept and feel.” If, on the other hand, the defendant’s work is substantially similar to the protected expression in the plaintiff’s work, prima facie infringement is established, and the decision-maker can proceed to consider the availability of a fair use defense.

Professor Craig revises the substantial similarity test at two levels. First, she’s re-fashioned the “total concept and feel” test into a tool for identifying when a defendant’s work differs so substantially from a plaintiff’s that it should escape copyright liability altogether. In such instances, defendant’s copying was in the service of creating something substantially new—an outcome which fulfills copyright’s grounding purpose of encouraging the production of new works, and which therefore, Professor Craig argues, should be outside the scope of the copyright holder’s monopoly.

The Warhol Prince Series, Professor Craig suggests, should escape liability: Warhol’s works, which boldly outline Prince’s face against various brightly-colored backgrounds, portray Prince as an icon at the height of his power, as distinguished from the young, vulnerable artist captured in the Goldsmith photograph. It is unclear precisely how Warhol achieves this transformation; Warhol’s ineffability is entwined with his greatness. It is clear—at least to me—that Warhol does in fact produce work that is both recognizably based on the Goldsmith and yet indisputably new.

Professor Craig’s first move thus re-conceptualizes the “total concept and feel” formulation as a helpful way of framing the inquiry into whether the defendant’s work is new enough to slip the bonds of plaintiff’s copyright, rather than as a misleading way of inquiring into the presence of the degree of similarity required for liability.

Professor Craig’s second innovation is equally helpful. She re-positions the “dissection” part of the substantial similarity test to a place—after the revised “total concept and feel” inquiry rather than before—where it can actually do some good. After Professor Craig’s re-ordering, the two parts of the test are no longer disjoint. Rather, dissection must be undertaken only if the initial inquiry does not fall in favor of the defendant. In that case, the “total concept and feel” of defendant’s work is close enough to plaintiff’s that the factfinder, be it jury or judge, must inquire whether the similarities between the two works are due to defendant appropriating substantial protected expression, or to re-using unprotected facts, ideas, concepts, stock elements properly treated as scènes à faire, or expression that merges with underlying ideas.

While I would have welcomed an explanation of how her revised substantial similarity test could be administered in a jury trial, the article merits a close reading, and I recommend it.

Cite as: Christopher J. Sprigman, Refashioning Copyright’s “Substantial Similarity” Infringement Test, JOTWELL (March 5, 2021) (reviewing Carys J. Craig, Transforming ‘Total Concept and Feel’: Dialogic Creativity and Copyright’s Substantial Similarity Doctrine, 38 Cardozo Arts & Ent. L. J.  __ (Forthcoming), available at SSRN), https://ip.jotwell.com/refashioning-copyrights-substantial-similarly-infringement-test/.