The Journal of Things We Like (Lots)

Intellectual Property Rights: A Destroyer as Well as a Creator of Jobs?

Camilla Hrdy, Intellectual Property and the End of Work, 71 Fla. L. Rev. 303 (2019).

Do intellectual property (IP) rights create or destroy jobs (or both)? Industry associations and governmental agencies, such as the Patent & Trademark Office (PTO), frequently tout IP as a major force in creating (good) jobs as well as significantly contributing to economic growth. In 2016, the PTO, for instance, claimed that IP-intensive industries were directly or indirectly responsible for 45.5 million jobs, said to represent 30 percent of all jobs in the US. Without questioning this statistic, Professor Hrdy’s article explains that this is at best only one side of the story.

The main insight of the article is this: “Intellectual property may be partly responsible for job creation for people who work within IP-intensive industries . . . But a significant subset of innovations protected by IP, from self-service kiosks to self-driving cars, are labor-saving, and in many cases also labor-displacing” (emphasis in the original). The development and deployment of automated systems for performing a wide variety of tasks in a wide array of industries is “drastically reduc[ing] the amount of paid human labor required to complete a task.” Job losses resulting from technological change give rise to what economists call “technological unemployment.”

While some studies have concluded that the displacement of labor due to technological innovations has resulted in more job creation than job destruction, Hrdy questions whether this result will hold true in the near future owing to several factors. These include the increasing quality and pace of automation in various sectors, a decrease in quality of the work that remains unautomated, a rising inequality in who has what kinds of jobs, and the inability of education to keep pace with the needs of displaced workers.

Hrdy offers self-driving trucks as a case study. According to the American Trucking Association, there are currently 3.5 million professional truck drivers in the US. The median salary of these truck drivers is about $40,000 per year. If the huge investments now being made in the development of IPR-laden self-driving trucks pay off, truck driving may no longer be a viable source of employment for most, if not all, of these people. The highly skilled engineers who are developing the software and hardware for self-driving vehicles generally earn more than $200,000 per year. But truck drivers cannot easily or quickly become engineers. Their jobs are going to be substantially displaced by automation. What is to become of these workers?

What should the US do about the labor-displacing impacts of technological innovation? The right answer is not the one Queen Elizabeth I chose when asked to grant exclusive rights to a knitting machine in 1589. She denied the inventor’s request for a patent because it would bring ruin to the many workers who made their living by hand-knitting clothing and other products. Elon Musk has suggested a universal basic income initiative as a solution. Bill Gates has proposed a robot tax. These and other possible solutions to the end-of-work problem brought about by advances in technology, and by the IPRs on which those innovations are built, are discussed in Hrdy’s fascinating paper. IP may not be wholly responsible for the end of work, but Hrdy says that it “magnifies the division of rewards between generators of IP and the workers whom their innovations replace.” Without destroying the incentive effects of IP, Hrdy would have us consider and address the distributive effects.

Whether you agree with Hrdy’s conclusions or not, this provocative article is well worth a read.

Cite as: Pamela Samuelson, Intellectual Property Rights: A Destroyer as Well as a Creator of Jobs?, JOTWELL (February 17, 2020) (reviewing Camilla Hrdy, Intellectual Property and the End of Work, 71 Fla. L. Rev. 303 (2019)), https://ip.jotwell.com/intellectual-property-rights-a-destroyer-as-well-as-a-creator-of-jobs/.

Plagiarize This Jot

Brian L. Frye, Plagiarize This Paper, IDEA: The IP Law Review (forthcoming 2020), available at SSRN.

Oscar Wilde: “That was an awfully good joke you made last night. I wish I could say it was mine.”

James Whistler: “You will, my boy. You will.”

Melvin Helitzer: One day Milton Berle and Henny Youngman were listening to Joey Bishop tell a particularly funny gag. “Gee, I wish I said that,” Berle whispered. “Don’t worry, Milton, [said Henny,] you will.”

Plagiarism is not a crime, or even a cause of action. But it is the “academic equivalent of the mark of Cain,” a curse that cannot be undone. Even an unsubstantiated accusation leaves an indelible stain, and a credible complaint cannot be countered. A plagiarist is an academic pariah, a transgressor of the highest law of the profession, the embodiment of the “great deceiver,” who leads everyone astray. Anything else can be forgiven, for the sake of the scholarship. Plagiarism tarnishes the scholarship itself, and leaves it forever suspect. If the purpose of scholarship is dowsing for truth, then the plagiarist is a liar who poisons the well from which everyone draws.

This is a jot recommending Brian Frye’s short, lively, and incisive article about plagiarism, Plagiarize This Paper. And, fittingly, everything you’ve read before this paragraph I’ve plagiarized from Brian’s work.1

Or have I?

Brian wants others to copy his words, and even his ideas, and he doesn’t care whether we attribute them to him. He tells us that very clearly in his paper:

I explicitly authorize plagiarism of this article. I permit and encourage people to copy this article and republish it under their own name. I permit and encourage people to copy expressions from this article and use them without attributing them to me. And I permit and encourage people to use the ideas expressed in this article without attributing them to me.

Brian’s attempt to license plagiarism raises a fascinating question – what are norms against plagiarism really about? There are two, often entwined, interests that are usually identified as being protected by norms against plagiarism. First, it is often said that norms prohibiting plagiarism are there to protect authors’ interest in attribution. So, for example, anti-plagiarism norms would protect a junior academic against unacknowledged taking of her ideas by a senior academic who might otherwise have the power to get away with it. That seems like an interest worth protecting, though, of course, figuring out the origin of ideas with any precision is often difficult in reality. And second, anti-plagiarism norms are often characterized as protecting readers’ interest in not being defrauded – i.e., preventing a writer from fooling readers into thinking that a penetrating idea or a felicitous sentence is his when in fact it was invented by another. As a law professor who grades papers, this justification rings true to me – I want to know that the brilliant things that my students say in their papers are actually the product of their minds.

With respect to authors’ attribution interest, Brian has made clear that he doesn’t care about attribution, and that raises the question whether it’s fair to condemn as a “plagiarist” someone who denies to Brian what he doesn’t want. Brian is engaging in a provocation here: He thinks that people are likely to resist the notion of licensed plagiarism, which suggests that anti-plagiarism norms aren’t actually about protecting authors or readers, but about something else:

Ultimately, plagiarism norms are just cartel rules dressed up as moral obligations. Different discursive communities have adopted different plagiarism norms because they have different economic interests. And the plagiarism norms adopted by a community reflect the economic interests of its members. As those economic interests are contested and shift, the community’s plagiarism norms also are contested and shift. Accordingly, the plagiarism norms of any particular discursive community typically reflect the consensus interests of that community at that point in time.

I think Brian is on to something: If you look closely at institutional anti-plagiarism policies in the real world, it’s difficult to explain their actual content purely by reference to the interest of authors in attribution, or of readers in not being lied to. As an example of how anti-plagiarism norms tend to over-run both of these justifications, take my own institution, NYU. Its standard of “academic integrity” begins with a general definition of plagiarism as “presenting others’ work without adequate acknowledgement of its source, as though it were one’s own.” “Plagiarism,” the general definition continues, “is a form of fraud. We all stand on the shoulders of others, and we must give credit to the creators of the works that we incorporate into products that we call our own.”

This general definition seems to focus precisely on the two justifications discussed earlier: (1) the interest of readers in not being lied to and (2) the interest of writers in receiving attribution when others are “standing on their shoulders.” These are valid justifications for anti-plagiarism norms, so all seems fair enough so far. But then the NYU plagiarism statement gets down to some examples of use by one writer of the work of another that would count as plagiarism:

  • a sequence of words incorporated without quotation marks
  • an unacknowledged passage paraphrased from another’s work
  • the use of ideas, sound recordings, computer data or images created by others as though it were one’s own

These examples don’t mesh well with the two justifications. Take the first, “a sequence of words incorporated without quotation marks,” or the second, which bars unacknowledged paraphrasing. If you think about whether taking a “sequence of words” or paraphrasing without acknowledgment should count as either fraud on readers or unacknowledged shoulder-standing that hurts writers, you will realize rather quickly that the answer depends entirely on the particular words involved.

Let’s first imagine a student who takes these words: And then the day came, when the risk to remain tight in a bud was more painful than the risk it took to blossom.

Some people might recognize that these words comprise the entirety of Anais Nin’s short poem “Risk.” But some people won’t. If the student uses the words without acknowledgment, she will have defrauded readers who don’t recognize the source. We recognize the unacknowledged taking here as fraud on readers because of the effect of the specific words. They are lovely, insightful, richly metaphoric. Someone who doesn’t have Nin’s poetic oeuvre committed to memory might think that the genius of the words is the student’s genius, which it is not. And that gets to the other justification – the attribution interest of writers. Nin has a legitimate attribution interest here, precisely because this particular sequence of words is so highly creative. The student is standing on a giant’s shoulders, but unless she says so, not everyone is going to see that. If anyone qualifies as a plagiarist, this student qualifies.

Now let’s imagine a second student who takes these words: When he lived in Chicago, Mojica sang in punk bands, ran a record label, and owned the Jinx Café and a video rental shop called Big Brother.

I doubt you will recognize these words, and there’s no reason anyone should. They comprise a sentence of workmanlike journalistic prose, reporting basic facts about an actual person. These words, and others like them, were the subject of a recent public plagiarism scandal involving former New York Times editor Jill Abramson, who was accused of copying (actually, mostly paraphrasing), in a book she wrote, short passages from articles written by a number of less-well-known journalists. In fact, Abramson cited (in the book’s endnotes) virtually all of the articles she used. But in several places she failed to put language that she copied or paraphrased in quotation marks. For that, she was publicly flayed.

Did Abramson deserve to be labeled as a plagiarist? She did not commit fraud in the same sense as did the student who took Anais Nin’s words without acknowledgment. The words Abramson took are certainly useful to her narrative, but no one is going to give Abramson any credit tied to the particular words. As literary composition they are utterly banal. The words do report facts, but the facts are of little or no value in themselves. Abramson is not making off with someone else’s diamonds, nor is she standing on another writer’s shoulders – the words comprise, at most, a very small foot-stool. At bottom, Abramson is simply reproducing a sequence of words without telling you where they originated.

In the end, that’s what many plagiarism disputes are largely about. Not fraud on readers, or failing to acknowledge that one is “standing on the shoulders” of another writer, as the NYU general definition of “plagiarism” claims, but the mere reproduction of words. That is also what copyright law is about, and one might ask why we extend plagiarism norms beyond their core when we have copyright to police mere word-taking. Brian has an explanation:

Essentially, academic plagiarism norms are the equivalent of a tax imposed on junior scholars, for the benefit of senior scholars. Junior scholars must err on the side of attributing ideas to senior scholars, whether or not attribution is accurate or helpful, on pain of suffering a plagiarism accusation. As a consequence, senior scholars collect “interest” on the intellectual capital of junior scholars.

That’s a big idea, one that extends beyond ideas to works, and also beyond academia to journalism and other places where enforcement of anti-plagiarism norms is particularly fierce. And Brian deserves full credit for this idea. Even if he doesn’t want it.

  1. Except for the Henny Youngman joke, which I re-purposed from a paper of my own to make a point about the ubiquity of plagiarism. See Dotan Oliar & Christopher Jon Sprigman, There’s No Free Laugh (Anymore): The Emergence of Intellectual Property Norms and the Transformation of Stand-Up Comedy, 94 Virginia L. Rev. 1787 (2008).
Cite as: Christopher J. Sprigman, Plagiarize This Jot, JOTWELL (January 13, 2020) (reviewing Brian L. Frye, Plagiarize This Paper, IDEA: The IP Law Review (forthcoming 2020), available at SSRN), https://ip.jotwell.com/plagiarize-this-jot/.

What We’ve Got Here Is a Failure to Indicate

Alexandra J. Roberts, Trademark Failure to Function, 104 Iowa L. Rev. 1977 (2019).

When a new word or catchphrase enters the social lexicon, some individuals will rush to the U.S. Patent and Trademark Office to try to be the first to obtain a trademark registration. That was the case with one John E. Gillard, who applied pro se to register #COVFEFE for hats, T-shirts, and related goods mere hours after the President included the word at the end of a midnight tweet on May 31, 2017. The Trademark Trial and Appeal Board affirmed refusal of the registration on the grounds that #COVFEFE, as a “sui generis nonsense word” that allowed users and observers to “project onto it any meaning they wish,” failed to function as a trademark for the applicant’s goods, particularly given the wide array of merchandise from different sources already featuring the term.

The question may be a bit murkier, though, when the applied-for mark has a closer association with the applicant. In August, many media outlets were abuzz with the news that The Ohio State University had filed an application seeking a trademark registration for the word THE, for use on “clothing, namely t-shirts, baseball caps and hats” in “standard characters, without claim to any particular font style, size, or color.” Included among the submitted specimens was a photograph of a women’s T-shirt from “The Official Team Shop of Buckeye Nation®,” depicting a red shirt with the word THE in large block letters, with a smaller Ohio State logo underneath, as well as a photograph of a white baseball cap with the word THE in large red capital letters. The popular reaction seemed to revolve around the perception that it was ludicrous to obtain a trademark registration for a common English word, even though many such trademarks exist. (APPLE comes immediately to mind.)

A more nuanced critique, as Alexandra Roberts’s insightful article Trademark Failure to Function helps us to understand, would ask whether THE functions as a trademark in this context — whether consumers would use the word THE on the front of a T-shirt or cap to allow them to find merchandise produced by (or perhaps authorized by) the university or whether the word functions simply as an expressive device, a way of communicating support for the institution and/or its athletic teams (what trademark law calls ornamental use). This, of course, is not an issue limited to Ohio State’s application. A consumer who buys a t-shirt with a swoosh on the front may be using the swoosh both to identify a perceived high-quality shirt and to communicate to others that the consumer is a person who wears Nike apparel. But when what is on the front of the shirt is less likely to be used to communicate the quality of the manufacture, as with many T-shirts adorned with words or graphics, one might ask whether it is functioning as a trademark at all. Indeed, on September 11, the trademark examining attorney refused Ohio State’s registration. Given the location of the word THE on the “upper-center area on the front of the shirt and the front portion of the hat, where ornamental elements often appear,” the examining attorney concluded, the word did not function as a trademark to indicate the source of Ohio State’s merchandise. It remains to be seen whether Ohio State will challenge this conclusion or abandon its attempt to seek registration.

Professor Roberts’s article highlights that the “failure to function” doctrine has not received enough attention from trademark scholars, particularly in comparison to the question of trademark distinctiveness. The two are different, although related. Distinctiveness analyzes the semantic content of a mark — whether consumers will understand the mark to have a relationship to its good or service that conveys that it is functioning as a proper name and not as a description or generic term. The trademark GREYHOUND for bus services will be interpreted by consumers as a proper name, we surmise, because the term is being deployed metaphorically. One must first understand that greyhounds are known for being fast animals before one can understand that the mark suggests that the bus service is also fast. Because the mark owner could have conveyed this information more directly by using a word like “speedy,” we assume that consumers will conclude that the choice to use a metaphor must mean that the mark owner intends the mark as a name. The Abercrombie spectrum (derived from Abercrombie & Fitch Co. v. Hunting World, Inc.), a hierarchy of trademark distinctiveness, is a mainstay of trademark validity analysis.

Whether a trademark functions as a mark, however, asks whether the mark “appear[s] where consumers expect a trademark to appear” and whether it is “sufficiently set off from the surrounding text and images to attract notice” (P. 1981) — in other words, the “visual relationship” between the mark and its goods or services rather than the “conceptual relationship” between them. (P. 1983.) The interaction between distinctiveness and function, Professor Roberts contends, is “interdependent and inverse: the less distinctive a mark is, the greater indicators of trademark use are needed to ensure consumers will perceive it as a mark, and vice-versa.” (P. 1987.) In other words, consumers might recognize a fanciful term such as a pharmaceutical name no matter how it appears, but a less distinctive mark might require visual clues such as font, color, design, or a ™ symbol to convey that it is intended as a mark. These are things that the marketing and design literature has studied for some time, and Professor Roberts mines the research to demonstrate the importance of a consumer-centric analysis. Scholars building on her work might think about the ways in which these interpretations depend, as Rebecca Tushnet has noted, on understanding the role of implication in speech and on cultural competency; considerations of literacy, visual acuity, and other tools that consumers may or may not bring to the table may also complicate the analysis.

Because courts have not paid enough attention to the failure to function doctrine, writes Professor Roberts, and often analyze it separately from distinctiveness, the doctrine has been underdeveloped, which means that some faulty registrations have managed to escape close review. A tandem review also militates against strategic lawyering, as Professor Roberts points out, as a savvy trademark practitioner can easily help her client overcome a failure to function refusal by, for example, advising the creation of a hangtag or other ways commonly used to indicate trademark use.

The article caused me to think more deeply about something I had largely taken for granted. Professor Roberts’s focus is largely on word marks, but her analysis of trade dress and the Seabrook standard provides a helpful way of considering the various ways in which we assume consumers will understand trademarks. The more nuanced analysis she suggests doesn’t always, of course, provide an easy answer. Professor Roberts cites as examples of error the registration of #BeUnprecedented for legal research services and #SharetheSilence for alcoholic beverages, contending that the specimens offered no evidence that consumers would see these as anything but hashtags. (Pp. 2011-2012.) But a marketing expert might argue that those are exactly the kinds of phrases one would develop as a slogan for a client, and consumers in this context might assume that the hashtag developed from a slogan, rather than the reverse. Those who follow the news might immediately understand that #COVFEFE refers to the President’s tweet, not to any one manufacturer, but do buyers of Ohio State apparel understand THE to be only an expression of fan support or also an indicator of licensed merchandise? And if the university’s registration attempt is ultimately successful, what will consumers learn from that about trademark law generally?

Professor Roberts’s article reminds us that consumer perceptions are both the cause and the result of trademark validity. And it comes at the right time: The TTAB, as John L. Welch has documented, appears to be focusing more on failure-to-function in recent months, which makes Professor Roberts’s article even more relevant for practitioners as well as scholars. (A commenter at Mr. Welch’s blog offers the perceptive suggestion that the recent spate of failure-to-function denials at the TTAB may be evidence of a developing post-Brunetti strategy, as this example may confirm.) Failure to function may have been a somewhat dormant doctrine in recent years, but thanks to Professor Roberts, we can better understand the benefits of its coming out of hibernation.

Cite as: Laura A. Heymann, What We’ve Got Here Is a Failure to Indicate, JOTWELL (December 4, 2019) (reviewing Alexandra J. Roberts, Trademark Failure to Function, 104 Iowa L. Rev. 1977 (2019)), https://ip.jotwell.com/what-weve-got-here-is-a-failure-to-indicate/.

Right of Repair in the Digital Economy

Leah Chan Grinvald and Ofer Tur-Sinai, Intellectual Property Law and the Right to Repair, 88 Fordham L. Rev. 63 (2019).

We have long understood that people have a right to repair what they own, but this right to repair is under siege. A new article by Leah Chan Grinvald and Ofer Tur-Sinai explains how IP rules are inhibiting these repair rights and why laws protecting the right to repair are necessary and justifiable. As I explain below, Grinvald and Tur-Sinai describe the growing right-to-repair movement pushing for legislation to protect the right to repair and show how intellectual property laws should facilitate, not interfere with, consumers’ rights to repair what they own. The authors also propose a theoretical framework through which they analyze intellectual property doctrine as enabling rather than inhibiting consumer rights.

The right of repair problem is easy to identify and touches many aspects of everyday life. Our coffee machines break and, because certain components are protected by patent and copyright, this relatively simple machine cannot be fixed except by the manufacturer, who charges as much as the machine itself to replace the part. Automobile repairs performed only by “authorized dealers” preserve warranty agreements and are enforced through trademark law as well as patent law. These kinds of restrictions make the market for repairs tightly controlled and expensive, sometimes entirely foreclosed, pushing consumers to buy new products instead of fixing old ones. The restrictions benefit manufacturers and their business partners. But they hurt consumers and repair shops and contribute to substantial amounts of waste, inhibiting efforts at reversing devastating climate change.

In an elegant analysis, the Article identifies IP as both the problem and solution to protecting the right of repair. In a reversal of the usual use of IP, the authors cleverly rely on IP law’s internal justification of utilitarianism, incentivizing innovation, personal autonomy, scientific progress, and promoting information disclosure to justify expanding access to consumer products for users and small-businesses as opposed to strengthening control in the owners of intellectual property. When this analysis is paired with the socio-legal movement also described in this Article – the consortium of independent repair shops, “do-it-yourselfers,” and e-recyclers – the Article is a helpful and insightful roadmap for the “fair repair” legislative initiatives across the country.

One of the Article’s many engaging qualities is its plethora of details about the “fair repair” movement and the overwhelming variety of everyday consumer products affected. The Article describes legislative initiatives in approximately 20 states, the proposed model legislation for the “right to repair,” and the problems the legislation has been facing. In the process the authors paint a picture of a socio-political movement that should be relevant to most people but somehow has stayed relatively obscure. The examples of IP rules burdening or blocking our ability to repair cell phones, coffee makers, computers, agricultural machinery and home heating and cooling systems, to name just a few, are sufficiently ubiquitous for the problem described to feel personal and urgent.

The Article’s central innovation is its configuration of the “right of repair” in terms of four concentric circles. At the core is the individual consumer’s right to repair goods they own without interference by the original manufacturers. For this right, the Authors rely heavily on the first sale doctrine – the IP principle that once owned, an object is the dominion of the owner, free from restrictions by others on its use, resale, alteration, or destruction.

The next level circle expands the right of repair beyond consumers to repair shops and includes not only repair (as a form of property use) but diffusing information regarding repair, such as through instructional videos, paper manuals, or advertising. The Authors argue that diffusion of repair information is essential to enable the right of repair in a meaningful manner. This circle implicates First Amendment interests and the legality of circumventing “digital locks” that prevent access to technological mechanisms in need of repair.

The third circle focuses on the right to make, sell and use replacement parts in competition with the original equipment manufacturer. This level of the right to repair aims to disrupt the original manufacturers’ monopoly over replacement parts – a monopoly IP law arguably grants them – while preserving some market share to IP owners to earn profits necessary to incentivize the parts’ production in the first place.

The fourth circle proposes an affirmative duty of original manufacturers to ensure effective implementation of the right to repair. The Authors acknowledge this may be the most controversial component of their analysis but it is the focus of the model legislation currently debated and thus essential for a thorough discussion. It is also one of the most interesting aspects of the Article because it directly addresses the challenging theoretical issue of “rights” being meaningful only when complemented by “affirmative duties.” This discussion mentions the possible fruitful path of compulsory IP licenses and the difficulties posed by the 2016 Defend Trade Secrets Act.

At each level of the right to repair, the Authors analyze the relevant IP doctrines and legislative initiatives, emphasizing tweaks to IP doctrines necessary to effectuate the right and legislative corrections that would make proposed “right of repair” bills stronger or more likely to succeed. At the end of the Article, the Authors address lingering and plausible critiques of their proposals – such as maintaining quality, economic loss, and counterfeiting – tying up some of the loose ends that are inevitable in an ambitious project such as this one.

As all IP professors know, IP laws overlap, and that complicates the analysis of their application across a range of scenarios. Students nonetheless usually learn IP law doctrine by doctrine, and the justice implications of IP law can be frustratingly marginal in many introductory courses. But this Article and the right to repair for which it advocates provide serial examples that demand inter-doctrinal analysis combining trademark and patent law, for example, or copyright with design patent law. And in the context of the right of repair, the analysis also demands attention to welfare, environmental justice, and the implications of consumers investing in property they are legally forbidden to maintain. It is an article easily used in teaching to draw out these overlapping IP issues that also implicate civil rights concerns, or to recommend to students searching for a research topic with contemporary significance and many still-unanswered questions.

Although drawing on a political movement already afoot, this Article reads like a call to arms written by advocates who are equally theoretical and practical to launch a successful revolution. The elegance and thoroughness of the Article’s arguments, however, diminish the need for a revolution of IP law and call only for targeted adjustments fully consistent with IP law’s purposes and structural features. There are other likely partners in this movement left untapped. I couldn’t help but think of the many rights on which we rely that are dependent on others and on which this Article’s analysis could draw – rights to reproductive freedom and the right to vote, for example, both of which are under siege at the moment and critical to the human autonomy and equality on which this Article’s “right of repair” ultimately relies. Perhaps painting with too broad a brush and drawing analogies to controversial subjects undermines the Authors’ agenda. The Article’s framework is IP law, not constitutional law, but their mutual resonance makes the Article intriguing and timely. It is well worth consideration as a model of IP scholarship increasingly relevant in the digital age.

Cite as: Jessica Silbey, Right of Repair in the Digital Economy, JOTWELL (November 6, 2019) (reviewing Leah Chan Grinvald and Ofer Tur-Sinai, Intellectual Property Law and the Right to Repair, 88 Fordham L. Rev. 63 (2019)), https://ip.jotwell.com/right-of-repair-in-the-digital-economy/.

Erie and Unfair Competition’s Long and Winding Road

In her excellent addition to the Akron Law Review’s intellectual property volume, The Erie/Sears/Compco Squeeze: Erie’s Effects on Unfair Competition and Trade Secret Law, Sharon Sandeen “tells the story of the efforts undertaken in the aftermath of Erie to fill the gaps it left in the law of unfair competition.” Sandeen is particularly interested in the effect of Erie on what I would describe as the non-trademark-related areas of unfair competition, and especially the failed efforts to broaden the Lanham Act to cover trade secrets or otherwise develop general federal unfair competition legislation.

The tale goes like this: Prior to the Supreme Court’s decision in Erie, federal courts developed a robust common law of unfair competition. Sandeen describes that law as general federal common law, though it is only through the lens of Erie that the “federal” part of that formulation stands out. Federal courts developing unfair competition law before Erie didn’t think they were developing a different law than were state courts, and as Sandeen illustrates, federal courts were considerably more active in this area than state courts. Those federal courts thought they were developing the law of unfair competition. Erie created substantial uncertainty by throwing the status of that body of law into doubt and threatening disuniformity as states developed their own bodies of unfair competition law. Disuniformity was a significant concern, particularly to large commercial entities doing business nationally. Reformers made a variety of efforts to solve that problem with federal unfair competition legislation, and as Sandeen describes in detail, they largely failed. The reasons for that failure shed some interesting light on the coherence of the category of unfair competition—a category that has evolved considerably over time.

Sandeen thoroughly recounts concerns that the Erie decision “left gaps” in unfair competition law. But many of those purported “gaps” weren’t really attributable to Erie. As Sandeen shows, the substance of unfair competition was in considerable flux over the middle of the twentieth century. When Edward Rogers, the primary drafter of the Lanham Act, and others pushed for a federal law of unfair competition, they weren’t just trying to restore federal courts’ ability to rely on and develop federal common law. They were trying to create a considerably more expansive understanding of unfair competition.

For one thing, Rogers and others wanted courts to allow unfair competition claims even when the parties were not in competition. But the requirement of competition was not invented by state courts after Erie. It was always central to the concept of unfair competition, which was, after all, so named for a reason. The advocates for a federal unfair competition statute were also pushing for recognition of claims against a wider range of conduct than was traditionally recognized. Erie was, in a significant sense, more of an opportunity to change the law than the cause of new problems.

Sandeen also highlights another way in which the changing scope of “unfair competition” makes generalization in this area so difficult. On the one hand, courts did not respond to efforts to interpret §44 of the Lanham Act to create a general federal law of unfair competition. And as she documents exhaustively, efforts to pass other federal legislation largely failed. In that respect, unfair competition writ large was never federalized. Importantly in terms of Sandeen’s interests as a trade secret scholar, trade secrets were not swept into a broader federal unfair competition framework.

Courts did, however, federalize one very important part of unfair competition law when they radically expanded the scope of §43(a) of the Lanham Act to recognize causes of action involving “unregistered” trademarks. I use scare quotes there because calling the designations at issue in those cases unregistered trademarks was really a sleight of hand. In general, parties brought unfair competition claims precisely because they did not own (technical) trademarks. A much smaller universe of designators were considered trademarks in that era—only words or logos that did not provide any direct information about the nature of the products, their geographic origin, etc. Only designations that qualified as technical trademarks were federally registrable, and only federally registered trademarks could be enforced under federal law. Passing off by means other than use of a technical trademark was remediable by common law unfair competition. That difference was not only jurisdictional—unfair competition claims had different proof requirements and more limited remedies.

By re-denominating the designations at issue in unfair competition cases as “unregistered trademarks,” courts were doing much more than federalizing unfair competition claims. They were changing the nature and structure of trademark and unfair competition doctrine by expanding the subject matter of trademark law proper and emptying that part of unfair competition. In this context, then, federalization was not only, and perhaps even primarily, about solving a problem of disuniformity. It was instead a chance to expand the law to accommodate a wider range of designators and to treat them more favorably.

The most interesting thing about Sandeen’s chapter is the way it highlights substantial differences in the evolution of different parts of what many now consider unfair competition law. Courts had little resistance to federalizing the historical core of unfair competition—cases alleging passing off by means other than a technical trademark—using the Lanham Act to assimilate those claims to trademark law. They also recognized certain other semi-related claims (false endorsement) under the same section. But reformers were much less effective pushing federalization of a broader range of claims, including trade secrets.

That history is instructive because it suggests something implicit and unarticulated about the boundaries of unfair competition. Once upon a time, unfair competition had a fairly coherent and narrow meaning. All the claims recognized involved deceptive conduct that had the effect of diverting customers who otherwise would have gone to the claimant. The clearest example was passing off, which consisted of falsely indicating that your goods were those of another in order to secure the patronage that otherwise would have gone to the other. But other kinds of claims fit this pattern too. Product disparagement, for example, was recognized because it entailed a false claim about a competitor’s product for the purpose of diverting that competitor’s customers to oneself. Over time, unfair competition became a catch-all for claims based on bad things someone does in commerce. No longer do the claims require direct competition, and the “unfair” part of the formulation appears to have lost any independent meaning.

Sandeen helps us see one part of the explanation for that by highlighting the efforts the reformers made over the middle of the twentieth century to expand the concept of unfair competition to include a broader range of claims. But it’s notable that courts and Congress resisted the breadth reformers sought. The point shouldn’t be overstated. The fact that courts and Congress weren’t persuaded to dump everything into a general federal unfair competition law doesn’t mean none of those claims were recognized in some form (certainly trade secrecy long has been). The reasons for resisting full federalization are varied and sometimes context-specific. Still, one can’t help thinking there’s something to the reluctance to lump all of the purported unfair competition claims together. Perhaps by thinking through the distinctions sometimes made implicitly, we can make some progress on a theory of unfair competition that has substantive content.

Cite as: Mark McKenna, Erie and Unfair Competition’s Long and Winding Road, JOTWELL (October 14, 2019) (reviewing Sharon K. Sandeen, The Erie/Sears/Compco Squeeze: Erie’s Effects on Unfair Competition and Trade Secret Law, 52 Akron L. Rev. 423 (2019)), https://ip.jotwell.com/erie-and-unfair-competitions-long-and-winding-road/.

From Venetian Glass to Contemporary Intellectual Property: Revisiting Tailored Patent Regimes

  • Stefania Fusco, "Murano Glass Vase" in A History of Intellectual Property in 50 Objects (Dan Hunter & Claudy Op Den Kamp eds., Cambridge University Press 2019).
  • Stefania Fusco, Lessons from the Past: The Venetian Republic’s Tailoring of Patent Protection to the Characteristics of the Invention, 17 Nw. J. Tech. & Intell. Prop. __ (forthcoming 2020), available at SSRN.

The apparent one-size-fits-all configuration of contemporary intellectual property systems has troubled many a scholar. The topic has a particular salience in patent law and policy, where debates about the need to tailor legal regimes to technology-specific domains remain an evergreen theme. Stefania Fusco offers an interesting spin on that debate by looking backwards—far backwards, to the place and industry that are inextricably tied to the imagination of patent aficionados as the birthplace of the formalized patent system: early modern Venice and its glassmaking industry. And she puts forward an intriguing proposition: the place that exported the patent system to Europe and the United States had tailorable components, designed to calibrate incentives to innovation; why do we not consider a similar approach in early 21st-century America?

In Murano Glass Vase, Fusco briefly recounts the history of the glassmaking industry in Venice, framing it as the natural experiment from which the patent system sprang into the world. In the past, the Venetian model has received the attention of legal scholars like Ted Sichelman and Sean O’Connor, who explored the competition-enhancing properties of patents issued by the Venetian state. Like her predecessors, Fusco emphasizes the mix of exclusionary rights and trade secrecy that formed the backbone of Venetian innovation policy against the backdrop of a heavily regulated and protectionist economy. She describes how that mix was key in attracting foreign talent to Venice and how it facilitated technology transfer among the city state and the outer world.

In Lessons from the Past: The Venetian Republic’s Tailoring of Patent Protection to the Characteristics of the Invention, Fusco draws on original research performed at the Venetian State Archives to further the Murano narrative by looking at how the Venetian government fueled an ad hoc patent regime across industries by tailoring it to specific technologies. While Lessons from the Past makes several contributions of historical and comparative interest, I would like to focus on Fusco’s study of the implications of regime tailoring for regulatory policy. After all, it is not every day that someone discusses Venice in 1474 in the same breath (or abstract) as the 2011 Leahy-Smith America Invents Act.

1474 was the year in which the Venetian Patent Act was enacted, originating a system that was quite similar to contemporary statutory patent regimes: Venetian patents were issued for a period of 10 years and a fixed monetary penalty was established for patent infringement. At the same time, a customary patent regime also developed. Customary patents, unlike statutory ones, were flexible: they could be awarded for a fixed period of time, varying between 10 and 60 years, for life or in perpetuity; and the penalty for infringement could take the form of a per-item formula or a flat fee, set at a value well above statutory recovery. As Fusco’s research shows, the Senate used this flexibility to grant lengthier terms to patents relevant to certain fields (chief among which, that of water-related technology, ever so important to a city built in a lagoon).

Fusco then explores work by Lemley and Burk to tie the Venetian experience to the question of whether it would be appropriate today to tailor existing patent regimes to certain fields of technology. Of course, some of the mechanisms that were available to Venetian regulators are no longer an option in this era of internationally harmonized intellectual property regimes. For instance, article 33 of the TRIPS Agreement precludes the possibility of patent terms shorter than 20 years, while article 27 makes it hard for national legislators to differentiate between fields of technology for market-related purposes. At the same time, numerous tailoring possibilities are now available to domestic players in the intellectual property arena, which have been explored elsewhere in the literature over the past few decades. Among these tailoring possibilities, the impact of the Federal Circuit—as an agent of de facto tailoring—in the field of biotechnology comes to mind. At a different level—and as an example of legislative tailoring—the pre-America Invents Act non-obviousness provision in the Patent Act expressly carved out an exceptional regime for biotechnologies.

Lessons from the Past, however, goes beyond drawing inspiration from the rather blunt policy tools available to Venetian regulators at the dawn of intellectual property regimes. Instead, through her thorough examination of the institutional architecture of Venetian patent law, Fusco invites us to revisit our own regulatory landscape. She reminds us of a feature often overlooked when we tend to the minutiae of patent policy, technology regulation, and the design of incentives regimes: the U.S. Patent and Trademark Office (PTO) is somewhat of an oddity. And no, for once we are not talking about inter partes review and the patent death squad. But the PTO remains a singular case in the regulatory landscape: an agency with very limited rule-making authority, particularly at the substantive level. It is not the kind of oddity we can/should necessarily do away with, but perhaps we should question it more often.

While building on work by Jonathan Masur on regulatory design, Fusco highlights the sharp contrast between the PTO and agencies with robust rule-making authority, focusing on the Environmental Protection Agency (EPA), the Securities and Exchange Commission (SEC), and the Occupational Safety and Health Administration (OSHA). I would add to these the Food and Drug Administration (FDA), which often deals with areas of technology that overlap with those of the PTO—and, for that matter, with Venice’s experiments involving heavily regulated industries at the crossroads between patent protection and trade secrecy. The potential parallels (or lack thereof) between the PTO and the FDA strike me as particularly interesting. Contemporary scholarship has explored the FDA’s role as an information gatherer and an information-production catalyst. The PTO, like the FDA, amasses enormous quantities of information on a number of subjects (from drafting techniques to the breadth and specifics of the state of the art in a given industry). Unlike the FDA, however, the PTO’s lack of rule-making authority has prevented the Agency from refining doctrines according to the specificities of a given technological field. It has also insulated much of the information accumulated by the PTO from other players in the patent system—namely the Federal Circuit, which has historically been very parsimonious in seeking information from the Agency.

As Fusco reflects on the advantages of giving greater rule-making power—and consequently greater patent-tailoring power—to the PTO, I also see future pathways of inquiry that connect her work with that of Daniel Carpenter on regulatory structures, as well as issues like agency capture and agency-industry relationships.

To this reader, Fusco’s articles on the birth of the patent regime are therefore less of a historical narrative and more of a contemporary reframing of persistent issues in our patent law, policy and regulatory spaces. I think they will be of interest even to those working outside the intellectual property arena—from administrative law to the sociology of innovation. And to those who enjoy drinking out of fine Venetian glass, of course.

Cite as: Ana Santos Rutschman, From Venetian Glass to Contemporary Intellectual Property: Revisiting Tailored Patent Regimes, JOTWELL (September 25, 2019) (reviewing Stefania Fusco, "Murano Glass Vase" in A History of Intellectual Property in 50 Objects (Dan Hunter & Claudy Op Den Kamp eds., Cambridge University Press 2019) and Stefania Fusco, Lessons from the Past: The Venetian Republic’s Tailoring of Patent Protection to the Characteristics of the Invention, 17 Nw. J. Tech. & Intell. Prop. __ (forthcoming 2020), available at SSRN), https://ip.jotwell.com/from-venetian-glass-to-contemporary-intellectual-property-revisiting-tailored-patent-regimes/.

A Classical Perspective on Information Ownership

Jeremy N. Sheff, Jefferson’s Taper (Feb. 11, 2019), available at SSRN.

It’s not news that normatively fraught debates in legal academia tend to become polarized and then stuck. Scholarship often tends to cohere around preexisting camps, causing debate to focus on which camp (and who within each camp) is right and to ignore the possibility that the available framings may have missed something important. In light of this, one of the most valuable and refreshing moves an article can make is to throw a bomb into the long-accepted binary of a given academic debate by suggesting an entirely new way of thinking about an issue. This is precisely what Jeremy Sheff does to the debate over foundational concepts of information ownership in his fascinating and provocative draft, Jefferson’s Taper.

Here’s the backstory: Some scholars favor a limited vision of information owners’ rights and tend to embrace what has become known as the utilitarian theory1 of copyright and patent. According to this view, property in creative expression or inventions is not rooted in any notion of “right” other than the state’s positive law. Rather, the state grants monopolies in information only because (and to the extent that) doing so is necessary to incentivize the creation of things that would earn no profits for their owners absent law’s imposition of exclusive rights. Other scholars prefer a more expansive vision of owners’ rights; these scholars tend to advocate an alternative view of copyright and patent rooted in the writings of John Locke. This approach locates a pre-political right to ideas in the labor expended in creating them and rejects the notion that copyright and patent are nothing more than state-created monopolies designed to calibrate the optimal level of creative and inventive production.

Adherents of each side in this debate have produced a wide variety of authorities for each view, but none has been as influential as Thomas Jefferson as expressed in his letter to Isaac McPherson. In that letter, Jefferson uses the metaphor of lighting another’s candle to make a point about the non-rivalrous nature of intangible property: “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light, without darkening mine.” (P. 5.) He deployed this example to argue that “inventions … cannot, in nature, be a subject of property.” (P. 5.) The Parable of the Taper has become a shibboleth for those who prefer the low-protectionist “utilitarian” view of copyright and patent.

But what if Jefferson meant something else entirely by the Parable of the Taper? This is the tantalizing question that Sheff investigates. He begins by pointing out that in all likelihood Jefferson did not come up with the Parable on his own, but rather borrowed it from Roman statesman and philosopher Cicero, who used the same story with slightly different phrasing in his philosophical tract De Officiis. Did Jefferson’s uncited reference to De Officiis suggest that he shared Cicero’s basic approach to property? If so, that may reframe entirely the meaning of the Parable of the Taper, and indeed the common understanding of Jefferson’s position on how to regulate ideas.

We commonly assume that Jefferson is a scion of the Enlightenment and its modern assumption that all persons are created equal. (A proposition that Jefferson adhered to at best in theory, not practice, given that he was a slaveowner.) But the pre-Enlightenment Classical tradition of property—which has its roots in Cicero, among other ancients—assumed instead that people are necessarily born unequal. Consequently, the challenge of law generally, and of property law in particular, is how to allocate rights among people in light of their inequality. Cicero’s view of property in particular was elitist and conservative. It accepted without question preexisting distributions of property and offered arguments about how and why these distributions should be preserved.

Sheff is careful not to argue that his discovery proves that Jefferson adhered wholesale to a pre-modern, Ciceronian worldview on property and equality generally. But he does imagine what it would mean to think about copyright and patent through this lens with provocative results. For one thing, in the passage from which Jefferson borrowed the parable, Cicero discusses the obligation of property owners to engage in acts of beneficence, at least toward those who merit such acts. The point of Cicero’s relating the parable is that he regards the duty of beneficence to be at its zenith when acting generously costs owners little or nothing, as when someone asks to light their lamp with yours. Sheff suggests that this could be read to mean that Jefferson’s view of copyright and patent included the conviction that owners of copyrights and patents had obligations to share the fruits of their intellectual labors with the public. This reading translates the deeply conservative Ciceronian view of property into one that is—in this application, at least—generous and public-spirited.

Sheff’s article is enlightening well beyond the ambitious thesis he seeks to advance. For one thing, his eloquent writing makes the seventy-seven pages of historical and philosophical exegesis read like a pleasure, not a slog. For those of us who know little of the Classical tradition of philosophy, Sheff’s article is a fascinating and useful primer that moves from a deep dive into Cicero to a tour through Aristotelian, Thomist, and Grotian views on property. One criterion for what makes an article worth reading is that in so doing, you learn something new and important. In this sense, Sheff’s work succeeds masterfully.

But how important was Sheff’s discovery? He makes a very strong case that Jefferson borrowed the parable of the taper from Cicero, but extrapolating from use of that one metaphor that Jefferson more generally embraced the Ciceronian worldview on property represents a fairly large conceptual leap. Sheff does not, for example, substantiate this suggestion by citing any other passages from Jefferson’s writing that embrace the Classical approach to property. And while I am no Jefferson scholar, I am fairly confident that there are indications that he instead embraced (again, with astonishing lack of awareness given his ownership of slaves) the modern Enlightenment view that all people are born equal.2

Yet this does not detract from the success of Sheff’s article in light of its major ambition: His piece is best viewed not as a claim about Jefferson’s own beliefs, but as an attempt to breathe life into the sclerotic debate in copyright and patent between low-protectionist utilitarians and high-protectionist Lockeans. In Jefferson’s Taper, Jeremy Sheff invites us to think more broadly about the range of philosophical traditions that may illuminate our understanding of owning ideas, and more generally serves as a reminder of the importance of bringing fresh perspectives to scholarly debates with long-fixed battle lines.

  1. Sheff is right that legal scholars refer to this theory using the term “utilitarian,” but this usage is not quite right. This theory of copyright and patent is consequentialist because it looks to outcomes: How well does a given regime of exclusive rights maximize creation and invention? But it is not utilitarian because it does not (necessarily) adopt a Benthamite greatest-good-for-the-greatest-number framework for evaluating the normative appeal of that regime.
  2. For example, the familiar language from the preamble to the Declaration of Independence that it is a “self-evident” truth that “all men are created equal.”
Cite as: David Fagundes, A Classical Perspective on Information Ownership, JOTWELL (August 9, 2019) (reviewing Jeremy N. Sheff, Jefferson’s Taper (Feb. 11, 2019), available at SSRN), https://ip.jotwell.com/a-classical-perspective-on-information-ownership/.

How Elite Lawyers Shape the Law

  • Paul R. Gugliuzza, The Supreme Court at the Bar of Patents, 95 Notre Dame L. Rev. __ (forthcoming, 2020), available at SSRN.
  • Paul R. Gugliuzza, Elite Patent Law, 104 Iowa L. Rev. __ (forthcoming, 2019), available at SSRN.

Christopher Langdell’s “case” method of teaching the law has dominated the law school classroom for over a century. In this pedagogical approach, students typically read appellate opinions, and professors tease “rules” from the opinions—often in concert with the so-called Socratic method, which enlists students to aid in this abstractive process. This approach is said to make students “think like lawyers,” but what’s typically ignored in the process is the role lawyers actually play in the very cases under consideration. Instead, the working assumption is that judges on high are presented with arguments and facts by anonymous sets of ideal lawyers, who never miss a key argument or forget a relevant fact.

Of course, the actual world of lawyering is much messier, and lawyers range from the glorious and gifted to the struggling and essentially incompetent. But exactly how does this variation in attorney quality affect case outcomes? This all-too-important question has scarcely been addressed, much less answered, by systematic academic study. In an outstanding duo of articles, Paul Gugliuzza shines new light on the issue by examining the role of “elite” advocates in the certiorari process at the U.S. Supreme Court.

Actual case outcomes are often a poor test of attorney quality because of endogeneity concerns (the best attorneys often take the hardest cases), selection effects, and the lack of any “natural experiment” comparing a before-and-after “treatment.” Certiorari in patent cases, by contrast, is in my view quite a worthy domain in which to suss out the effects of attorney quality.

As Gugliuzza recounts in exhaustive and well-researched detail, there has been a major shift in the participation of “elite” attorneys in patent appeals, particularly at the Supreme Court. (By “elites,” Gugliuzza refers to those attorneys who presented oral argument in at least five cases in that term and the ten previous terms combined.) Barring other explanations—which Gugliuzza does a thorough job of effectively eliminating—this sets up enough of a natural experiment to assess the causal role of elite attorneys in the fate of patent appeals, especially the grant (or denial) of cert petitions.

Notably, Gugliuzza finds that “the Supreme Court is 3.3 times more likely to grant cert. when a petition in a Federal Circuit patent case is filed by an elite advocate as compared to a non-elite.” (Supr. Ct., P. 34.) Specifically, while non-elite petitions are granted at a 4.7% rate, elite petitions are granted at a high 15.6% rate. Exactly how and why this occurs is complex. Part of the reason is that in cases handled by elites, large numbers of amicus briefs are filed at the cert stage, and the presence of those briefs is even more strongly correlated with a cert grant than the presence of elites.
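The headline multiplier follows directly from those two grant rates; a quick back-of-the-envelope check (my own arithmetic, not Gugliuzza's) confirms it:

```python
# Back-of-the-envelope check of the reported cert grant rates.
elite_rate = 0.156      # grant rate for petitions filed by elite advocates
non_elite_rate = 0.047  # grant rate for petitions filed by non-elites

print(round(elite_rate / non_elite_rate, 1))  # -> 3.3, the multiplier Gugliuzza reports
```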

Of course, it could be that elites tend to work on more important cases, and it is precisely those cases that garner more amicus briefs. But as Gugliuzza explains—and as my own experience confirms—it is the network and know-how of elites that drive the amicus filings, creating a causal link between elites and cert grants. Also, many elites are known to the justices and clerks. And elites know how to craft briefs to increase the odds of a cert grant. Thus, even more so than Gugliuzza, I think it’s fairly clear that elites are a substantial causal factor in the Supreme Court’s renewed interest in patent law issues.

What is more remarkable about Gugliuzza’s findings is that, in my view, they substantially understate the role “elites” are playing in patent cases at the Supreme Court, because Gugliuzza’s definition excludes attorneys who regularly draft briefs in (but do not argue) Supreme Court cases and also excludes well-known academics (since none has argued five cases), who have increasingly played a role at the certiorari stage in patent cases over the past ten years.

Gugliuzza plans to tease out some of these additional influences in a follow-on study, which I have no doubt will strongly support a causal link between elites and cert grants in patent cases. But where does all this leave us?

First and foremost, Gugliuzza’s study reminds us as law professors that attorneys really do matter and that we need to teach students as much, including the nitty-gritty of why—not just in “skills” and “clinical” courses, but in “doctrinal” courses, too. It also opens the door for further empirical study of the role of attorney quality in outcomes (beyond mere win rates—which, as I noted above, are a difficult way to measure the effects of attorney quality) in many other areas of law.

Second, it raises important normative issues regarding the development of the law. As Gugliuzza rightly notes, elite advocates tend to have little training in science and technology, and instead are typically generalists. When both the advocates and the judges are generalists in patent cases, this can lead to a “blind leading the blind” problem. As Justice Scalia aptly noted in his Myriad concurrence, he could not join certain portions of the majority opinion because “I am unable to affirm those details on my own knowledge or even my own belief.”1 Personally, I find it hard to believe that any justice in the majority had scientific knowledge substantially greater than Justice Scalia’s. Indeed, Gugliuzza documents cause for concern: most of the Supreme Court’s decisions have been in areas basic enough for the justices to understand, like procedure or statutory interpretation, rather than core substantive issues of patent law. Even the substantive cases, like KSR, Myriad, Mayo, Alice, Global-Tech, and the like, present relatively simple sets of facts, which in essence means the Court has eschewed many doctrinal areas in need of resolution, such as enablement, written description, and complex obviousness doctrines.

At the same time, elites arguably have stronger skills when it comes to law and policy than the usual patent litigator. Elites may help to correct for the occasional tunnel vision of patent litigators and, more importantly, of “specialized” Federal Circuit judges. This may help avoid court capture and pro-patent biases, which tend to serve the economic aims of the patent bar.

As Gugliuzza perceptively notes, perhaps it’s too early to answer the normative question. There are decent arguments on both sides of the fence. My own instincts are that generalist elites—in concert with the elites that make up the Supreme Court—are mucking up patent doctrine to the point that the system isn’t working as it should. Most problematic are generalist opinions, which often don’t provide sufficient guidance for innovators and potential infringers alike to order their business affairs. More generally, the Supreme Court has produced many opinions that have weakened patents (e.g., KSR, Alice, Mayo, eBay, Global-Tech, and TC Heartland), which, although not always intentional, is in my view the wrong policy choice.

In sum, I thoroughly enjoyed Gugliuzza’s insights on these important questions, and on the more general question of the role lawyers play in shaping the law, and I believe his articles and follow-on studies will play a critical role in resolving these thorny debates as the empirics continue to unfold.

  1. Ass’n of Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576, 596 (2013) (Scalia, J., concurring).
Cite as: Ted Sichelman, How Elite Lawyers Shape the Law, JOTWELL (July 26, 2019) (reviewing Paul R. Gugliuzza, The Supreme Court at the Bar of Patents, 95 Notre Dame L. Rev. __ (forthcoming, 2020), available at SSRN; Paul R. Gugliuzza, Elite Patent Law, 104 Iowa L. Rev. __ (forthcoming, 2019), available at SSRN), https://ip.jotwell.com/?p=1270.

Using Fair Use to Reduce Algorithmic Bias

Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579 (2018).

What is the relationship between copyright law and artificial intelligence or machine learning systems that produce outputs biased by race, gender, national origin, and related aspects of being human? That is the question that Amanda Levendowski investigates and addresses in her refreshingly well-written, to-the-point article How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem. In a nutshell, she argues that: (1) these systems need large quantities of training data to be effective; (2) those building these systems rely on biased data in part because of their own biases but also because of potential risks of copyright infringement; and (3) more copyrighted works can legally be included as training data under the fair use doctrine and should be so used to selectively diversify the inputs to these systems to de-bias their outputs.

Levendowski starts with the problem as exemplified by Google’s natural language processing system word2vec. It is a form of neural word embedding that analyzes the contexts in which words appear in the source texts to produce “vectors,” which capture word associations such as “Beijing” is to “China” as “Warsaw” is to “Poland.” Trained by analyzing the published news sources incorporated into Google News to which Google has obtained a copyright license, word2vec ingests the biases in those sources and spits out results like “man” is to “computer programmer” as “woman” is to “homemaker.” Levendowski acknowledges that those in the machine learning research community agree that this is a problem and are in search of a solution (including Google’s own researchers),1 but she responds that the task of de-biasing these systems should not be left only to developers at large technology companies with access to the training data.
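For readers who want to see the mechanism concretely, here is a minimal sketch of the kind of analogy query Levendowski describes, assuming the gensim library and its downloadable pretrained Google News word2vec vectors; the specific tokens (e.g., “computer_programmer”) and the exact outputs are features of that particular pretrained model, not of her article.

```python
# Minimal sketch of probing a word2vec-style embedding for biased associations.
# Assumes gensim and its pretrained "word2vec-google-news-300" vectors
# (a large download); exact results depend on the model's training data.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "Beijing" is to "China" as "Warsaw" is to ...?
# (vector arithmetic: China - Beijing + Warsaw)
print(vectors.most_similar(positive=["China", "Warsaw"], negative=["Beijing"], topn=3))

# The same arithmetic surfaces learned gender stereotypes:
# "man" is to "computer_programmer" as "woman" is to ...?
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=3))
```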

Levendowski further asserts that copyright law stands as a potential barrier, or at least a perceived barrier, to outside researchers’ ability to investigate and report on bias in these systems. Copyright reinforces incumbents’ advantages in three ways. First, while reverse engineering of the algorithms is protected by fair use, accessing those algorithms, if they are subject to technological protection measures under 17 U.S.C. § 1201, is limited to the narrower § 1201(f) exception or the right to circumvent that the First Amendment may provide.2 Second, if a biased system’s underlying training data is copyrighted, journalists and other investigators who seek to expose the sources of algorithmic bias are likely to be chilled by the prospect of an infringement suit. Finally, the leading artificial intelligence developers have significant resource advantages that allow them to acquire enormous training datasets by building them (Facebook) or buying them (IBM).

This competitive advantage leads newcomers to rely on what Levendowski terms “biased, low-friction data,” or BLFD; that is, data that are accessible and that carry little legal risk. (P. 589.) Here, her example is the 1.6 million emails among Enron employees made accessible by the Federal Energy Regulatory Commission in 2003. This is one of the few publicly accessible large datasets of interlinked emails. Although these emails are technically works of authorship protected by copyright, the legal risk that any of their authors would sue an AI researcher for using them is close to nil. But this is hardly a representative sample of people to study if one were training a system to extract generalizable rules about how human beings communicate by email. Other examples of BLFD that carry other forms of bias include public domain works published prior to 1923, which do not reflect modern language usage, and Wikipedia, which is legally low-risk because of its Creative Commons license but is a biased source of facts about the world because of the large gender imbalance among contributors. Levendowski argues that this imbalance biases both the language used to describe women in many Wikipedia entries and the substance of those entries, which reflects male bias in the subject matter covered and the subject matter omitted, such as key facts about women in biographical entries.

The article then argues that enlarging any of these datasets, specifically with diverse, copyrighted sources that are likely to mitigate or erase bias, is desirable and legal as a fair use. Recognizing that access to these sources remains a challenge, Levendowski argues that, at a minimum, the use of these sources is cleared by fair use.

Here, I should disclose my bias. I have a forthcoming article that makes a related argument that copyright law permits the use of large sets of copyrighted works for text and data mining, so I am sympathetic to this article’s argument. Nonetheless, I think most readers will find that although the fair use analysis in this article is brief, perhaps too brief, it is supported by the case law and copyright policy.

The analysis argues that using copyrighted works as training data is a transformative use, and there is now substantial case law and scholarship supporting this assertion. The use serves a different purpose than the one for which the works were published, and the use adds something new through the system’s operation. The article then argues that the second factor also favors the use because even creative works are being used for their “factual” nature; i.e., as examples of creative works by humans. Under the third factor, using the entirety of these works is necessary and appropriate for this purpose and has been approved in a number of cases involving computational processing of copyrighted works. Finally, under the fourth factor, even if some of the training data has been licensed in by current developers, the transformative purpose under the first factor overrides any negative impact that fair use may have on this market.

While this analysis is generally persuasive, I found this part of the article a little thin. I agree that a court would almost certainly characterize this use as transformative for the reasons stated. But the second factor has traditionally focused on how much expressive material is in the work being borrowed from, rather than on the borrower’s purpose. This move felt like giving the transformative purpose a second bite at the apple. While the second fair use factor does little work on its own, I think it is appropriate to consider as part of the balance how much original expression is at stake.

I will note that I wanted more discussion of the third and fourth factors. While it is easy to agree that use of entire works is likely to be permissible, the harder question is how much of that training data can be made publicly available under fair use by those seeking algorithmic accountability. I would have liked to know more about how and where Levendowski would draw this line. Similarly, the evidence of some licensing for this use needs more elaborate discussion. I agree that the transformative purpose is likely to insulate this use, and that this licensing market is really one for access to, rather than use of, the training data, which diminishes the impact under the fourth factor.3

With that said, I want to acknowledge the creativity of Levendowski’s thesis, and to show appreciation for her clear, succinct presentation of the three stages of her analysis. This piece is a welcome contribution by an early-career researcher, and I look forward to reading her future work.

  1. Ben Packer et al., Text Embedding Models Contain Bias. Here’s Why That Matters, Google Developers Blog (Apr. 13, 2018).
  2. See Universal City Studios, Inc. v. Corley, 273 F.3d 429 (2d Cir. 2001) (recognizing that § 1201 can be applied in a way that burdens speech and is subject to intermediate scrutiny when it does so).
  3. Here I want to recognize the argument advanced by Ben Sobel on the fourth fair use factor. He argues that, at least when creative works are used to train systems designed to create competing creative works, the fourth fair use factor should weigh against such use. See Benjamin L.W. Sobel, Artificial Intelligence’s Fair Use Crisis, 41 Colum. J.L. & Arts 45, 75-79 (2017). It is a creative argument by which I am not persuaded, because in my view fair use should not be the policy finger in the dike holding back automated creation. But I found his arguments about the ways in which machine learning systems may require more nuanced fair use analysis to be well made.
Cite as: Michael W. Carroll, Using Fair Use to Reduce Algorithmic Bias, JOTWELL (June 28, 2019) (reviewing Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579 (2018)), https://ip.jotwell.com/using-fair-use-to-reduce-algorithmic-bias/.

Personalizing Copyright Law Using Consumer Demographics

Adi Libson & Gideon Parchomovsky. Toward the Personalization of Copyright Law, 86 U. Chi. L. Rev.  527 (2019).

Most people assume, if only implicitly, that there is a substantial element of uniformity in our IP system. At first blush, our copyright and patent laws extend a (presumably) uniform set of rights to (presumably) uniform authors and inventors, who can then sue (presumably) uniform unauthorized users. Scholarship has long noted that the bundle of rights is not actually uniform, and has theorized about the optimal tailoring of rights to particular industries and subject matter. More recently, the literature has started to unpack the implicit assumption of creator uniformity using data on the demographics of authors and inventors. Statistically speaking, the data show that creators of different races, genders, and ages diverge in the rate and direction of their creative efforts. In this new and exciting article, Libson and Parchomovsky begin to unpack the assumption of user uniformity using user demographics.

Legal enforcement of copyrights entails benefits and costs. On the benefit side, it provides authors with an incentive to create by securing to them the exclusive exploitation of their works. On the cost side, it reduces access to creative works by endowing the author with monopoly-like power. Optimally, copyrights would be enforced only against high-value consumers (serving the incentive rationale), but not against those whose valuations fall below the market price (serving the access rationale). In theory, allowing free access to those who cannot afford the market price would be efficient, as it would grant them access without sacrificing the author’s incentive. In practice, however, this cannot be done, because many who are willing and able to pay would masquerade as those who are not, and authors have no crystal ball to reveal consumer valuations. Copyright enforcement thus ensures that those who can pay do, with the lost access treated as a necessary evil.
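A toy numeric example (my own numbers, not the authors’) makes the welfare logic concrete: exempting consumers who would never buy at the market price expands access without touching the author’s revenue, so long as paying consumers cannot masquerade as exempt ones.

```python
# Toy illustration of the incentive/access tradeoff, using hypothetical numbers.
price = 10.0
valuations = [25, 18, 12, 7, 4, 2]  # hypothetical per-consumer valuations of a work

buyers = [v for v in valuations if v >= price]       # would pay at the market price
never_buyers = [v for v in valuations if v < price]  # priced out of the market

revenue = price * len(buyers)                     # author's incentive: unchanged either way
surplus_enforced = sum(v - price for v in buyers) # consumer surplus under full enforcement
surplus_exempt = surplus_enforced + sum(never_buyers)  # exempt users now enjoy their full valuations

print(f"author revenue:                         {revenue}")          # 30.0
print(f"consumer surplus, full enforcement:     {surplus_enforced}") # 25
print(f"consumer surplus, exempting non-buyers: {surplus_exempt}")   # 38
```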

Not necessarily so anymore, say Libson and Parchomovsky. Using data on the demographics of consumers of audio and video content, they show that certain cross-sections of users never enter the market. With regard to these users, it does not make much sense to enforce copyright law harshly. Rather, treating infringement by these users leniently would increase access to content without sacrificing the author’s incentives, and without the risk that otherwise paying users would masquerade as low-value ones.

To illustrate how this can be done, Libson and Parchomovsky use two datasets. First, they use data from the Consumer Expenditure Survey of the Bureau of Labor Statistics, which give a general view of household consumption patterns. For example, they note that average household spending on online audio and video consumption varies considerably with household demographics, including income, age, race, education, marital status, and geographical location. Second, they use panel data on online purchases of music and video by over 80,000 households. Various household demographics correlate with purchase decisions, including, most prominently, race and age. They report that about 1,500 of the 80,000 households did not buy music and about 4,500 did not buy video online.

Together, these datasets give a sense of certain user cross-sections that are highly unlikely ever to purchase copyrighted content. For example, none of the 176 households that are southern, without a college degree, aged 24 or younger, with income below $100,000, and not African American purchased copyrighted audio content online in 2016. Likewise, none of the 72 households that are southern, without a college degree, aged 29 or younger, with income below $100,000, and not African American purchased copyrighted video content online in 2016. Accordingly, under certain assumptions and caveats, the authors maintain that it would make sense to reduce the copyright liability of such households, and even exempt them from liability, because doing so would not disincentivize authors but would increase household—and so social—welfare.
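As a rough illustration of the kind of analysis involved, the sketch below (with entirely hypothetical column names, file name, and thresholds, not the authors’ actual data or code) shows how zero-purchase demographic cross-sections could be pulled out of household panel data.

```python
# Hypothetical sketch: finding demographic cells with zero online audio purchases
# in household panel data. Column names, file name, and thresholds are invented
# for illustration; they are not from Libson and Parchomovsky's dataset.
import pandas as pd

# One row per household: demographic attributes plus a 0/1 purchase indicator.
households = pd.read_csv("household_panel_2016.csv")

cells = (
    households
    .groupby(["region", "college_degree", "age_bracket", "income_bracket", "race"])
    .agg(n_households=("household_id", "count"),
         n_audio_buyers=("bought_audio_online", "sum"))
    .reset_index()
)

# Cross-sections with a meaningful number of observations and no purchases are
# candidates for the lenient treatment the authors propose.
zero_purchase_cells = cells[(cells["n_households"] >= 50) & (cells["n_audio_buyers"] == 0)]
print(zero_purchase_cells)
```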

Libson and Parchomovsky present their data as a proof of concept and suggest that much more could be done to optimize copyright policy if and when better data become available. But even with their data, the authors spell out three policy implications: personalized consumption data can reduce the deadweight loss associated with copyright protection; copyright enforcement should be limited for consumer demographics that are unlikely to purchase content; and sanctions can be varied based upon user characteristics. This paper thus makes a novel contribution on its own and opens the way for further empirical investigation of users in IP.

Cite as: Dotan Oliar, Personalizing Copyright Law Using Consumer Demographics, JOTWELL (May 30, 2019) (reviewing Adi Libson & Gideon Parchomovsky, Toward the Personalization of Copyright Law, 86 U. Chi. L. Rev. 527 (2019)), https://ip.jotwell.com/personalizing-copyright-law-using-consumer-demographics/.