In 1984, the McDonald’s Corporation obtained a federal trademark registration for the mark MICKEY D’S for restaurant services, stating in its application that it had been using the mark since 1981. Research suggests, however, that the MICKEY D’S mark wasn’t the result of ad agency brainstorming but instead originated in the Black community in the United States as early as 1976. The McDonald’s Corporation’s subsequent adoption and use of the mark seems to have been part of a strategy to promote the restaurant chain back to the community from which the name emerged. (Specimens submitted in connection with maintenance activities include a national full-color ad that appeared in Jet and Ebony magazines in 1982.) David Green, senior vice president for marketing at the McDonald’s Corporation, implied this, glibly, in a 1996 article in AdAge, writing that the company’s work with the Black-run advertising firm Burrell Communications gave McDonald’s “the highest share of the African-American market. Talk to anyone in the ’hood, and they talk about Mickey D’s. We’ve become part of the vernacular.” Black consumers may have created “Mickey D’s,” but the federal trademark rights in the term now belong to McDonald’s.
The larger complicated relationship between McDonald’s and the Black community is the subject of Franchise: The Golden Arches in Black America, the excellent book by Marcia Chatelain, a professor of history and African American studies at Georgetown University. It’s a relationship that, in Chatelain’s telling, rests at the intersection of social change, economic development, and corporate profit, with seemingly contradictory valences. The company’s corporate practices were the subject of boycotts and protests at individual restaurants, but local franchises were also welcome sources of financial support for the communities in which they were located. The company took advantage of existing inequalities in various regions, including cheap land and a depressed labor force, and yet it also positioned itself as a source of employment and advancement for the Black community, particularly through its efforts to diversify its franchisee ranks in the wake of white franchisee flight in the 1960s. (Those franchise opportunities were not, however, often in economically prosperous neighborhoods.)
For trademark scholars, particularly those engaging in interdisciplinary work, Franchise’s exploration of these complications gives historical and sociological depth to the consideration of what trademarks mean. Modern trademark law’s embrace of the franchise structure assumes a certain level of consistency from location to location to give the mark meaning. Indeed, established procedures, supply chains, and marketing campaigns are ostensibly part of what makes franchising attractive to those new to business ownership. But reflecting and appealing to the communities in which a franchise is located require acknowledgment that the brand can have different meanings to different consumers. As Chatelain describes, McDonald’s and other chains attempted to shape their offerings and advertising in this way, through naming choices, celebrity endorsements, advertising campaigns, and adaptations of stores’ trade dress to serve the communities in which they were located, all with varying success.
Chapter Five (“Black America, Brought to You by . . .”) in particular makes for rewarding reading for trademark scholars. Chatelain describes how the McDonald’s brand served as a reliable constant for consumers in underserved areas, as locations provided meals, financial donations, safe spaces, and other forms of support during local crises. As Chatelain writes, “When McDonald’s managers could be relied on more than school administrators or police officers, then the lines between where leadership and power rested in a city could become so blurred that a fast food restaurant could begin to look like a solution instead of a symptom.” (P. 168.)
The concept that “McDonald’s” connoted structural reliability was deliberately shaped by Burrell Communications, which promoted “the corrective possibility of advertising,” using images and language to convey Burrell’s normative vision of Black life in public spaces. (P. 173.) The artist Hank Willis Thomas’s work So Glad We Made It (featured in Franchise) highlights this dynamic by removing the branding language from a 1977 McDonald’s advertisement to, as the Museum of Modern Art puts it, “encourage[ ] viewers to look critically at how mass-produced and widely disseminated images in the media construct and reinforce stereotypes about African American life, and how the public participates in these narratives.”
But Burrell Communications also understood the ways in which advertising could reflect lived experience. Chatelain notes that Burrell “personally studied consumer reaction” to McDonald’s well-known national slogan of the 1970s and 1980s, “You deserve a break today,” and concluded that “black customers were not getting it. Black customers were confused. There were no breaks in their America.” (P. 169.) And while today’s viewers might see the advertisements that Hank Willis Thomas critiques as outdated, Chatelain suggests another interpretation: the advertisements tapped into concerns about equal access to restaurants by suggesting that McDonald’s could be a place of ease and relaxation. “Burrell and his creative team,” she writes, “knew about the fears that followed blacks when they took a seat in a booth or made a left turn into their favorite drive-thru.” (P. 176.) These observations remind us once again that brands — and the law that protects them — are in a constant process of both prescription and description.
“For most Americans,” Chatelain writes in her concluding chapter, “it may be hard to imagine a world without McDonald’s or Kentucky Fried Chicken or Taco Bell.” (P. 263.) The trademark applications for these famous brands, whether based on marks originating from the company or marks originating from the community, may refer simply to restaurant services, merchandise, and related goods. But “[w]hen resources that create steady infrastructure for well-paying jobs, a multitude of food options, and safe spaces for children and senior citizens to build community are absent, then fast food is able to present itself as capable of providing sustenance rather than simply feeding.” (P. 264.) In a year when companies are eager to communicate their commitment to social justice, Chatelain’s important and relevant work helps us to think more deeply about what a brand means and about the (in)ability of a company, relying on the strength of that brand among consumers, to address fundamental inequities.
Amanda Levendowski, Trademarks as Surveillance Transparency, 36 Berkeley Tech. L. J. __ (forthcoming 2021), available at SSRN
I call this paper a “Levendowski special.” It follows the signature format of much of Professor Levendowski’s prior work, which recruits a legal tool typically aimed at one set of problems to cleverly address a different set of problems. Her past articles harnessed copyright law to “fix artificial intelligence’s implicit bias” (2018) and to “combat revenge porn” (2014). This paper draws on Professor Levendowski’s expertise working in private practice as a trademark attorney to address the problem of surveillance technology opacity. It is a primer on how to investigate trademark filings for hard-to-access information about surveillance technologies.
Levendowski laments the informational asymmetry between law enforcement and the public about surveillance technologies, and she hopes to provide a partial solution to it. Private companies create surveillance technologies – doorbell cameras, facial recognition tools, license plate readers – that are frequently used by law enforcement and embedded in communities. Community members are often unaware of these networks of surveillance until years later. Professor Levendowski explains that journalists and regulators often have difficulty investigating or tracking these surveillance tools because of weak or misaligned disclosure regimes.
But trademark filings – part of a registration process that aims to promote information clarity for consumers and other trademark owners – require specific public disclosures about the use of the good or service. And these trademark filings, as the article’s author demonstrates, reveal surprising and sometimes secret information about what surveillance technologies do, how they function, and sometimes who is financing them. These filings are usually made coincident with the product being used in commerce, or sometimes before the product goes on the market. And so timely and revealing disclosure of information is possible if an investigator knows how and where to look.
Trademark filings disclose several useful pieces of information about the surveillance technologies. For example, trademark filings declare when the product was first used in commerce (or when the trademark registrant intends that the product will be used in commerce). Trademark filings do this with a declared date of use in commerce and also a “specimen of use” filed with the proposed trademark. Sometimes this information, filed by a trademark attorney, unwittingly undermines the company’s desire for secrecy about its technology. For investment and fundraising purposes, trademark filings are sometimes made before there is a public announcement of the product or before a demonstration of a technology’s efficacy. This filing would thus show the technology works and is in service before the company has announced as much publicly. In her search of the trademark registration system and attached specimens proving commercial use, Professor Levendowski uncovered: surveillance technology that was subject to non-disclosure agreements; the identities of surveillance targets; and the specific terms of financial arrangements between a company and law enforcement.
Levendowski’s paper has many such surprising disclosures—including what some who are familiar with trademark registration procedure would consider unforced errors. Trademark attorneys do not have to file specimens of use that disclose confidential contracts, financing arrangements, or client information. In fact, it is possible that once this paper circulates among the relevant readers, there may be fewer such revelations. Trademark attorneys can redact documents and choose more carefully how to demonstrate use in commerce. They could coordinate better with the company’s other legal and corporate counsel to make sure the right hand knows what the left hand is doing. Does Professor Levendowski mean her article to be a tool that is so short-lived in utility?
Levendowski responds by doubling down. First, working with Dyllan Brown-Bramble at Georgetown Law Center (a former student of the well-regarded and innovative course “Computer Programming for Lawyers” taught by Professor Paul Ohm), Levendowski built a “Trademark Watcher” tool to use with her clinic students at Georgetown Law Center. This tool will help uncover, as quickly and efficiently as possible, many more of the revelations that are already part of the PTO record. So we can expect that the case studies she describes in her article are just the tip of the iceberg. Second, she hopes that this search tool and the paper – while they may lead to more careful practice by some trademark lawyers in the future – will nonetheless immediately provide a critically important investigative tool for journalists and advocates seeking to right the balance of surveillance transparency. Third, as a former trademark law practitioner, Levendowski is uniquely positioned to propose reforms to the requirements for specimens of use at the Trademark Office. By doing so, she could transform what was previously an accidental disclosure into a required one – e.g., financial arrangements or roll-out plans for next-phase technology. Doing so may require connecting the required disclosure with the purpose of trademark law. But since information clarity about sponsorship, affiliation, and use is a core trademark goal, the reform effort would align with long-standing legal principles.
I look forward to witnessing the benefits that flow from this article and hope they may be enduring. Surveillance transparency is hard to achieve for the reasons Professor Levendowski explains, which have to do with the entrenched features of capital systems and organizational infrastructures that promote hierarchies of control and minimize accessibility of information. Her cogent and incisive article may be just the effort to set us on a new and productive path. Perhaps it will succeed at aligning trademark’s consumer-oriented regime, whose goal is fair competition, with justice-enhancing disclosures about surveillance technologies which, when used en masse and in secret, undermine community well-being and consumer autonomy.
Robert Brauneis, Copyright, Music, and Race: The Case of Mirror Cover Recordings, available at SSRN
Several scholars have shown that while copyright law may appear facially race-neutral, in its application many of its provisions perpetuate systemic discrimination, particularly against African American creators. K.J. Greene, Funmi Arewa, and Candace Hines have each argued that seemingly harmless features of copyright law can interact with industry practices to operate to the disadvantage of African American authors. The pre-1978 copyright statutes, for example, required authors to navigate a series of formalities in order to vest their exclusive rights. Greene has shown that these provisions tended to disproportionately deprive African American authors of their copyrights, because those authors were less likely to have access to the legal information and advice necessary to navigate the requisite technicalities.
Bob Brauneis’ article, Copyright, Music, and Race: The Case of Mirror Cover Recordings, builds on this work by providing a detailed case study of one particular instance in which apparently race-neutral copyright law combined with entertainment law norms to discriminate against African American creators: the so-called “mirror” cover recording, a practice where white performers would create nearly identical versions of sound recordings by African-American artists. Brauneis’ work consciously styles itself as an effort at historical recovery, which is valuable because it revivifies a time in the history of the music industry that has been largely forgotten.
Early in the twentieth century, recordings supplanted sheet music as the primary way that Americans consumed popular music. Recordings were marketed not only to different genre tastes (e.g., classical versus dance hits), but also to different social and racial groups. In response to this segregated marketing, record companies would often release two closely similar recordings of the same musical composition: one performed by white artists and aimed at a white audience, the other performed by African American artists and aimed at an African American audience. In practice, this usually meant that if a label released a successful recording by African American creators, another record label would make a nearly identical (hence “mirror”) version of that recording and market it to a white audience.
The second recording often reaped much greater economic returns without providing any compensation to the label whose record it replicated. This success was not due solely to the appeal of the underlying musical composition on which each recording was based. Brauneis’ case study, for example, focuses on the hit track “A Little Bird Told Me.” The original recording, performed by African American artists and released by Supreme Records, sold a couple hundred thousand copies. Soon after, two recording companies, Decca Records and Capitol Records, made mirror covers of “A Little Bird Told Me” that sold millions of copies. The mirror recordings made by white record labels did not merely perform the same musical composition as the African American artists who preceded them. They copied the distinctive arrangement and performance elements of the initial sound recording, producing a version that was identical in all respects save for the race of the people who executed it.
This is not just a story about consumer demand and marketing; it is also a story about copyright law. If anything, it seems counter to copyright’s aim of protecting creators’ exclusive rights to allow competing record labels to make mirror cover recordings without any recompense to the owner of the initial recording. A 1909 statutory compromise defined recorded versions of musical works as copies subject to infringement liability. But while this statutory provision obliged creators of sound recordings to pay a statutory license fee to the owner of the underlying musical work they were performing, it created no exclusive rights in sound recordings themselves. Record labels thus remained largely free to copy one another’s tracks, and increasingly did so.
The dispute about “A Little Bird Told Me” culminated in a little-known but highly consequential federal district court case, Supreme Records v. Decca Records. Supreme sued Decca in federal court, though on a theory of unfair competition rather than copyright. Here, too, copyright’s formalities got in the way of vindicating the interests of the creators of “A Little Bird.” The recording itself enjoyed no federal copyright protection. And while the Supreme team had re-arranged the original composition, including written notations, before recording their version, those notations were neither published with proper copyright notice nor federally registered as an unpublished work, so they did not constitute separately protected expression. Supreme’s failure to check all the numerous boxes required to assert federal copyright thus limited the legal theories it could assert against Decca’s piracy.
The decision in Supreme Records dealt a blow to record companies seeking any legal bulwark against mirror cover recordings. Aping copyright doctrine even in the context of an unfair competition suit, the court issued a decision that was read to mean that arrangements of original music could never be copyrighted. Supreme Records may not be the best-known copyright decision, but as Brauneis shows, it was highly impactful. It removed any concern record companies had about whether they were liable to other record companies for copying their sound recordings, including but not limited to mirror covers. Yet the most immediate impact of Supreme Records may also be the most tragic: burdened by costs associated with the litigation and deprived of any judgment from it, Supreme Records was rendered insolvent and ceased to exist.
Brauneis’ case study exposes a particular kind of social injustice: one that tends to evade detection because it is caught up in the weeds of dry, obscure transactional and legal details. The racial inequity exemplified by Supreme Records is not the product of a highly visible event or rule that provokes shared moral revulsion, but of a series of copyright’s notoriously technical legal rules—mechanical licenses, fixation, pre-1976 Act registration—that taken together created a system in which white artists regularly profited off the work of African American artists free of charge. Copyright’s formalities are not, by any means, the only context in which neutrally phrased rules interact with intractable social racism to create racial disparities. In fact, this interaction likely characterizes much, if not most, regulation. Still, Brauneis’ case study, as well as the scholarship of Greene and others, serves as a useful reminder to be mindful of this risk in the context of copyright, where consciousness of discrimination remains limited.
Brauneis concludes on a somewhat optimistic note, observing that doctrinal developments and statutory reforms—along with changing social preferences—have diminished the popularity and the problems of mirror covers. Yet this still leaves us with the concern that other features of copyright law may unwittingly contribute to racial injustice. How then to assure that copyright does not replicate the mirror covers problem that Brauneis chronicles? This is a hard question to answer because, as is so often the case with systemic bias, the blame is widely distributed. The disproportionate harm to African American authors Brauneis locates in the case of mirror covers, and that Greene and others have exposed in other parts of copyright law, is not the work of a single mastermind, but of a series of unfortunate events: Congress’s creation of weak rights in sound recordings, majority consumer preferences for music performed by white artists, recording companies uncritically responsive to those preferences, and African American artists who tended to lack the information and resources to navigate the various pitfalls of the Copyright Act. That this outcome is the result of distributed, often unreflective, acts makes reform all the more challenging.
Kristelia García, Super-Statutory Contracting, __ Wash. L. Rev. __ (forthcoming 2020), available at SSRN
Economic activities often conflict: a rancher’s stray cattle may reduce the value of a neighboring farmer’s crops, or a tech company’s file-sharing app may reduce the value of music labels’ records. When conflicts arise, society needs to decide which party’s interest to protect, and whether to do so with a property right or a liability rule. The law and economics literature teaches that lawmakers should devise entitlements while taking into account post-allocation transaction costs, with the goal of ensuring that resources end up in the hands of parties who put them to their most productive use.
Scholars have accordingly debated the relative performance of property rights and liability rules. In a seminal article, Rob Merges famously argued that one should not worry too much about the transaction costs accompanying property rights. Should these prove prohibitive (as in the case of radio stations that need to license rights to the many musical compositions they wish to play over the air), and the property-right choice inefficient, IP owners are likely to “contract into liability rules”; that is, they will privately arrange liability-rule-based licensing schemes (such as ASCAP) to lower licensees’ costs of access.
In a wonderful new article, Kristelia García reviews recent market dynamics that lend support to the mirror-image argument; namely, that one should not overstate the arguable inefficiencies of liability rules.
Through a series of case studies of privately ordered deals, she shows how time and again IP owners have opted out of their statutorily dictated liability-rule protections and have in effect transformed them contractually into either property rights or differently delineated liability rules. Importantly, García argues, this form of private ordering has resulted in a stronger form of protection than that allocated to IP owners statutorily. Examples include YouTube’s Content ID agreement (allowing content owners to prevent infringing uploads ex-ante rather than only remove them ex-post under section 512 of the Copyright Act) and the 2012 Big Machine–Clear Channel Communications deal (creating a new terrestrial public performance right protected by a negotiated-rate liability rule where none exists statutorily). Analyzing these deals, García concludes that a new and additional consideration – right holders’ “perceived control,” i.e., their ability to grant or withhold permission to use their work and to set terms for its use – should guide lawmakers’ choice between property rights and liability rules.
Where does García’s article leave us? It seems that when transaction costs become substantial and may hinder owners’ ability to profit from their entitlements, they will rationally engage in market and institutional innovation in order to reduce transaction costs and get licensing deals done. They will do so regardless of whether the initial entitlements allocated were property rights or liability rules. Either way, when the initial entitlement proves inefficient, IP owners are expected to contract around the statutory default. This tends to suggest that the choice between property rights and liability-rule protection, while important, should not be overstated (especially in an age when parties’ ability to contract around statutory defaults is aided by technological advancements in detecting, measuring, and charging for use). This realization should not come as a surprise; it is merely a reminder of the Coase Theorem: whenever transaction costs are zero, bargaining will lead to the efficient result regardless of the initial allocation of entitlements.
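The Coasean intuition can be made concrete with a stylized numeric example based on the rancher–farmer conflict mentioned earlier (the figures are hypothetical, chosen only for illustration). Suppose stray cattle cause $100 of crop damage and a fence costs $40:

```latex
\begin{aligned}
&\textbf{Farmer holds the entitlement:}\quad
\text{the rancher builds the fence, since } \$40 < \$100.\\[4pt]
&\textbf{Rancher holds the entitlement:}\quad
\text{the farmer offers the rancher a payment } p \text{ to fence;}\\
&\qquad \text{any } p \in (\$40,\ \$100) \text{ leaves both parties better off, so the fence is built.}\\[4pt]
&\text{Either way the efficient outcome (fencing) obtains; only the division}\\
&\text{of the } \$60 \text{ gain varies with the initial allocation.}
\end{aligned}
```

Transaction costs matter precisely because they can consume that $60 bargaining surplus: if striking the deal in the second scenario cost more than $60, no bargain would occur, and the initial entitlement choice would then determine the outcome.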
As we approach a zero transaction costs world (or some attainable lower bound), should we care about the property right vs. liability rule debate? Even if transaction costs are zero, the legal rule still affects the division of surplus between the transacting parties. As I have shown elsewhere, in cases where parties make investments ex-ante (i.e. at a time prior to transacting, so the investments cannot be subject to negotiation), the ex-post division of surplus affects their ex-ante incentive to invest. Thus, the choice between property rights and liability rules in IP should be made primarily in light of its effect on parties’ ex-ante incentives to invest rather than in view of ex-post transaction costs.
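The ex-ante investment point can be sketched in a single condition (the notation is mine, offered only for illustration). Suppose a party must sink a non-negotiable investment of cost c before transacting, the transaction creates surplus V, and the legal rule effectively awards the investor a share α of that surplus ex post:

```latex
\text{invest} \iff \alpha V \;\ge\; c, \qquad \alpha \in [0,1].
```

Even with zero transaction costs, so that the efficient deal is always struck ex post, a rule yielding a low α deters socially valuable investment whenever c ≤ V but αV < c. The choice of entitlement thus continues to matter through its effect on ex-ante incentives, even in a frictionless bargaining world.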
Be that as it may, García’s article eloquently reviews recent deals in which IP owners engaged in contractual and institutional innovation to contract around inefficient default liability rules and at the same time managed to assert greater control over their content. It is an enjoyable and worthwhile read that makes an important contribution to a longstanding and venerable debate.
Nicholson Price II, The Costs of Novelty, __ Colum. L. Rev. __ (forthcoming 2020), available at SSRN
Patents exist to promote the progress of innovation, but a wealth of recent scholarship has demonstrated the ways in which patents influence the pace and direction of innovation in potentially problematic ways. The prospect of patent protection may cause innovators to focus on particular kinds of solutions to problems over others (i.e., those that can be patented), and it can cause researchers to focus on solving certain kinds of problems over others (e.g., those that offer the greatest opportunities for financial returns). In his new essay, Nicholson Price II describes another way in which patent law doctrines encourage certain kinds of innovations (“differentiating” innovations) over others (“deepening” or “exploring” innovations).
Borrowing insights from research on cumulative innovation and product differentiation, Price develops a taxonomy of different innovation strategies that researchers might adopt. They might focus on developing richer knowledge about existing technologies; Price calls this “deepening” innovation. Or researchers might seek to take a large step beyond the existing field of knowledge; this is “exploring” innovation. Finally, researchers might opt for a middle strategy: varying existing approaches without departing from them substantially. These “differentiating” innovations neither take the great leaps that exploring innovations do, nor are they intended to enrich our knowledge of existing solutions.
As Price explains, however, none of these strategies is necessarily better from the perspective of social welfare. Sometimes exploring innovation will prove more valuable, but in other cases deepening innovation can offer more utility, for example by developing richer knowledge about available technologies and their uses. But, he argues, patent law exhibits a strong preference for differentiating and exploring innovations over deepening innovations. Most obviously, the novelty and obviousness doctrines demand differentiation from the prior art in order to obtain patent protection. While some opportunities exist for patenting new uses of existing technologies, the incentive for this sort of R&D is much weaker than it is for differentiating innovations. Similarly, developing variations on the prior art shields firms from the threat of infringement liability.
Yet while patent law encourages variation, Price argues, it does not systematically favor exploring innovations over more modest differentiating ones. Patent law doctrine, including the mostly toothless utility requirement, does not specifically encourage large creative leaps forward (although they may receive broader scope). Instead, patent law relies on the noisy and biased signals of market participants to reward inventions. Patents provide value only to the extent that consumers purchase the products that innovators create. And very often, innovators can reap substantial returns by simply mimicking others’ products or tweaking their own. To demonstrate these effects, Price explores how patent law and the market affect the incentives of pharmaceutical and biotech firms, often resulting in “me too” drugs and “evergreen” patent portfolios that generate little social welfare improvement.
Price’s main points in the Essay are descriptive rather than normative. He wants to show us that patent law’s novelty and obviousness doctrines have significant innovation costs, especially in their current formulations. Of course, Price acknowledges, differentiation has significant benefits. Having multiple options increases consumer choice, which is especially valuable if consumers have heterogeneous needs. And having multiple drugs that treat the same condition can create some downward pressure on prices, although the evidence for this is decidedly mixed. But differentiation’s costs are also severe. Price focuses on three principal costs:
- The costs of inventing around upstream products are high.
- When products diverge from each other, they are less interoperable. This is costly for consumers who face higher switching costs (e.g. between different medical devices) and for innovators who are trying to work across incompatible products.
- Differentiating innovation reduces society’s depth of knowledge about existing products, in favor of shallow knowledge about a broader range of products.
Price helpfully illustrates each of these three costs with examples from pharmaceutical and biotech innovation that appear to be decidedly suboptimal. He also adds a lot of nuance derived from his extensive knowledge of FDA regulation and insurance reimbursement.
The essay finishes with some ideas for how, if policymakers decide that innovation incentives are improperly skewed, they might intervene either in patent law (by increasing the obviousness threshold) or via the regulatory system (by giving FDA the power to limit approval for drugs that do not demonstrate meaningful improvements over the status quo). There might also be opportunities for insurance companies and government payers to play a role in directing innovation towards more socially beneficial outcomes.
While I understand Price’s choice to frame this essay modestly and descriptively, I believe that the available evidence points towards clear innovation failures in the pharmaceutical context. Recent empirical research indicates that as many as half of FDA-approved drugs are no better than previous treatments, and as many as a quarter of them are actually worse. Moreover, while I’m nervous about giving the FDA more power to engage in ex ante cost-effectiveness analysis as some European agencies do, I think there may be opportunities for patent law to affect firms’ incentives with ex post adjustments to patent duration and strength. In a new paper, Jonathan Masur and I lay out a few of these options. Neel Sukhatme and Gregg Bloche have recently explored similar ideas. Ultimately, however, Price’s essay is another characteristically thoughtful contribution to a hugely important field.
Enrico Moretti, The Effect of High-Tech Clusters on the Productivity of Top Inventors, NBER Working Paper No. 26270 (Sept. 2019), available at NBER
Why do inventors increasingly locate near each other in metropolitan technology hubs like the Bay Area, Seattle, or Boston, despite the high costs of living in these areas? Just ten cities accounted for a remarkable 70% of computer science inventors in 2007. A leading view has been that “agglomeration economies” make researchers inside these innovation clusters more productive, although measuring this effect is difficult. In his new working paper, economist Enrico Moretti examines the location of U.S. patent inventors over time to estimate just how large these productivity gains are: If inventors were distributed uniformly across the United States, Moretti estimates that their overall patenting rate would decline by 11%.
I like this paper (lots) not because the result is counterintuitive—quite the opposite. Rather, in a field with so many barriers to real empirical progress, it is worth celebrating work that attempts to rigorously understand what factors actually affect innovation. And Moretti’s work on the geography of innovation has important lessons for law and policy scholars, including about the importance of looking outside IP for evidence-based innovation policies and the complex connection between innovation and growing wage inequality.
Moretti motivates the productivity effect of innovation clusters by examining the rapid collapse of Kodak in 1996, which led to a nearly 50% decline in the Rochester technology cluster. By 2007, patenting by the average non-Kodak and non-photography inventor in Rochester had declined by 20% relative to the typical inventor in other cities. In his main analysis, Moretti studies this effect for all U.S. patents filed between 1971 and 2007, classified into five technology areas—semiconductors, computer science, biology and chemistry, other engineering, and other science—and assigned by the inventors’ addresses into 179 “economic areas” covering the United States. As he notes, there are limitations to this data; for example, citation-weighted patent counts are imperfect proxies for innovation (particularly in computer science), and he only has observations for inventors in years when they patent. But having an enormous dataset with worker-level measures of both productivity and location allows Moretti to go beyond prior work in studying the productivity effect of agglomeration.
Using different empirical methodologies and robustness checks (such as separately examining effects on inventors who stay in a city while its cluster size changes, and using models based on instrumental variables), Moretti estimates how patenting productivity varies with cluster size. For example, the average computer scientist moving from the median cluster (Gainesville, FL) to the 75th-percentile cluster (Richmond, VA) sees a 12% increase in patenting, with no evidence of a “pre-trend” that would predict this productivity increase before the move. And a Bay Area computer scientist patents 23% more than they would in an average cluster. Furthermore, Moretti finds that the patenting gains for large clusters outweigh the losses for small clusters, which is why smoothing out the distribution of inventors would lead to an overall decline in U.S. patenting of 11%.
This work has at least three lessons for legal scholars. First, scholars who care about innovation should pay attention to geography. Some do, of course; Camilla Hrdy has excellent articles on incentives for local innovation clusters, for example. But we still generally teach our students that IP law is the most important tool in the innovation policy toolkit, despite continued empirical uncertainty about whether stronger patent laws even increase aggregate research investments. Policymakers may have a more demonstrable effect on innovation by focusing on legal institutions affecting where people live—including laws affecting land use, taxation, and immigration.
Second, Moretti’s work highlights how innovation policy can’t escape discussions of inequality. This is not just an issue of IP reflecting a choice in how to allocate access to knowledge goods; IP and other innovation incentives also reflect transfers to innovative firms and individuals, and the returns to patenting are distributed unequally. In addition to persistent disparities by race, gender, and socioeconomic class, the geographic distribution of patents is increasingly concentrated in metropolitan areas and near research universities and is correlated with a region’s economic health. As Moretti explains, clustering is “important for overall production of innovation in the US” but also “may exacerbate earning inequality across US communities.” Colleen Chien tackles this issue in a terrific new working paper, in which she argues that the history of U.S. patenting supports both an optimistic view of increasing innovation and a pessimistic view “that the innovation pie has become increasingly unevenly distributed and centered on immigrants and coastal elites.” There is no easy solution, but more widespread acknowledgement of the problem is a first step.
The third lesson is the flip side of the second: just as innovation scholars shouldn’t ignore inequality, scholars and policymakers concerned about inequality should recognize that trying to lure tech firms to distressed cities with economic incentives likely comes with a real efficiency tradeoff. Greater improvements for both innovation and equality might be made, for example, by seeding promising tech hubs with federal funding, or by focusing on the land use policies that are currently limiting growth in places like the Bay Area. These policies wouldn’t just help tech workers: in earlier work, Moretti has concluded that “for each new high-tech job in a city, five additional jobs are ultimately created outside of the high-tech sector in that city,” and has argued that the federal government should provide greater relocation assistance to unemployed workers. Of course, that assistance is only helpful if there is actually affordable housing to relocate to. In January, for the third year in a row, the California Senate rejected a bill meant to stimulate housing production. California’s inability to increase the supply of housing is not just a failure for Californians facing homelessness and housing instability—it is also a failure for U.S. innovation. And based on Moretti’s work, policymakers have at least a rough quantitative measure of how much more productive researchers can be when they are able to locate in Silicon Valley and other technology clusters.
Camilla Hrdy, Intellectual Property and the End of Work, 71 Fla. L. Rev. 303 (2019).
Do intellectual property (IP) rights create or destroy jobs (or both)? Industry associations and governmental agencies, such as the Patent & Trademark Office (PTO), frequently tout IP as a major force in creating (good) jobs as well as significantly contributing to economic growth. In 2016, the PTO, for instance, claimed that IP-intensive industries were directly or indirectly responsible for 45.5 million jobs, said to represent 30 percent of all jobs in the US. Without questioning this statistic, Professor Hrdy’s article explains that this is at best only one side of the story.
The main insight of the article is this: “Intellectual property may be partly responsible for job creation for people who work within IP-intensive industries . . . But a significant subset of innovations protected by IP, from self-service kiosks to self-driving cars, are labor-saving, and in many cases also labor-displacing” (emphasis in the original). The development and deployment of automated systems for performing a wide variety of tasks in a wide array of industries is “drastically reduc[ing] the amount of paid human labor required to complete a task.” Job losses resulting from technological change give rise to what economists call “technological unemployment.”
While some studies have concluded that the displacement of labor due to technological innovations has resulted in more job creation than job destruction, Hrdy questions whether this result will hold true in the near future owing to several factors. These include the increasing quality and pace of automation in various sectors, a decrease in quality of the work that remains unautomated, a rising inequality in who has what kinds of jobs, and the inability of education to keep pace with the needs of displaced workers.
Hrdy offers self-driving trucks as a case study. According to the American Trucking Association, there are currently 3.5 million professional truck drivers in the US. The median salary of these truck drivers is about $40,000 per year. If the huge investments now being made in the development of IPR-laden self-driving trucks pay off, truck driving may no longer be a viable source of employment for most, if not all, of these people. The highly skilled engineers who are developing the software and hardware for self-driving vehicles generally earn more than $200,000 per year. But truck drivers cannot easily or quickly become engineers. Their jobs are going to be substantially displaced by automation. What is to become of these workers?
What should the US do about the labor-displacing impacts of technological innovation? The right answer is not the one Queen Elizabeth I chose when asked to grant exclusive rights to a knitting machine in 1589. She denied the inventor’s request for a patent because it would bring ruin to the many workers who made their living by hand-knitting clothing and other products. Elon Musk has suggested a universal basic income initiative as a solution. Bill Gates has proposed a robot tax. These and other possible solutions to the end-of-work problem brought about by advances in technology and IPRs on which these innovations are built are discussed in Hrdy’s fascinating paper. IP may not be wholly responsible for the end of work, but Hrdy says that it “magnifies the division of rewards between generators of IP and the workers whom their innovations replace.” Without destroying the incentive effects of IP, Hrdy would have us consider and address the distributive effects.
Whether you agree with Hrdy’s conclusions or not, this provocative article is well worth a read.
Brian L. Frye, Plagiarize This Paper, IDEA: The IP Law Review (forthcoming 2020), available at SSRN
Oscar Wilde: “That was an awfully good joke you made last night. I wish I could say it was mine.”
James Whistler: “You will my boy. You will.”
Melvin Helitzer: One day Milton Berle and Henny Youngman were listening to Joey Bishop tell a particularly funny gag. “Gee, I wish I said that,” Berle whispered. “Don’t worry, Milton, [said Henny,] you will.”
Plagiarism is not a crime, or even a cause of action. But it is the “academic equivalent of the mark of Cain,” a curse that cannot be undone. Even an unsubstantiated accusation leaves an indelible stain, and a credible complaint cannot be countered. A plagiarist is an academic pariah, a transgressor of the highest law of the profession, the embodiment of the “great deceiver,” who leads everyone astray. Anything else can be forgiven, for the sake of the scholarship. Plagiarism tarnishes the scholarship itself, and leaves it forever suspect. If the purpose of scholarship is dowsing for truth, then the plagiarist is a liar who poisons the well from which everyone draws.
This is a jot recommending Brian Frye’s short, lively, and incisive article about plagiarism, Plagiarize This Paper. And, fittingly, everything you’ve read before this paragraph I’ve plagiarized from Brian’s work.
Or have I?
Brian wants others to copy his words, and even his ideas, and he doesn’t care whether we attribute them to him. He tells us that very clearly in his paper:
I explicitly authorize plagiarism of this article. I permit and encourage people to copy this article and republish it under their own name. I permit and encourage people to copy expressions from this article and use them without attributing them to me. And I permit and encourage people to use the ideas expressed in this article without attributing them to me.
Brian’s attempt to license plagiarism raises a fascinating question – what are norms against plagiarism really about? There are two, often entwined, interests that are usually identified as being protected by norms against plagiarism. First, it is often said that norms prohibiting plagiarism are there to protect authors’ interest in attribution. So, for example, anti-plagiarism norms would protect a junior academic against unacknowledged taking of her ideas by a senior academic who might otherwise have the power to get away with it. That seems like an interest worth protecting, though, of course, figuring out the origin of ideas with any precision is often difficult in reality. And second, anti-plagiarism norms are often characterized as protecting readers’ interest in not being defrauded – i.e., preventing a writer from fooling readers into thinking that a penetrating idea or a felicitous sentence is his when in fact it was invented by another. As a law professor who grades papers, this justification rings true to me – I want to know that the brilliant things that my students say in their papers are actually the product of their minds.
With respect to authors’ attribution interest, Brian has made clear that he doesn’t care about attribution, and that raises the question whether it’s fair to condemn as a “plagiarist” someone who denies to Brian what he doesn’t want. Brian is engaging in a provocation here: He thinks that people are likely to resist the notion of licensed plagiarism, which suggests that anti-plagiarism norms aren’t actually about protecting authors or readers, but about something else:
Ultimately, plagiarism norms are just cartel rules dressed up as moral obligations. Different discursive communities have adopted different plagiarism norms because they have different economic interests. And the plagiarism norms adopted by a community reflect the economic interests of its members. As those economic interests are contested and shift, the community’s plagiarism norms also are contested and shift. Accordingly, the plagiarism norms of any particular discursive community typically reflect the consensus interests of that community at that point in time.
I think Brian is on to something: If you look closely at institutional anti-plagiarism policies in the real world, it’s difficult to explain their actual content purely by reference to the interest of authors in attribution, or of readers in not being lied to. As an example of how anti-plagiarism norms tend to over-run both of these justifications, take my own institution, NYU. Its standard of “academic integrity” begins with a general definition of plagiarism as “presenting others’ work without adequate acknowledgement of its source, as though it were one’s own.” “Plagiarism,” the general definition continues, “is a form of fraud. We all stand on the shoulders of others, and we must give credit to the creators of the works that we incorporate into products that we call our own.”
This general definition seems to focus precisely on the two justifications discussed earlier: (1) the interest of readers in not being lied to and (2) the interest of writers in receiving attribution when others are “standing on their shoulders.” These are valid justifications for anti-plagiarism norms, so all seems fair enough so far. But then the NYU plagiarism statement gets down to some examples of use by one writer of the work of another that would count as plagiarism:
- a sequence of words incorporated without quotation marks
- an unacknowledged passage paraphrased from another’s work
- the use of ideas, sound recordings, computer data or images created by others as though it were one’s own
These examples don’t mesh well with the two justifications. Take the first, “a sequence of words incorporated without quotation marks,” or the second, which bars unacknowledged paraphrasing. If you think about whether taking a “sequence of words” or paraphrasing without acknowledgment should count as either fraud on readers or unacknowledged shoulder-standing that hurts writers, you will realize rather quickly that the answer depends entirely on the particular words involved.
Let’s first imagine a student who takes these words: And then the day came, when the risk to remain tight in a bud was more painful than the risk it took to blossom.
Some people might recognize that these words comprise the entirety of Anais Nin’s short poem “Risk.” But some people won’t. If the student uses the words without acknowledgment, she will have defrauded readers who don’t recognize the source. We recognize the unacknowledged taking here as fraud on readers because of the effect of the specific words. They are lovely, insightful, richly metaphoric. Someone who doesn’t have Nin’s poetic oeuvre committed to memory might think that the genius of the words is the student’s genius, which it is not. And that gets to the other justification – the attribution interest of writers. Nin has a legitimate attribution interest here, precisely because this particular sequence of words is so highly creative. The student is standing on a giant’s shoulders, but unless she says so, not everyone is going to see that. If anyone qualifies as a plagiarist, this student qualifies.
Now let’s imagine a second student who takes these words: When he lived in Chicago, Mojica sang in punk bands, ran a record label, and owned the Jinx Café and a video rental shop called Big Brother.
I doubt you will recognize these words, and there’s no reason anyone should. They comprise a sentence of workmanlike journalistic prose, reporting basic facts about an actual person. These words, and others like them, were the subject of a recent public plagiarism scandal involving former New York Times editor Jill Abramson, who was accused of copying (actually, mostly paraphrasing) in a book she wrote short passages from articles written by a number of less-well-known journalists. In fact, Abramson cited (in the book’s endnotes) virtually all of the articles she used. But in several places she failed to put language that she copied or paraphrased in quotations. For that, she was publicly flayed.
Did Abramson deserve to be labeled as a plagiarist? She did not commit fraud in the same sense as did the student who took Anais Nin’s words without acknowledgment. The words Abramson took are certainly useful to her narrative, but no one is going to give Abramson any credit tied to the particular words. As literary composition they are utterly banal. The words do report facts, but the facts are of little or no value in themselves. Abramson is not making off with someone else’s diamonds, nor is she standing on another writer’s shoulders – the words comprise, at most, a very small foot-stool. At bottom, Abramson is simply reproducing a sequence of words without telling you where they originated.
In the end, that’s what many plagiarism disputes are largely about. Not fraud on readers, or failing to acknowledge that one is “standing on the shoulders” of another writer, as the NYU general definition of “plagiarism” claims, but the mere reproduction of words. That is also what copyright law is about, and one might ask why we extend plagiarism norms beyond their core when we have copyright to police mere word-taking. Brian has an explanation:
Essentially, academic plagiarism norms are the equivalent of a tax imposed on junior scholars, for the benefit of senior scholars. Junior scholars must err on the side of attributing ideas to senior scholars, whether or not attribution is accurate or helpful, on pain of suffering a plagiarism accusation. As a consequence, senior scholars collect “interest” on the intellectual capital of junior scholars.
That’s a big idea, one that extends beyond ideas to works, and also beyond academia to journalism and other places where enforcement of anti-plagiarism norms is particularly fierce. And Brian deserves full credit for this idea. Even if he doesn’t want it.
Cite as: Christopher J. Sprigman, Plagiarize This Jot (January 13, 2020) (reviewing Brian L. Frye, Plagiarize This Paper, IDEA: The IP Law Review (forthcoming 2020), available at SSRN), https://ip.jotwell.com/plagiarize-this-jot/
Alexandra J. Roberts, Trademark Failure to Function, 104 Iowa L. Rev. 1977 (2019).
When a new word or catchphrase enters the social lexicon, some individuals will rush to the U.S. Patent and Trademark Office to try to be the first to obtain a trademark registration. That was the case with one John E. Gillard, who applied pro se to register #COVFEFE for hats, T-shirts, and related goods mere hours after the President included the word at the end of a midnight tweet on May 31, 2017. The Trademark Trial and Appeal Board affirmed refusal of the registration on the grounds that #COVFEFE, as a “sui generis nonsense word” that allowed users and observers to “project onto it any meaning they wish,” failed to function as a trademark for the applicant’s goods, particularly given the wide array of merchandise from different sources already featuring the term.
The question may be a bit murkier, though, when the applied-for mark has a closer association with the applicant. In August, many media outlets were abuzz with the news that The Ohio State University had filed an application seeking a trademark registration for the word THE, for use on “clothing, namely t-shirts, baseball caps and hats” in “standard characters, without claim to any particular font style, size, or color.” Included among the submitted specimens was a photograph of a women’s T-shirt from “The Official Team Shop of Buckeye Nation®,” depicting a red shirt with the word THE in large block letters, with a smaller Ohio State logo underneath, as well as a photograph of a white baseball cap with the word THE in large red capital letters. The popular reaction seemed to revolve around the perception that it was ludicrous to obtain a trademark registration for a common English word, even though many such trademarks exist. (APPLE comes immediately to mind.)
A more nuanced critique, as Alexandra Roberts’s insightful article Trademark Failure to Function helps us to understand, would ask whether THE functions as a trademark in this context — whether consumers would use the word THE on the front of a T-shirt or cap to allow them to find merchandise produced by (or perhaps authorized by) the university or whether the word functions simply as an expressive device, a way of communicating support for the institution and/or its athletic teams (what trademark law calls ornamental use). This, of course, is not an issue limited to Ohio State’s application. A consumer who buys a t-shirt with a swoosh on the front may be using the swoosh both to identify a perceived high-quality shirt and to communicate to others that the consumer is a person who wears Nike apparel. But when what is on the front of the shirt is less likely to be used to communicate the quality of the manufacture, as with many T-shirts adorned with words or graphics, one might ask whether it is functioning as a trademark at all. Indeed, on September 11, the trademark examining attorney refused Ohio State’s registration. Given the location of the word THE on the “upper-center area on the front of the shirt and the front portion of the hat, where ornamental elements often appear,” the examining attorney concluded, the word did not function as a trademark to indicate the source of Ohio State’s merchandise. It remains to be seen whether Ohio State will challenge this conclusion or abandon its attempt to seek registration.
Professor Roberts’s article highlights that the “failure to function” doctrine has not received enough attention from trademark scholars, particularly in comparison to the question of trademark distinctiveness. The two are different, although related. Distinctiveness analyzes the semantic content of a mark — whether consumers will understand the mark to have a relationship to its good or service that conveys that it is functioning as a proper name and not as a description or generic term. The trademark GREYHOUND for bus services will be interpreted by consumers as a proper name, we surmise, because the term is being deployed metaphorically. One must first understand that greyhounds are known for being fast animals before one can understand that the mark suggests that the bus service is also fast. Because the mark owner could have conveyed this information more directly by using a word like “speedy,” we assume that consumers will conclude that the choice to use a metaphor must mean that the mark owner intends the mark as a name. The Abercrombie spectrum (derived from Abercrombie & Fitch Co. v. Hunting World, Inc.), a hierarchy of trademark distinctiveness, is a mainstay of trademark validity analysis.
Whether a trademark functions as a mark, however, asks whether the mark “appear[s] where consumers expect a trademark to appear” and whether it is “sufficiently set off from the surrounding text and images to attract notice” (P. 1981) — in other words, the “visual relationship” between the mark and its goods or services rather than the “conceptual relationship” between them. (P. 1983.) The interaction between distinctiveness and function, Professor Roberts contends, is “interdependent and inverse: the less distinctive a mark is, the greater indicators of trademark use are needed to ensure consumers will perceive it as a mark, and vice-versa.” (P. 1987.) In other words, consumers might recognize a fanciful term such as a pharmaceutical name no matter how it appears, but a less distinctive mark might require visual clues such as font, color, design, or a ™ symbol to convey that it is intended as a mark. These are things that the marketing and design literature has studied for some time, and Professor Roberts mines the research to demonstrate the importance of a consumer-centric analysis. Scholars building on her work might think about the ways in which these interpretations depend, as Rebecca Tushnet has noted, on understanding the role of implication in speech and on cultural competency; considerations of literacy, visual acuity, and other tools that consumers may or may not bring to the table may also complicate the analysis.
Because courts have not paid enough attention to the failure to function doctrine, writes Professor Roberts, and often analyze it separately from distinctiveness, the doctrine has been underdeveloped, which means that some faulty registrations have managed to escape close review. A tandem review also militates against strategic lawyering, as Professor Roberts points out, as a savvy trademark practitioner can easily help her client overcome a failure to function refusal by, for example, advising the creation of a hangtag or other ways commonly used to indicate trademark use.
The article caused me to think more deeply about something I had largely taken for granted. Professor Roberts’s focus is largely on word marks, but her analysis of trade dress and the Seabrook standard provides a helpful way of considering the various ways in which we assume consumers will understand trademarks. The more nuanced analysis she suggests doesn’t always, of course, provide an easy answer. Professor Roberts cites as examples of error the registration of #BeUnprecedented for legal research services and #SharetheSilence for alcoholic beverages, contending that the specimens offered no evidence that consumers would see these as anything but hashtags. (Pp. 2011-2012.) But a marketing expert might argue that those are exactly the kinds of phrases one would develop as a slogan for a client, and consumers in this context might assume that the hashtag developed from a slogan, rather than the reverse. Those who follow the news might immediately understand that #COVFEFE refers to the President’s tweet, not to any one manufacturer, but do buyers of Ohio State apparel understand THE to be only an expression of fan support or also an indicator of licensed merchandise? And if the university’s registration attempt is ultimately successful, what will consumers learn from that about trademark law generally?
Professor Roberts’s article reminds us that consumer perceptions are both the cause and the result of trademark validity. And it comes at the right time: The TTAB, as John L. Welch has documented, appears to be focusing more on failure-to-function in recent months, which makes Professor Roberts’s article even more relevant for practitioners as well as scholars. (A commenter at Mr. Welch’s blog offers the perceptive suggestion that the recent spate of failure-to-function denials at the TTAB may be evidence of a developing post-Brunetti strategy, as this example may confirm.) Failure to function may have been a somewhat dormant doctrine in recent years, but thanks to Professor Roberts, we can better understand the benefits of its coming out of hibernation.
We have long understood that people have a right to repair what they own, but this right to repair is under siege. A new article by Leah Chan Grinvald and Ofer Tur-Sinai explains how IP rules are inhibiting these repair rights and why laws protecting the right to repair are necessary and justifiable. As I explain below, authors Grinvald and Tur-Sinai describe the growing right-to-repair movement pushing for legislation to protect the right to repair and show how intellectual property laws should facilitate, not interfere with, consumers’ rights to repair what they own. The authors also propose a theoretical framework through which they analyze intellectual property doctrine as enabling rather than inhibiting consumer rights.
The right-of-repair problem is easy to identify and touches many aspects of everyday life. Our coffee machines break and, because certain components are protected by patent and copyright, this relatively simple machine cannot be fixed except by the manufacturer, who charges as much as the machine itself to replace the part. Automobile repairs performed only by “authorized dealers” preserve warranty agreements and are enforced through trademark law as well as patent law. These kinds of restrictions make the market for repairs tightly controlled and expensive, sometimes entirely foreclosed, pushing consumers to buy new products instead of fixing old ones. The restrictions benefit manufacturers and their business partners. But they hurt consumers and repair shops and contribute to substantial amounts of waste, inhibiting efforts at reversing devastating climate change.
In an elegant analysis, the Article identifies IP as both the problem and the solution to protecting the right of repair. In a reversal of the usual use of IP, the authors cleverly rely on IP law’s internal justifications – utilitarianism, incentivizing innovation, personal autonomy, scientific progress, and promoting information disclosure – to justify expanding access to consumer products for users and small businesses, as opposed to strengthening the control of intellectual property owners. When this analysis is paired with the socio-legal movement also described in this Article – the consortium of independent repair shops, “do-it-yourselfers,” and e-recyclers – the Article is a helpful and insightful roadmap for the “fair repair” legislative initiatives across the country.
One of the Article’s many engaging qualities is its plethora of details about the “fair repair” movement and the overwhelming variety of everyday consumer products affected. The Article describes legislative initiatives in approximately 20 states, the proposed model legislation for the “right to repair,” and the problems the legislation has been facing. In the process the authors paint a picture of a socio-political movement that should be relevant to most people but somehow has stayed relatively obscure. The examples of IP rules burdening or blocking our ability to repair cell phones, coffee makers, computers, agricultural machinery and home heating and cooling systems, to name just a few, are sufficiently ubiquitous for the problem described to feel personal and urgent.
The Article’s central innovation is its configuration of the “right of repair” in terms of four concentric circles. At the core is the individual consumer’s right to repair goods they own without interference by the original manufacturers. For this right, the Authors rely heavily on the first sale doctrine – the IP principle that, once sold, an object falls within the dominion of its owner, who is free to use, resell, alter, or destroy it without restriction by others.
The next circle expands the right of repair beyond consumers to repair shops and includes not only repair (as a form of property use) but also diffusing information about repair, such as through instructional videos, paper manuals, or advertising. The Authors argue that the diffusion of repair information is essential to enable the right of repair in a meaningful manner. This circle implicates First Amendment interests and the legality of circumventing “digital locks” that prevent access to technological mechanisms in need of repair.
The third circle focuses on the right to make, sell, and use replacement parts in competition with the original equipment manufacturer. This level of the right to repair aims to disrupt the original manufacturers’ monopoly over replacement parts – a monopoly IP law arguably grants them – while preserving enough market share for IP owners to earn the profits necessary to incentivize the parts’ production in the first place.
The fourth circle proposes an affirmative duty of original manufacturers to ensure effective implementation of the right to repair. The Authors acknowledge this may be the most controversial component of their analysis, but it is the focus of the model legislation currently being debated and is thus essential to a thorough discussion. It is also one of the most interesting aspects of the Article because it directly addresses the challenging theoretical issue of “rights” being meaningful only when complemented by “affirmative duties.” This discussion mentions the possibly fruitful path of compulsory IP licenses and the difficulties posed by the 2016 Defend Trade Secrets Act.
At each level of the right to repair, the Authors analyze the relevant IP doctrines and legislative initiatives, emphasizing tweaks to IP doctrines necessary to effectuate the right and legislative corrections that would make proposed “right of repair” bills stronger or more likely to succeed. At the end of the Article, the Authors address lingering and plausible critiques of their proposals – such as maintaining quality, economic loss, and counterfeiting – tying up some of the loose ends that are inevitable in an ambitious project such as this one.
As all IP professors know, IP laws overlap, and that complicates the analysis of their application across a range of scenarios. Students nonetheless usually learn IP law doctrine-by-doctrine, and the justice implications of IP law can be frustratingly marginal in many introductory courses. But this Article, and the right to repair for which it advocates, provides serial examples that demand inter-doctrinal analysis – combining trademark and patent law, for example, or copyright with design patent law. And in the context of the right of repair, the analysis also demands attention to welfare, to environmental justice, and to the implications of consumers investing in property they are legally forbidden to maintain. It is an article easily used in teaching to draw out these overlapping IP issues, which also implicate civil rights concerns, or to recommend to students searching for a research topic with contemporary significance and many still-unanswered questions.
Although drawing on a political movement already afoot, this Article reads like a call to arms written by advocates, equally theoretical and practical, seeking to launch a successful revolution. The elegance and thoroughness of the Article’s arguments, however, suggest that no revolution of IP law is needed – only targeted adjustments fully consistent with IP law’s purposes and structural features. There are other likely partners in this movement left untapped. I couldn’t help but think of the many rights on which we rely that are dependent on others and on which this Article’s analysis could draw – rights to reproductive freedom and the right to vote, for example, both of which are under siege at the moment and critical to the human autonomy and equality on which this Article’s “right of repair” ultimately relies. Perhaps painting with too broad a brush and drawing analogies to controversial subjects would undermine the Authors’ agenda. The Article’s framework is IP law, not constitutional law, but their mutual resonance makes the Article intriguing and timely. It is well worth consideration as a model of IP scholarship increasingly relevant in the digital age.