The Journal of Things We Like (Lots)

Can Neuroscience Fix Trademark Surveys?

Zhihao Zhang, Maxwell Good, Vera Kulikov, Femke van Horen, Mark Bartholomew, Andrew S. Kayser & Ming Hsu, From Scanner to Court: A Neuroscientifically Informed “Reasonable Person” Test of Trademark Infringement, 9 Sci. Advances 1 (2023).

In trademark litigation, consumer surveys can determine a number of important doctrinal questions, including the most important one: whether consumers are likely to be confused into thinking that the defendant’s product was made or licensed by the plaintiff. Recently, scholars have questioned the validity and reliability of standard trademark surveys, suggesting that they are easy to manipulate and biased in favor of one party or the other. Wouldn’t it be great, then, if there were a reliable way to determine whether a survey was biased or not? Using neuroscientific imaging, an interdisciplinary group of researchers (including law professor Mark Bartholomew) has proposed just such a possibility in a new paper, From Scanner to Court: A Neuroscientifically Informed “Reasonable Person” Test of Trademark Infringement.

Trademark surveys can suffer from a number of flaws. They may be explicitly biased in favor of one party or another, for example, by describing the defendant as a “copycat” or the plaintiff as a “trademark bully.” They may exhibit more subtle biases in how they frame questions about similarity and confusion. And, finally, survey participants always know the nature of the survey they are taking, so they may exhibit “demand effects,” providing what they anticipate are the surveyor’s desired answers rather than their true responses.

The researchers began by demonstrating the manipulability of survey instruments. With two different plaintiff products, they manipulated the survey language to create both “pro-plaintiff” and “pro-defendant” surveys, as well as a putatively “neutral” survey. For each plaintiff product, they included a list of other products that varied in their degree of apparent similarity. For example, Reese’s Peanut Butter Cups served as the plaintiff product in one group, and the other products included Toffee Crisp (an actual defendant in litigation brought by Reese’s) as well as less similar products like Snickers, Justin’s, and Ghirardelli. When they tested a group of participants recruited from Amazon Mechanical Turk, the biases had the expected effect. Participants thought the defendant’s product (Toffee Crisp) was more similar to the plaintiff’s (Reese’s) in the “pro-plaintiff” survey than in the “pro-defendant” survey, and the “neutral” survey produced intermediate results.

To generate a more objective measure of product similarity that does not rely on participant reports, the researchers exploited an intriguing feature of human perception and cognition known as “repetition suppression.” The idea is simple: when we are presented with a stimulus that is very similar to one that we have just seen, our perceptual response to it diminishes. This is effectively a visual heuristic. Having seen something once, our brains devote less cognitive capacity to seeing it the second time. The empirical strategy, then, compares the degree of neural diminution across various stimuli to measure stimulus similarity objectively. The more similar two stimuli are, the more participants’ neural responses to the second stimulus will be diminished.
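The core of this strategy can be sketched in a few lines of Python. Everything here is hypothetical: the function, the response amplitudes, and the simple normalized-difference index are illustrative assumptions rather than the paper’s actual analysis pipeline.

```python
def repetition_suppression_index(first_response, second_response):
    """Infer stimulus similarity from repetition suppression: the more
    the neural response to the second stimulus drops relative to the
    first, the more similar the two stimuli are taken to be."""
    return (first_response - second_response) / first_response

# Hypothetical response amplitudes (arbitrary units) to a second product
# image viewed just after a Reese's image.
print(repetition_suppression_index(1.00, 0.62))  # Toffee Crisp: 0.38 (more similar)
print(repetition_suppression_index(1.00, 0.91))  # Snickers: 0.09 (less similar)
```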

The study entailed functional magnetic resonance imaging (fMRI) scans of 26 participants who viewed the various products described above. fMRI scanning measures relative changes in blood oxygenation in the brain as a proxy for neural activity. By focusing on regions of the brain known to process visual stimuli, the researchers could measure the degree of diminished neural activity associated with the Reese’s-Toffee Crisp pair compared to the Reese’s-Snickers pair.

When the researchers compared the neural similarity measures detected by fMRI to the self-reported similarity measures from the prior surveys, they found a strong correlation between the fMRI data and the “neutral” survey but no significant correlations between the fMRI data and either the “pro-plaintiff” or “pro-defendant” surveys. This suggests that the “neutral” survey is, in fact, a good proxy for participants’ actual experiences of visual similarity.

To be clear, the researchers’ methodological contribution isn’t to suggest that all trademark cases should require incredibly expensive neuroscientific studies. Rather, by using techniques like this one, scholars can develop a set of “best practices” or “gold standards” for trademark survey research. The goal is to use neuroscience to validate much cheaper behavioral surveys.

Of course, this is just the beginning. There is much that the current study doesn’t tell us. It can tell us that participants think the overall visual impression of certain trade dress is more or less similar to other trade dress. But it cannot tell us, for example, whether the participants were paying attention only to the protectible aspects of the trade dress. Nor do we know if some degree of similarity is consistent with consumers being confused as to source. The Mercedes logo and a “peace” sign look very similar, but people may not be confused by them. But this study seems a step in the right direction, and I’m excited to see where this kind of research will go next.

Cite as: Christopher J. Buccafusco, Can Neuroscience Fix Trademark Surveys?, JOTWELL (June 7, 2023) (reviewing Zhihao Zhang, Maxwell Good, Vera Kulikov, Femke van Horen, Mark Bartholomew, Andrew S. Kayser & Ming Hsu, From Scanner to Court: A Neuroscientifically Informed “Reasonable Person” Test of Trademark Infringement, 9 Sci. Advances 1 (2023)), https://ip.jotwell.com/can-neuroscience-fix-trademark-surveys/.

The Complex Effects of Piracy on the Movie Industry

Klaus Ackermann, Wendy A. Bradley & Jack Francis Cameron, Avengers Assemble! When Digital Piracy Increases Box Office Demand (June 30, 2020), available at SSRN.

Does piracy of creative goods such as movies, books, or songs reduce paid demand for those goods? This seemingly straightforward question has proven surprisingly difficult to answer in the real world.

Piracy may draw away customers who might otherwise have paid. But it’s also possible that consumers of pirated copies are, by and large, not people who would have paid to consume if they couldn’t get access for free. Piracy may also help spread the word about a good movie, book, or song. This sort of informal advertising might drive up paid consumption, even if some people who would have paid are lost to piracy. It’s also possible that some combination of all these things might happen, with uncertain net results.

In a new empirical paper, titled Avengers Assemble! When Digital Piracy Increases Box Office Demand, Klaus Ackermann, Wendy A. Bradley, and Jack Francis Cameron offer a nuanced and interesting study of the effects of piracy on the movie industry. The effects of piracy, as it turns out, have a lot to do with what kind of movies we’re talking about.

The authors built a novel dataset that identified the existence and the timing of the earliest upload of a high-quality pirated copy for every U.S. movie release. They did this with data on the appearance of movie piracy “torrents” between January 2004 and January 2020 from the online piracy site The Pirate Bay (TPB). They then matched these data with movie release information during the same period from the well-known IMDb database.

Merging these two streams of data allowed the authors to match up release dates and “piracy dates.” They measured changes in box-office revenue for pirated movies in the first 48 days after their theatrical releases, comparing the period after a pirated copy appeared with the earlier post-release period before the movie was pirated. They then used a formula to adjust for the general fall-off in movie box-office revenues over time. If piracy was substituting for paid demand, the authors could pick up that effect by comparing (time-adjusted) pre-piracy vs. post-piracy box office revenues across many films.
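The shape of that comparison can be illustrated with a short sketch. The authors’ actual adjustment formula is not reproduced in this review, so the exponential fall-off assumed below is purely illustrative, as are the revenue figures.

```python
import numpy as np

def decay_adjusted(revenues, decay_rate=0.05):
    """Divide out an assumed exponential fall-off in daily box office so
    that pre- and post-piracy days are comparable (illustration only)."""
    days = np.arange(len(revenues))
    return np.asarray(revenues) / np.exp(-decay_rate * days)

# Hypothetical daily box-office revenue ($ millions) for one film, with a
# high-quality pirated copy appearing after day 5.
daily = [10.0, 9.4, 8.9, 8.5, 8.1, 7.0, 6.3, 5.7, 5.2, 4.8]
adj = decay_adjusted(daily)
pre, post = adj[:5].mean(), adj[5:].mean()
print(f"Adjusted change after piracy: {100 * (post - pre) / pre:+.1f}%")
```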

The authors hypothesized that piracy has different effects on different types of movies. Specifically, they theorized that “spectacle” movies—the kind of movies that people want to see in the movie theater—may be less affected by piracy than “story” movies that people are more content to watch on their computers. In other words, spectacle movies may benefit more from word-of-mouth advertising that piracy may provide while losing fewer customers to demand substitution, compared to story movies.

To aid this assessment, the authors constructed two measures of movie “spectacleness.” One used a movie’s release in 3D or IMAX formats as a proxy for that quality (because “spectacle” movies are the kinds of movies that people want to see in these especially immersive formats). A second categorized movies into genres associated with spectacleness and story-focus by measuring the number of movies in various genres nominated for the “best visual effects” Oscar (associated with spectacleness) as opposed to the “best original screenplay” Oscar (associated with story-focus).
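A rough sketch of how these proxies might be operationalized appears below. The genre sets are invented stand-ins; the authors derived their genre mapping from Oscar nomination counts rather than from a hand-picked list.

```python
SPECTACLE_GENRES = {"action", "adventure", "sci-fi"}  # assumed stand-in mapping

def spectacleness(movie):
    """Classify a movie as 'spectacle' or 'story' using the two proxies:
    immersive-format release first, then genre (illustrative only)."""
    if movie["formats"] & {"3D", "IMAX"}:
        return "spectacle"
    return "spectacle" if movie["genres"] & SPECTACLE_GENRES else "story"

print(spectacleness({"formats": {"IMAX"}, "genres": set()}))   # spectacle
print(spectacleness({"formats": set(), "genres": {"drama"}}))  # story
```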

Based on data for more than 400 movies, about half of which were pirated within the first 48 days of release, the authors concluded that piracy had the mixed effects they predicted. For films for which in-theater viewing adds value (“spectacle” films), there is a 13% increase in average daily box office revenue after the appearance of a high-quality pirated version of the film online. For story-focused films, on the other hand, there is as much as a 30% decline in average daily box-office returns after the appearance of a high-quality pirated version of the film online. This is consistent with the idea that piracy acts as a substitute for films focused on story, where the full value of the film can be consumed at home.

The authors’ findings shouldn’t be too surprising. Think for a moment about the music industry. Recorded music is more vulnerable to piracy than live music because a big part of the appeal of live music is the immediacy and communal experience of the concert. Such experiences cannot be replicated in a pirated recording.

So we might expect that during the post-Napster but pre-Spotify/Apple Music era, when online piracy was driving down revenues for recorded music, there would be an industry shift toward more focus on live music. That period indeed saw very rapid growth of big live music firms such as Ticketmaster and Live Nation. Moreover, during that period the rise of live music revenue very closely mirrored the decline of recorded music.

In 2000 (just after Napster’s debut), recorded music represented 53% of the global music industry. By 2017 (when paid streaming started to restore lost recorded music revenues), recorded music’s share of total music industry revenues had dropped to 38%, while live music went from 33% to 43% of the industry.

Something analogous is happening in the motion picture industry, although the effect is probably not as pronounced. That is, the industry’s product mix may have shifted toward “spectacle” films because these sorts of films tend to be more resistant to piracy. Indeed, the authors gesture in this direction, stating that their findings suggest that, because the value of a film is linked to its “spectacleness,” the industry would be wise to adjust its creative output on the margin—i.e., to produce more spectacle films—to blunt piracy’s effect rather than investing in the law enforcement efforts that would be required to reduce piracy by any substantial amount.

Alternatively, movie studios may seek to insulate story films against piracy by, for example, releasing them to streaming channels simultaneously with theatrical release. Or, maybe movie studios could invest directly in upgrading theaters for these story-focused films to enhance the in-theater viewing experience in other ways, such as by making the theater a place for fun and social interaction. (Theaters such as the Alamo Drafthouse already offer this kind of experience.)

If so, then the principal effect of movie piracy may not be to lower the overall demand for movies or the number of movies produced. It may be to shift the kind of movies produced, or, more subtly, to shift the way that movies are presented to the public. Unlike the relatively simple framework in which piracy leads to fewer movies, the real effect of piracy may be more subtle, and the case for investing significant resources (especially public resources) in anti-piracy efforts less clear.

Cite as: Christopher J. Sprigman, The Complex Effects of Piracy on the Movie Industry, JOTWELL (May 9, 2023) (reviewing Klaus Ackermann, Wendy A. Bradley & Jack Francis Cameron, Avengers Assemble! When Digital Piracy Increases Box Office Demand (June 30, 2020), available at SSRN), https://ip.jotwell.com/the-complex-effects-of-piracy-on-the-movie-industry/.

Policy Experimentation to Address Inequality Among Innovators

Nicholas A. Pairolero, Andrew A. Toole, Peter-Anthony Pappas, Charles A.W. deGrazia & Mike H.M. Teodorescu, Closing the Gender Gap in Patenting: Evidence from a Randomized Control Trial at the USPTO (Nov. 1, 2022), available at SSRN.

Inequality among innovators is a substantial social problem in terms of both equity and economic growth. For instance, Raj Chetty’s Opportunity Insights group has documented that if women, racial minorities, and low-income Americans invented at the same rate as high-income white men, then the rate of U.S. patenting would quadruple. They also note the glacial progress toward closing these gaps, such as the 118 years it will take to reach gender parity at the current rate.

These inequalities affect not only the rate of innovation, but also what kind of innovations are created—for example, all-female inventor teams are more likely to focus on women’s health. Unfortunately, the evidence base for policy interventions to reduce these innovation gaps remains depressingly shallow. Most policies are tested without a rigorous evaluation strategy or control group, making it difficult to determine whether they had any effect.

A new paper from the U.S. Patent and Trademark Office (USPTO), Closing the Gender Gap in Patenting: Evidence from a Randomized Control Trial at the USPTO, is a remarkable addition to this literature. For the first time ever, the USPTO has tested a policy intervention as a randomized experiment, allowing a credible evaluation of its effectiveness. Changes in patent policy have rarely been tested with any element of randomization, and never before by the USPTO itself. Even if this experiment had yielded null results, the effort would still have been laudable as a model for how agencies can assess the impact of a new policy and publicly disclose the results. But the experiment also documents that the intervention—a new program to help patent applicants without legal representation—led to a sizeable decrease in the gender patenting gap.

The USPTO’s experiment began in 2014, when it created a new “Pro Se Pilot Examination Unit” to help pro se inventors (those without professional assistance) through the patent examination process. Obtaining a patent is not a user-friendly process, with most patent applications receiving a “rejection” or even a “final rejection” (which is actually more akin to a “revise and resubmit”) before eventually being allowed.

One study suggests that around half of the patent gender gap is due to women being more likely to abandon their patent applications after these discouraging replies rather than persisting in this back-and-forth process with the patent office. To address a concern that pro se inventors may be particularly disadvantaged in this process—for reasons unrelated to the merits of their inventions—patent examiners in the Pro Se Pilot received training on strategies to assist these inventors. For example, examiners would encourage applicants to call them with questions and would proactively help applicants draft better patent claims.

In the same way that promising new medicines are rigorously tested in randomized controlled trials that assign patients to either the new treatment or a control group, the USPTO decided to test this new examination unit by randomly assigning pro se applicants to either the Pro Se Pilot or to the regular examination process. By comparing outcomes across the two groups, they found that the Pro Se Pilot increased the likelihood of receiving a patent for all pro se applicants, and that it had a particularly striking effect for women. The likelihood of receiving a patent increased by 6.1 percentage points for men compared with 16.8 percentage points for women. The gender effect was even larger among first-time U.S. applicants: the likelihood of receiving a patent increased by 5.8 percentage points for men and a remarkable 23.5 percentage points for women. These results provide strong causal evidence of the new program’s value in closing the patent gender gap for pro se applicants.
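The gap-narrowing arithmetic behind that conclusion is simple enough to state directly. The percentage-point gains below come from the paper; the summary calculation is mine.

```python
# Percentage-point increases in the likelihood of receiving a patent
# under the Pro Se Pilot, as reported in the study.
effects = {
    "all pro se applicants": {"men": 6.1, "women": 16.8},
    "first-time U.S. applicants": {"men": 5.8, "women": 23.5},
}
for group, gain in effects.items():
    print(f"{group}: pilot narrowed the gender gap by "
          f"{gain['women'] - gain['men']:.1f} percentage points")
```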

Of course, this intervention is only one small step toward addressing the innovation gender gap more broadly. Future research should investigate whether similar changes in examiner training could help reduce the patent gender gap for broader groups of applicants. Less than 1% of U.S. patent applicants are pro se, but the additional guidance provided through the Pro Se Pilot might also help a larger group of inventors, such as those at small and micro entities who are currently disadvantaged by lower-quality legal representation.

In addition, the USPTO should study whether the reduced gender gap persists beyond patenting. Receiving a patent is worth little in isolation; financially benefiting from patents depends on other institutions with their own gender biases, such as corporate rent-sharing and venture capital. The Pro Se Pilot increased the likelihood that a pro se applicant would receive a patent, but it is worth examining longer-term outcomes such as assignments of these patents, new patent applications from these inventors, and non-patent outcomes gathered by survey or by linking to other datasets.

But the need for further research should not detract from the monumental nature of this study, which has simultaneously tackled two problems of bipartisan interest: inequality among innovators, and the need for better evidence to improve government effectiveness. In 2018, President Trump signed the SUCCESS Act, which tasked the USPTO with studying and recommending solutions to the problem of inequality among innovators. And the USPTO’s current Learning Agenda—developed pursuant to the Evidence Act of 2018—commits the agency to developing evidence on how to improve the effectiveness of patent examination in general and to assessing participation in the patent system by underserved populations. The success of the first randomized controlled trial run by the USPTO on both of these fronts will hopefully inspire the use of rigorous experiments to test other policy interventions, both within and outside the patent context.

Cite as: Lisa Larrimore Ouellette, Policy Experimentation to Address Inequality Among Innovators, JOTWELL (April 4, 2023) (reviewing Nicholas A. Pairolero, Andrew A. Toole, Peter-Anthony Pappas, Charles A.W. deGrazia & Mike H.M. Teodorescu, Closing the Gender Gap in Patenting: Evidence from a Randomized Control Trial at the USPTO (Nov. 1, 2022), available at SSRN), https://ip.jotwell.com/policy-experimentation-to-address-inequality-among-innovators/.

Do Sex and Drug Associations Tarnish or Burnish Trademark Reputations?

Jake Linford, Justin Sevier & Allyson Willis, Trademark Tarnishmyths (Aug. 6, 2022), available at SSRN.

Federal trademark law now protects certain marks against “tarnishment.” If a mark is associated with “bad” things such as drugs or sex, the theory goes, that may harm the seller’s reputation and dilute the mark’s “commercial magnetism.” This sounds plausible enough in theory. But what if that is not how it works in practice?

In Trademark Tarnishmyths, Linford, Sevier, and Willis add to the growing literature that empirically tests this theory of trademark tarnishment. The authors conducted two experiments in which famous marks were associated with sex, drugs, or sacrilege. The authors assert that theirs is “the first test of whether drug-related and sacrilegious uses tarnish appropriated marks, in two separate studies.” (P. 9.)

In the first study, “subjects were shown images of target marks used to sell cannabis products or in off-color, sexual contexts.” (P. 1.) They found that, instead of decreasing the desirability of the target marks, these exposures actually increased or “burnished” the desirability of those marks.

In the second study, “subjects were shown banner ads with cannabis-infused Skittles and satanic-themed Sunday sales of Chick-fil-A sandwiches.” (P. 2.) They found that highly religious respondents “reported Chick-fil-A was less tasty” after being exposed to the sacrilegious imagery. But the same was not true for cannabis; “conservative respondents exposed to the drug-related stimuli reported Skittles were more wholesome compared to the control – another burnishment effect.” (P. 2.)

The authors conclude that “the case for tarnishment might be weak in circumstances where courts have been most willing to presume tarnishment occurs”—i.e., when a famous mark is associated with drugs or sex. (P. 2.)

Insofar as these studies cast doubt on the conventional theory of tarnishment, they have some important implications for trademark law and policy.

First, as the authors note, to the extent their results are generalizable, “courts should require some evidence of likely tarnishing effect before granting relief on a claim of dilution via tarnishment” instead of just assuming such an effect. (P. 9.)

Second, the authors suggest that “anti-tarnishment protection might well be due for congressional reevaluation or vulnerable to constitutional challenge on First Amendment grounds.” (P. 9.) Claims of trademark dilution, and tarnishment in particular, have always been in considerable tension with the First Amendment, as other scholars have noted. If the types of uses that courts have, to date, treated as presumptively tarnishing don’t actually tarnish—and may even burnish—the mark’s reputation, it’s hard to justify this (relatively recent) federal cause of action.

In light of expanding legalization of marijuana at the state level, I wonder whether cannabis would be viewed as “bad” by enough people to make it the best test case. After all, older cases dealt with things like cocaine. But considering the public reactions to the October 6, 2022 “Statement from President Biden on Marijuana Reform,” it appears that at least some politicians continue to think—or at least, think that their voters think—that cannabis is, in fact, a seriously bad drug. It would be interesting to see what would happen if these types of experiments were run using other drugs, such as methamphetamine.

Overall, this is a very valuable addition to the trademark dilution literature, and I highly recommend it.

Cite as: Sarah Burstein, Do Sex and Drug Associations Tarnish or Burnish Trademark Reputations?, JOTWELL (February 23, 2023) (reviewing Jake Linford, Justin Sevier & Allyson Willis, Trademark Tarnishmyths (Aug. 6, 2022), available at SSRN), https://ip.jotwell.com/do-sex-and-drug-associations-tarnish-or-burnish-trademark-reputations/.

How Do Patents Influence Cumulative Innovation?

Janet Freilich & Sepehr Shahshahani, Measuring Follow-On Innovation (Feb. 20, 2022), available at SSRN.

Ask any patent law student why we have a patent system, and they are likely to answer that patent law addresses a fundamental market failure: the free-riding by non-inventors on the inventions of others. A patent holder’s right to exclude others from making and using her patented invention addresses free-riding directly, restoring ex ante incentives to invest in innovation. But in solving the free-riding problem, patents create a second-order problem—one that is inextricably linked to the dynamics of innovation itself. Because all knowledge, and therefore all innovation, is cumulative, patents make innovations that build upon a patented feature more costly for parties other than the inventor, who must license an invention if they are to build upon it.

The problem of “follow-on” innovation has long preoccupied both economists and legal scholars. In their excellent paper, Measuring Follow-On Innovation, Janet Freilich and Sepehr Shahshahani contribute to this debate by bringing together both a deep understanding of patent law doctrine and precise econometric research. In so doing, they make important contributions not only to the empirical literature, but also to our current theoretical thinking about the impact of patents on follow-on innovation.

Freilich and Shahshahani’s key empirical contribution is a refinement of the measure used to capture the universe of follow-on innovations that are plausibly influenced by a patent. Two dimensions of patent law doctrine impact follow-on innovation. The first, patent breadth, determines how wide a net a specific patent casts—the more expansive the metes and bounds of a patent entitlement, the more follow-on innovations it will ensnare. The second, the collection of patent infringement doctrines, determines which types of activities in fact encroach upon the patent right. Their paper largely cabins the issue of patent breadth by focusing exclusively on measures of patent infringement.

Their article’s core claim is that existing empirical analyses of the impact of patents on follow-on innovation include activities that are non-infringing—and therefore, not the type of “patent related” follow-on innovation that those studies attempt to measure. The effect of this incorrect inclusion is large; in fact, Freilich and Shahshahani conclude that “little follow-on innovation is directly affected by the patent, with implications for theorizing the tradeoff between initial and follow-on innovation.” (P. 4.)

Freilich and Shahshahani focus largely on replicating and refining results in one important paper by Bhaven Sampat and Heidi Williams. In that study, Sampat and Williams measure follow-on innovation on patented and unpatented genes, using creative strategies to eliminate selection biases and concerns about claim scope.1 Freilich and Shahshahani’s refinement consists of eliminating non-infringing activities from the pool of follow-on innovations. Replicating Sampat and Williams’ methodology, they rely on articles published in scientific journals that discuss research on the patented genes as their measure of follow-on innovation. Specifically, their refinement removes from the pool of follow-on innovations those scientific publications that fall under the following well-defined non-infringement categories: (1) extraterritorial activities; (2) activities by persons affiliated with a state government; (3) activities by persons affiliated with the federal government; (4) activities generating information for submission to the Food and Drug Administration (FDA) under the 35 U.S.C. § 271(e)(1) safe harbor provision; and (5) using or importing results generated from a patented technique.
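In data-processing terms, the refinement is a filtering step over the pool of follow-on publications. A minimal sketch, with hypothetical field names and category labels, might look like this:

```python
# Well-defined non-infringement categories (labels are hypothetical).
NON_INFRINGING = {
    "extraterritorial",
    "state_government",
    "federal_government",
    "fda_safe_harbor_271e1",
    "imported_results_of_patented_technique",
}

def refine_follow_on_pool(publications):
    """Keep only publications that could plausibly infringe, i.e., those
    not tagged with any non-infringement category."""
    return [
        pub for pub in publications
        if not (set(pub["categories"]) & NON_INFRINGING)
    ]

pubs = [
    {"id": 1, "categories": []},                         # plausibly infringing
    {"id": 2, "categories": ["fda_safe_harbor_271e1"]},  # excluded
]
print([p["id"] for p in refine_follow_on_pool(pubs)])    # -> [1]
```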

Sampat and Williams found a small but statistically significant effect of gene patents on follow-on innovation. In light of other studies on gene patents,2 one might have expected that this paper’s correction would have uncovered a larger impact of gene patents on follow-on innovation. But the opposite was in fact the case. While Sampat and Williams found that gene patents had a small negative effect on follow-on innovation, Freilich and Shahshahani’s refinement found an even smaller effect. In short, with the new correction, patenting genes has an almost negligible impact on follow-on innovation.

Their refinement is important in no small part because it requires deep knowledge of patent infringement doctrines, including understanding which infringement doctrines produce predictable results ex ante that are unlikely to lead to litigation. This last point is critical: because the universe of follow-on innovations should include any invention for which a license would reasonably be required, their measure should only exclude those innovations for which we can be certain that a license would not have been sought ex ante. The importance of identifying clearly non-infringing activities also puts pressure on their empirical methodology. In this regard, one particular infringement area—the safe-harbor provision under 271(e)(1)—is worth discussing further.

The § 271(e)(1) safe harbor proves to be an important filter in their corrected measurement, eliminating 388 publications from the original pool of 2,771 follow-on publications. The provision is meant to exempt from infringement activities “reasonably related to the development and submission of information”3 to the FDA. Courts have interpreted this provision broadly to include many drug-development activities, including early-stage research. Recent caselaw, as Freilich and Shahshahani point out, excludes two key types of activities from this safe harbor: basic scientific research and patented technology that is not itself subject to federally regulated approval (also known as the research tools exclusion).

To identify publications that fall under the 271(e)(1) exception, the authors use a proxy: whether the publication discussed a potential therapeutic application. This proxy is cleverly designed but both potentially under- and over-inclusive, as the authors recognize. Some exempt research projects may not explicitly address therapeutic applications. On the other hand, it is plausible that scientific publications engaging only in basic scientific research mention therapeutic applications for the purpose of seeming more attractive to funders or publishing venues. The research tool exclusion could also lead to over-inclusivity concerns. Because correctly identifying research that is exempt under 271(e)(1) is likely to be crucial in future studies of follow-on innovation in the biomedical sciences, further refining this measure would be a fruitful avenue of future research.

Freilich and Shahshahani next turn to exploring how their refinement might help reconcile other studies on follow-on innovation—many of which found larger effects of patents on downstream innovation. They begin with an explanation rooted in sociological factors. University researchers, who are likely overrepresented in the sample of follow-on innovations measured by published scientific papers, may be largely unaffected by details of patent law doctrine. This effect can be explained by a robust underlying social norm that fosters the free sharing of information and materials, and a disregard for patent entitlements. The situation might be quite different for communities embedded in biotech start-ups and established pharma companies, where uncertainties about the content of 271(e) and doctrines designed to police claim breadth may steer research projects away from those that require the use of patented materials.

Their explanation reveals another interesting mechanism worth exploring in future studies on follow-on innovation: the impact of patents is likely to be mediated by communities’ social norms. This makes it crucial to understand those social norms and the likely distinct mechanisms by which communities of university researchers, biotechnology startups, and established pharmaceutical companies (among others) decide which follow-on research projects to pursue and which ones to abandon.

Finally, Freilich and Shahshahani consider how their measure may inform existing theories of innovation. As they recognize, “the proper measure of follow-on innovation depends on the hypothesized mechanism through which a patent might affect downstream innovation.” (P. 29.) Two quite different theoretical perspectives on the role of patents in follow-on innovation have emerged from the literature. The first one conceptualizes patents as increasing incentives for patent holders to invest in follow-on innovation. Under this theory, patents do not diminish follow-on innovation; rather, patents tend to concentrate follow-on research in the hands of fewer patent-holding inventors. In contrast, the second theory emphasizes the access costs that patents impose on follow-on innovators other than the patent holder.

Freilich and Shahshahani’s combination of detailed legal analysis with econometric research opens the door for testing a number of additional questions about the mechanisms of follow-on innovation. For example, one interesting question that emerges from these two theories is whether innovation by multiple parties is likely to be qualitatively different from concentrated innovation by a few patent-holding pioneers. Here, it would be interesting to further parse the authors’ refinement to test whether patents influence the kinds of follow-on innovation that take place. More specifically, do patents change the balance between incremental and breakthrough innovation? Network theories of innovation suggest that breakthrough innovation benefits from knowledge recombination across firm boundaries. By increasing access costs outside the firm, patenting may lead to more incremental and less breakthrough follow-on innovation.

A single paper cannot settle the debate between these two theories of follow-on innovation. But by emphasizing the importance of infringement doctrines in measuring follow-on innovation, and by showing how this measure modifies the experimental results of several prior articles, the authors move us towards a more precise answer, an answer that is likely to vary by industry and to impact basic and applied research in different ways.

  1. Bhaven Sampat & Heidi L. Williams, How Do Patents Affect Follow-On Innovation? Evidence from the Human Genome, 109 Am. Econ. Rev. 203 (2019).
  2. Fiona Murray & Scott Stern, Do Formal Intellectual Property Rights Hinder the Free Flow of Scientific Knowledge?: An Empirical Test of the Anti-Commons Hypothesis, 63 J. Econ. Behav. & Org. 648 (2007).
  3. 35 U.S.C. § 271(e)(1).
Cite as: Laura Pedraza-Fariña, How Do Patents Influence Cumulative Innovation?, JOTWELL (January 24, 2023) (reviewing Janet Freilich & Sepehr Shahshahani, Measuring Follow-On Innovation (Feb. 20, 2022), available at SSRN), https://ip.jotwell.com/how-do-patents-influence-cumulative-innovation/.

Blockchains as Technosocial Systems

James Grimmelmann & A. Jason Windawi, Blockchains as Infrastructure and Semicommons, __ Wm. & Mary L. Rev. __ (forthcoming 2023), available at SSRN.

In popular culture, blockchains (to the extent they are understood at all) are associated with cryptocurrency, and following the crypto crash of 2022, increasingly dismissed as part of a classic asset bubble. But legal scholars are more sanguine. They tout the potential of blockchain, or, more prosaically, “distributed ledgers,” to transform private law, from contracts to securities to property. Blockchains are hoaxes or panaceas depending on which source you consult.

James Grimmelmann and A. Jason Windawi’s sparkling essay, Blockchains as Infrastructure and Semicommons, charts a path between these two extremes. It does so by calling attention not to what blockchains can do for law, but to the novel question of what legal theory can tell us about how blockchains work. The essay leverages two influential notions from property—infrastructure and the semicommons—to deliver insights about blockchains as well as an object lesson in the value of looking at distributed ledgers through the lens of legal theory.

The co-authors first apply Brett Frischmann’s work to show how we can understand blockchains as infrastructure: blockchains (that is, the ledgers themselves, separate from the hardware that enables them or the assets they chronicle) are nonrival inputs that may be used in a variety of goods and services. This in turn highlights that to maximize the social value of blockchains, it is not enough to treat them as purely private goods, since this will fail to maximize the value they could otherwise generate in the form of positive spillovers. Hence, the essay points out, (public) blockchains operate (and should operate) as commons property in two senses: they can be used and read by anyone; and they are managed collectively through numerous maintainers rather than a single centralized owner.

The essay then analyzes blockchains through a second property theory: Henry Smith’s notion of semicommons. Semicommons are resources that are held privately with respect to some substantial uses and held in common with respect to others, where the private and common uses interact with one another. So, too, are blockchains. Some of their features—hardware, network connection, and the significant work necessary to operate them—are private. Others—most notably, the ledger itself—are common.

This semicommons nature of blockchains explains two key features of how blockchains work. First, the influence of those who use their private resources to mine is proportional to their computational resources, which makes corruption and collusion costly. Second, the public/private interface incentivizes users to agree on a shared governance standard (e.g., the convention that the state of the ledger is defined by the longest chain). Participants cannot go rogue and ignore the majority’s rule without forfeiting their assets.
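The longest-chain convention mentioned above can be illustrated with a toy fork-choice rule. The structure and names here are hypothetical, and real clients choose among forks by accumulated proof-of-work rather than raw block count, but the governance point is the same: everyone converges on one shared ledger state.

```python
def canonical_chain(forks):
    """Pick the longest fork as the authoritative state of the ledger
    (toy version of the longest-chain convention)."""
    return max(forks, key=len)

fork_a = ["genesis", "b1", "b2", "b3"]
fork_b = ["genesis", "b1", "b2'"]
print(canonical_chain([fork_a, fork_b]))  # -> fork_a
```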

At this point, one might think that the co-authors are, like most others who have written about law and blockchains, unalloyed enthusiasts of distributed ledgers. As they observe, “The key technical features of a blockchain…fit together like the parts of a finely engineered watch.” (P. 19.) The third and final part of the essay complicates this story. The authors detail six ways, illuminated by infrastructure and semicommons theory, in which blockchain governance can break down. For example, they observe that the technology that animates blockchains—the protocols that describe their operation and the software that implements it—is conceived and written by humans. It is in this sense a pure public good, since protocols and software are information that is typically left open-source. But this raises the risk that, as with all public goods, individuals will lack sufficient incentive to supply it. Hence, blockchains typically include private incentives for their creation, which in turn create their own risks of self-dealing that, again in turn, necessitate governance mechanisms to constrain these risks.

This and the other problems highlighted in the essay’s final part sound a common theme: that while blockchains appear to be self-governing, that appearance is false and obscures the extent to which they—like all institutions—are created by humans, depend on human intervention, and may suffer from the frailties of their human creators and managers.

At the core of how many observers (mis)understand blockchains is the assumption that they are mere technologies: protocols and code that, once created, operate without any need for intervention. As Grimmelmann and Windawi eloquently put it, “Blockchains are technosocial systems, not just technologies.” (P. 30.) This insight exposes the inevitable presence of human actors in implementing and governing blockchains, and frames the promise for legal scholars of considering blockchains as institutions rather than just technical processes.

Grimmelmann and Windawi’s article is a striking departure from the current run of scholarship about law and blockchains. Nearly all writing in this vein follows the same pattern. It briefly describes how distributed ledgers work and then shows their promise for a particular area of private law. What the co-authors illuminate is not that this field-specific approach is flawed—on the contrary, in many instances it has produced valuable insights about how blockchains may transform law—but that it is incomplete. The field-specific approach does not delve into the operation of blockchains themselves; rather, it describes and accepts their operation uncritically.

The essay, by contrast, examines blockchain not as a means to some legal end, but as an institution in itself as worthy of investigation as any of the other institutions that legal scholars scrutinize. And the richness of the authors’ deployment of property theory as the lens they use to analyze distributed ledgers is not only engrossing and insightful on its own terms, but demonstrates the potential of this largely untapped modality of legal scholarship about blockchains.

Another upside of the authors’ approach is that it counters the oversimplified dichotomy that often prevails with respect to treatments of blockchains. Public discourse about distributed ledgers tends to polarize between dismissive critics and true believers. Most of the law review literature falls into the latter category, with scholars touting the transformative potential of blockchains for particular subsets of private law. By investigating the operation of blockchains themselves, the co-authors are able to offer an informed, measured take. They reject the notion that distributed ledgers are mere hype while also stating realistic reservations about the governance challenges these systems face.

Cite as: David Fagundes, Blockchains as Technosocial Systems, JOTWELL (December 12, 2022) (reviewing James Grimmelmann & A. Jason Windawi, Blockchains as Infrastructure and Semicommons, __ Wm. & Mary L. Rev. __ (forthcoming 2023), available at SSRN), https://ip.jotwell.com/blockchains-as-technosocial-systems/.

Centering Black Women in Patent History

Kara W. Swanson, Centering Black Women Inventors: Passing and the Patent Archive, 25 Stan. Tech. L. Rev. 305 (2022).

Professor Kara Swanson’s latest article is a remarkable example of legal historical scholarship that excavates stories from the past to illuminate the present. It is chock full of archival evidence and historical analysis that explains gaps and silences in the United States patent registry as evidence of marginalized inventors–particularly Black women–who should be named inventors but are not.

The article is arresting reading for anyone interested in antebellum history, intellectual property, and the intersection of racism and sexism in law. Mostly, I am grateful to Professor Swanson for doing the obviously very hard work of digging through archives, reading microfiche, and scouring other primary and secondary sources for what she calls the “whispers” of Black women inventors of our past whose stories need to be told to change the narrative of U.S. inventorship.

The main focus of Professor Swanson’s archival research is discovering the “true inventors” of certain patented inventions. Doctrinally, patents are only supposed to be issued to the “true inventor”–the person who conceived and reduced to practice the invention. Unlike in copyright law, there is no “work for hire” doctrine that constructively names the hiring entity or the supervisor as the “author,” or in this case the “inventor.” Patents issue in the name of the inventor and must be assigned afterwards to an employer or business entity or partner. Patents with incorrect inventorship–or false inventorship–are invalid.

Professor Swanson explains that the “true inventor” doctrine has been a rule honored in the breach, especially in the early years of the patent system. Many patents were filed in the name of businessmen who were not inventors and who were permitted by the true inventor to file the patent in another name in exchange for money or later benefits. That is, a true inventor would let another person–a more “believable” person–file the patent in their own name (becoming as Swanson calls them a “false inventor”). This would avoid the later hassle of an assignment and the possible trouble of being second-guessed in the patent office if you didn’t “look” like an inventor (because you were Black or female, for example).

Swanson characterizes this behavior of true inventors with marginalized identities as “situational passing,” in the vein of a marginalized identity passing as a dominant one–Black as white, or gay as straight–in order to experience benefits that should otherwise be rightly theirs, like equal treatment or fair pay. Swanson’s claim is that “situational passing” in the context of patent inventorship happened with surprising frequency and certainly much more often than the current record suggests, which is hardly ever, because “passing is intended to leave no trace” as she eloquently writes. (P. 54.)1

One true inventor who, Swanson surmises, made such a strategic choice was Ellen Eglin, a Black woman who worked as a laundress, who “sometime during or before 1888,…reportedly sold the rights to her improved clothes-wringer to ‘an agent’ for $18.” (P. 2.) Eglin explained: “You know I am black and if it was known that a [Black] woman patented the invention, white ladies would not buy the wringer”.2

Eglin’s few words are “one whisper that tells a story of a Black woman who found a way to contribute to U.S. invention and participate in the patent system.” (Pp. 6-7.) Swanson’s article brings to life the whispers of dozens more, “upend[ing] our understanding of the patent archive itself.” (P. 9.)

Swanson contrasts Eglin’s story with those of Black male inventors and white female inventors, who also used race and gender passing to achieve personal and professional goals and avoid discrimination on the basis of their race and sex. For example, Swanson describes Black men who filed patents but reasonably kept their race hidden by keeping a physical distance from the patent office and relying on the presumption that all inventors were white men. One such man, Thomas Jennings, was known to be one of the first Black U.S. patent inventors (his patent dated 1821) only because of his obituary more than thirty years later in 1859. (P. 39.) As Swanson explains, it took an obituary to “giv[e] voice to a silence in the patent record.” (Id.)

Women inventors had the added problem of coverture, that is, a legal rule that denied married women the right to own property, and thus to hold a patent in their own name. For married women, one solution was including their husbands as co-inventors, even when this was not really true. This diluted their contribution while not entirely erasing their role in the creative feat. For example, Florence Layman, née Parpart, married her financial backer Hiram Layman, who was named on patents she filed both before marriage and after. (P. 50.)

Women also faced the problem of “notoriety,” which Swanson describes as “incompatible with femininity…[and] womanly modesty.” (P. 52.) To counteract this, women sometimes tried to obfuscate their role or erase themselves from the patent records by omitting their names in favor of their husbands or other business agents. Swanson scoured the patent archive for traces of married couples, or for husbands alone whose patents were for household inventions (dusters and other cleaning tools or sewing machines), where wives alone more likely conceived and reduced to practice the invention.

By digging into the archives this way, and by better understanding the life details of dozens of marginalized inventors, some who were Black and some who were female, Swanson schematizes the strategic circumstances of intersectional identities, here Black women like Ellen Eglin. This helps explain how and why Black women like Eglin acted as they did. Swanson posits the existence of many, many more Black women inventors like her.

What does all this history tell us about today? First, it should make us skeptical of the patent archive and its conventional narrative of who is and who is not an inventor. Swanson’s historical analysis demonstrates the very strong likelihood of systematic underrepresentation in the patent archive of Black people and women who were true inventors and overrepresentation of white men inventors who were not. Relatedly, “each inventor who passed as a white man strengthened the plausibility of the false but widespread belief that marginalized peoples were incapable of invention.” (P. 54.)

Importantly, the absences in the patent archive do not indicate a lack of inventiveness by those not represented there, but, to the contrary, when combined with archives elsewhere strongly suggest profoundly inventive behavior of marginalized people who in the face of “bias, inequality and painful self-denial” demonstrated “agency, accomplishment, and pride.” (P. 68.) What Swanson calls “false truths” of inventorship have “consequences…because of the authority of patent records.” (P. 69.)

The story of who is or who is not an inventor influences who will become an inventor or who considers inventorship a possible or desirable status. The importance of this legal history for today is to promote widespread and inclusive innovation because false truths of white male inventorship “discourage marginalized people from invention [which] is a national loss, harming ‘America’s long-standing economic prosperity and global leadership in innovation’ by failing to include all who can contribute to technological creation. That loss is symbolic as well as actual.” (P. 71.)

After reading Swanson’s article, no one can read the patent archive the same way again.

  1. Swanson also describes in detail another form of “false inventorship” in decidedly non-consensual transactions such as when an enslaved person’s invention was patented by their enslaver. (P. 55.)
  2. Language in brackets modernized by Professor Swanson for purposes of publication.
Cite as: Jessica Silbey, Centering Black Women in Patent History, JOTWELL (November 15, 2022) (reviewing Kara W. Swanson, Centering Black Women Inventors: Passing and the Patent Archive, 25 Stan. Tech. L. Rev. 305 (2022)), https://ip.jotwell.com/centering-black-women-in-patent-history/.

Making Failure To Function A Normative Assessment

Lucas Daniel Cuatrecasas, Failure to Function and Trademark Law’s Outermost Bound, 96 N.Y.U. L. Rev. 1312 (2021).

In recent years, the Trademark Trial and Appeal Board has increasingly relied on the so-called “failure to function” doctrine to reject applications to register marks suffering from a number of quite different deficiencies. But all these varied deficiencies are said to prevent the putative marks from identifying the source of the products to which they are affixed, that is to say, from functioning as marks. The doctrine’s flexibility (and perhaps its opaqueness) has caused many scholars (most notably, Alexandra Roberts) to embrace its capacity both to reconfigure what they perceive as unduly generous approaches to trademark registration and to ensure that validity determinations take account of the context in which marks are used.

In Failure to Function and Trademark Law’s Outermost Bound, an excellent law review note, Lucas Daniel Cuatrecasas carefully and critically catalogues the different strands of failure to function case law at the TTAB, demonstrates the inconsistencies in its application by the Board, and offers a creative reform that highlights the under-appreciated multivalence of trademark law. Cuatrecasas’ organization, and careful dissection, of TTAB decisions on the doctrine is invaluable. But the reform proposal that derives from this analysis is even more illuminating.

Cuatrecasas detects more complex normative underpinnings to the failure to function case law than conventional accounts suggest. Because most scholars situate the failure to function inquiry in the Board’s assessment of consumer perception, proposed tweaks of failure to function doctrine typically play with integrating the inquiry into assessments of distinctiveness.

But Cuatrecasas suggests that the TTAB sometimes also engages in competition-based reasoning in its failure to function cases. He argues that the commendable objectives of the failure to function doctrine would be better served by enlisting other, more explicitly competition-grounded doctrines of trademark law. In particular, he proposes that the aesthetic functionality inquiry (presently relevant only to non-word marks) should be extended to all marks. Thus, all marks could be rejected if their registration “would significantly hinder competition in the relevant market.” For example, an applicant would be unable to register the slogan mark “I ♥ DC” on a T-shirt; it would hinder competition in ways that would trigger the aesthetic functionality doctrine.

Cuatrecasas recognizes that the distinctiveness inquiry (perhaps aided by inquiries about “use”) already accommodates (or should accommodate) some of what is achieved by the failure to function doctrine. A determination that a term is descriptive will preclude its registration, at least absent acquired distinctiveness, because such a mark will not identify the source of the product on which it is affixed. Thus, it fails to function as a mark for consumers. But we might simply say that, as a result, such a mark lacks distinctiveness.

And assessments of distinctiveness need not be confined to abstract interpretations of the meaning of words. The parallel doctrinal mechanisms for determining inherent distinctiveness of non-word marks make attention to context explicit. One such test, the Seabrook test, asks how a design mark would be understood by consumers in light of the designs already existing in the marketplace and whether the use of the design mark would make a separate commercial impression on consumers in the larger context (such as the overall product packaging) in which the mark appears.

Nothing should stop any of this from happening with assessments of words. This is especially true with use-based applications for registration or in an infringement action where the context of a plaintiff’s use is on the table. Use and distinctiveness have always been intertwined, as can be seen from the definitions that courts offer about “use” (intent-to-use applications based upon inherent distinctiveness are the biggest challenge, and the context where Roberts’s agenda-setting article was of greatest importance).

Cuatrecasas argues for more explicit assessment of competitiveness concerns in the analysis of a word mark’s validity. A failure to function doctrine that incorporated aesthetic functionality-like considerations might do just that, in a way that Cuatrecasas suggests would be more valuable—and, based on his analysis of TTAB caselaw, more reflective of concerns expressed by the Board.

I might take issue with the deployment of aesthetic functionality as a tool for greater certainty. That particular doctrine is notoriously enigmatic and difficult to apply. But Cuatrecasas’ proposal highlights the important point that trademark validity is not simply an empirical question about consumer association.

Although aspects of the distinctiveness doctrine historically recognized this, and incorporated competitiveness concerns, the Supreme Court’s recent 8-1 decision in Booking.com arguably casts such approaches in doubt. In that climate, for pragmatic reasons, we may need safety-valves that recognize the importance of competitiveness concerns in the judicial assessment of trademark validity. Cuatrecasas’ article intriguingly puts a revised failure to function doctrine in play as a candidate for such a safety-valve.

Cite as: Graeme Dinwoodie, Making Failure To Function A Normative Assessment, JOTWELL (October 14, 2022) (reviewing Lucas Daniel Cuatrecasas, Failure to Function and Trademark Law’s Outermost Bound, 96 N.Y.U. L. Rev. 1312 (2021)), https://ip.jotwell.com/making-failure-to-function-a-normative-assessment/.

Owning Metadata: Dispelling Copyright Misunderstandings About NFTs

Andres Guadamuz, The Treachery of Images: Non-Fungible Tokens and Copyright, 16 J. Intell. Prop. L. & Prac. 1367 (2021).

In a memorable skit on Saturday Night Live, Pete Davidson-as-Eminem inquires about what a non-fungible token (“NFT”) is and why they are selling for so much money. In this succinct article, The Treachery of Images: Non-Fungible Tokens and Copyright, Professor Guadamuz admirably answers the first question and explains why the second one is a puzzle. With respect to the market for NFTs, he explains why those using copyright law to capture the economic value in this emerging market often misunderstand how distinct the exclusive rights under copyright are from ownership of a non-fungible token on a blockchain.

Based in the United Kingdom, Professor Guadamuz applies U.K. copyright law to determine whether and when the process of creating and selling an NFT may (or may not) implicate copyright rights. For the most part, this analysis tracks that under U.S. law, but for readers based in the United States, this aspect of the article also yields comparative insights about when and why the differences between U.S. and U.K. copyright law may matter.

So What Is an NFT?: The question is really two: what is a “token,” and what makes it “non-fungible”? The answer to both starts with a blockchain, that is, a cryptographically secured, distributed, and decentralized ledger that records transactions in an immutable record. The ledger’s immutability enables digital assets to be unique, and cryptography is supposed to protect against copyists who would undermine a token’s “non-fungible” character by distributing counterfeit copies. Various kinds of data can be recorded on such a ledger. A “token” is one type of data: a programmable digital unit recorded on the blockchain. Functionally, a token is a piece of code that can represent any digital or physical object, and this code can be unique.
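To make the abstraction concrete, a minimal sketch in Python may help. (This is purely illustrative: the Token class, record function, and ledger below are invented for exposition and correspond to no real blockchain API.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    """A token is just a programmable unit of data recorded on the ledger."""
    token_id: int          # unique value assigned when the token is created
    contract_address: str  # identifies the contract that manages the token
    payload: str           # can stand for any digital or physical object

# The ledger is append-only: entries are recorded but never altered or
# removed. That immutability is what lets a token stay verifiably unique.
ledger: list[Token] = []

def record(token: Token) -> None:
    ledger.append(token)

record(Token(token_id=1, contract_address="0xabc123", payload="an image"))
```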

While the use of NFTs in connection with copyrightable works of art has attracted considerable interest, art is merely one application among many. Most important for copyright lawyers, Professor Guadamuz nicely explains how and why, in most cases, the NFT is metadata about a copy of a copyrighted work, not about the intangible work of authorship itself. In the case of digital art, a digital copy of the work frequently is used in the creation of the metadata, but the resulting metadata recorded on the blockchain as a token generally is not a copy of the work of authorship. This is the central point of confusion in much copyright discourse around NFTs.

To dispel this confusion, Professor Guadamuz carefully describes, with a copyright-minded reader in view, the process of creating, a.k.a. “minting,” an NFT of a digital image. His example uses the Ethereum blockchain, which supports, among other things, the Ether cryptocurrency and is currently the most popular blockchain for recording data about copies of works of art. The blockchain uses technical standards for recording both fungible and non-fungible tokens.

As he explains, the two key components of the relevant standard are the tokenID and the contract address. The first is a unique value generated upon creation of the token. The contract address is the source of some copyright confusion because a copy of a digital work can be used to generate the contract address, but the address itself is metadata about that copy. In Professor Guadamuz’s example, the contract address is 0x8c5aCF6dBD24c66e6FD44d4A4C3d7a2D955AAad2, which no one would argue is a reproduction of the original expression in the source work. When combined with the tokenID and any other optional data that the relevant standard permits, the token is unique: a unique piece of metadata about a copy of a copyrighted work.
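The intuition that an address-like string derived from a copy is not itself a copy can be illustrated with a short sketch. (A toy analogy only: real Ethereum contract addresses are not computed by hashing an image file, and the file name here is hypothetical.)

```python
import hashlib

def identifier_for(file_bytes: bytes) -> str:
    # Derive a short, address-like hex string from a digital copy. The
    # output references the copy but reproduces none of its expression.
    return "0x" + hashlib.sha256(file_bytes).hexdigest()[:40]

with open("artwork.png", "rb") as f:   # hypothetical file name
    print(identifier_for(f.read()))    # prints a 42-character hex string

# Paired with a tokenID, such an address yields a globally unique
# (contract address, tokenID) key, even though neither component contains
# any of the underlying work's protected expression.
```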

I encourage readers to work through Professor Guadamuz’s full description of how NFTs work at a technical level. For purposes of this review, the key factual takeaway is that, although the metadata file that constitutes the NFT is unique, nothing technically prevents the owner of a copyrighted work (or of any other referent of the metadata) from minting additional tokens. Each token will be unique, but an unlimited number of these unique tokens can represent the same copy of a copyrighted work.
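A few lines of Python make the point. (Again purely illustrative; the dictionary fields, contract value, and URI are invented.)

```python
# Mint several tokens that all reference the same copy of the same work.
same_uri = "https://example.com/same-image.png"   # hypothetical URI

ledger = [{"contract": "0x1111", "tokenID": i, "uri": same_uri}
          for i in range(3)]

# Every (contract, tokenID) pair is distinct...
assert len({(t["contract"], t["tokenID"]) for t in ledger}) == 3
# ...yet all three unique tokens point at one and the same copy.
assert len({t["uri"] for t in ledger}) == 1
```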

Scarcity is the source of much of the economic value of, and speculation in, markets for NFTs, but at a technical level the scarcity exists only in the metadata file. Any other constraints that would provide scarcity in relation to a copy of the copyrighted work, or in the work itself, must be legal constraints. If you record a song, mint an NFT with that file, and transfer the NFT to me, what do I own? Presumably I own title to the NFT. You could also represent that ownership of title to the NFT entitles the owner to hold title to a particular copy of the song. In that case, it is the law that gives effect to your intent to transfer title in a copy of the work. Without that additional legal element, transactions in NFTs are transactions in title to metadata.

Professor Guadamuz identifies examples showing that the press and, likely, many participants in the NFT marketplace do not understand this. After describing some high-profile NFT transactions, such as Jack Dorsey’s $2.9M sale of an NFT of his first tweet, Professor Guadamuz very rightly points out: “It seems difficult to believe that all these buyers are spending such amount[s] of money for what amounts to a short metadata file and a string of numbers and letters of dubious artistic value, but in fact, this is what most NFTs actually are.” (P. 1372.)

What’s Copyright Got to Do With It?: A reader knowledgeable about copyright would likely answer, “nothing,” and Professor Guadamuz mostly agrees. He discusses some transactional uses of NFTs, including a “tick box” option to indicate that transfer of the NFT also transfers ownership of the copyright in the NFT’s referent work. The law in the U.K. is similar enough to U.S. law that the issue is primarily whether ticking that box suffices as a signed writing by the author. He demonstrates that using NFTs for licensing purposes is difficult, but the NFT can be programmed to act as a royalty collection vehicle, which potentially could create an automated droit de suite or otherwise serve as a payment mechanism.

With respect to rights enforcement, he first works through the possible claims that the act of minting an NFT, even without the copyright owner’s authorization, could directly infringe the rights to reproduce, adapt, or communicate the copyrighted work to the public. The only plausible claim would involve an NFT that includes a hyperlink to a copy of the source work, as many NFTs do. Under the complex jurisprudence of the European Union, one could treat the minting of an NFT that includes such a hyperlink as a communication to the public, but only under narrow circumstances. The analysis under U.S. law would be similar, except that inclusion of the link would not directly infringe any exclusive rights and could at most give rise to secondary liability for infringement of the rights to display or perform the work publicly. This result could change if the appellate courts agree with some district courts that have held that an embedded hyperlink exercises the right of public display.

In this article, Professor Guadamuz performs an important public service by patiently walking the reader through the technical complexities of how NFTs work and then, with similar care, by considering the potential copyright implications for the creation and transfer of NFTs.

Cite as: Michael W. Carroll, Owning Metadata: Dispelling Copyright Misunderstandings About NFTs, JOTWELL (September 14, 2022) (reviewing The Treachery of Images: Non-Fungible Tokens and Copyright, Andres Guadamuz, 16 J. Intell. Prop. L. & Prac. 1367 (2021)), https://ip.jotwell.com/owning-metadata-dispelling-copyright-misunderstandings-about-nfts/.

Synthesizing Seclusion Law

Sonia Katyal & Charles Graves, From Trade Secrecy to Seclusion, 109 Geo. L.J. 1337 (2021).

In their new paper, From Trade Secrecy to Seclusion, Charles Tait Graves and Sonia Katyal have performed a monumental service for researchers interested in the ways that firms and governments have leveraged trade secret and confidentiality laws to prevent the disclosure of large swathes of information. I describe Graves and Katyal’s effort as monumental not just for its scholarly value, but also for its breadth (and length). They catalog an enormous variety of “increasingly aggressive attempts to use the law to shield information from the public eye that either does not fall within the traditional, market-competitive ambit of trade secrecy at all, or that faces a strong public interest for at least some degree of disclosure.”

Examples abound, from Merck’s attempts to shield data on the harms of its drug Vioxx, to the unwillingness of manufacturers of voting machines and criminal justice algorithms to share their source code, to employers’ claims that employee diversity and salary data are confidential trade secrets. In these and other examples, Graves and Katyal demonstrate how firms have pushed trade secrecy law beyond its traditional bounds of promoting innovation by preventing unfair competition. Coupled with the considerable deference that the government shows to corporations faced with FOIA requests, these efforts, they argue, are creating an “all-purpose seclusion doctrine.”

Now, if you’ve been following some of the scholarship in this area, you may already recognize these issues from your reading of Chris Morten and Amy Kapczynski, David Levine, Rebecca Wexler, Hannah Bloch-Wehba, and Jamillah Bowman Williams. But this is part of the article’s immense value. Graves and Katyal have recognized and articulated a theme running through much recent scholarship. Importantly, they have brought it all together in one paper, and, admirably, they credit no fewer than twenty scholars with above-the-line mentions (including Hrdy, Varadarajan, Sandeen, Brauneis, Goodman, Menell, Feldman, Bone, Lemley, Samuelson, Lyndon, Calo, Citron, Chessman, and Vladeck). Sadly, neither Graves and Katyal’s cataloging efforts nor their humility in recognizing so many others’ contributions is incentivized by contemporary scholarly practices.

But Graves and Katyal haven’t simply produced an annotated bibliography. Their article taxonomizes the disparate practices they identify, helpfully organizing them into three distinct categories. Examples like Vioxx they label “investigative concerns,” where journalists, whistleblowers, or other researchers attempt to report unflattering information about firms. Separately, claims of secrecy for voting machines and criminal justice algorithms are categorized as “delegative concerns,” because governments or parties acting on their behalf claim secrecy for core governmental functions. Finally, “dignitary concerns” involve situations where firms try to shield information about their workplaces, including information about salaries, diversity, and workplace injuries.

The authors also begin to assign blame for trade secrecy’s expansion into a general seclusion law. Graves and Katyal fault trade secrecy law itself for its uncertain theoretical underpinnings and its weak claiming doctrine. They also note that firms are both more motivated to claim confidentiality and more easily able to do so when so much of their value is embodied in software. Finally, the Supreme Court’s recent opinion in Argus Leader Media has broadened FOIA exemptions to include information that would not traditionally count as a trade secret. Perhaps this is a matter for a later article, but I would have valued further analysis of the relative contributions of these and other potential causes.

The article concludes with normative proposals for reining in trade secret law’s expansion, including general rules like a more robust standing doctrine, a heightened claiming requirement, and the inclusion of defenses like trade secret fair use. Graves and Katyal also mention more targeted interventions for particular misuses of trade secrecy. Again, many of these suggestions have been floated elsewhere, and the authors have helpfully brought them together for the reader. Given the synthetic nature of the paper, the task of providing a more focused analysis of the various proposals’ respective costs and benefits remains for a later paper.

As someone just making my way into this field, I’m enormously grateful for the work that Graves and Katyal have done. They demonstrate the value of synthetic, taxonomic research, and they have done so in a way that appropriately credits others’ contributions. This is an excellent model for junior scholars across the legal academy.

Cite as: Christopher J. Buccafusco, Synthesizing Seclusion Law, JOTWELL (August 1, 2022) (reviewing Sonia Katyal & Charles Graves, From Trade Secrecy to Seclusion, 109 Geo. L.J. 1337 (2021)), https://ip.jotwell.com/synthesizing-seclusion-law/.