The Journal of Things We Like (Lots)

When Social Norms for Innovation Don’t Maximize Social Welfare

Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018).

For more than a decade, intellectual property scholars have demonstrated how informal social norms can efficiently substitute for formal legal protections to promote creativity and innovation. Now come Stephanie Bair and Laura Pedraza-Fariña to show that social norms can have a less happy effect. In some instances, they argue, informal social norms may impede innovation. Their article, Anti-Innovation Norms, explores several of these norms and what might be done about them.

According to the standard economic story, IP rights exist to solve a public goods problem. In the absence of IP rights, rampant copying would reduce the ability of creators to recoup their investments, and society would face underinvestment in new works and inventions. IP rights solve this problem by enabling creators to charge supramarginal prices for those works and inventions. Although IP rights are costly, they produce net benefits by encouraging creativity that we wouldn’t otherwise have.

Yet a still-expanding list of studies has shown that in many industries, including fashion design, cooking, stand-up comedy, and tattooing, the lack of formal IP protection has not killed creativity. In these and other fields, robust social norms that promote creativity and attribution and that shun certain kinds of copying can replace formal legal protection more flexibly and at lower social cost. Norms can make free riding costly, and they support those engaged in creative work.

Professors Bair and Pedraza-Fariña do not gainsay the importance of social norms to limit the effects of free riding in many low-IP fields, but they suggest that scholars’ focus on free riding has caused them to miss other, less salutary, social norms that affect innovation. In particular, Professors Bair and Pedraza-Fariña demonstrate the existence of a group of norms that undermine an especially valuable source of innovation—disciplinary boundary crossing. While many of the most important innovations in science, technology, and the arts come from the merging of insights from previously unrelated fields, creators can face substantial disincentives to engage in boundary-crossing work.

Professors Bair and Pedraza-Fariña have, themselves, not been dissuaded from crossing disciplinary boundaries to substantiate their account. They turn first to sociological studies of science and the arts in order to identify three anti-innovation norms: research priority norms, methodology norms, and evaluation norms. These norms tell people within a given field (1) to work on projects that the field deems valuable, (2) to apply methods deemed acceptable to those projects, and (3) to judge the value of contributions to the field on terms that the field prefers. Drawing on examples from physics, medicine, and contemporary visual art, the authors demonstrate how anti-innovation norms create field-specific professional and reputational costs that discourage creators from pursuing potentially more valuable opportunities at or beyond the borders of their fields.

Continuing their own boundary crossing, Professors Bair and Pedraza-Fariña next turn to social psychology to explain possible sources of anti-innovation norms. They focus on two aspects of individual decision-making, the status quo bias and the conformity bias, as the likely culprits. For a variety of reasons, people tend to be strongly attached to the way things are now, and it can be surprisingly difficult to get people to switch away from their present status. In addition, people generally like to conform their behavior to that of others in their social groups. Conformity aids group membership and solidarity and the benefits they confer. The authors speculate that these two tendencies can make it harder for individual creators to break away from accepted paths and procedures and to risk forging their own identities.

Having established the existence of anti-innovation norms, Professors Bair and Pedraza-Fariña return to their disciplinary home to consider whether law can do anything to solve the problems they’ve articulated. In large part, they are skeptical of IP law’s ability to encourage boundary-crossing innovation in particular, because copyright and patent tend to reward innovations without regard to whether they combine diverse insights. They argue that IP’s reliance on decentralized market signals, which is normally considered a feature, can also be a bug when those signals are filtered through a norm-inflected view of the technological or artistic environment. Although the authors suggest some tweaks to IP doctrine, their policy proposals focus instead on using grant funding and tax incentives to encourage and reward boundary-crossing creativity.

Professors Bair and Pedraza-Fariña have helpfully reminded IP scholars that social norms are not always a good thing for creativity. In addition to the norms against boundary crossing, other norms may discourage certain kinds of creators or certain kinds of creativity from emerging. As we develop a richer view of the sociology and psychology of innovation, we should pay attention to the costs of informal ordering as well as to its benefits.

Cite as: Christopher J. Buccafusco, When Social Norms for Innovation Don’t Maximize Social Welfare, JOTWELL (April 30, 2018) (reviewing Stephanie Bair & Laura Pedraza-Fariña, Anti-Innovation Norms, 112 Nw. U. L. Rev. 1069 (2018)), https://ip.jotwell.com/when-social-norms-for-innovation-dont-maximize-social-welfare/.

Designing Design Patent Subject Matter

Sarah Burstein, The Article of Manufacture in 1877, 32 Berkeley Tech. L.J. 1 (2017).

Designs for articles of manufacture are eligible for design patent protection under section 171 of the Patent Act if they are “new, original and ornamental.” As is clear on the face of this provision, design patent subject matter is limited to designs for “articles of manufacture”—other types of design, including architectural design, would seem to be excluded. But what exactly is included? What is a “design for an article of manufacture”? In an important new article, The Article of Manufacture in 1877, Sarah Burstein provides critical historical context on that question.

Courts and the Patent Office have in recent years given “article of manufacture” a very broad interpretation. They have, for instance, accepted as design patent subject matter the design of a spray pattern for a water fountain, the design of the shank portion of a drill bit, and graphical user interface (GUI) designs (so long as they are depicted in a little rectangle to indicate—nominally—that they show up on a screen). As a subject matter threshold, then, the article of manufacture requirement has in recent years excluded very little from design patent protection.

But the Supreme Court’s decision in Apple v. Samsung made the relevant article of manufacture central to determining damages, putting new pressure on design patent law’s understanding of that concept. The issue in Apple v. Samsung was the appropriate way to calculate damages in a case involving a “partial design”—that is, a case in which the design patent claimed only part of the article to which it was applied. Design patent has a special damages rule under which an infringer is “liable to the owner to the extent of his total profit, but not less than $250.” 35 U.S.C. § 289. Congress added that rule to the Patent Act after the Supreme Court’s decision in Dobson v. Hartford Carpet Co., 114 U.S. 439 (1885), in which the Court refused to allow the owner of a patented carpet design to recover the full profits from the sale of infringing rugs. According to the Court, profits attributable to the design had to be distinguished from those attributable to the “intrinsic merits of quality and structure” of the article. Thus, to recover the entire profits from the sale of articles bearing the patented design, the patentee had to show that those articles sold for higher prices than articles without the design. Dobson, 114 U.S. at 444.

Since there is little dispute that Congress intended § 289 as a rejection of the result in Dobson, courts have long understood that section to prohibit apportionment of damages. In the view of the Federal Circuit, that meant that damages in a design patent case were calculated based on sales of the entire commercial product, even if the design patent at issue claimed only a small part of the design of that product (say, the rounded corners of a device, to pick a not-so-random example). But as the Supreme Court seemed to understand, that approach implicitly attributes zero value to the function of the device or any of its other features, a consequence that seems indefensible, particularly in the context of something so complex as a smartphone. Indeed, applying its “no apportionment” rule, the Federal Circuit upheld a jury award of Samsung’s entire profit on the sale of the infringing smartphones, even though those devices employed hundreds or thousands of other patented and unpatented components.

The Supreme Court purported to solve this problem by focusing on the meaning of “article of manufacture.” While the statute requires courts to award the defendant’s “total profit,” that profit must be determined in relation to the relevant article of manufacture to which the patented design was applied. Crucially, and contrary to the Federal Circuit’s assumption, the article of manufacture need not be the entire commercial product sold to consumers; it might be something less than that. So far, so good. Unfortunately, the Court gave no guidance as to when courts should treat the relevant article of manufacture as the commercial product or as some smaller unit, nor did it explain how one should identify the relevant article if it is less than the product as a whole.

It seems likely, however, that the claim will prove important here: Because a design must be for an article of manufacture, the article of manufacture will be defined in relation to the claimed aspects of the broader article shown in the drawings. So understood, “partial design” is a misnomer—patents that claim only a component of the article are not partial designs of an article; instead the article is, at least in some cases, defined by reference to the claim.

If it can be operationalized, the Court’s reconceptualization of the article of manufacture may prove an elegant solution to the remedies problem. But as Burstein shows in her article, if history is our guide to the proper definition of “article of manufacture,” there is more the Court could have done here. For while it’s true that not all articles of manufacture are complete commercial products, neither should all components be considered articles of manufacture.

Surveying early case law, Burstein concludes that “article of manufacture” was a term of art that did not simply refer to any article that was manufactured. (P. 42.) “Instead, it referred to a tangible item—other than a machine or a composition of matter—with a unitary structure made by humans and complete in itself for use or for sale.” (P. 43.) That definition didn’t exclude all components; indeed, several of the cases Burstein cites involve design patents on parts. But only those components that were manufactured separately and identifiable as discrete parts (even if ultimately combined into a bigger commercial product) counted.

That’s obviously not a self-applying rule. Among other things, its workability depends on courts’ ability to identify the claimed design as a relevant part of a product and not just as a feature of the product. That may sometimes be quite difficult. Still, conceiving of the article of manufacture in this way frames the inquiry very differently, and the implications go far beyond remedies. As Burstein highlights, the remedies problem is illustrative of a much bigger issue: Because the PTO and the courts have paid so little attention to what should count as a relevant article of manufacture, they have extended design patent protection to a number of designs that are subject to the Apple v. Samsung problem. Rather than solving it at the remedies stage, perhaps the better approach is to reconsider the scope of design patent subject matter.

Rethinking subject matter would go a long way toward addressing a number of other problems in design patent law. Accepting that one can patent small portions of the design of an article has made claiming more mysterious. That, in turn, has made it difficult for courts to compare claimed designs to the prior art to assess novelty and obviousness. It has exacerbated the difficulties courts face in giving a valid design its proper scope. Burstein acknowledges that there may be reasons to depart to some degree from the 1877 approach, particularly in light of the range of designs now affected by design patent protection. But we can do so, she argues, without surrendering the idea that the article of manufacture has limiting content—the “article of manufacture” cannot just be whatever the claimant suggests it is. That’s a lesson that could profoundly affect all of our thinking about design patents for years to come.

Cite as: Mark McKenna, Designing Design Patent Subject Matter, JOTWELL (March 22, 2018) (reviewing Sarah Burstein, The Article of Manufacture in 1877, 32 Berkeley Tech. L.J. 1 (2017)), https://ip.jotwell.com/designing-design-patent-subject-matter/.

What Is the Payoff from Public R&D Investments?

Pierre Azoulay, Joshua S. Graff Zivin, Danielle Li & Bhaven N. Sampat, Public R&D Investments and Private-Sector Patenting: Evidence from NIH Funding Rules (revised 2017), available at NBER.

Intellectual property scholars have increasingly recognized that IP is only one of many legal tools for incentivizing innovation. In addition to facilitating transfers from consumers to innovators through a “shadow tax” on IP-protected goods, the U.S. government also directly rewards innovators with public funds through R&D tax incentives, innovation prizes, and—most significantly—well over $100 billion per year in direct federal and state R&D support. This direct public science funding, which primarily goes to grants and national laboratories, has long been viewed as important to U.S. economic growth. But federal R&D spending has been declining and is the subject of an increasingly partisan divide, making this a key moment to ask: What is the public’s return on this investment?

In an outstanding empirical analysis, corresponding author Danielle Li at MIT Sloan and her three coauthors—Pierre Azoulay at Sloan, Joshua Graff Zivin at UC San Diego Economics and Public Policy, and Bhaven Sampat at Columbia Public Health (collectively, “AGL&S”)—have tackled this question for grant funding by the U.S. National Institutes of Health (NIH). With a budget of over $30 billion per year for biomedical research, the NIH is the single largest research funder in the world. But assessing the causal impact of this investment is difficult, even when focusing only on its effect on private-sector patents. How can one measure information flows in fields full of serendipity and spillovers? Are grants merely correlated with private-sector advances? Does public funding “crowd out” private investment? AGL&S’s empirical design makes progress on each of these issues, and they conclude that each $10 million in NIH funding in fact generates 2.7 additional private-sector patents.

Prior work has studied outcomes in specific areas of NIH grant funding (which misses advances in different fields) or citations to university patents (which misses grant-funded academic results that are not patented). Instead, AGL&S look to publications resulting from these grants, and then to patents that cite those publications, an approach that validation against survey results has shown to be a better measure of the influence of public-sector research than patent-to-patent citations. Out of the 153,076 NIH grants funded between 1980 and 2005, about 40% led to a publication cited by a patent; and out of the 232,276 private-sector life-science patents issued by the USPTO between 1980 and 2012, about 40% directly cited an NIH-funded publication.

A second empirical challenge is that public funding may target the most promising research areas, so increased grant funding might simply be correlated with increased patenting in that area rather than causing it. AGL&S take on this problem in two ways. First, they classify grants by disease (e.g., cancer), scientific approach (e.g., cell signaling), and time (e.g., 1990), allowing them to include fixed effects to account for these potential sources of endogeneity in funding. Second, they find effectively random variation in funding at the disease/science/time level based on grant applications whose review scores were just above or just below the NIH’s funding cutoffs, and they use the “windfall” funding received by a disease/science/time area above the cutoffs as an instrument for total funding. Under both approaches, they find that additional funding increases the number of patents that cite NIH-funded publications.
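
For readers who want to see the mechanics of that second strategy, here is a minimal sketch of an instrumental-variables estimation with fixed-effect dummies, run on simulated data. Everything in it (the variable names, the magnitudes, the use of the linearmodels library) is my illustrative assumption, not AGL&S’s actual code or data:

```python
# A minimal sketch of the instrumental-variables logic described above,
# on simulated data; a hypothetical illustration, not AGL&S's code.
# Requires: pip install numpy pandas linearmodels
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 500  # hypothetical disease/science/time (DST) cells

df = pd.DataFrame({
    "windfall": rng.normal(size=n),           # quasi-random near-cutoff funding
    "disease": rng.integers(0, 10, size=n),   # fixed-effect categories
    "year": rng.integers(1980, 2006, size=n),
})
df["funding"] = 2.0 * df["windfall"] + rng.normal(size=n)  # first stage
df["patents"] = 0.27 * df["funding"] + rng.normal(size=n)  # true effect = 0.27

# Enter the fixed effects as dummies (simplified; applied work absorbs
# high-dimensional fixed effects more efficiently).
exog = pd.get_dummies(df[["disease", "year"]].astype(str), drop_first=True)
exog = exog.astype(float)
exog["const"] = 1.0

# Second stage: "windfall" funding instruments for total funding.
fit = IV2SLS(dependent=df["patents"], exog=exog,
             endog=df[["funding"]], instruments=df[["windfall"]]).fit()
print(fit.params["funding"])  # recovers roughly 0.27
```

The design’s logic is that the windfall component of funding is as good as random with respect to a research area’s underlying promise, so the estimated coefficient isolates funding’s causal effect on patenting.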

The third challenge noted above is that grant-funded research might crowd out private-sector funding, such that the increase in private-sector follow-on patenting is accompanied by a decrease in patents that do not rely on NIH-funded results. To be clear, substitution of patents with grants is not necessarily problematic from an economic perspective: both are costly, and neither is optimal in all circumstances. But AGL&S show that grant funding spurs rather than substitutes for private-sector R&D. To study this effect, they use a separate dataset of patents that cite publications related to NIH-funded publications (using a keyword-based similarity measure). If NIH funding merely crowded out private research, the authors argue, one would not expect the award of an NIH grant to increase the total number of patents in a research area; instead, they find an increase here as well.

These results come with a number of caveats, the most important of which is that AGL&S focus only on private-sector patenting as an outcome, which ignores the many other benefits of public biomedical research. In the life sciences, more innovations are probably captured by the patent system than in other fields, but there are certainly medical innovations, such as ICU hygiene checklists, that produce enormous social benefit that patents cannot easily capture. This problem would be exacerbated in other areas of research, which is one reason that it would be difficult to replicate AGL&S’s analysis for grants from other funding agencies.

Of course, even within the life sciences, turning this result into a rate of return is difficult. AGL&S’s back-of-the-envelope calculations lead to tentative estimates on the order of a $20–$30 million increase in firm market value for each $10 million NIH grant. The true social rate of return might be significantly higher (e.g., if social value is significantly higher than firm market value, or if firms often build on uncited knowledge) or lower (e.g., if patents frequently cite to papers that are not real intellectual influences). But even if one focuses only on the results that can be determined with more precision—the positive effect on private-sector patenting and the lack of measurable crowding out—the results in this paper are important data points for the increasingly polarized discussion of direct government R&D funding.

Cite as: Lisa Larrimore Ouellette, What Is the Payoff from Public R&D Investments?, JOTWELL (February 28, 2018) (reviewing Pierre Azoulay, Joshua S. Graff Zivin, Danielle Li & Bhaven N. Sampat, Public R&D Investments and Private-Sector Patenting: Evidence from NIH Funding Rules (revised 2017), available at NBER), https://ip.jotwell.com/payoff-public-rd-investments/.

Will Algorithms Take the Fairness Out of Fair Use?

Dan L. Burk, Algorithmic Fair Use, U. Chi. L. Rev. (forthcoming), available at SSRN.

As part of an invited symposium, organized by The University of Chicago Law Review, on whether artificial intelligence will spell the end of one-size-fits-all laws, Dan Burk has written a terrific essay explaining why he’s skeptical that AI or machine learning will lead to better copyright fair use decisions. In the essay, Algorithmic Fair Use, Professor Burk identifies three main bases for his concerns.

First, Professor Burk is skeptical that American fair use law, which is articulated as a relatively open-textured standard (as compared with U.K.-style “fair dealing” provisions that set out a laundry list of tightly specified circumstances in which portions of a copyrighted work may be used without permission), could ever be reproduced with much fidelity as a set of software rules. The resistance of American fair use to ruleification, and therefore to automation, runs deep – not least because the content of any fair use rule depends upon antecedent considerations that are themselves resistant to distillation into rules:

Determining the impact of the unauthorized use of a work on the actual or potential market for the underlying work requires a model of the market and decisions about the data that properly populate that model. The amount of the work used can be mapped to the percentage of lines or words or pixels or bits taken for a given use, but some weight or significance must be accorded that number, whether defined by explicit programming values or by algorithmically learned data patterns. The type of work used and the use to which the protected taking is put require some categorization of works and uses. These and a multitude of other design choices made in advance would determine the allowance or disallowance of uses for protected content; algorithms do not make judgments; they are rather the products of human judgment.
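
To see concretely why ruleification embeds judgment, consider a deliberately naive sketch of a “ruleified” fair use classifier. This toy is my illustration, not anything Burk proposes; every weight and threshold in it is an arbitrary design choice of exactly the kind Burk describes:

```python
# A deliberately naive, hypothetical "ruleified" fair use classifier.
# Every constant below is a contestable human judgment rather than a
# neutral rule; that is precisely the point of this sketch.

def fair_use_score(fraction_taken: float,
                   is_transformative: bool,
                   market_harm_estimate: float) -> bool:
    """Toy classifier loosely echoing the statutory factors; all
    weights and thresholds are arbitrary design choices."""
    score = 0.0
    score += 0.4 if is_transformative else 0.0    # why should purpose get 0.4?
    score += 0.3 * (1.0 - fraction_taken)         # why linear in amount taken?
    score += 0.3 * (1.0 - market_harm_estimate)   # whose market model produced this?
    return score > 0.5                            # why a 0.5 threshold?

# A short, transformative quotation with low estimated market harm:
print(fair_use_score(0.05, True, 0.1))  # True, under these arbitrary weights
```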

Second, and perhaps more importantly given the direction of technology at the moment, Professor Burk is skeptical of fair use automation through machine learning. Machine learning does not depend on ruleification but would instead seek to determine whether a use was fair by comparing it to patterns that correlate with uses judged to be fair within a large dataset of unauthorized uses. But a machine’s ability to produce relevant and reliable results through pattern matching presumes that the machine has been fed a dataset that is representative of the fair use determinations, and the facts underlying them, as they exist out in the world.

Getting the dataset right, Professor Burk argues, is likely to be expensive and difficult. But the problem runs deeper than just cost or the technical difficulties of assembling a reliable dataset. The fundamental conceptual difficulty is that the output of a machine learning algorithm is just a correlation. It isn’t a judgment about whether that correlation is meaningful. In an entertaining but important aside, Professor Burk refers to a famous instance where data mining showed a strong correlation between movements in the S&P 500 stock index and the production of butter in Bangladesh. In that case, he notes, “a human decisionmaker is required to designate the trend as spurious rather than meaningful.” The same would be true of fair use determinations made by a machine learning algorithm – human intervention would be required to check that the machine’s output makes any sense outside the confines of the machine’s dataset.
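
Burk’s butter-and-stocks example is easy to reproduce. The following small simulation (my illustration, not Burk’s) shows that, given enough unrelated random series, data mining will reliably surface an impressive-looking but meaningless correlation with any target series:

```python
# A toy reproduction of the data-mining problem: among enough unrelated
# random series, some will correlate strongly with any target series
# purely by chance, and nothing in the output flags the relationship
# as spurious.
import numpy as np

rng = np.random.default_rng(42)
target = rng.normal(size=250).cumsum()  # stand-in for a stock index

best = 0.0
for _ in range(10_000):  # 10,000 unrelated candidate series
    candidate = rng.normal(size=250).cumsum()
    r = abs(np.corrcoef(target, candidate)[0, 1])
    best = max(best, r)

print(f"strongest correlation found: {best:.2f}")
# Typically prints a value above 0.9: impressive-looking, and meaningless.
```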

Third, and finally, Professor Burk is wary of proposals to automate fair use because he fears that encoding fair use into the operation of machines will shape human expectations and behavior in ways that are difficult both to predict in advance and to contest ex post. He outlines this reservation in part by quoting from a video creator’s writings describing how Google’s Content ID system has shaped creativity on YouTube:

“You could make a video that meets the criteria for fair use, but YouTube could still take it down because of their internal system (Copyright ID) [sic] which analyzes and detects copyrighted material. So I learned to edit my way around that system. Nearly every stylistic decision you see about the channel — the length of the clips, the number of examples, which studios’ films we chose, the way narration and clip audio weave together, the reordering and flipping of shots, the remixing of 5.1 audio, the rhythm and pacing of the overall video — all of that was reverse engineered from YouTube’s Copyright ID. I spent about a week doing brute force trial-and-error. I would privately upload several different essay clips, then see which got flagged and which didn’t. This gave me a rough idea what the system could detect, and I edited the videos to avoid those potholes.”1

Of course, machines are not the only mechanism for shaping behavior. That’s what law does too; indeed, that is the very point of having laws. An advantage of the conventional legal system is that when laws and legal reasoning are more readily accessible and comprehensible, they are more easily contested. The inscrutable outputs of trade-secret-protected algorithms or invisible data sets, by contrast, are likely to obscure the ways in which law shapes behavior. In the end, Professor Burk is profoundly pessimistic: “[I]mplementation of algorithmic fair use,” he says, “will inevitably, and probably detrimentally, change the nature of fair use.”

I am not so sure that we know enough yet to judge whether Professor Burk’s intuition is right. It does seem likely that automation will create pressure to “ruleify” fair use, that is, to turn it into a more elaborated version of U.K.-style fair dealing. But what is our normative takeaway if that happens? Is ruleified fair use, where enforcement is done cheaply by machines, necessarily worse than our current fair use standard?

Current American fair use law is more flexible than any set of imaginable fair use rules, yet (in part because of that flexibility) enforcement is expensive and undertaken only on the comparatively rare occasions when a user has both the incentive and the means to engage in federal court litigation. Thus, fair use as we know it in the U.S. is flexible, but inaccessible.

Ruleified fair use administered by machines promises to solve the accessibility problem. But will that gain come only at the expense of a bowdlerized set of fair use rules? That depends in part on who would be making the rules that automation demands, and what the process looks like for creating new rules. Would the rule-maker be open to input from users as well as content owners? And would the rule-maker be obliged to periodically revisit the rules to make sure that new exceptions could be added as needed, and exceptions that had proved ill-advised removed?

These are among the important questions that Professor Burk’s provocative essay raises, and they should command the attention of the copyright academy in the years to come.

  1. Tony Zhou, Postmortem: Every Frame a Painting, Medium (Dec. 2, 2017), https://medium.com/@tonyszhou/postmortem-1b338537fabc.
Cite as: Christopher J. Sprigman, Will Algorithms Take the Fairness Out of Fair Use?, JOTWELL (January 15, 2018) (reviewing Dan L. Burk, Algorithmic Fair Use, U. Chi. L. Rev. (forthcoming), available at SSRN), https://ip.jotwell.com/will-algorithms-take-fairness-fair-use/.

Questioning Trade Secret Privileges in Criminal Cases

Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. (forthcoming 2018), available at SSRN.

How pervasive are software-based information technologies in the criminal justice system these days? And what role does trade secrecy law play in blocking defendants’ access to technical information about the design of these technologies from which to assess their objectivity and accuracy?

The short answer to both questions is: much more than you might expect, as you will learn when you read Wexler’s fascinating and powerful new article on intellectual property in the criminal justice system. Among other things, the article shows just how ubiquitous these technologies have become in virtually every stage of the criminal justice system: from police investigations to risk assessment for bail setting to evidence prepared for or presented at trial to sentencing and parole decisions. When defense lawyers raise questions about the accuracy of the outputs of these technologies and seek access to source code or other technical documentation, courts have been quite receptive to arguments that non-disclosure is justified because the information is a trade secret. Wexler’s thesis is that there should be no trade secrecy privilege in criminal cases. Period. Protective orders can adequately protect any trade secrets implicated by a criminal defense discovery request.

An especially interesting part of the article comes from its revelations about the evolution of trade secrecy privileges in civil and criminal cases. Wexler did considerable archival research and, among other things, discovered that courts initially recognized no such privilege. Getting at the truth was more important than protecting a witness’ secrets. When courts did begin to take trade secrecy into account, the initial rationale for doing so was to shield witnesses from self-incrimination (i.e., if revealing a commercial secret in court would put the witness at risk for civil or criminal liability).

In the early twentieth century, a lively debate cropped up between evidence treatise author John Wigmore and Judge Learned Hand over whether a trade secrecy privilege should be recognized in civil litigation. Hand opposed such a privilege because he valued truth-seeking over protection of commercial secrets. Wigmore, by contrast, supported legal recognition of a trade secret privilege. During the course of an American Law Institute (ALI) model code project on evidence, members of the ALI were split on the issue, some favoring Hand’s view and some favoring Wigmore’s. Eventually the model code adopted Wigmore’s position, perhaps in part as a result of technology industry lobbying. But even then, the privilege extended only to non-disclosure of trade secrets in civil litigation.

There is obviously a much stronger rationale for recognizing a trade secret privilege in civil cases. These lawsuits often involve disputes between competitors or would-be competitors in the commercial marketplace. Fair competition would be undermined if, for example, the very trade secrets at issue were dissipated in a litigation aimed at protecting them. But as Wexler points out, that risk does not exist in criminal cases because defendants and defense lawyers who seek access to secret information about technologies that police or prosecutors are using to arrest, convict, and punish these defendants are unlikely to become commercial competitors of the trade secret claimants.

Not until 1991 did any state have a trade secrecy privilege for evidence proffered in criminal cases. In that year, the California legislature modified its evidence statute to recognize such a privilege. The goal of this change, though, was to prevent public disclosure of the secrets in the course of criminal proceedings, not to serve as a limit on what information defendants could obtain about technologies being used to convict or punish them.

The first appellate court decision applying this trade secret privilege to block defense efforts to obtain disclosure of information used to convict a defendant was not rendered until 2015. A lawyer for Martell Chubbs asked a court to order disclosure of the source code of a forensic program that had produced evidence used to support Chubbs’ conviction in a death penalty case. The trial court agreed with Chubbs’ lawyer that unless the defense had access to this code, Chubbs would be denied the right to confront and cross-examine relevant witnesses against him. The developer of the software, however, refused to turn over the source code on the ground that it was a trade secret. On appeal, a California court overturned the disclosure order, holding that the source code was a trade secret that could be disclosed only if the defense met a high standard to justify such a disclosure. The Chubbs case has influenced courts in some other jurisdictions, although a few courts have been more receptive to source code disclosure requests.

Those who support the existence of a trade secret privilege for source code and other technical documentation argue that without it, there will be too little incentive for developers to invest in the creation of software that can produce objective evidence that will make criminal trials more accurate and effective. Wexler challenges this argument on several grounds. She explains various limits that already substantially curtail discovery in criminal cases, the discretion courts have to deny abusive discovery requests, and the availability of protective orders to protect secrets that have commercial value. She argues that greater transparency of technical information about forensic and other technologies used in the criminal justice system might even spur more innovation, as companies will no longer be able to overprotect their programs through aggressive trade secret claims. Wexler points to other mechanisms, such as prizes and government grants, that, as Natalie Ram has shown in another stimulating article, Innovating Criminal Justice, could support the development of criminal justice system software.

Wexler argues that recognition of the trade secret privilege in the criminal justice system is both unnecessary and harmful. Because this privilege is a creature of the common law in most states, the ongoing common law process can rectify the mistaken belief that the trade secret privilege is sound and has a long pedigree. She offers a vision about how to move forward with better public policy on this important but mostly unnoticed dimension of our criminal justice system. This article opens up a welcome new line of research in the intellectual property field.

Cite as: Pamela Samuelson, Questioning Trade Secret Privileges in Criminal Cases, JOTWELL (December 1, 2017) (reviewing Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. (forthcoming 2018), available at SSRN), https://ip.jotwell.com/questioning-trade-secret-privileges-in-criminal-cases/.

Creative Communities and Intellectual Property Law

Betsy Rosenblatt, Belonging as Intellectual Creation, 82 Mo. L. Rev. 91 (2017).

In late November 2016, recreational knitters Krista Suh and Jayna Zwieman conceived of The Pussyhat Project — a way for knitters and crocheters to participate in the January 21, 2017, Women’s March on Washington by creating a simple hat for marchers to wear. To facilitate the project, there was a website (featuring several patterns for free download, the first created by yarn store owner Kat Coyle), an Instagram account, and a hashtag. There wasn’t, however, a focus on a particular level of output. Rather, the goal of the project was to foster community through creative work, building on existing networks of knitters and highlighting the ways in which knitting circles are often “powerful gatherings of women, a safe space to talk.”

The community’s boundaries were porous and self-policed. Anyone was welcome to claim membership; the only requirement was to create or be the recipient of creation. Although the basic form of the hat was loosely defined — pink in color and rectangular in shape — individual knitters were free to stylize their hats in any way they wished. Patterns were freely shared, and distribution took place via a voluntary infrastructure. The community that resulted produced tens of thousands of hats in two months, and representative hats now reside in the collections of major museums across the country.

Scholars will undoubtedly have much more to say about this movement as its history is written, including critiques involving, inter alia, race, class, gender identity, and the sociology of protest movements. For now, the project is worth adding to our consideration of other organic communities that have inspired creativity without a focus on commercialization — even if they also feature stronger policing mechanisms (Wikipedia), more reliance on traditional IP inputs (fan edits and cosplay), or more emphasis on reputation building (message boards and Facebook posts). What do these community gardens of creativity — unburdened by concerns about monetization or propertization — tell us about what the goals of intellectual property law should be?

Professor Betsy Rosenblatt suggests in her recent article that the law has too narrow a focus. Creating with and for others, research shows, promotes a sense of belonging, which, in turn, motivates and improves the results of creativity. Indeed, for the pussyhat knitters, a sense of belonging to a social movement likely provided the entire motivation to create. (I should make clear here that the example throughout of the Pussyhat Project is mine, not Professor Rosenblatt’s.) So if the law focuses only on the tangible results of creativity — what Professor Rosenblatt refers to as “stuff” — and fails to consider the importance of belonging, it might incentivize less creativity than it otherwise would.

What does it mean for a creator to belong? Professor Rosenblatt writes that a sense of belonging is both personal and “contextually mediated.” It arises when an individual feels “included, valued, and respected” by members of a group to which she is connected and with which her “values are in harmony.” (P. 96.) Belonging may be, at times, officially determined, but a sense of belonging can be only a psychological and emotional phenomenon. One cannot experience a sense of belonging unless one perceives oneself to belong, a perception that depends on “the individual’s subjective experience of interactions with in-group and out-group members.” (P. 100.) Whether a group offers official validation in the form of an award or membership criteria, unofficial acceptance in the form of discussions and ratings of one’s creative efforts, or simply the ability to become part of the group through an act of creativity, the emotion that is likely to spur creativity is the (scientifically tested) feeling of connection these activities inspire.

So if a sense of belonging both inspires and is inspired by creativity, and if creativity is what our intellectual property system aims to incentivize, at least in part, shouldn’t our system take belonging more into account, along with other noneconomic motivations? If it did so, what might such a system look like?

We would start by determining what conditions facilitate a sense of belonging. Research on organizational membership might help us to understand the psychology behind individuals’ decisions to join a community, commit to that community, or take a leadership role in that community. Shared semiotics and discourse no doubt create the bonds that allow an individual to feel included. Reputational and other benefits from a creative community may create a desire (or perceived obligation) to return those benefits in kind. We might also consider, as Professor Rosenblatt suggests, the importance of shared endeavors, participation in decision-making, psychological reward, and community values. (Pp. 101-03.) (The success of the Pussyhat Project, for example, seemed to result from all four — indeed, the freedom participants were given to make their creativity manifest was probably a significant validating factor.)

This is an important, albeit challenging, project, and, to her credit, Professor Rosenblatt remains cautiously optimistic about intellectual property law’s ability to respond fully to these questions. Focusing primarily on copyright law, she suggests that greater attention should be paid to attributional and reputational interests and to rules that encourage collaboration and playfulness — all areas that align with the inherently psychological nature of belonging. Yet, even starting modestly, we cannot ignore some important threshold questions that will help to anchor foundational principles. Should intellectual property law remain agnostic if, for example, the sense of belonging it promotes results in a group that is socially undesirable and/or dangerous to others? Should it matter whether intellectual property law is used to foster a sense of belonging or to assert a preference for disassociation? Without a firm sense of our values regarding these questions, our ability to foster the conditions for socially beneficial creativity will be hampered.

So what might we learn from the Pussyhat Project, insofar as it serves as one case study for Professor Rosenblatt’s proposal? The creative productivity generated by the pussyhat knitters would not have resulted without a sense of belonging. It also would not likely have resulted if the project had been bound by formal rules, by a sense of ownership, by exclusion, or by a definition of “the work” — all intellectual property law’s vocabulary. Not all communities, of course, are creative (or creative in the same way), and not all creators benefit from community (or benefit from it in the same way). But Professor Rosenblatt’s thoughtful article reminds us that the intersection of creativity and community is a subject worthy of our continued attention. It asks us to consider whether the law should pivot to do more to actively engage inspiration, focusing on the psychological motivations that, for many individual creators, are central to creativity — or whether the law should instead recognize its limits and simply do its best not to get in the way.

Cite as: Laura A. Heymann, Creative Communities and Intellectual Property Law, JOTWELL (November 1, 2017) (reviewing Betsy Rosenblatt, Belonging as Intellectual Creation, 82 Mo. L. Rev. 91 (2017)), https://ip.jotwell.com/creative-communities-and-intellectual-property-law/.

Three Strikes for Copyright

Abhishek Nagaraj, Does Copyright Affect Reuse? Evidence from Google Books and Wikipedia, Mgmt. Sci. (forthcoming 2017), available at abhishekn.com.

How should copyright law change to take account of the internet? Should copyright expand to plug the internet’s leakiness and protect content that the internet would otherwise make more freely available? Or, should copyright relax its strict liability regime given diverse and productive reuses in the internet age and the benefits networked diffusion provides users and second-generation creators? Answering these questions depends on what we think copyright is for and how it is used and confronted by creators and audiences. In a new article studying these questions in the very focused setting of Wikipedia articles about baseball and baseball players (there are more than you might imagine!), Professor Abhishek Nagaraj demonstrates that where production of new knowledge depends on pre-existing information, strong copyright law can reduce both the quality and quantity of new content.

Professor Nagaraj studies the intersection of digital access and information diffusion. In the paper reviewed here, Nagaraj takes advantage of the fact that copyrights in works published before 1964 were not automatically renewed, which sent many such works into the public domain, to estimate the effect of access to public domain material on the quality of Wikipedia pages. His findings both confirm other studies in this area and raise new lines of inquiry.

This paper tells several stories. The first starts in 2008, when Google Books digitized all of the issues of Baseball Digest between 1940 and 2008. Of these, the pre-1964 issues are in the public domain; the rest remain under copyright. This first story explains how Wikipedia articles about baseball cite to the public domain Baseball Digest issues twice as much as the in-copyright sources. It is no surprise that digitization of an important source of information about baseball enables access and encourages the reuse of this resource on Wikipedia, the fifth-most visited website (with about 10 billion page views monthly). After digitization, citation to Baseball Digest increased 300% over pre-digitization levels. What surprises more is that public domain sources (which are also older) are cited more frequently than in-copyright sources, despite both being digitized fully by Google Books. As Nagaraj demonstrates, relying on quantitative analysis of citation frequency and open-ended survey questions with Wikipedians, copyright is a barrier to citation and reuse of the digitized material, and Wikipedians are paying attention to those barriers.

This paper tells another story about the consequence of the copyright barrier – i.e., that it diminishes the quality of the Wikipages about certain baseball players who played after 1964. For pages about baseball players who are neither famous nor obscure (e.g., the average player about whom a Wikipedia page would come in handy), those players who made their debut appearances before 1964 have higher quality pages than those who began playing after 1964. How does Nagaraj measure quality? By measuring what he argues is circumstantial evidence of higher quality content: citation to Baseball Digest, the number of images on the page, and the number of visitors to the page (as a measure of reader utility). Pre-1964 player pages for well-known (but not superstar) players have almost twice the number of citations to Baseball Digest, 1.78 images as compared to 0.92 for in-copyright player-pages, and they attract about forty-seven more visitors per month on average.

Nagaraj describes this effect in terms of a welfare impact, suggesting that pages negatively affected by copyright are unable to fully capture and deliver value to end users. In intellectual property debates, we often worry about quality over quantity, whether the “progress” at which the intellectual property clause of the Constitution aims is more stuff or better stuff, “better” being a tricky term. We also worry about the relevance of citation counts. A helpful and intriguing feature of Nagaraj’s paper is his metric for quality, which is both quantitatively measurable and qualitatively significant for the community the content serves (baseball fans).

The third story this paper tells is the most interesting of all. It concerns the differential impact of copyright restrictions on images versus text, a difference that drives the first story described above. Generally, digitization should lower the costs of reuse for both types of media, but Nagaraj shows that text is cited at a significantly higher rate than images, leading to the reuse of and reliance on text at a much higher rate than images from the in-copyright Baseball Digest sources. This means that the digitization of content benefits textual content more than images (photographic or otherwise). Or, more precisely, the negative effects of copyright on citation and use disappear for text and are driven by a lack of reuse of images post-1964. Nagaraj hypothesizes and then demonstrates that because images require more “transformation” (under copyright fair use) than textual content to avoid infringement liability, the gains of access won by digitization are mitigated, for the in-copyright images, by copyright’s imposition of greater transaction costs. It follows that the “reuse of out-of-copyright content is likely to be higher for images … than for text” (P. 16), suggesting that the public-domain status of images is more important for diffusion and reuse of this kind of information and content. In fact, Nagaraj shows that the “likelihood that an image will be reused from a post-1964 [in-copyright] issue of Baseball Digest is very close to zero, even after digitization [whereas] … the pattern for text citations is quite different. … In-copyright and out-of-copyright text citations track each other pretty closely, suggesting that copyright has very little impact on preventing the reuse of digitized textual material.” (Pp. 17-18.) A chart in Nagaraj’s paper illustrates this point.

This last story raises fascinating questions. First, in a networked digital age in which photographs, logos, memes, and other graphic images have been shown to circulate more swiftly and broadly than text, in which images dominate the semantic web (including, importantly, journalism and social media), it is surprising that copyright may control the use of images as forcefully as Nagaraj documents about Wikipedia. This is a significant threat to veritable and free speech.

Second, the collection and management of photographs online is growing in concentration and expense. There are a lot of free images on the web, but they serve to entertain or illustrate, not to inform or contribute facts for debate. (There is a reason we see the same pictures of Mitch McConnell, Antelope Canyon, or the Lincoln Memorial on news sites.) New, timely, and authenticated photos are harder to source; there are fewer photojournalists and even fewer aggregators and news agencies managing the collection and distribution of news. If, as Nagaraj’s study proves, internet users rely on photographs to anchor and explicate the information they seek, if copyrighted photographs circulate less freely than text online (a statement many photographers would dispute, but that’s another story), and if news outlets now less frequently pay for and distribute photojournalistic images, then we have a significant information problem in the digital age.

Third, is it possible that Wikipedians hew so closely to the copyright fair use analysis that it is harder to claim transformative fair use of photographs than of text? In my research, I found that creative and innovative communities followed idiosyncratic norms of copying (or not copying) that did not align with intellectual property law. The story Nagaraj tells about the non-use of photographs versus the use of text under copyright on Wikipedia is a story of behavior arguably aligning with copyright doctrine. Reusing copyrighted photographs verbatim is harder to defend under copyright fair use than quoting or paraphrasing parts of text. But if you surf the internet with its seemingly uncontrolled reproduction of photographs, you would be forgiven for thinking that copyright law doesn’t act as a barrier to copying and distribution of photographs at all. So what explains the Wikipedians’ careful non-use of photographs under copyright? Banners on Wikipedia pages indicate editors are indeed knowledgeable about complex copyright rules. But, perhaps more importantly for the welfare question, Nagaraj asks: is the Wikipedians’ arguably copyright-compliant behavior (arguments exist on both sides) good for their encyclopedic project to produce and disseminate free, high-quality, and comprehensive information to readers worldwide? These are big and important questions, and Nagaraj tackles them admirably. For anyone interested in a model of robust quantitative experimentation in intellectual property, with qualitative implications and analysis for further study, I highly recommend Nagaraj’s newest paper.

Cite as: Jessica Silbey, Three Strikes for Copyright, JOTWELL (October 13, 2017) (reviewing Abhishek Nagaraj, Does Copyright Affect Reuse? Evidence from Google Books and Wikipedia, Mgmt. Sci. (forthcoming 2017), available at abhishekn.com), https://ip.jotwell.com/three-strikes-for-copyright/.

Copyright Law and the Utility of Style

Christopher Buccafusco & Jeanne C. Fromer, Fashion’s Function in Intellectual Property Law, 93 Notre Dame L. Rev. (forthcoming 2017), available at SSRN.

Copyright is meant to protect artistic and literary works (books, paintings, sculptures, poems, songs, etc.) against copying. Copyright is not, however, meant to protect useful things – that is the job of patent law. A lot of useful things, however, are also pretty (and the other way around). Think about attractive kitchen appliances, or apparel, or furniture, or dining utensils, or indeed just about any artifact of industrial design. A well-designed ladies’ dress can keep its wearer warm, but most consumers probably value it primarily for its style. Does copyright protect the design of the dress? Or does the dress’s utility rule it out of copyright?

Christopher Buccafusco of Cardozo Law and Jeanne Fromer of the NYU School of Law have written a fascinating take on this question, titled Fashion’s Function in Intellectual Property Law, forthcoming in the Notre Dame Law Review and available at SSRN. I liked the Buccafusco/Fromer article a lot, so much so that I wish the Supreme Court had paid more attention to it in its decision in Star Athletica, L.L.C. v. Varsity Brands, Inc., a recent case in which the Court established a test for determining whether particular pictorial, graphic, or sculptural (“PGS”) elements of useful articles are “separable” from the article’s useful function, and thus copyrightable. Buccafusco and Fromer presented the article’s arguments to the Court in a terrific amicus brief.

Precisely what that Star Athletica separability test is, and how it applies to actual cases, are fascinating questions, but they are not the focus of this review. Rather, the key question that the Buccafusco/Fromer article explores is a necessary antecedent to the Supreme Court’s separability analysis. If we are inquiring whether particular PGS features are “separable” from an article’s useful function, we have to understand what that useful function is.

Of course, apparel is useful in covering our nakedness and keeping us warm. That is the sort of utility that the Sixth Circuit majority identified in its decision in the case, and, although the Supreme Court was silent on that particular issue, the opinion is best understood as adopting implicitly the Sixth Circuit’s understanding. But is that apparel’s only function? Both a dissenting Sixth Circuit judge and the district court in the case thought that the utility of the cheerleading uniform was broader – in addition to covering the body, the uniform functioned to identify the wearer as a cheerleader. And Profs. Buccafusco and Fromer see an even broader utility. They argue that certain design elements of the cheerleader uniform function to accentuate or deemphasize particular physical features of the wearer, and that this function of altering appearance is in fact “useful.”

That “altering appearance” form of utility is easy to see in the case of a very specialized form of apparel – military camouflage. When we look at a soldier in camo, we understand that the function of that outfit is to “alter appearance” by helping the wearer blend into the background. The question is perhaps less clear when it comes to cheerleading outfits, which, unlike camouflage, are not meant to conceal. And yet our understanding of how, precisely, a cheerleading uniform is “useful” is crucial because it shapes our analysis regarding whether particular PGS elements of a garment are “separable,” and therefore potentially copyrightable. Consider “Design 078,” one of the cheerleading uniform designs at issue in the Star Athletica case.

If you believe, as the Sixth Circuit majority held explicitly, and the Supreme Court held implicitly, that the function of this garment is to “cover the body, wick away moisture, and withstand the rigors of athletic movements,” then you might conclude that the claimed PGS design elements – i.e., the patterns of stripes, chevrons, zigzags, and color-blocking – are separable from that useful function (and, if they meet copyright’s originality standard, protectable) because they do not contribute to it.

On the other hand, if you believe, as Profs. Buccafusco and Fromer argue, that the design elements just mentioned “are simultaneously both expressive and functional, serving to accentuate and elongate the cheerleader’s body and make it appear curvier in particular areas,” then the claimed design elements are themselves functional, are part of the garment’s utility, cannot be separated from that utility, and are therefore not copyrightable.

Profs. Buccafusco and Fromer marshal a lot of evidence from the fashion design literature in support of their argument, and these sections are a big part of why reading the article is so much fun. I won’t go into the details now (I’ll save them for your reading of the article), but I’ll say that in my view Buccafusco and Fromer have it right overall.

Just look at Design 078. Any designer will tell you that the function of the color-blocking (i.e., the contrast between the white triangular side panels and the dark-colored hourglass shape on the front of the uniform) is to accentuate the wearer’s bust and narrow the appearance of her waist. Now consider the inverted “V” pointing up toward the wearer’s neck, and the downward-pointing “V” that makes up the hem of the skirt – the combination of these two shapes exploits something called the Müller-Lyer Illusion to make the wearer’s body appear longer. The same is true of the smaller “V” notched into the skirt hem – that shape functions to lengthen the appearance of the wearer’s leg.

Buccafusco and Fromer’s article is full of observations that transport us inside the mind of the apparel designer, but the article is most valuable for its exploration of the intertwining of aesthetics, functionality, and competitive concerns:

The stripes, chevrons, zigzags, and color-blocking that form the heart of Varsity’s uniform designs … are the reason that the designs work as appropriate designs for garments meant to emphasize the fitness, athleticism, and attractiveness of those who don them. It is no accident that the plaintiff here chose to design its uniforms in this fashion, and it is no accident that many cheerleading uniforms share similar design features. Granting copyright protection for these designs would enable Varsity to monopolize functional aspects of garments without satisfying the exacting demands of patent law. Although other designers might be free to produce cheerleading uniforms incorporating features that flatten the bust, widen the waist, and shorten the legs, we suspect that they are unlikely to find a vibrant market for such products. This is not the sort of competitive advantage that copyright law is intended to foster.

As I mentioned earlier, the Buccafusco/Fromer arguments regarding the broader utility of apparel designs were ignored by the Court, which said nothing in its Star Athletica opinion about how to define the utility of any particular useful article. That’s lamentable, but it’s also an opportunity. In the coming years, courts will be considering the separability – and the copyright eligibility – of PGS elements embedded in an enormous range of useful articles. Many of the Buccafusco/Fromer insights regarding the intertwining of form and function in apparel design could be relevant to cases involving useful articles other than apparel. And even where those insights don’t apply directly outside apparel, they point the way toward a richer analysis of the relationship between form and function across the field of industrial design.

Cite as: Christopher J. Sprigman, Copyright Law and the Utility of Style, JOTWELL (September 7, 2017) (reviewing Christopher Buccafusco & Jeanne C. Fromer, Fashion’s Function in Intellectual Property Law, 93 Notre Dame L. Rev. (forthcoming 2017), available at SSRN), https://ip.jotwell.com/copyright-law-and-the-utility-of-style/.

Courtesy Without Copyright

It is now a familiar point that positive law accounts for only part of the systems that regulate human behavior. Robert Ellickson’s cornerstone work, Order Without Law, showed how closely knit groups construct norm-based rule structures that govern behavior more efficiently than state-created law. In the past decade or so, scholars have investigated a number of areas in which individuals engaged in creative production similarly opt for norm-based systems in lieu of copyright or trademark law.

Professor Robert Spoo’s recent article, Courtesy Paratexts: Informal Publishing Norms and the Copyright Vacuum in Nineteenth-Century America, represents a fascinating and important contribution to this growing literature. Spoo’s article harkens back to the mid- and late 1800s, a time when foreign authors received no copyright protection in the United States. In the absence of formal legal protection for foreign authors’ works, domestic publishers created a series of agreements with each other under which the first house to print copies of a foreign author’s novel would be able to do so without competition—even in the absence of enforceable copyright law. These informal agreements were expressed and furthered to a large extent by courtesy paratexts: brief written passages in the front matter of a book in which the author would affirm that the publisher was acting within the courtesy of the trade in printing the work, and would exhort readers to buy only authorized editions so as to assure author and publisher alike their fair remuneration.

This brief description cannot do justice to the richness of Spoo’s account of courtesy paratexts, and the courtesy of the trade generally, during this fascinating and sometimes chaotic moment in the history of publishing. Part of the great merit of this article lies in the opportunity it affords to hear the voices of famous nineteenth-century authors speaking directly to readers outside the context of their familiar writings. All of them are fascinating in their own way, but the most memorable may be Rudyard Kipling’s paratext. While most authors spoke graciously about the courtesy of the trade and the remuneration it permitted them, Kipling’s “discourteous courtesy” (in Spoo’s felicitous phrasing) chided that “your country takes the books of other countries without paying for them.” Kipling went on to emphasize that “I object to the system altogether,” but consented to the courtesy of the trade “because I am helpless.”

Beyond the absorbing stories themselves, Spoo’s work makes significant substantive contributions to the study of norm-based systems of creative property. One feature of such systems is that they will always be more fragile than traditional law, since they lack state sanction and are stitched together largely by mutual consent. Spoo illustrates the implications of this fragility in detail with the story of Charles Dickens. Dickens, who like Kipling harbored deep frustration with the lack of U.S. copyright protection for foreign authors, decided quite suddenly that he wanted to switch to a new publisher. This violated the courtesy of the trade (the initial publisher was to have exclusive rights to all of that author’s works), but since Dickens was a literary giant, he got his way, courtesy be damned. But fragility may function as a feature, not just a bug, of norm-based systems. For example, Spoo points out that while the courtesy of the trade was essentially an anticompetitive scheme, it did not result in publishers charging vastly supracompetitive prices. The reason is that publishers likely knew that if they charged exorbitant prices, readers might object, rogue publishers would be emboldened, and the whole system could collapse.

Spoo’s discussion of the impact of the informal trade courtesy system on prices invites further investigation into its normative implications. The conjecture that publishers did not charge excess prices for fear of reprisal from consumers and rogue printers is an important part of the story. But the entire discussion of price only hints at the bigger question: was the trade courtesy system on balance welfare-enhancing for consumers of information? It may not have been. Foreign authors understandably loathed their lack of U.S. copyrights, but this absence was unlikely to have decreased their incentives to create, given the robustness of European markets. And U.S. readers did not need higher-end publishers to issue pricey editions of foreign books, given the profusion of rogue publishers willing to crank out cheap books at low prices. It is not clear whether this system arose to remedy a market failure at all (at least from readers’ perspectives), or whether it was just a collusive arrangement that allowed publishers to extract value that would otherwise have been unavailable to them. It thus remains worth asking whether the public—which is, after all, meant to be the primary beneficiary of U.S. copyright law—was made better off by the trade courtesy system.

Like all good historical analysis, Spoo’s work also helps to shed light on modern problems. He notes that courtesy paratexts persisted in attenuated form into the twentieth century, but neglects to mention that they persist, alive and well, on the internet. Writers of fan fiction, for example, typically preface their works with disclaimers that, among other things, disclaim any intent to infringe the author’s copyright, indicate their lack of any copyright ownership in the underlying material, and warn the reader of possibly offensive content. These are courtesy paratexts in modern guise, though the direction of dialogue is different: While the nineteenth-century authors beseeched readers to purchase only authorized editions despite the absence of any legal compulsion to do so, modern-day fan fiction paratexts are written by readers imploring authors not to take issue with the unauthorized use, despite the authors’ (possible) legal entitlement to do so.

While I could continue extolling the virtues of Spoo’s article, I will conclude instead by saying that beyond its substantive merits, this article was a joy to read. Let’s be honest: Many—perhaps most—law review articles are like kale. They’re good for you, but consuming them can be a chore. Spoo’s sparkling account of courtesy paratexts, and of the system of informal norms that governed the publication of foreign novels in the late 1800s generally, is as elegantly written as it is meticulously researched and academically significant. Courtesy paratexts provide yet another reminder of the varied ways that closely knit groups create extralegal systems of regulation to govern creative production when they find law unavailable or impractical.

Cite as: David Fagundes, Courtesy Without Copyright, JOTWELL (July 20, 2017) (reviewing Robert Spoo, Courtesy Paratexts: Informal Publishing Norms and the Copyright Vacuum in Nineteenth-Century America, 69 Stan. L. Rev. 637 (2017)), https://ip.jotwell.com/courtesy-without-copyright/.

Sometimes, Information Wants to be Expensive

Jonathan M. Barnett, Three Quasi-Fallacies in the Conventional Understanding of Intellectual Property, 12 J.L. Econ. & Pol’y 1 (2016), available at SSRN.

In the last twenty years or so, numerous IP scholars have questioned – indeed, many have vigorously doubted – the need for robust intellectual property rights to generate significant incentives for innovative and creative activity. In an incisive analysis, Jonathan Barnett offers an important riposte to what he views as the now “conventional wisdom” that society would be better off with much-diminished IP rights.

To do so, he dispels what he considers three key assumptions – so-called quasi-fallacies – underlying the case for IP minimalism. Instead of simply resorting to a conclusory “property-rights-are-valuable” mode of critique, Barnett develops his compelling, economically grounded arguments using a variety of original case studies, data, and theoretical insights. And rather than attacking hypothetical straw men, he directly confronts the empirical claims of the minimalist camp.

First, Barnett rebuts the common claim that, in the absence of IP, the market will generally price goods and services at a competitive cost (usually assumed to be marginal cost, but no higher than average cost). As Barnett properly contends, because markets often reorganize in the absence of IP to use other mechanisms to inflate price well beyond the competitive equilibrium, IP rights may actually reduce deadweight losses and lower entry barriers relative to alternatives. This is especially so in disaggregated industries, such as biotech and consumer electronics, that rely upon IP rights to reduce transaction costs in contracting.

Barnett’s quintessential example is Linux, which is often trumpeted in IP scholarship as a paradigmatic example of innovation without IP. Linux is nominally free (at least many versions of it) and thus priced at marginal cost. Indeed, given its “open source” and “free” nature, many scholars argue that strong IP rights (patent and copyright) reduce social welfare by thwarting the development of Linux and similarly “free” and “open” technologies.

However, as Barnett ably explains, Linux is largely funded and developed by for-profit entities in order to sell other products and services at arguably supracompetitive prices. Specifically, because Linux is free, it diminishes direct competition from firms that earn revenue solely from producing and selling operating systems. This diminished competition, in turn, very likely results in supracompetitive prices for complementary products and services. Although the presence of robust IP rights may make Linux itself more costly, these rights may actually spur competition from startups and other entrants in the operating system software market, improving overall prices for the entire package of products and services needed by consumers. Linux is not an isolated example, as many instances of putatively “free” innovation are financially driven, including by IP rights.

This recognition leads to Barnett’s second “quasi-fallacy”: that a substantial amount of innovation can occur in the absence of IP rights or at least some market substitute for IP rights, like complementary assets (e.g., marketing muscle, capital reserves, and network effects), noncompete agreements, and trade secrecy. Certainly, there is a non-trivial amount of self-motivated, “altruistic” innovation and creativity, as well as other non-market-based innovative activity that is spurred without barriers to entry, and Barnett does not deny as much. And, within this sphere, IP rights are often pernicious. Rather, Barnett’s legitimate concern is over the extent of these non-market phenomena, especially as they relate to the commercialization of intellectual goods. For example, although it is now relatively straightforward to record a song or code a mobile phone app in one’s bedroom, IP rights can play an essential role in the production, marketing, and distribution of innovative and creative works, from songs to apps to vaccines. Barnett again provides a series of case studies and leverages a variety of data that – while certainly not definitive – highlights the need for more systematic research on the extent of innovation that has been driven without IP rights (or some market substitute for them).

Barnett’s last “quasi-fallacy” of the academic literature is the view that IP rights tend to aid large company incumbents. Here I disagree with Barnett that this quasi-fallacy is the current wisdom, as most scholars now recognize that large companies – especially high-tech companies like Google and Facebook – tend to prefer weaker IP rights (pharma being an exception). Perhaps a way to re-characterize Barnett’s quasi-fallacy is the common view among academics that “IP rights tend to hurt startups and small companies.” Add to this a related quasi-fallacy that “non-practicing entities (NPEs),” which tend to be small firms that primarily assert and license patents, “are bad for innovation.”

As to the former claim, although the IP system could help startups more, there are many tangible benefits of IP rights to startups, especially in terms of garnering investment. Whether these benefits outweigh the costs is an open question, but there is no solid evidence that, on balance, IP rights hurt startups or small companies.

As to the latter claim, similarly, there are no sound studies showing that NPEs are a major problem for innovation. Although there are certainly some “bottom feeder” NPEs that assert frivolous claims inside and outside of litigation, there is no reliable evidence showing that they account for more than a small share of NPE activity and related costs. Indeed, controlling for a variety of patent- and district-related characteristics, the most reliable studies show that litigated NPE patents appear to be very similar in quality to litigated operating company patents.

Why Barnett’s article is worth reading for IP minimalists and maximalists alike is illustrated well by how it helped me to explain a puzzle that I like to present in my IP seminars. In class discussion, I typically ask whether “using Google’s search engine is free.” Most students answer “yes,” and the rest usually look at me as if I were asking an incredibly stupid question. I typically reply, “If Google is free, how is the company worth over $500 billion?”

The students respond, “Advertisers pay Google, not users.” I assert, “If advertisers pay Google, then in many cases, it seems these companies are charging well over the competitive price for their goods and services anytime a user clicks on a paid ‘ad’ link and makes a purchase. For those users, Google isn’t free.”

“In fact,” I note, “in many cases, the advertiser appears in the first (paid) ad link and immediately below it in the first generic (unpaid) link. For instance, type ‘Nike’ into Google. ‘Shop Nike.com’ appears first as an ad link followed by ‘Nike, Inc.’ as a generic link” (see below).

I conclude, “If Nike appears first in the generic links, which is clearly visible when a user performs a search, why does Nike pay Google to appear first in the ad links?”

By applying Barnett’s analysis, we can explain this conundrum and – more importantly – its economic implications. Specifically, Nike, like many other advertisers on Google, purchases paid links for at least three reasons. First, by crowding out other search results near the top of the list, Nike can divert more users to its website. Second, Google presumably privileges its advertisers in its generic search results—even if only indirectly, via the increased traffic generated when users click on paid ad links. Last, paid links can help advertisers identify the consumers who click them and provide those consumers with customized online experiences (including customized pricing).

All of these reasons point in one direction—that consumers purchasing a product from Google’s advertisers (whether through a paid or unpaid link) are likely to pay more than the competitive price. And the usual efficiency rationale for advertising – lowering consumer search costs – applies with little force on a search engine that is designed to perform that very function, especially when the paid and unpaid links are first and second in a search. Because Google has massive market power – which is cemented by its “free” price – it seems pretty likely that its ads are priced well over the competitive level, too. And this same market power – especially in the presence of weak intellectual property rights – arguably reduces the incentives society affords for innovation, resulting in a search engine that has seemingly become staid over the years. (In this regard, by “weak intellectual property rights,” I refer to patent and copyright, which emerged as regulatory tools to combat the power of the monopolistic medieval guilds that relied heavily on trade secrecy, much like Google.)

In sum, Google, by relying on its “free” price and massive complementary assets, probably reduces competition not only for search but also for online advertising, especially in the face of weak IP rights that – as in the Linux example – may very well make its search service nominally cheaper on the front end, but much more expensive on the back end.

The moral of this story is simple and returns us to Barnett’s salient theme—even in the absence of traditional intellectual property rights (patent, copyright, and trademark), markets will often dynamically reorganize to squeeze as much profit out of their innovative products and services as possible, all the while diminishing innovation incentives. Whether this reorganization is better or worse for society than a world of robust IP rights is a thorny empirical question, but one that should be at the forefront of IP scholarship.

Cite as: Ted Sichelman, Sometimes, Information Wants to be Expensive, JOTWELL (June 26, 2017) (reviewing Jonathan M. Barnett, Three Quasi-Fallacies in the Conventional Understanding of Intellectual Property, 12 J.L. Econ. & Pol’y 1 (2016), available at SSRN), https://ip.jotwell.com/sometimes-information-wants-to-be-expensive/.