The Journal of Things We Like (Lots)

Sometimes, Information Wants to be Expensive

Jonathan M. Barnett, Three Quasi-Fallacies in the Conventional Understanding of Intellectual Property, 12 J.L. Econ. & Pol’y 1 (2016), available at SSRN.

In the last twenty years or so, numerous IP scholars have questioned – indeed, many have vigorously doubted – the need for robust intellectual property rights in order to generate significant incentives to spur innovative and creative activity. In an incisive analysis, Jonathan Barnett offers an important riposte to what he views as the now “conventional wisdom” that society would be better off with much-diminished IP rights.

To do so, he dispels what he considers three key assumptions – so-called quasi-fallacies – underlying the wisdom of IP minimalism. Instead of simply resorting to a conclusory “property-rights-are-valuable” mode of critique, Barnett develops his compelling, economically grounded arguments using a variety of original case studies, data, and theoretical insights. Instead of attacking hypothetical strawmen, Barnett directly confronts the empirical claims of the minimalist camp.

First, Barnett rebuts the common claim that, in the absence of IP, the market will generally price goods and services at a competitive cost (usually assumed to be marginal cost, but no higher than average cost). As Barnett properly contends, because markets often reorganize in the absence of IP to use other mechanisms to inflate price well beyond the competitive equilibrium, IP rights may actually reduce deadweight losses and lower entry barriers relative to alternatives. This is especially so in disaggregated industries, such as biotech and consumer electronics, that rely upon IP rights to reduce transaction costs in contracting.

Barnett’s quintessential example is Linux, which is often trumpeted in IP scholarship as a paradigmatic example of innovation without IP. Linux is nominally free (at least many versions of it) and thus priced at marginal cost. Indeed, given its “open source” and “free” nature, many scholars argue that strong IP rights (patent and copyright) reduce social welfare by thwarting the development of Linux and similarly “free” and “open” technologies.

However, as Barnett ably explains, Linux is largely funded and developed by for-profit entities in order to sell other products and services at arguably supracompetitive prices. Specifically, because Linux is free, it diminishes direct competition from firms that earn revenue solely from producing and selling operating systems. This diminished competition, in turn, very likely results in supracompetitive prices for complementary products and services. Although the presence of robust IP rights may make Linux itself more costly, these rights may actually spur competition from startups and other entrants in the operating system software market, improving overall prices for the entire package of products and services needed by consumers. Linux is not an isolated example, as many instances of putatively “free” innovation are financially driven, including by IP rights.

This recognition leads to Barnett’s second “quasi-fallacy”: that a substantial amount of innovation can occur in the absence of IP rights or at least some market substitute for them, such as complementary assets (e.g., marketing muscle, capital reserves, and network effects), noncompete agreements, and trade secrecy. Certainly, there is a non-trivial amount of self-motivated, “altruistic” innovation and creativity, as well as other non-market-based innovative activity that occurs without barriers to entry, and Barnett does not doubt as much. And, within this sphere, IP rights are often pernicious. Rather, Barnett’s legitimate concern is over the extent of these non-market phenomena, especially as they relate to the commercialization of intellectual goods. For example, although it is now relatively straightforward to record a song or code a mobile phone app in one’s bedroom, IP rights can play an essential role in the production, marketing, and distribution of innovative and creative works from songs to apps to vaccines. Barnett again provides a series of case studies and leverages a variety of data that – while certainly not definitive – highlight the need for more systematic research on the extent of innovation that has been driven without IP rights (or some market substitute for them).

Barnett’s last “quasi-fallacy” of the academic literature is the view that IP rights tend to aid large company incumbents. Here I disagree with Barnett that this quasi-fallacy is the current wisdom, as most scholars now recognize that large companies, especially in high tech (like Google and Facebook), tend to prefer weaker IP rights (pharma being an exception). Perhaps a way to re-characterize Barnett’s quasi-fallacy is the common view among academics that “IP rights tend to hurt startups and small companies.” Add to this a related quasi-fallacy: that “non-practicing entities (NPEs),” which tend to be small firms that primarily assert and license patents, “are bad for innovation.”

As to the former claim, although the IP system could help startups more, there are many tangible benefits of IP rights to startups, especially in terms of garnering investment. Whether these benefits outweigh the costs is an open question, but there is no solid evidence that, on balance, IP rights hurt startups or small companies.
As to the latter claim, similarly, there are no sound studies showing that NPEs are a major problem for innovation. Although there are certainly some “bottom feeder” NPEs that assert frivolous claims inside and outside of litigation, there is no reliable evidence showing that they account for more than a small share of NPE activity and related costs. Indeed, controlling for a variety of patent- and district-related characteristics, the most reliable studies show that litigated NPE patents appear to be very similar in quality to litigated operating company patents.

Why Barnett’s article is worth reading for IP minimalists and maximalists alike is illustrated well by how it helped me explain a puzzle that I like to present in my IP seminars. In class discussion, I typically ask whether “using Google’s search engine is free.” Most students answer “yes,” and the rest usually look at me as if I were asking an incredibly stupid question. I typically reply, “If Google is free, how is the company worth over $500 billion?”

The students respond, “advertisers pay Google, not users.” I assert, “If advertisers pay Google, then in many cases, it seems these companies are charging well over the competitive price for their goods and services anytime a user clicks on a paid ‘ad’ link and makes a purchase. For those users, Google isn’t free.”

“In fact,” I note, “in many cases, the advertiser appears in the first (paid) ad link and immediately below it in the first generic (unpaid) link. For instance, type ‘Nike’ into Google. ‘Shop’ appears first as an ad link, followed by ‘Nike, Inc.’ as a generic link.”

I conclude, “If Nike appears first in the generic links, which is clearly visible when a user performs a search, why does Nike pay Google to appear first in the ad links?”

By applying Barnett’s analysis, we can explain this conundrum, and – more importantly – its economic implications. Specifically, Nike, like many other advertisers on Google, purchases paid links for at least three reasons. First, by crowding out other search results from the top of the list, Nike can divert more users to its website. Second, Google presumably privileges its advertisers in its generic search results—even if indirectly, via the increased traffic generated when users click on Google’s paid ad links. Last, paid links can help identify the consumers who click on them, allowing advertisers to provide customized online experiences (including pricing) to those consumers.

All of these reasons point in one direction—that consumers purchasing a product from Google’s advertisers (whether through a paid or unpaid link) are likely to pay more than the competitive price. And the usual efficiency rationale for advertising – lowering consumer search costs – applies with little force on a search engine that is designed to perform that very function, especially when the paid and unpaid links are first and second in a search. Because Google has massive market power – which is cemented by its “free” price – it seems quite likely that its ads are priced well over the competitive level, too. And this same market power – especially in the presence of weak intellectual property rights – arguably reduces the incentives society affords for innovation, resulting in a search engine that has seemingly become staid over the years. (In this regard, by “weak intellectual property rights,” I refer to patent and copyright, which emerged as regulatory tools to combat the power of the monopolistic medieval guilds that relied heavily on trade secrecy, much like Google.)

In sum, Google, by relying on its “free” price and massive complementary assets, probably reduces competition not only for search but also for online advertising, especially in the face of weak IP rights that – as with Linux – may very well make its search service nominally cheaper on the front end, but much more expensive on the back end.

The moral of this story is simple and returns us to Barnett’s salient theme—even in the absence of traditional intellectual property rights (patent, copyright, and trademark), markets will often dynamically reorganize to squeeze as much profit out of their innovative products and services as possible, all the while diminishing innovation incentives. Whether this reorganization is better or worse for society than a world of robust IP rights is a thorny empirical question, but one that should be at the forefront of IP scholarship.

Cite as: Ted Sichelman, Sometimes, Information Wants to be Expensive, JOTWELL (June 26, 2017) (reviewing Jonathan M. Barnett, Three Quasi-Fallacies in the Conventional Understanding of Intellectual Property, 12 J.L. Econ. & Pol’y 1 (2016), available at SSRN).

Unevenly Cooked: Raw Materials and Fair Use

Andrew Gilden, Raw Materials and the Creative Process, 104 Geo. L.J. 355 (2016).

Over the past decade or two, intellectual property scholars have learned to pay attention to the rhetoric that people use when arguing and advocating. In particular, many scholars have challenged the use of property rhetoric about “owning” and “stealing” by those seeking expanded IP protection. According to these scholars, this rhetoric has the potential to pump inappropriate moral intuitions and to distort the utilitarian analysis that is supposed to guide IP policymaking.

Andrew Gilden’s recent article shares this interest in the language of IP, but he trains his sights instead on the rhetoric used by those seeking to limit the scope of copyright protection and to expand fair use. Although Gilden is, I think, sympathetic with their project, he demonstrates how one metaphor—whether the plaintiff’s work was used as “raw material” in the defendant’s work—can have pernicious effects on the kinds of artists that are insulated by fair use law.

Gilden begins by establishing the descriptive analytic claim that courts and scholars have adopted a “raw-material framework” for determining whether the defendant’s use of the plaintiff’s work constitutes fair use in copyright law. Gilden also discusses the raw material metaphor in right-of-publicity cases, but I will focus on the copyright cases. According to the logic of the metaphor, the defendant’s work should be protected by fair use when he uses the plaintiff’s work as merely raw material for his own creativity. For example, Patrick Cariou’s photographs of Jamaican men are figured as merely raw materials in Richard Prince’s Canal Zone collages, no different than paint or charcoal.

Gilden traces the raw materials metaphor to Judge Pierre Leval’s Harvard Law Review article on transformative use, and he tracks its implementation by a range of different courts and scholars. They use the metaphor to “emphasize the costs that copyright imposes on future cultural participants” and to “underscore that creative works are not just goods: they are resources.” (P. 364.) In doing so, the raw materials framework seems to have liberalized the fair use doctrine by excusing defendants even when they were not targeting plaintiffs’ works with criticism or commentary.

Although Gilden seems to support the general direction that the raw materials framework takes us, he argues that this rhetoric has troubling normative implications. According to Gilden’s analysis of the cases, whether the defendant is using the plaintiff’s work as raw material seems to depend largely on the status of the parties. Jeff Koons and Richard Prince can successfully employ the metaphor in their defense, but the street artist Thierry Guetta (a.k.a. Mr. Brainwash) was found liable for copyright infringement for manipulating photographs of the Sex Pistols’ Sid Vicious. According to the court, the defendant’s artworks “remain at their core pictures of Sid Vicious.”

Conversely, high-status plaintiffs and artworks incorporating images of high-status people seem better able to defeat raw materials claims than do those of lower status. The works of J. K. Rowling, the creators of Seinfeld, and the photographer of the above-mentioned Sex Pistols’ bassist were not treated as raw materials, while works of lower-status artists and the bodies of women and racial minorities can be appropriated as raw materials in others’ works. In Prince, for example, the court highlighted the vast differences in market prices for the two artists’ works, and it described the Jamaican man in one of the plaintiff’s photographs as “a human being in his natural habitat.”

The raw materials framework does more than reward high-status artists at the expense of low-status artists, according to Gilden. It also obscures an important aspect of the creative process. The raw materials approach validates subsequent works that either criticize those that come before or entirely sideline prior works’ creative cultural value. As Gilden argues, however, the latter is rarely, if ever, the case, even for works by Koons and Prince. Despite these artists’ assertions during litigation, their choices of which works to appropriate were based on the expressive and social significance of those works.

Gilden’s solution to the distortions caused by the raw materials framework is not to abandon it entirely, but rather to supplement its application to fair use law with a richer understanding of the defendant’s creative process. Instead of focusing on the social value that emerges from defendants’ activities (something that is likely to be skewed by the factors discussed above), courts should “give greater attention to the subjective experience of creation.” (P. 398.) Thus, the first fair use factor should favor the defendant whenever she “uses the work…as a bona fide aspect” of her creative process. (P. 396.) Doing so, Gilden suggests, will generate a less biased and more realistic account of the nature of sequential creativity.

Gilden’s article recognizes an important aspect of contemporary fair use rhetoric, and he demonstrates its distributive effects without insisting that those effects will always arise. Finally, Gilden proposes a solution to this difficulty, and he explores the solution’s limitations. He has produced a thoughtful article that I like lots.

Cite as: Christopher J. Buccafusco, Unevenly Cooked: Raw Materials and Fair Use, JOTWELL (May 26, 2017) (reviewing Andrew Gilden, Raw Materials and the Creative Process, 104 Geo. L.J. 355 (2016)).

Do Patents Work?

Heidi L. Williams, How Do Patents Affect Research Investments?, Ann. Rev. Econ. (forthcoming), available at NBER.

As everyone who has taken a patent law course knows, the reason we have patents is to increase private incentives for knowledge production. But do patents actually work? Based on her review of the existing evidence, MIT economist (and MacArthur “genius”) Heidi Williams isn’t sure; she concludes that “we still have essentially no credible empirical evidence on the seemingly simple question of whether stronger patent rights—either longer patent terms or broader patent rights—encourage research investments.”

This bottom line will not be a surprise to those who have followed the empirical literature, but Williams’s careful identification and modeling of the relevant empirical parameters and her discussion of the most relevant evidence on each point make her review a must-read for anyone interested in patent policy.

To understand how patents affect research investments, Williams argues that we must understand (1) the effect of patents’ disclosure function; (2) the ex ante effect of strengthening patent rights; and (3) the ex post effect of patents on existing technologies. As an illustration, consider some concrete invention—say, a method of treating cancer with a protein-based drug.

Williams’s parameters translate into three questions: First, do patent law’s disclosure requirements either make patenting less attractive or increase investment by making more information available to stimulate new research ideas? Second, does the prospect of a patent stimulate investment in finding this cancer treatment method, and how does this incentive vary as the patent right is made stronger or weaker (through changes to term or breadth)? Third, how is this incentive affected by patents on existing technologies, such as on the protein or the gene encoding it? (And how do the answers to these three questions change if our example innovation switches to a new battery or Wi-Fi protocol?)

There is relatively little evidence on the disclosure function of patents. Survey results indicate that firms sometimes choose not to patent because of the disclosure requirement and that at least some researchers do find useful technical information in the patent literature. Perhaps most interestingly, a couple of papers have looked at a U.S. policy change that caused applications filed after November 2000 to be automatically published after eighteen months unless the applicant was filing in only the United States and opted out of publication; they found that only fifteen percent of applicants who could opt out actually did (suggesting that there may be private benefits to disclosure) and that this earlier disclosure leads to earlier licensing of biomedical patents. But Williams is not aware of any work that has attempted to quantify the effect of disclosure on research investments.

On the second question, the ex ante incentive effect of stronger patent rights, there is again survey evidence, though it is useful primarily for indicating that the pharmaceutical and chemical industries value patent protection much more than other industries. To empirically estimate the relationship between patent strength and research investments, some researchers have looked at the impact of national patent law changes and found little effect. But one would expect such studies to understate patents’ impact: increasing protection in a small economy will not noticeably increase innovation in that economy if domestic firms were already innovating for the global market.

More promisingly, in work with Eric Budish and Ben Roin, Williams has taken advantage of variation in effective patent term for cancer treatments (given variations in the length of clinical trials) to find an R&D investment distortion away from projects with shorter effective patent protection, although they are not able to isolate the effect of patents compared with other factors (such as the increased cost of longer clinical trials). And even if longer patents are shown to increase research investments for therapeutics, this effect may not be generalizable to other industries.

Researchers have made the most progress on the third parameter—the impact of existing patents on follow-on technologies—for which one might expect either a positive effect (due to efficient coordination of follow-on research by the initial rightsholder) or a negative effect (due to transaction costs in licensing agreements).

In the biomedical context, evidence from survey results and some clever instrumental-variables studies of patent applications on human genes and patents invalidated in court suggests that upstream patents have little effect (positive or negative) on the quantity of downstream innovation. But invalidation of patents in fields such as computing and electronics appears to increase the number of innovators subsequently citing that patent.

The bottom line is that despite the vast number of empirical patent studies—Williams notes the 3000 citations to a foundational patent-citation paper—very few studies have convincingly tackled the causal link between patent policy and research investments.

One of the main problems has been the push toward centralization and global harmonization, since empirical progress depends on policy variation, whether across time, across jurisdictions, or across similar technologies. In the absence of greater real-world variation (natural or intentional), laboratory experiments can play an important role, and there is a growing body of qualitative work on patent incentives (although drawing inferences from qualitative work can be challenging).

I hope Williams and other empiricists will also continue to discover sources of effective variation, as with the study of cancer drugs needing different clinical trial lengths. Although it may be depressing for patent law scholars to realize how far we remain from answering the foundational question of our field, Williams’s review is an important reminder of what parameters are needed, what the most promising approaches so far have been, and how much more still needs to be done.

Cite as: Lisa Larrimore Ouellette, Do Patents Work?, JOTWELL (May 10, 2017) (reviewing Heidi L. Williams, How Do Patents Affect Research Investments?, Ann. Rev. Econ. (forthcoming), available at NBER).

Is It Time to Examine the Concept of Originality in Musical Works?

Emma Steel, Original Sin: Reconciling Originality in Copyright with Music as an Evolutionary Art Form, 37 Eur. Intell. Prop. Rev. 66 (2015).

Copyright often makes little sense, particularly when you explain it to people who are not familiar with its concepts. Jessica Litman expresses this problem well in her book Digital Copyright by stating that people “find it very hard to believe that there’s really a law out there that says the stuff the copyright law says.” Anyone who has had to talk to members of the public about copyright will have similar experiences.

One area of copyright that has been receiving quite a lot of coverage recently is originality of music, especially in various high-profile cases in which famous artists have been sued for copyright infringement. Perhaps the most visible is the recent case of Williams v. Bridgeport Music, Inc., in which the estate of Marvin Gaye sued Robin Thicke and Pharrell Williams for copyright infringement in the hit song “Blurred Lines,” alleging that the music was too similar to that of Gaye’s famous “Got to Give It Up.” The jury decided in favour of Gaye, and the estate was awarded $7.4 million in damages. That ruling is on appeal at the time of writing, and it has proven controversial amongst copyright experts and music industry insiders. Some have argued that the ruling could have a negative effect on musicians trying to explore music from previous eras, incorporating sounds and styles from famous artists. Similarly, over 200 musicians have supported the ongoing appeal, arguing that the decision could have a chilling effect on creativity.

In Original Sin: Reconciling Originality in Copyright with Music as an Evolutionary Art Form, Emma Steel does not address the case of Williams v. Bridgeport Music as such, but she explores the question of originality in music in an interesting and noteworthy manner that is relevant to that litigation. Steel first describes the evolution of music’s component elements, paying special attention to the evolution of rhythm and melody as the basic building blocks of musical creations. Rhythm takes the form of tempo, metre, and rhythmic pattern. These provide a repetition of timing that tends to be common in various styles and genres. For example, the 4/4 metre is the most popular timing, while 3/4 is found in waltzes and country music. Melody, on the other hand, is where most of the originality in music is manifest, and it is “the relationship between musical tones of various pitch and duration.” Steel comments that in Western musical traditions melodies tend to be repetitive in nature and shared across music genres.

This is important because when one breaks down music to its most basic components, it becomes clear that originality is more limited than might be supposed. Steel comments that “the use of prescribed scales, keys and structures to fashion melodic lines gave rise to a listener’s dependency on Western tonality in order to make auditory sense of the sounds.” The result is a set of rhythms and melodies that can be often found across several songs in a genre or time, as audiences become used to specific combinations that are in fashion at the time. Steel argues that cultural experiences affect both the creative process and the consumption of music, and universal themes emerge during certain periods.

Given the extensive similarity of musical composition across an era, the originality requirement in copyright law becomes difficult to satisfy in musical works. Copyright protects the expression of an idea, not the idea itself. The problem is that many common elements in musical creation could be considered ideas if they are widely shared across compositions of a similar genre. To the untrained ear, all music of one genre sounds much the same, so it becomes the task of the trier of fact to draw the line where a work passes from using the ideas of a genre into infringing the expression of those ideas. Blurred lines, if you will. But drawing these lines is not the only challenge, because judges and juries must also determine whether the copying has been substantial, and therefore worthy of being declared copyright infringement.

Steel analyses several famous music cases to try to discern whether the application of the originality test was properly applied in music. Of particular interest is Francis Day & Hunter v. Bron, where the Court of Appeal of England and Wales ruled against infringement as to two compositions that appeared to share some similarity. The similarity alone was not enough to warrant infringement because the court decided that there also had to be a causal connection between the works – in other words, the defendant had to have been familiar with the claimant’s work. The evidence did not prove this, so the court dismissed the appeal.

Another interesting case is Larrikin v. EMI Songs Australia, which involved the famous song “Down Under” by the Australian band Men at Work. The rights holders of a folk song called “Kookaburra Sits in the Old Gum Tree” sued the owners of “Down Under” for copyright infringement. The judge presiding over this case found for the plaintiffs because the two songs shared objective similarity, the composers admitted that they were familiar with the “Kookaburra” song, and there was substantial copying taking place. The “Down Under” decision was met with a level of public scepticism similar to that shown with the “Blurred Lines” case. Steel comments that some music experts even argue that “Kookaburra” is itself a copy of an older Welsh folk song.

Steel concludes that the current test for originality in music does not recognize the often derivative nature of musical creation. It also fails to accept that some basic musical elements tend to be shared across genres and times. She proposes that the examination of originality should take into account “commonality in musical works” and advocates for a much wider range of unprotected musical ideas.

This is an excellent article that proposes a new exploration of how courts approach originality in music. When so many people disagree with the state of the law, there is a good chance that current standards do not reflect reality. This article offers an interesting new take on an old standard. A remix, if you will.

Cite as: Andres Guadamuz, Is It Time to Examine the Concept of Originality in Musical Works?, JOTWELL (March 29, 2017) (reviewing Emma Steel, Original Sin: Reconciling Originality in Copyright with Music as an Evolutionary Art Form, 37 Eur. Intell. Prop. Rev. 66 (2015)).

Are Intellectual Property Retaliations Against Violators of WTO Agreements Ineffective?

Sarah R. Wasserman Rajec, The Intellectual Property Hostage in Trade Retaliation, 76 Md. L. Rev. 169 (2016), available at SSRN.

Twenty-some years ago, there was much speculation about how well the World Trade Organization (WTO) dispute resolution process would work, and in particular, whether developed countries would be more likely to comply with their WTO obligations in respect of developing nations because the latter would have the right, subject to approval by the relevant WTO Dispute Settlement Body (DSB), to retaliate against violations of WTO obligations by suspending enforcement of intellectual property rights (IPRs) affecting the violator’s industries.

A central premise of creating the right to retaliate against IPRs was that developed countries’ interests in ensuring respect for their nationals’ IPRs would create a more powerful inducement to treaty compliance than the opportunity to retaliate only against similar types of goods (e.g., bananas or cotton).

So here we are in 2016. After more than two decades of experience with dispute settlements under the WTO agreements, there is a tale to be told about IPR cross-retaliation, and Rajec tells that tale very well. The WTO agreements established a dispute resolution procedure under which nations can formally complain about another nation’s claimed violations to a DSB that will then adjudicate the dispute. If the complaint has merit, the DSB will consider what remedial measures the complainant should be able to take against the violator if it does not respond by coming into compliance. Rajec reports that in a substantial majority of cases, nations decide to comply with their treaty obligations once the DSB has ruled that a violation has occurred, although in about nine percent of cases, violators have remained “unabashed[ly]” noncompliant.

In three cases, the DSB has approved a complainant’s request to retaliate against a violator by suspending IPR enforcement. Yet in none of these cases has the complainant nation followed through by actually going forward with the IPR suspension. Rajec sets out to understand why, and what lessons might be learned from what happened.

One case involved Antigua and the United States over the latter’s online gambling ban, which harmed Antigua’s industry. The DSB found that the U.S. was in violation of WTO treaty obligations by imposing this ban. The U.S. refused to change its law to comply with the treaty. Recognizing that it was infeasible for Antigua to retaliate effectively in the same sector as that in which the U.S. violation had taken place, the DSB approved cross-retaliation against U.S. film and music industries through a suspension of Antigua’s obligation to enforce U.S. copyrights.

Despite the DSB approval of this measure, Antigua has not actually suspended copyright enforcement. Rajec offers some possible explanations for the non-suspension, including some concern that Antigua might have about possible U.S. retaliation against the Antiguan tourism industry. She notes that it would also be costly for Antigua to set up a website to allow users to get access to infringing copies of U.S. films and sound recordings. Another consideration is that the DSB did not permit unlimited infringements to occur, but only at a level of $21 million annually, which might be difficult for an infringement-enabling site to calibrate. (Quite recently, however, Antigua announced it is planning to suspend copyright enforcement, so the drama continues.)

A second dispute was between Brazil and the United States over U.S. subsidies to its cotton industry. The DSB found the U.S. in violation of its treaty obligations because of these subsidies. It approved a Brazilian proposal to retaliate against the U.S. in part by suspending enforcement of U.S. pharmaceutical patents. Because Brazil has a domestic pharmaceutical industry and a large market for pharmaceutical products, a suspension of these IPRs posed a credible threat to U.S. industry interests. While the U.S. chose to remain noncompliant with its treaty obligations, it offered a financial settlement that, in effect, provided subsidies to Brazilian cotton producers. Rajec points out that this type of resolution is consistent with the theory that DSB remedy rulings should aim to provide compensation for violations rather than only to induce compliance with treaty obligations, as some commentators have assumed.

A third case involved a dispute between Ecuador and the European Union over the latter’s tariffs on bananas. DSB arbitrators approved a $201 million annual suspension of EU nationals’ rights in sound recordings, industrial designs, and geographical indications as retaliation for the EU’s violation of WTO agreements. Because the EU negotiated a settlement providing for tariff reductions over time, Ecuador did not go forward with retaliating against EU nationals’ IPRs.

The Ecuadorian and the Brazilian cases exemplify IPR cross-retaliation as a useful mechanism for encouraging nations to find a way to settle their differences, even if the settlement does not result in bringing the violator into full compliance with treaty obligations.

Much of Rajec’s article is devoted to a theoretical exposition about the purpose(s) of the WTO remedial scheme. Some commentators endorse compliance-inducement as the proper purpose of the remedies scheme. Others think the WTO regime works as well as it does because nations can opt out of compliance on occasions when their idiosyncratic national interests make noncompliance a better option, so long as they are willing to provide some compensation when their violations harm other nations’ industries. Rajec’s treatment of these issues is nuanced and well-developed.

As a pragmatist, it struck me that the two purposes Rajec discusses may be more compatible than they might initially seem: The dispute settlement process may aim mainly to encourage compliance with treaty obligations, but it also provides a mechanism for inducing violators to provide some compensation for harms caused to other nations’ industries, albeit obliquely, when violators choose to remain noncompliant.

Rajec’s main thesis is that the ability to engage in IPR cross-retaliation is not as effective in inducing compliance with WTO treaty obligations as some have posited. This is partly due to inherent structural imbalances among nations. Small developing nations (e.g., Antigua) often lack the bargaining power and infrastructure to make the threat of IPR retaliation seem powerful enough to change a large developed nation’s practices and willingness to violate treaty norms.

In addition, IPR retaliation is not as simple to achieve as tariff-based retaliations are. Tariff retaliations are comparatively easy: the government need only establish the higher tariff rate and enforce it. To retaliate as to IPRs, by contrast, the complaining nation must motivate other actors (e.g., domestic pharmaceutical manufacturers) to take advantage of an IPR suspension. Because that suspension may not last long, an investment in authorized infringement may be undermined once the authorization to infringe ceases, as the WTO scheme assumes will happen.

Having followed the debate over IPR retaliations in the 1990s and wondered what happened, I was very pleased to have Rajec’s report on its conditional success.

Cite as: Pamela Samuelson, Are Intellectual Property Retaliations Against Violators of WTO Agreements Ineffective?, JOTWELL (March 2, 2017) (reviewing Sarah R. Wasserman Rajec, The Intellectual Property Hostage in Trade Retaliation, 76 Md. L. Rev. 169 (2016), available at SSRN).

Whittling Away at Trademark Law’s Notions of Harm

In recent decades, numerous scholars have challenged trademark law’s various conceptions of harm. Unlike copyright and patent law, trademark law positions itself as a harm-avoidance regime, rather than a mechanism for capturing economic rents. At least under the dominant theoretical model, the law seeks to promote competition by ensuring the accuracy and reliability of source-indicating symbols in markets. In practice, however, the harm narrative often breaks down under scrutiny. Recent articles have taken issue with the assorted harms that trademark law purports to prevent. From dilution by blurring to “irrelevant” confusion, critics have argued that at least some of the injuries targeted by trademark law are illusory.

In What Can Harm the Reputation of a Trademark?, Michael Handler adds to this literature with a critical look at dilution by tarnishment. Tarnishment, defined in the Lanham Act as “association arising from the similarity between a mark or trade name and a famous mark that harms the reputation of the famous mark,” explicitly addresses itself to harm. On its face, it requires not only proof of some association between the famous mark and the diluting one, but a demonstrable risk that the challenged use is likely to harm the famous mark’s reputation. Yet courts have suggested (and some have held) that they will presume such a risk when marks resembling famous ones appear on unsavory products. Tarnishment, in other words, assumes that creating a mental association between a famous mark and some distasteful product can sully the trademark’s reputation, even when consumers realize that there’s no relationship between the two parties. Handler questions that presumption. In particular, he “quer[ies] whether this form of dilution – to the extent it encompasses conduct beyond the boundaries of the traditional, confusion-based, trademark infringement action – is, in fact, a ‘harm’ of which the law should take cognizance.”

After a readable, informative, and insightful journey through history, theory, and doctrine, Handler answers his query with a confident “no.” The harms presumed from tarnishment, he concludes, have no more basis in experience or reason than those of its counterpart, blurring. At the end of the day, he sees tarnishment, like blurring, as an excuse to regulate “the morality of trade behavior.”

Handler makes his case in three steps. First, he demonstrates that the roots of dilution law in the U.S. and Europe were more equivocal than commonly believed, and provided no clear mandate for protection against tarnishment. For U.S. readers, the historical account is especially interesting in explaining the history of Benelux trademark law and the way that it shaped European Community-wide trademark doctrine.

Second, he deconstructs the notion of “reputation” and the law’s assumption that it correlates positively with dilution’s concept of fame. This inquiry has both normative and doctrinal implications. Normatively, it raises questions about whether the law should support producers’ attempts to curate the meaning associated with marks, rather than recognizing marks as complex informational vessels to which producers, consumers, commentators, and others all contribute. Doctrinally, it suggests that, by treating all famous marks as venerable and avoiding inquiry into their actual reputations, courts are missing the ultimate question in tarnishment cases – “whether the defendant’s conduct causes an association likely to damage [that] reputation.”

Finally, Handler turns to the question at the heart of his article: even assuming that the law seeks to protect producers’ curated brand identity, do non-confusing uses on unsavory products in fact “harm” the mark’s reputation? After reviewing case law and literature, Handler finds scant support for the notion that tasteless but non-confusing uses have any impact on famous brands’ identity, and even less for the idea that any such impact could have economic consequences. Especially given the threat that tarnishment claims can pose for parody and other forms of speech, Handler views the specter of harm-avoidance as an inadequate justification for the doctrine.

So why do we have tarnishment law, if not to avoid harm? Like critics of blurring and some forms of confusion, Handler suggests that tarnishment is “ultimately more about enforcing moral standards than regulating economic behavior.” And he saves a critical look at that justification – and its implications – for another day. “For now,” he writes, “it is enough to note that there are real dangers in maintaining a normatively hollow cause of action” for tarnishment, “given that it is not at all clear that, in the absence of confusion,” third-party uses of trademarks can cause any real harm.

Handler’s article is a welcome addition to the growing body of scholarship that questions trademark law’s narrative of harm. Although his punchline may not be surprising for readers familiar with that literature, Handler’s treatment of the topic is careful, thoughtful, and rigorous, and offers an historical and comparative context that I found informative and interesting.

Cite as: Stacey L. Dogan, Whittling Away at Trademark Law’s Notions of Harm, JOTWELL (January 30, 2017) (reviewing Michael Handler, What Can Harm the Reputation of a Trademark? A Critical Re-Evaluation of Dilution by Tarnishment, 106 Trademark Rep. 639 (2016)).

Copyright Law’s Origin Stories

Shyamkrishna Balganesh, The Questionable Origins of the Copyright Infringement Analysis, 68 Stan. L. Rev. 791 (2016).

One particularly engaging genre of legal scholarship is the deep historical dive into an appellate opinion that has become a classic in a field. In volumes such as Torts Stories, Contracts Stories, and Intellectual Property Stories, scholars resurrect the history leading to landmark cases: the cast of characters involved in the dispute, the lower court wrangling that led to the more famous appeal, the aftermath of the case, and the lasting impact of the court’s opinion.

While we must constantly remind ourselves that each case we analyze or teach involves real individuals with real disputes that affected real lives, there is a certain fictional quality to these stories precisely because the judicial opinion is the lead character. Judicial opinions can never be more than an abstract, a description of events that then becomes the accepted narrative. Paul Robert Cohen’s expletive-bearing jacket was expression serving an “emotive function,” according to the Court, not an “absurd and immature antic,” as the dissent would have it, and that made all the difference. Opinions have authors, and authors are necessarily engaged in a project of crafting narratives with a result in mind.

Yet knowing more about how an opinion came to be does give us a richer understanding of its context and, perhaps, some guidance on how to interpret the opinion going forward. This is the project that Shyamkrishna Balganesh undertakes in his compelling and entertaining article The Questionable Origins of the Copyright Infringement Analysis.

The article tells the story of the landmark copyright case Arnstein v. Porter, 154 F.2d 464 (2d Cir. 1946). Ira Arnstein, a frequent plaintiff in copyright infringement cases, proceeding pro se, brought suit against Cole Porter, alleging infringement of several musical compositions. Porter, in his deposition, denied having had any access to the plaintiff’s works. The district court, relying only on the parties’ depositions, concluded that the plaintiff’s theory of copying was appropriately characterized as “fantastic” and so granted summary judgment for Porter. (P. 803.) On appeal, the U.S. Court of Appeals for the Second Circuit held that because copyright infringement is a question of fact for the jury, the lower court should have given Arnstein the opportunity to present evidence and cross-examine witnesses at trial. Summary judgment based on the judge’s own view of the facts was inappropriate.

Under Arnstein, determining whether a defendant infringed a plaintiff’s protected work requires resolution of two questions: (1) did the defendant copy from the plaintiff rather than independently create the work (since one must copy to infringe), and (2) did the defendant copy a legally relevant amount of what was covered by the plaintiff’s copyright? The first of these inquiries is predominantly a question of fact, although, like all factual questions, it may be legally shown by circumstantial evidence. (For example, a plaintiff can show that the available evidence — the defendant’s access to the work and the degree of similarity between the two works — would reasonably lead to the conclusion that the defendant did not independently create the work.) The second of these inquiries, usually phrased as whether the two works are “substantially similar,” should be viewed as predominantly a question of law, since it asks whether the defendant’s copying is legally significant. For example, it may be the case that the defendant copied from the plaintiff, but if all that was copied were elements of the plaintiff’s work that were drawn from the public domain, no legally cognizable harm under copyright law exists.

Despite the fact that the second question asks whether the copying is legally relevant, Arnstein holds — and subsequent courts have generally agreed — that this question is also one of fact, to be given to the jury, without the benefit of expert opinion. The rationale, ostensibly, is that because the harm caused by the defendant is presumed to be one of economic substitution, whether the two works are similar enough such that one is substantially the equivalent of the other should be determined via a lay audience’s perspective.

Balganesh’s project is to find out why the Arnstein court assigned this essentially legal question to the jury, and to do so, he looks back at how the opinion developed. Balganesh tells the story elegantly and deftly, taking the reader first through the majority and dissenting opinions; then providing descriptions of the philosophies and personalities of each of the judges involved (Jerome Frank, Learned Hand, and Charles Clark); then moving to a narrative, using judicial correspondence and memoranda, of how the judges’ philosophies and personalities led to the opinions in Arnstein; and concluding by considering what this exegesis should tell us about the continuing influence of the case. Along the way, Balganesh demonstrates an admirable diligence in interrogating the historical record. For example, in recounting the nature of a memorandum that Frank distributed to his co-panelists outlining a series of cases supporting his view, Balganesh goes back to each of those authorities and notes that they do not provide the level of support that Frank suggests in his memo. (P. 843.)

The author of the majority opinion, Jerome Frank, was “an outspoken and acerbic critic of the jury system” who believed that it was inappropriate to allow juries to determine the rule of law. (P. 797.) Nevertheless, as a legal realist, Frank also believed that legal results were indeterminate and factfinding inherently subjective. Given the choice between having a district court judge rule on an issue as a matter of law (and possibly shading his or her factual conclusions to do so) and a lay jury deciding questions as a matter of fact, Frank preferred to send such decisions to the jury. Balganesh contends that Arnstein therefore became less about the copyright issues in the case and more about the opportunity the case presented Frank to put his fact skepticism into practice.

Thus, Frank emphasized that to his ears — and those of his secretary, “who improvise[d] music” (P. 832) — Arnstein’s and Porter’s compositions could be found to be similar enough. Charles Clark, in dissent, bolstered his conclusion that no copying had occurred, and that there was no need for a trial, by citing the opinion of his friend, “a Yale University organist,” who had played each of the pieces for him. (Id.) (It was no coincidence that Clark was a drafter of the Federal Rules of Civil Procedure and so had a personal investment in the procedure for summary judgment.)

This dispute proved Frank’s point: the issue needed to be resolved by a jury, without reliance on expert opinion, and so the case was remanded. (As Balganesh notes, the district court’s opinion focused only on copying, not on substantial similarity, but that seems not to have mattered to Frank’s agenda.) Frank did not waver from this view even when the jury, on remand, reached the same result as had the district court, finding in favor of Porter. (P. 841-42.) Thus, Balganesh suggests persuasively, what could have been a narrow opinion focusing only on the decision of the district court as to copying became a more sweeping opinion on the nature of decisionmaking regarding copyright infringement more generally, motivated by Frank’s desire to leave as much of the decisionmaking as possible to the jury.

Subsequent courts have generally accepted Arnstein’s structure without question (although it has been the target of considerable scholarly critique), holding that both the question of copying and the question of substantial similarity are questions of fact, the latter to be decided by the jury without the benefit of expert advice. As some modern cases have demonstrated, however, this is a difficult and highly unpredictable approach. Whether the defendant copied from the plaintiff or created independently is a factual question that juries can more easily decide; whether one work holds the legal status of “infringing” is a question rife with interpretative and analytical complexities more properly considered questions of law. Indeed, as Frank suggested in an earlier work with respect to legal rights and duties generally, one might say that copyright infringement cannot even exist prior to a judicial ruling; infringement, as a legal status, comes into being only when a court declares it thus.

Having revealed that Arnstein was driven more by procedure than by copyright doctrine, Balganesh concludes by suggesting that this new understanding of Arnstein’s origin story can free courts from its influence, giving way to new approaches to infringement cases. He proposes one such approach as an example: that the infringement analysis start with the legal question of “whether there is sufficient similarity between the two works at issue to render the claim cognizable for copyright infringement purposes,” which, he argues, will result in more highly developed copyright jurisprudence than the current jury-focused system. (P. 859.)

But the judge-made nature of much of copyright law means, of course, that many courts have always been free to do this. Arnstein’s holding was derived from, but is no longer dependent on, its ostensible factual history. If it now turns out that Ira Arnstein wasn’t the troubled individual the district court believed him to be, or Cole Porter really did copy Arnstein’s work when he composed “Night and Day,” that matters not to the authority of the Second Circuit’s opinion or the nature of its influence on other courts. The revelation that Frank was motivated not by what was best for copyright law but rather by his legal realist views on factfinding more generally doesn’t itself free courts to challenge Arnstein; rather, it starkly reveals the influence of the appellate narrative as truth.

None of this, of course, is to diminish one bit the significant contributions of Balganesh’s wonderfully engaging article. Rather, it is simply a reminder that appellate opinions are often a narrative means to an analytical end, even when they are not as strategically authored as Balganesh has shown Arnstein v. Porter to be. We should be prepared to deploy our own fact skepticism as we interpret them.

Cite as: Laura A. Heymann, Copyright Law’s Origin Stories, JOTWELL (December 16, 2016) (reviewing Shyamkrishna Balganesh, The Questionable Origins of the Copyright Infringement Analysis, 68 Stan. L. Rev. 791 (2016)).

Registration and its Discontents

Rebecca Tushnet, Registering Disagreement: Registration in Modern American Trademark Law, 130 Harv. L. Rev. (forthcoming), available at SSRN.

Much work has been done on the theoretical foundations of trademark law generally, but very little on trademark registration specifically (at least in the U.S.). The reason is that, for most of the last fifty years, courts have been telling us that, with a few exceptions, registration really doesn’t matter. Courts evaluate the validity of an unregistered mark under essentially the same standards they apply to registered marks, and they use the same likelihood-of-confusion analysis to determine infringement.

But it turns out to be hard to maintain a rule that registration means nothing when the Lanham Act clearly was intended to create some substantive rights that did not previously exist. It’s also difficult to ignore the elaborate regulatory apparatus the PTO has constructed to evaluate applications to register – one that includes detailed rules about the format in which a mark is claimed and the goods and services are described, and that provides for administrative proceedings to oppose or cancel registrations. Why would any of that exist, and why would companies spend so much time and money dealing with registration, if it were meaningless?

So, not surprisingly, registration does sometimes matter to courts – indeed, in its recent B&B Hardware decision, the Supreme Court described it as significant. But how is it significant, and when? As Rebecca Tushnet wonderfully demonstrates in her terrific new article Registering Disagreement: Registration in Modern American Trademark Law, there is no consistent answer to that question, because trademark law has no theory of registration.

We should start with a little background here, because registration’s insignificance was not inevitable. It is, rather, the result of changes courts have made to trademark and unfair competition law over the last half century. Once upon a time, registration was clearly understood to be procedural. Only certain signs (“technical trademarks”) were federally registrable, and only federally registrable marks could be enforced in federal court. But registrations did not create trademark rights – those were always understood to arise through use, and registration simply acknowledged those rights and provided a federal enforcement mechanism. Parties that lacked a federal registration might have had a cause of action for unfair competition, but that was a common law claim (not federal) with additional proof requirements.

After Congress passed the Lanham Act, however, courts began assimilating unfair competition law into trademark law by equating registered and unregistered marks. They did so (without any clear statutory authority) because they wanted to create a federal cause of action for unregistered marks – which is to say that they denied registration’s significance in cases in which the question was whether unregistered marks should have some lower status than registered marks. Put simply, courts assimilated unfair competition law into trademark law by leveling up – by giving to unregistered marks essentially the same status as registered marks. And they did so without giving much consideration to the policy reasons behind the particular requirements of registration.

The project was enormously successful – so much so that, with the exception of priority and the geographic scope of rights, registration is largely irrelevant to civil litigation. Parties spend significant resources crafting trademark applications and resolving disputes over registrability even though the resulting register has almost nothing to say about the scope of any party’s rights. Even incontestability, probably the most controversial concept when the Lanham Act was drafted, doesn’t often make much difference, simply shifting the focus of a case from validity (where incontestability can be dispositive) to infringement (where it makes virtually no difference).

And yet, the question of the significance of registration and the registration process keeps popping up, precisely because it’s hard to believe registration is actually so inconsequential. The problem, as Tushnet persuasively argues, is that registration seems increasingly a conceptual misfit; trademark law’s modern emphasis on confusion over everything else makes it that much harder to attribute distinctive importance to registration. Because courts tend to think of trademark law nearly exclusively as a tool for eliminating consumer confusion, they conceive of its ends in empirical terms, focusing on the contexts in which consumers actually encounter marks. But registration isn’t about consumer understanding. The registration system is motivated primarily by nation-building and market regulation goals that fit uneasily with an empirical, consumer-confusion orientation. As Tushnet argues, “even if trademark is a matter of consumer protection, trademark registration is a matter of industrial policy, contributing to a national unified market by providing incentives for registration, including nationwide priority over other users.” (P. 10.)  

The consequence has been deep disagreement over the role of registration, and a remarkable lack of clarity. For example, decades of post-Lanham Act case law hadn’t worked out the extent to which decisions of the PTO and its administrative tribunal the Trademark Trial and Appeal Board (TTAB) have preclusive effect in civil litigation. While the standards used in both settings often look (at least superficially) the same, those standards are applied in very different ways. Thus, when the Supreme Court took up the question in B&B Hardware, it insisted that registration is indeed significant and that TTAB decisions can have preclusive effect. But it then limited the practical significance of that conclusion, acknowledging that preclusion will not apply in “some or even many cases” because the TTAB so often approaches questions differently than do courts. That result seems plausible enough, given the differences between the approaches in the TTAB and courts. But left entirely unanswered is why the approaches differ so much. If courts recognized the primary business regulation function of registration, they might well put more emphasis on registration status and the content of registrations despite registration’s disconnect from consumer understanding.

Perhaps even more remarkably, it remains unclear some seventy years after passage of the Lanham Act what effect refusal or cancellation of a registration has on the protectability of a mark. On the one hand, unfair competition doctrine once existed precisely to offer some protection for unregistrable subject matter. Thus, historical practice suggests that bars on registration have nothing to say about the availability of claims based on unregistered rights. On the other hand, when courts assimilated unfair competition into trademark law, they eviscerated the important differences between trademark infringement and unfair competition more generally, so it’s hard to see anymore why unregistered marks should get (mostly) the same treatment as registered marks but not be subject to the same limitations. On that reasoning, in its recent In re Tam decision, the Federal Circuit suggested that marks that are unregistrable under section 2(a) likely are unprotectable.

This state of affairs seems intractable, and Tushnet candidly acknowledges that there are good arguments both for making registration more substantive, such that the system does more to “manage relationships between businesses regardless of consumer confusion or nonconfusion,” and for making registration purely procedural. (P. 5.) Indeed, she admits that “none of the tensions in current law can be entirely resolved to favor only one side.” (Id.)

But that doesn’t stop Tushnet from offering a number of ideas about how a system that took registration more seriously might look. Her discussion here is characteristically thoughtful and nuanced. One of the things I appreciate most about the paper is the way Tushnet demonstrates that taking registration more seriously would not necessarily mean broader protection; indeed, it might provide some new tools for limiting the scope of trademark rights – a conclusion many readers might find counterintuitive.

In the end, consideration of any of the possible solutions requires frank discussion of political choices that mostly have been excluded from conversation about trademark law. How do we rank priorities in nation-building (or global economy-building) against concerns about consumer interests? Surely the answer isn’t that we pretend that those interests are the same. But resolution of that issue is critical to thinking about the importance of registration in at least some contexts. In my view, this paper is the beginning of a sustained inquiry into the role of registration in various settings, and the paper is going to be cited heavily. Not because it has all the answers, but because it asks all of the right questions. It’s an excellent piece of work.

Cite as: Mark McKenna, Registration and its Discontents, JOTWELL (November 16, 2016) (reviewing Rebecca Tushnet, Registering Disagreement: Registration in Modern American Trademark Law, 130 Harv. L. Rev. (forthcoming), available at SSRN).

Patenting the Social: A Non-Economic Take on Alice

Laura R. Ford, Patenting the Social: Alice, Abstraction, & Functionalism in Software Patent Claims, 14 Cardozo Pub. L. Pol’y & Ethics J. 259 (2016).

“Where does technology stop and humanity begin?” This is the weighty opening question in Laura Ford’s recent article Patenting the Social. Ford, a sociologist and lawyer, offers a novel contribution to the debates raging in the courts and law reviews after the Supreme Court’s opinion in Alice v. CLS Bank about what constitutes a patent-ineligible abstract idea and, relatedly, why abstract ideas should be patent-ineligible. She proposes that claims describing novel computer-mediated social relationships and interactions (“the social”) are core examples of claims to abstract ideas, but that claims to novel means of achieving those social ends are not. Ford then draws on sociological concerns and moral theory to defend her interpretation of Alice. She argues that patents that privatize social progress, as opposed to technological progress, are bad policy based on concerns about human flourishing, politics, and culture—i.e., reasons other than the conventional, economically oriented reasons for limits on patentability that focus on innovation incentives.

I found Patenting the Social to be both interesting and timely for two reasons. First, I believe that defining the abstract with reference to the social offers a plausible story for explaining, at least in part, why the Supreme Court reached the conclusion that it did in its Alice opinion and, perhaps more importantly, its earlier opinion in Bilski v. Kappos, on which Alice relies. The Court’s choice not to even attempt to define an abstract idea in these opinions is by now infamous. Whether you personally agree with it as a policy matter or not, this hypothesis that the Court’s discomfort with the privatization of new patterns of contractual commitments—which are nothing but legally enforceable patterns of social obligations—is grounded in part in non-economic reasoning should not be lightly dismissed. Patenting the Social gives voice to this hypothesis more thoroughly than other academics have to date managed to do. Second, I find the notion that privatization of the social is problematic to be an interesting counterpoint to the message of the Supreme Court’s other opinions on patent-ineligibility in Association for Molecular Pathology v. Myriad Genetics and Mayo v. Prometheus. In these biomedical cases, the Court focused on the privatization of the natural as the crux of the problem that limits on patent-eligibility can solve. Under Ford’s interpretation, Bilski and Alice provide an intriguing bookend to Myriad and Mayo: both the social and the natural are off limits.

To illustrate what patents on computer-mediated social relationships and interactions might look like, Patenting the Social offers a deep dive into a number of Facebook patents. It is unclear whether these patents are representative of the bulk of the patents that courts have invalidated in the wake of Alice, but the article’s laser-like focus on them underscores that the concept of an abstract idea does not have a single definition. Rather, it is a poly-nodal, or multiply ambiguous, concept: it means different things when used in reference to different patents.

Patenting the Social offers a line-drawing proposal under which social ends are patent-ineligible but “efficient causal means of achieving social ends” are patent-eligible. This proposal clearly echoes the Court’s line articulated in Alice—namely that improvements in the functioning of a computer itself are patent-eligible—but its intellectual origins are, curiously, entirely different. The proposal draws from sociology’s response to the now-discredited functionalist explanation of social phenomena, rather than the economics of patent scope, to suggest this focus on causal means rather than social ends as the locus of patent-eligible invention.

A number of additional, well-articulated arguments fill out the pages of Patenting the Social. But, rather than plumb any single argument in greater depth, I want to conclude by reiterating my principal point. You may or may not believe that sociological concerns about patenting the social should limit the reach of patent-eligibility. However, given the plausibility of the hypothesis that such concerns did in fact motivate, at least in part, the Supreme Court’s Bilski opinion—the opinion that charted the course on which the doctrine of patent-eligibility finds itself today—I believe that Patenting the Social is a worthwhile read for anyone thinking about what courses can and should be charted for tomorrow.

Cite as: Kevin E. Collins, Patenting the Social: A Non-Economic Take on Alice, JOTWELL (October 18, 2016) (reviewing Laura R. Ford, Patenting the Social: Alice, Abstraction, & Functionalism in Software Patent Claims, 14 Cardozo Pub. L. Pol’y & Ethics J. 259 (2016)).

Speaking from the Grave. Should Copyright Listen?

Eva E. Subotnik, Artistic Control After Death, 92 Wash. L. Rev. (forthcoming 2017), available at SSRN.

Should authors be able to control the use of their work after they die? It’s a question that touches deep personal and public concerns. It resonates with longstanding debates in literary studies over the “death of the author” and “authorial intent,” and is an issue that Professor Eva Subotnik tackles in her latest article, Artistic Control After Death (forthcoming in the Washington Law Review).

Currently, U.S. copyright expires 70 years after the author’s death, so control of an author’s copyrights extends far into the future. Under the law, heirs and literary estates have the power to control a work’s continued use and dissemination long after the author created it, often decades after publication and the work’s integration into artistic or literary culture.

This enduring control may be troublesome for reasons related to the special contours of intellectual property shaped by both private rights of exclusion and public rights of access to culture and knowledge. The longer that exclusive control over works is exercised into the future, the slower the public domain is enriched to promote the constitutional prerogative of “progress of science and the useful arts” through copyright.

This is not just a problem of copyright duration, which has continued to lengthen since its U.S. origins of fourteen years in 1790. It is also a problem for an author’s purposes and hopes for a work, which can shift over a lifetime with changes to cultural production, aesthetics, and business practices, to say nothing of personal predilections and personality.

Many authors (e.g., J.D. Salinger) are deeply attached to their works and exercise particularized and sometimes parent-like control during their lifetimes. Others are more liberal with copying and transformation by fans and other noncommercial or noncompeting uses (e.g., J.K. Rowling and Neil Gaiman).

Upon death, how much should authorial intent and past practices guide the living who now own the copyrighted works? How should authorial intent as opposed to the concerns of the living (be they monetary, reputational, charitable, or cultural concerns) shape the stewardship of inherited copyrighted works? These questions become particularly acute in the context of authors who have left specific instructions and for works that remain unpublished or unfinished.

Professor Subotnik’s article addresses just this debate in the contemporary context of extended copyright terms and copyright heirs’ legal rights and obligations. The article couldn’t be more timely given the concerns raised over the estates of Harper Lee and Prince. Both authors intentionally left work unpublished, work that contemporary audiences would love to read and hear. Harper Lee’s agent and lawyer, who had controlled Lee’s finances for some time, published the work under suspicious circumstances. The book received negative reviews and arguably damaged Harper Lee’s literary reputation, although it certainly enriched her heirs. Many recordings in Prince’s musical estate remain unreleased as his heirs debate what to do. How should copyright law resolve these tensions?

Professor Subotnik’s contribution introduces to the intellectual property literature the rich debate in trusts and estates law concerning “dead hand control” of assets through trust instruments and fiduciary relations. In trusts and estates law, this debate has long raged: how much can grantors of property control its use after their deaths, especially in light of changing circumstances, both societal and personal?

For someone (like me) who knows next to nothing about trusts and estates law, but regularly follows developments in copyright law, the article is informative and provocative. The article does not propose a balancing test, but rather a rule. Professor Subotnik argues that enforceability of post-mortem instructions over copyrighted work should be guided by federal copyright policy, which favors the living, even if that means overriding the author’s wishes. She makes an especially strong case for this rule “where [the] authors seek to bar entire categories of uses of their works and where … enforcement [of post-mortem instructions] is … not needed to protect against the premature destruction of the work by the author.” This rule would arguably support the publication of Harper Lee’s prequel to To Kill a Mockingbird and of Prince’s unreleased recordings, despite the sequestration of both during the authors’ lifetimes.

Subotnik’s article proceeds in three parts. It begins by recounting the diverse ways authors control their work through their heirs, by means of formal mechanisms (residuary gifts, wills, and trusts) and informal mechanisms (leading by example and discussions with the living). This part is full of engaging accounts from various authors’ estates as Professor Subotnik digs into the literary history and details of famous authors.

Animating this first part is a photograph of the will of the Beastie Boys’ Adam Yauch, displaying his handwritten directive that “in no event may my image or name or any music or any artistic property created by me be used for advertising purposes.” A year after Yauch’s death, the remaining Beastie Boys invoked this directive to enjoin the toymaker GoldieBlox from using the 1987 Beastie Boys song “Girls” in an advertisement that celebrated young girls as aspiring engineers. The toymaker altered the lyrics of the song to describe the company’s mission (“Girls to build a spaceship / Girls to code the new app / Girls to grow up knowing / That they can engineer that”).

Surely, this kind of social commentary and transformation of “Girls” is fair use under copyright law (the Supreme Court’s 1994 decision in Campbell v. Acuff-Rose strongly suggests as much). And yet because of Yauch’s will, and the devoted heirs of Yauch’s musical copyrights, the lawsuit persisted until GoldieBlox withdrew the commercial. Is this what someone like Yauch intended? The legal and ethical ambiguity surrounding such instructions, in light of copyright’s purpose of promoting cultural conversations consistent with First Amendment principles, leads us inevitably to a discussion of what to do.

In the second part of the paper, Subotnik discusses the debates within trusts and estates law about the proper extent of dead-hand control. Arguments for control include natural law, incentivizing wealth accumulation, promoting industry and productivity, self-expression and personal satisfaction, and reinforcing essential relationships. These mirror arguments from an author-centered view of copyright, contending that moral rights, personal control, and private incentives are critical to the making and dissemination of creative work.

Arguments in favor of diminishing post-mortem control include changed circumstances, intergenerational equity, imperfect information, negative externalities, and the welfare of the beneficiaries. These arguments reflect concerns in copyright doctrine (and in intellectual property doctrine generally) that dead-hand control not fetter IP’s mission of enriching science and art for the benefit of society and the public domain. Authors’ rights are an intermediary step and can be trumped by the ultimate goal of society-wide human flourishing.

Subotnik concludes by embracing the “benefit the beneficiaries” rule, highlighting the policy of benefiting the living in both trust and intellectual property law. Subotnik persuasively argues that living decision-makers should be given more deference to manage artistic assets. If our concern is whether the work will be read, viewed, or put in contexts in which it can be maximally appreciated – all copyright interests to be sure – “[t]here is no reason to think that a dead author is in a better position to track a work’s success in the marketplace and ensure its place in history than are the living.”

And when an author’s wish is to restrain use of the work – quashing its enjoyment by audiences as well as its financial benefits for heirs – post-mortem instructions are antithetical to copyright’s interests in dissemination and cultural progress. “[Indeed] an author’s stringent controls on access and use of copyrighted materials can sound a death knell for a work,” if not also for the author’s continued literary existence. And so Professor Subotnik puts a thumb on the scale for heirs to override authorial control.

Subotnik provides one caveat, however: when there is a risk that artists will destroy works before their deaths out of fear their heirs won’t shepherd the work as desired, following post-mortem instructions makes better sense. Because society has an interest in preserving work – unpublished or published, personal documents of all sorts (letters and journals) – dead-hand guided preservation, which could last up to 70 years after death, is better than the alternative. Given that the public interest never dies, but authors do, I applaud this conclusion. Eventually the work will enter the public domain, and then dead-hand control ends.

Of course, control by heirs (as opposed to dead authors) is not necessarily in the public interest. But by favoring freer use by the living, Professor Subotnik refers to living successors as well as living audiences and fair users. Indeed, this article ably engages various interdisciplinary dimensions (of literary studies, trusts and estates, and intellectual property) relying on important prior work by Deven Desai, Ray Madoff, and Robert Spoo, whose scholarship grounds these debates in the public interest and social welfare. Subotnik’s contribution enriches these debates and should guide future policy in this area.

Cite as: Jessica Silbey, Speaking from the Grave. Should Copyright Listen?, JOTWELL (September 19, 2016) (reviewing Eva E. Subotnik, Artistic Control After Death, 92 Wash. L. Rev. (forthcoming 2017), available at SSRN).