Are Patent Damages Uncertain?

Michael Mazzeo, Jonathan Hillel, & Samantha Zyontz, Explaining the “Unpredictable”: An Empirical Analysis of Patent Infringement Awards, 35 Int’l Rev. of L. & Econ. 58 (2013).

A recent American Bar Association “Corporate Counsel” seminar styled itself as “The Uncertain Arena: Claims for Damages and Injunctive Relief in the Unpredictable World of IP Litigation.” The seminar began by recounting the seemingly surprising $1 billion-plus damage awards in the patent infringement actions Carnegie Mellon v. Marvell Technology, Apple v. Samsung, and Monsanto v. DuPont. These blockbuster awards stand in stark contrast to the $20 million or less awarded in the typical case.

By and large, in-house counsel have chalked these differences up to the luck of the draw. Such a sentiment is all too common not only among practitioners, but also among policymakers and academics. No less an authority than the eminent IP scholar Mark Lemley has remarked, “Patent damages are unpredictable because the criteria most commonly used are imprecise and difficult to apply.”

Mazzeo, Hillel, and Zyontz make an impressive contribution to the literature by casting substantial doubt on such views. Specifically, in their recent empirical study of district court patent infringement judgments between 1995 and 2008, they show that patent damages can be explained in large part by a fairly small number of patent-, litigant-, and court-related factors.

The authors assembled a set of over 1,300 case outcomes from the PricewaterhouseCoopers database, which they boiled down to 340 judgments in favor of the patentholder for which award details were available. Although this number of judgments may seem low, my own independent work for a study on the duration of patent infringement actions indicates that these counts represent a high percentage of the total number of actions and judgments. Thus, it is unlikely that including the unavailable judgments and awards in the dataset would substantially change the results.

Mazzeo, Hillel, and Zyontz begin their exposition by noting—contrary to the widespread view that patent damages awards are shockingly high—that the median damage award remained fairly constant from 1995 through 2008, at roughly $5 million, a comparatively low figure. The billion-dollar damage awards in Apple v. Samsung and other cases are thus extreme outliers. Indeed, during the time period at issue, only eight awards exceeded $200 million, yet those awards accounted for 47.6% of the collective damages across all cases (other than generic-branded pharmaceutical disputes under the Hatch-Waxman Act). So, outside of a small number of highly publicized, blockbuster cases, damages awards are (perhaps shockingly) low – a fact that flies in the face of current rhetoric about outsized awards in patent cases.

The most impressive aspect of the article is the authors’ empirical models explaining roughly 75% of the variation among damages awards. In particular, they assemble various factors—including the number of patents asserted, the age of the patents, the number of citations to the patents, whether the defendant is publicly traded, and whether a jury or judge assessed damages—and construct a regression model that shows statistically significant relationships between these factors and the amount of damages awarded.

For example, in one model, if the defendant was publicly traded, damages were roughly 1.5 times higher than when the defendant was privately held, controlling for other factors. What is particularly striking is that the outlier awards—namely, those above $200 million—fall squarely within the model’s explanatory power. Thus, rather than being the random results of rogue juries, these large damage awards likely reflect a variety of measurable factors that point in favor of larger awards across the large number of cases confronted by the courts.
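
To make the empirical approach concrete, here is a minimal sketch of the kind of log-linear regression the authors describe, fit to synthetic data; the variable names, coefficients, and data-generating process are illustrative assumptions, not Mazzeo, Hillel, and Zyontz’s actual specification or dataset.

```python
# Illustrative sketch only: synthetic stand-ins for the patent-, litigant-,
# and court-related factors the study describes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 340  # roughly the number of judgments in the authors' dataset

df = pd.DataFrame({
    "num_patents":      rng.integers(1, 6, n),    # patents asserted
    "patent_age":       rng.integers(1, 18, n),   # years since issuance
    "forward_cites":    rng.poisson(12, n),       # citations to the patents
    "public_defendant": rng.integers(0, 2, n),    # 1 = publicly traded defendant
    "jury_trial":       rng.integers(0, 2, n),    # 1 = jury assessed damages
})

# Hypothetical data-generating process for log damages (in $ millions).
df["log_award"] = (
    1.0
    + 0.30 * df["num_patents"]
    + 0.05 * df["forward_cites"]
    + 0.40 * df["public_defendant"]  # exp(0.4) ~ 1.5x, echoing the ~1.5x result
    + 0.25 * df["jury_trial"]
    - 0.02 * df["patent_age"]
    + rng.normal(0, 0.8, n)
)

# Regress log damages on observable case characteristics.
model = smf.ols(
    "log_award ~ num_patents + patent_age + forward_cites"
    " + public_defendant + jury_trial",
    data=df,
).fit()
print(model.summary())

# With a log outcome, a coefficient b on public_defendant implies awards roughly
# exp(b) times higher for publicly traded defendants, other factors held constant;
# high explanatory power of the kind the authors report would show up as a high R-squared.
```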

These findings have important public policy implications. As the authors point out, stable, predictable damage awards are essential for a properly functioning patent system. Otherwise, the careful balance of incentives to patentees to innovate and incentives to third parties either to acquire licenses to patented inventions or invent around would be thwarted.

On the other hand, Mazzeo, Hillel, and Zyontz overreach by concluding that their “findings thus bolster the core tenets of the patent system” that exclusive patent rights are an appropriate means for protecting inventions. Specifically, the authors’ argument that “several of the driving factors correspond to accepted indicators of patent quality” is insufficient to support such an assertion, because these factors—such as forward citations, number of claims, and number of patents—are accepted indicators of a patent’s economic “value,” not a patent’s “quality,” which concerns its validity. (Although there is very likely a relationship between the two notions, no study has resoundingly linked patent value to patent quality.) And typically these value indicators have been derived from studies of patent litigation. Thus, to argue that high damages in litigation justify the patent system on the basis of such metrics is essentially circular. Indeed, as I have argued elsewhere, it is very likely that patent damages as they stand should be reengineered to provide more optimal innovation incentives.

Nonetheless, despite this study’s inability to “bolster the core tenets of the patent system,” its result that damages awards are fairly predictable is a very important contribution to the literature. Moreover, this work provides the starting point for more comprehensive investigations of damages in patent cases, such as the follow-on study the authors recently undertook regarding non-practicing entity (NPE) and non-NPE suits. Additionally, their explanatory approach could be extended to the more basic win/loss determinations on infringement and validity. One cannot ask for much more in any empirical study, and Mazzeo, Hillel, and Zyontz deserve kudos for their exacting labors and notable insights.

 
 

Intermediary Trademark Liability: A Comparative Lens

Graeme B. Dinwoodie, Secondary Liability for Online Trademark Infringement: The International Landscape, 36 Colum. J.L. & Arts (forthcoming 2014), available at SSRN.

Although we live in a global, interconnected world, legal scholarship – even scholarship about the Internet – often focuses on domestic law with little more than a nod to developments in other jurisdictions.  That’s not necessarily a bad thing; after all, theoretically robust or historically thorough works can rarely achieve their goals while surveying the landscape across multiple countries with disparate traditions and laws.  But as a student of U.S. law, I appreciate articles that explain how other legal systems are addressing issues that perplex or divide our scholars and courts.  Given the tumult over intermediary liability in recent years, comparative commentary on that topic has special salience.

In this brief (draft) article, Graeme Dinwoodie explores both structural and substantive differences in how the United States and Europe approach intermediary trademark liability in the Internet context.  To an outsider, the European web of private agreements, Community Directives, CJEU opinions, and sundry domestic laws can appear daunting and sometimes self-contradictory.  Dinwoodie puts them all into context, offering a coherent explanation of the interaction between Community law, member state law, and private ordering, and situating the overall picture within a broad normative framework.  And he contrasts that picture with the one emerging through common law in the United States.  The result is a readable, informative study of two related but distinct approaches to intermediary trademark law.

Dinwoodie begins by framing the core normative question:  how should the law balance trademark holders’ interest in enforcing their marks against society’s interest in “legitimate development of innovative technologies that allow new ways of trading in goods”?  This tension is a familiar one: from Sony through Grokster, from Inwood through eBay, courts and lawmakers have struggled with how to allocate responsibility between intellectual property holders, those who infringe their rights, and those whose behavior, product, or technology plays some role in that infringement.  Dinwoodie identifies the tension but does not resolve it, purporting to have the more modest goal of exposing the differences between the American and European approaches and discussing their relative virtues.  But the article barely conceals Dinwoodie’s preference for rules that give intermediaries at least some of the burden of policing trademark infringement online.

Structurally, there are some significant differences between the European and American approaches.  Whereas courts have shaped the U.S. law primarily through common law development, Europe has a set of Directives that offer guidance to member states in developing intermediary trademark liability rules.  Europe has also experimented with private ordering as a partial solution, with stakeholders recently entering a Memorandum of Understanding (MOU) that addresses the role of brand owners and intermediaries in combating counterfeiting online.  In other words, rather than relying exclusively on judge-made standards of intermediary liability, European policymakers and market actors have crafted rules and norms of intermediary responsibility for trademark enforcement.

Whether as a result of these structural differences or as a byproduct of Europe’s tradition of stronger unfair competition laws, the substantive rules that have emerged in Europe reflect more solicitude for trademark owners than is provided by United States law.  Doctrinally, intermediaries have a superficial advantage in Europe, because the Court of Justice limits direct infringement to those who have used the mark in connection with their own advertising or sales practices.  They also benefit from Europe’s horizontal approach to Internet safe harbors.  Unlike the United States, Europe includes trademark infringement, unfair competition, and other torts in the “notice-and-takedown” system, offering service providers the same kind of immunity for these infractions as they receive under copyright law.  The safe harbor law explicitly provides that intermediaries need not actively root out infringement.

Other features of European law, however, temper the effects of these protections.  Most significantly, Article 11 of the European Enforcement Directive requires member states to ensure that “rights holders are in a position to apply for an injunction against intermediaries whose services are used by third parties to infringe an intellectual property right.”  In other words, even if they fall within the Internet safe harbor (and thus are immune from damages), intermediaries may face an injunction requiring affirmative efforts to reduce infringement on their service.  In Germany, at least, courts have ordered intermediaries to adopt technical measures such as filtering to minimize future infringement.  The threat of such an injunction no doubt played a role in bringing intermediaries to the table in negotiating the MOU, which requires them to take “appropriate, commercially reasonable and technically feasible measures” to reduce counterfeiting online.

This explicit authority to mandate filtering or other proactive enforcement efforts finds no counterpart in U.S. law.  On its face, U.S. contributory infringement law requires specific knowledge of particular acts of infringement before an intermediary has an obligation to act.  And while scholars (including myself) have argued that intermediaries’ efforts to reduce infringement have played an implicit role in case outcomes, the letter of the law requires nothing but a reactive response to notifications of infringement.  Dinwoodie suggests that this “wooden” approach to intermediary liability may miss an opportunity to place enforcement responsibility with the party best suited to enforce.

In the end, while professing neutrality, Dinwoodie clearly sees virtues in the European model.  He applauds the horizontal approach to safe harbors, welcomes the combination of legal standards and private ordering, and praises the flexibility and transparency of Europe’s largely least-cost-avoider model.  Whether the reader agrees with him or prefers the United States’ more technology-protective standard, she will come away with a better understanding of the structure and content of intermediary trademark law in both the United States and Europe.

 
 

Open and Closed Innovation Regimes: An Experiment on Institutional and Organizational Design

Kevin J. Boudreau & Karim R. Lakhani, How Disclosure Policies Impact Search in Open Innovation, Harv. Bus. Sch. Tech. & Operations Mgmt. Working Paper (2013).

One of the central tensions in the institutional design of innovation regimes is the trade-off between incentives and disclosure. Innovation systems, including intellectual property systems, are created to optimize creative output by balancing ex ante incentives for initial creators with ex post disclosure of the innovation to follow-on creators and the public. According to accepted theory, the more rigorous the disclosure—in terms of when and how it occurs—the weaker the incentives. But a fascinating new experiment by Kevin Boudreau and Karim Lakhani suggests that differences in disclosure regimes can affect not just the amount of innovation but also the kind of innovation that takes place.

Boudreau and Lakhani set up a tournament on the TopCoder programming platform that involved solving a complicated algorithmic task over the course of two weeks. All members of the community were invited to participate in the tournament, and contest winners would receive cash prizes (up to $500) and reputational enhancement within the TopCoder community. The coding problem was provided by Harvard Medical School, and solutions were scored according to accuracy and speed. Importantly, the top solutions in the tournament significantly outperformed those produced within the medical school, but that’s a different paper.

Boudreau and Lakhani randomly assigned participants into different conditions based on varying disclosure regimes and tracked their behavior. The three disclosure conditions were:

  • Intermediate Disclosure – Subjects could submit solutions to the contest, and, when they did, the solutions and their scores were immediately available for other subjects in the same condition to view and copy.
  • No Disclosure – Subjects’ solutions to the contest were not disclosed to other subjects until the end of the two-week contest.
  • Mixed – During the first week of the contest, submissions were concealed from other subjects, but, during the last week of the contest, they were open and free to copy.

For the Intermediate and Mixed conditions, subjects were asked to provide attribution to other subjects whose code they copied.

Cash prizes were given out at the end of the first and second weeks based on the top-scoring solutions. For the Intermediate condition, the prizes were split evenly between the subject who had the highest scoring solution and the subject who received the highest degree of attribution.

The subjects were about equally split between professional and student programmers, and they represented a broad range of skill levels. A total of 733 subjects began the task; of them, 124 submitted a total of 654 intermediate and final solutions. The solutions were determined to represent 56 unique combinations of programming techniques.

The authors predicted that mandatory disclosure in the Intermediate condition would reduce incentives to participate because other subjects could free-ride on the solutions of initial inventors. The data are consistent with this hypothesis: Fewer people submitted answers in the Intermediate condition than in the No Disclosure condition, and the average number of submissions and the number of self-reported hours worked were also lower by significant margins. The Mixed condition generally produced results that fell between the other two conditions. Ultimately, scores in the Intermediate condition were better than those in the other conditions because subjects could borrow from high-performing solutions.
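
A minimal sketch of the kind of condition-by-condition comparison described above, computed over a hypothetical log of subjects; the column names and values are assumptions for illustration, not the authors’ data.

```python
# Illustrative sketch: summarizing participation, effort, and scores
# by disclosure condition from a hypothetical subject log.
import pandas as pd

subjects = pd.DataFrame({
    "subject_id":  [1, 2, 3, 4, 5, 6],
    "condition":   ["intermediate", "intermediate", "no_disclosure",
                    "no_disclosure", "mixed", "mixed"],
    "submissions": [2, 0, 5, 4, 3, 1],
    "hours":       [3.0, 0.0, 9.5, 7.0, 5.0, 2.5],
    "best_score":  [91.0, None, 84.0, 86.0, 88.0, 80.0],
})

summary = subjects.groupby("condition").agg(
    participants=("submissions", lambda s: int((s > 0).sum())),
    mean_submissions=("submissions", "mean"),
    mean_hours=("hours", "mean"),
    top_score=("best_score", "max"),
)
print(summary)
# The paper's pattern would appear here as lower participation and effort,
# but a higher top score, in the intermediate-disclosure condition.
```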

More importantly, the data also disclosed differences in how subjects solved the problem. Consistent with the authors’ hypotheses, subjects in the Intermediate condition tried fewer technical approaches and seemed to experiment less than did those in the No Disclosure condition. Once significant improvements were disclosed, other subjects in the Intermediate condition tended to borrow the successful code, leading to a relatively smooth improvement curve. In the No Disclosure condition, by contrast, although new submissions were generally better than those the subjects had submitted before, they were more variable and less consistent in their improvement.

In summary, when subjects can view each other’s code, innovation tends to be more path-dependent and to happen more rapidly and successfully than when there is no disclosure. But when innovation systems are closed, people tend to participate more, and they tend to try a wider variety of solution strategies.

In previous research, these authors have explained how open-access innovation systems succeed in the face of diminished extrinsic incentives. This experiment provides valuable insight into the relative merits of open- and closed-access systems. Open-access systems will, all else equal, have advantages when creators have significant intrinsic incentives and when the innovation problem has one or few optimal solutions.

Closed-access systems, by contrast, will prove comparatively beneficial when the system must provide independent innovation incentives and when the problem involves a wide variety of successful solutions. The experiment’s contribution, then, is not to resolve the debate about open versus closed innovation but rather to help policymakers and organizations predict which kind of system will tend to be most beneficial.

The experiment also suggests helpful ways of thinking about the scope of intellectual property rights in terms of follow-on innovation. For example, strong derivative-works rights in copyright law create a relatively closed innovation system compared to patent law’s regime of blocking patents. If we think of the areas of copyright creativity as exhibiting a large variety of optimal solutions, then the closed-innovation system may help prevent path-dependence and encourage innovation (evidence from the movie industry notwithstanding). Future research could test this hypothesis.

As with any experiment, many questions remain. Boudreau and Lakhani’s incentives manipulation is not as clean as could be hoped, both because payouts in the Intermediate condition are lower and because attribution in the No Disclosure condition is effectively unavailable. Accordingly, it is difficult to make causal arguments about the relationship between the disclosure regime and incentives. In addition, although the Intermediate condition produces lower participation incentives for subjects who expect to be high performing, it creates higher participation incentives for subjects who expect to be low performing because they can simply borrow from high-scoring submissions at the end of the game.

Interestingly, there seems to be surprisingly little borrowing, which could suggest a number of curious features of the experiment: perhaps only high-skill subjects are capable of borrowing, and/or there may be social norms against certain kinds of borrowing even though it is technically allowed. And, as always, there are questions about the representativeness of the sample. Subjects were likely disproportionately men, and many likely had significant open-source experience through which they may have internalized the norms of that community. On the other hand, TopCoder bills itself as “A Place to Compete,” which may have primed competitive behaviors rather than sharing behaviors.

Ultimately, Boudreau and Lakhani have produced an exciting new contribution to intellectual property and innovation research.

 
 

How Copyright Prevents Us From Getting the Books We Want

Paul Heald, The Demand for Out-of-Print Works and Their (Un)Availability in Alternative Markets (2014), available at SSRN.

Back in mid-2013, Paul Heald posted to SSRN a short paper that has already had far more impact on the public debate over copyright policy than academic papers usually do. That paper, How Copyright Makes Books and Music Disappear (and How Secondary Liability Rules Help Resurrect Old Songs), employed a clever methodology to see whether copyright facilitates the continued availability and distribution of books and music. Encouraging the production of new works is, of course, copyright’s principal justification. But some have contended that copyright is also necessary to encourage continued exploitation and maintenance of older works. We find an example in the late Jack Valenti, who, as head of the Motion Picture Association of America, argued before the Senate Judiciary Committee in 1995 that the copyright term needed to be extended in part to provide continued incentives for the exploitation of older works. “A public domain work is an orphan,” Valenti testified. “No one is responsible for its life.” And, of course, if no one is responsible for keeping a creative work alive, it will, Valenti suggested, die.

Is that argument right? Enter Paul Heald. Heald’s 2013 article employs a set of clever methodologies to test whether copyright did, indeed, facilitate the continued availability of creative works—in this case, books and music. With respect to books, Heald constructed a random sample of 2,300 books offered on Amazon, grouped them by the decade in which they were published, and counted them. Here are his findings:

 

[Figure omitted: books in Heald’s Amazon sample, counted by decade of publication. © 2012–2013 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.]

If you hadn’t already seen Heald’s article, the shape of this graph should surprise you.  You would probably expect the number of books available on Amazon to be highest in the most recent decade, 2000–2010, and to decline continuously as one moves to the left in the graph—i.e., further into the past. On average, and all else equal, books are less valuable as they age, so we should expect to see fewer older books on Amazon relative to newer ones.

But that’s not what we see.  Instead, we see a period from roughly 1930 to 1990, where books just seem to disappear. And we see a large number of quite old books on Amazon. There are many from the late-19th century and the first two decades of the 20th century.  Indeed, there are far more new editions from the 1880s on Amazon than from the 1980s.

What on earth is causing this odd pattern? In a word: copyright. All books published before 1923 are out of copyright and in the public domain. And a variety of publishers are engaging in a thriving business of publishing these out-of-copyright works—and so they’re available on Amazon. In contrast, a large fraction of the more recent works—the ones under copyright—simply disappear. Maybe they’ll spring back to life when (or if?) their copyright expires. But for now, copyright doesn’t seem to be doing anything to facilitate the continued availability of these books. In fact, copyright seems to be causing some books to disappear.
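
A minimal sketch of the tabulation Heald describes: group a sample of editions by decade of initial publication and flag the pre-1923 public-domain cutoff. The data here are hypothetical placeholders, not Heald’s Amazon sample.

```python
# Illustrative sketch: counting sampled editions by decade of first publication
# and marking which fall before the (then-applicable) 1923 public-domain line.
import pandas as pd

sample = pd.DataFrame({
    "title":    ["A", "B", "C", "D", "E", "F"],
    "pub_year": [1888, 1915, 1948, 1967, 1995, 2007],
})

sample["decade"] = (sample["pub_year"] // 10) * 10
sample["public_domain"] = sample["pub_year"] < 1923

counts = sample.groupby("decade").agg(
    editions=("title", "count"),
    in_public_domain=("public_domain", "sum"),
)
print(counts)
# In Heald's actual data, counts dip sharply for the in-copyright decades
# (roughly 1930-1990) and are high for the public-domain decades.
```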

Heald does a similar analysis for music, and this analysis too shows that copyright causes music to disappear, relative to music in the public domain. The effect is less pronounced than in the case of books, but it is still there.

In short, Heald’s paper placed a big question mark after the “continued availability” justification for copyright. If we care about works remaining available, then copyright, in fact, seems to be hurting and not helping.

Now Heald is back with a follow-up paper, The Demand for Out-of-Print Works and Their (Un)Availability in Alternative Markets, that takes on the most important question raised by his first: Should we be concerned that copyright appears to make works disappear? If there is no consumer demand for these disappeared works, then possibly not. But if there is consumer demand for the works that copyright kills, then we should care because that demand is not being met.

Heald employs a number of tests to determine whether there is consumer demand for the books that copyright makes disappear. Read the article if you want a full account, but it is worthwhile to give a couple of highlights.  In a particularly nifty part of the paper, Heald compares books available on Amazon with those available on the biggest used books website. The graph is instructive:

 

[Figure omitted: availability of books on Amazon (red curve) versus on the used-book market (blue curve). © 2014 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.]

The gap between the red (Amazon) and blue (used book) curves suggests that used-book sellers are serving a market for many books that copyright has made disappear from new-book shelves, which in turn indicates that there is consumer demand for these books.

Heald then examines other possible ways that the market may provide access to works that copyright has made disappear. For music, Heald looks to see whether copyright owners are digitizing out-of-print records and either selling them on iTunes or posting them on YouTube. The answer, hearteningly, appears to be yes. Unfortunately, the picture for books is much less reassuring. As usual, Heald’s chart speaks more clearly than words:

 

[Figure omitted: availability of popular 1923–32 songs on iTunes versus popular books from the same period offered as eBooks. © 2014 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.]

Look at the number of popular songs from 1923–32 that are on iTunes—almost all of them. But then look at the number of popular books from the same period that are offered as eBooks—less than 40%. Many of these books are not available on Amazon in paper form. Nor are they distributed digitally.

So why the difference between the music and book publishing industries when it comes to the availability of older titles still under copyright? I’ll leave that as a mystery—and I hope your unslaked curiosity will lead you to read Heald’s article. It is well worth your time.

 
 

Copyright as a Chill on Innovation

Michael A. Carrier, Copyright and Innovation: The Untold Story, 2012 Wisc. L. Rev. 891.

Sony’s Betamax was the first consumer copying technology to attract a copyright infringement lawsuit. Little did copyright experts back then realize how much of a harbinger of the future the Betamax would turn out to be. Countless technologies designed, like the Betamax, to enable personal-use copying of in-copyright works have since come to market. Had the Supreme Court outlawed the Betamax, few of these technologies would have seen the light of day.

The most significant pro-innovation decision was the Supreme Court’s Sony Betamax ruling, which created a safe harbor for technologies with substantial non-infringing uses. Entrepreneurs and venture capitalists have relied heavily on this safe harbor as a shield against copyright owner lawsuits. Yet, notwithstanding this safe harbor, copyright owners have had some success in shutting down systems, most notably the peer-to-peer file-sharing platform Napster.

It stands to reason that decisions such as Napster would have some chilling effect on the development of copy-facilitating technologies. But how much of a chilling effect has there been? Some would point to products and services such as SlingBox and Cablevision’s remote DVR feature and say “not much.”

Antitrust and innovation scholar Michael Carrier decided to do some empirical research to investigate whether technological innovation has, in fact, been chilled by decisions such as Napster. He conducted qualitative interviews with 31 CEOs, co-founders and vice presidents of technology firms, venture capitalists (VCs), and recording industry executives. The results of his research are reported in this Wisconsin article, which I like a lot.

One reason I liked the article was because it confirmed my longstanding suspicion that the prospect of extremely large awards of statutory damages does have a chilling effect on the development of some edgy technologies. Because statutory damages can be awarded in any amount between $750 and $150,000 per infringed work and because copy-facilitating technologies can generally be used to interact with millions of works, copyright lawsuits put technology firms at risk for billions and sometimes trillions of dollars in statutory damages. For instance, when Viacom charged YouTube with infringing 160,000 works, it exposed YouTube and its corporate parent Google to up to $24 billion in damages. While a company such as Google has the financial resources to fight this kind of claim, small startups are more likely to fold than to let themselves become distracted by litigation and spend precious VC resources on lawyers.
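
The exposure figures Carrier emphasizes follow from simple arithmetic over the statutory range; here is a back-of-the-envelope sketch using the 160,000 works at issue in Viacom’s complaint.

```python
# Back-of-the-envelope statutory-damages exposure for a secondary-liability claim.
works_at_issue = 160_000                     # works Viacom alleged were infringed
per_work_min, per_work_max = 750, 150_000    # statutory range per infringed work

low_exposure = works_at_issue * per_work_min
high_exposure = works_at_issue * per_work_max

print(f"Minimum exposure: ${low_exposure:,}")   # $120,000,000
print(f"Maximum exposure: ${high_exposure:,}")  # $24,000,000,000
```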

But a better reason to like the article is the fascinating story Carrier and his interviewees tell about the mindset of the record labels about Napster and the technology “wasteland” caused by the Napster decision.

The lesson that record labels should have learned from Napster’s phenomenal (if short-lived) success was that consumers wanted choice—to be able to buy a single song instead of a whole album—and if it was easy and convenient to get what they wanted, they would become customers for a whole new way of doing business. Had the record labels settled with Napster, they would have benefited from the new digital market and earned billions from the centralized peer-to-peer service that Napster wanted to offer.

The labels were used to treating record stores as their customers, not the people who actually buy and play music. Radio play, record clubs, and retail were the focus of the labels’ attention. They thought that the Internet was a fad, or a problem to be eradicated. They were unwilling to allow anyone to create a business on the back of their content. They believed that if they didn’t like a distribution technology, it would go away because they wouldn’t license it. They coveted control above all. When the labels began to venture into the digital music space themselves, they wanted to charge $3.25 a track, which was completely unrealistic.

Some of Carrier’s interviewees thought that the courts had reached the right decision in the Napster case, but questioned the breadth of the injunction, which required 100% effectiveness in filtering out infringing content and not just the use of best efforts, thereby making it impossible to do anything in the digital music space. One interviewee asserted that in the ten years after the Napster decision, iTunes was the only innovation in the digital music marketplace. Many more innovations would have occurred but for the rigidity of the Napster ruling and the risk of personal liability for infringement by tech company executives and VCs.

The role of copyright in promoting innovation was recently highlighted in the Department of Commerce’s Green Paper on “Copyright Policy, Creativity and Innovation in the Digital Economy” (July 2013). It aspires to present a balanced agenda of copyright reform ideas that will promote innovation. It is an encouraging sign that the Green Paper identifies statutory damage risks in secondary liability cases as a policy issue that should be addressed. Reforming statutory damages would not entirely eliminate the risks that copyright would chill innovation, but it would go a long way toward that goal.

 
 

Quantifying the Copyright-Innovation Interference

Joel Waldfogel, Copyright Protection, Technological Change, and the Quality of New Products: Evidence from Recorded Music Since Napster, 55 J.L. & Econ. 715 (2012), available at the University of Minnesota.

The Constitution empowers Congress to promote the useful and the expressive arts, which Congress does through the laws governing patents and copyrights. But promoting one may sometimes retard the other. This happens in the context of new technologies of copying and dissemination, such as the photocopier, the VTR, the MP3 player, and file-sharing networks. Imposing copyright liability on the makers and users of these technologies encourages copyright owners but may discourage innovators. Shielding such makers and users from liability encourages technological innovation but may retard expressive creativity. How should we strike this trade-off, either in general or in particular cases?

This question has long been a major issue in copyright law and scholarship.1 To know what the right policy is, we should have some sense of the degree to which incentives to create content are diminished, if at all, in the face of the new technology. Indeed, much empirical work surrounding the file-sharing litigation has studied the effect file-sharing had on music sales. This body of literature contains diverse views, and the debate on the empirics is sometimes as heated as the one on the theory and policy side.2

Joel Waldfogel’s paper is a recent and valuable contribution to the empirical literature. I like it lots because it takes a new approach to quantifying the digital age’s net effect on incentives to create expressive works. Waldfogel does not believe that quantifying the reduction in copyright owners’ revenue is the most important question. Technological change may reduce the amount that copyright owners can receive for their content, but it may also reduce their cost of producing and disseminating content. If the latter effect is greater than the former, technological change may actually enhance incentives to create. To know whether and what kind of legal intervention is needed, we need to determine which effect is greater. The paper tries to quantify the technology’s net effect on incentives to create by looking at what has happened to the quality of sound recordings produced since the advent of online file-sharing in 1999.

The paper does so by constructing three original measures of the quality of music over time. One is an index of the volume of high-quality music since the 1960s. It is based on critics’ retrospective lists of the best works over various time periods (e.g., Rolling Stone’s 2004 list of the 500 best albums, based on the opinions of 273 critics and expert musicians, or Pitchfork Media’s 200 best albums of the 2000s), and it draws on 88 rankings of songs or albums from Anglophone countries. Two additional indices track the quality of music from different vintages using data on record sales (RIAA gold and platinum certifications awarded between 1958 and 2010) and airplay (during 2004–2008, for songs originally released in previous years). Here, the assumption is that higher-quality music should generate more sales and airplay over time, surviving longer in the market. These two indices evaluate vintages of music by the extent to which records from a particular year continue to sell, or to be played, years later.
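
A simplified sketch of the survival logic behind the certifications and airplay indices: credit each release year (“vintage”) with the usage its songs still attract in a later observation window. The normalization below is an illustrative assumption, not Waldfogel’s exact method, which involves additional adjustments.

```python
# Illustrative sketch: a crude vintage-quality index from airplay-style data.
# Each record is (release_year, plays observed during a later window, e.g. 2004-2008).
from collections import defaultdict

airplay = [
    (1971, 1200), (1971, 800),
    (1985, 950),
    (1999, 700), (1999, 650),
    (2003, 400),
]

plays_by_vintage = defaultdict(int)
for release_year, plays in airplay:
    plays_by_vintage[release_year] += plays

total_plays = sum(plays_by_vintage.values())

# Index: each vintage's share of later-period airplay. Vintages whose songs
# keep getting played "survive" better, the paper's proxy for quality.
for year in sorted(plays_by_vintage):
    share = plays_by_vintage[year] / total_plays
    print(f"{year}: {share:.2%}")
```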

Below are graphs of two of the music quality indices over time (the certifications graph is very similar to the airplay one, both being measures of public enjoyment):

[Figures omitted: plots of the paper’s music quality indices over time. © 2012 by The University of Chicago & Joel Waldfogel. All rights reserved. Reprinted with permission of the University of Chicago and Joel Waldfogel.]

The paper finds that these indices—measures of music quality—are consistent with each other, and that there is no evidence that the quality of music has declined in the years since Napster. The paper’s certifications and airplay indices suggest that music quality has actually increased substantially since 1999 (as the graphs above show). The paper concludes by suggesting that its findings are relevant to policymakers setting the strength of copyright protection.

Several assumptions need to be made before using the paper for policymaking, and I would like to note two. First, one would have to accept the indices as good measures of industry output. One could question, however, whether this is so. Clearly, altering the measure of quality may alter the findings of what happened to that quality over time, and thus may alter the policy implications. For example, the finding that music quality increased significantly post-Napster is borne out by two of the paper’s indices (the airplay and the certification indices) but not by the third (the experts’ index). It would thus be interesting to see whether the finding of non-decreasing music quality is robust to the use of other measures of quality.

But even assuming that the paper’s findings are robust to alternative measures of quality, another issue remains: What policy implications is one to draw from the paper’s findings? Here, the paper provides qualified guidance. One possibility, implicitly suggested at the conclusion of the paper, is that policymakers should not strengthen copyright protections since copyright owners’ reduced revenue (and potentially profit) was not associated with a decline in the quality of the music industry’s output. The paper’s findings, however, cannot support such a conclusion unequivocally. The years just prior to Napster’s advent and since were characterized not only by technological change, but also by legal change. By and large, copyrights have been strengthened over the time period. We cannot know whether incentives would have remained the same but for the legal change. Perhaps enhancing copyright owners’ rights was necessary to keep production levels constant. It is also possible that a lesser, or a greater, invigoration of copyrights would have enhanced music quality. Largely abstracting away from legal change, the paper leaves these possibilities open.

But these two remarks should not take away from Waldfogel’s substantial contribution to the literature. Rather, they are to acknowledge how much he has pushed the ball forward and how much more insight can be gained on the intersection of copyright law and technological innovation if more research is conducted following his approach.



  1. For a few notable treatments of the topic, see Paul Goldstein, Copyright’s Highway: From Gutenberg to the Celestial Jukebox (2d ed. 2003); Jessica Litman, Digital Copyright (2d ed. 2006); William Fisher, Promises to Keep: Technology, Law, and the Future of Entertainment (2004); Jane C. Ginsburg, Copyright and Control Over New Technologies of Dissemination, 101 Colum. L. Rev. 1613 (2001); Tim Wu, Copyright’s Communications Policy, 103 Mich. L. Rev. 278 (2004), available at SSRN. For my own treatment of the topic, see Dotan Oliar, The Copyright-Innovation Tradeoff: Property Rules, Liability Rules, and Intentional Infliction of Harm, 64 Stan. L. Rev. 951 (2012).
  2. There is a growing scholarly empirical literature on the effect of file sharing on the sale of sound recordings. For two summaries of the literature, compare Stan Liebowitz, The Metric is the Message: How Much of the Decline in Sound Recording Sales is due to File-Sharing?, CAPRI Publication 11-02 (2011) (suggesting that file sharing has caused the entire enormous decline in record sales over the past decade), with Felix Oberholzer-Gee & Koleman Strumpf, File Sharing and Copyright, Harvard Bus. Sch. Working Paper 09-132 (2009) (suggesting that file sharing has caused no more than 20% of the decline in music sales).
 
 

Fair Use in Context

Michael C. Donaldson, Refuge From The Storm: A Fair Use Safe Harbor For Non-Fiction Works, 59 J. Copyright Soc’y U.S.A. 477 (2012), available at SSRN.

When is a use of a copyrighted work a fair use? This issue has grown in significance with the increase in the economic value of copyrighted works and in the ways in which users can distribute, rework, or otherwise borrow from copyrighted works. The fair-use inquiry is contextual, formally focusing on the nature and purpose of a use, the creative nature of the work, the amount of the work used, and the effect of the use on the copyright owner’s ability to economically exploit the work. For some, fair use’s attention to context renders it an unreliable ally for the diligent user.

However, a number of commentators, including this one,1 have argued that the multifactor inquiry does not truly lead to “case-by-case” adjudication. Instead, the principles of fair use protect certain identifiable patterns or bundles of uses with soft rules while remaining sufficiently open-textured to balance interests implicated by new or emerging patterns of use. Others have gone further. My colleagues Peter Jaszi and Patricia Aufderheide have worked with creative communities to identify and articulate best practices in fair use in the context of their patterns of use, as described in their recent book Reclaiming Fair Use.

Comes now Michael Donaldson to articulate the soft rule—or in his words the “safe harbor”—that applies when one seeks to make a fair use of a copyrighted work in a new work of non-fiction. Donaldson’s analysis flows not only from his reading of judicial opinions but also from his practice counseling clients and providing formal opinions of counsel that make reliance on fair use an insurable risk for documentary filmmakers, among others.

His article is a worthwhile read for many reasons. On its own terms, the article yields an important and useful insight into the unstated rules of decision that courts use when applying fair use in this context. Donaldson helpfully, and in my view, correctly, identifies the real concerns that animate decision-making. He argues that a fair-use decision-maker is likely to ask the following three questions about the use of a copyrighted work or “asset” in a new work of non-fiction:

  1. Does the asset illustrate or support a point that the creator is trying to make in the new work?
  2. Does the creator of the new work use only as much of the asset as is reasonably appropriate to illustrate or support the point being made?
  3. Is the connection between the point being made and the asset being used to illustrate or support the point clear to the average viewer?

Donaldson argues that when the answer to all three questions is affirmative, the use is within the fair use “safe harbor.” He is careful to also argue that a use may still be a fair use even when the answer to one or more of the questions is “no,” but then the fair-use analysis becomes more context-specific. Additionally, he addresses a number of issues that frequently arise in fair-use decision-making—such as whether the user or copyright owner is acting in good faith or whether the parties discussed a possible license for the use—and argues that these usually serve as distractions. Finally, Donaldson provides an extensive appendix that identifies the cases on which he relies and summarizes how they fare under his three-question test. This is a thoughtful and thought-provoking piece, and scholars and practitioners would do well to engage with Donaldson’s arguments even if they disagree with his particular reformulation of the fair-use inquiry.

This article also is useful to scholars and teachers who seek to better understand the real decision-making process masked by the mechanical jurisprudence that multifactor tests like Section 107 of the Copyright Act or the likelihood-of-confusion test under the Lanham Act sometimes produce (particularly in the latter case). Donaldson presents a model for translating a test that purports to weigh and balance a range of considerations into a more directive inquiry that focuses attention on the facts that really matter in the analysis.

Finally, this article demonstrates how open-ended standards designed to allocate entitlements between private and public interests in intellectual property law can be, and have been, tailored through interpretation to provide a more fine-grained balance than could be readily achieved through legislation. As a result, this article should have appeal both for those interested in the specific application of fair use in the context of non-fiction adaptations and for those who may be inspired to adapt this mode of analysis for other multi-factor legal tests.



  1. See Michael W. Carroll, Fixing Fair Use, 85 N.C. L. Rev. 1087, 1106 (2007) (“There are a range of cases in which the question of fair use recurs. In a few settings, litigation has provided ex ante certainty through the emergence of soft fair use rules.”).
 
 

Trademark As Promise

Jeremy N. Sheff, Marks, Morals, and Markets, 65 Stan. L. Rev. 761 (2013).

The primary theory of trademark law in the academic literature is an economic one. Trademarks are a shorthand, the theory goes, for a number of observable and unobservable qualities of products. The trademark PEPSI, to take one example, is an easy way for a consumer to identify the cola that she enjoys without having to investigate other types of information, such as the location or corporate identity of the manufacturer. Indeed, some types of investigation in this regard—tasting the cola before purchase to confirm that it is the preferred drink—are frowned upon, to say the least. So the law regulates the use of trademarks in order to reduce search costs for consumers and, relatedly, to encourage producer investment in goodwill. When an unauthorized producer uses another’s trademark, the consumer is deceived into purchasing an unwanted product or forced to engage in additional efforts to find the product that she desires, both of which are inefficient. Although economic theory may not map neatly onto all areas into which trademark law extends (dilution law being one such example), it appears to be fairly well accepted in the scholarly literature that economic theory provides the predominant justification for trademark law’s existence.

But consumers obviously do not always act in ways consistent with economic theory. The relationships that some consumers have with some brands transcend a mere economic transaction; they involve identity construction and signaling motivated not by a product’s objective qualities but by intangible, emotional responses to the brand. The fact that some consumers are willing to pay many hundreds of dollars for a designer handbag or watch beyond the price that could be justified by the item’s materials or workmanship is a testament to the limits of economic theory.

This suggests that alternate theories of trademark law are required, and Jeremy Sheff, in his thoughtful and sophisticated article, aims to provide one. Sheff begins by noting that although a deontological framework in the Lockean tradition is typically the intellectual property counterpart to the law-and-economics framework, the Lockean justification cannot tell the whole story in trademark as it might for its adherents in copyright law or patent law. Lockean labor theory, to the extent one favors it, maps best onto intellectual property schemes where the goal is to incentivize the production of intellectual property, which trademark law does not. Indeed, although early trademark doctrine focused, as Mark McKenna has detailed, on the moral notion of unfair competition, modern trademark doctrine is primarily concerned with consumer confusion, which Lockean labor theory, with its focus on harms committed by one producer to another, doesn’t address. Thus, the economic or consequentialist justification can identify both a producer-side wrong (free riding) and a consumer-side wrong (enhanced search costs), but a deontological justification typically relates only to a producer-side wrong (misappropriation of the fruits of one’s labor).

Sheff therefore proposes a Kantian contractualist theory to fill this gap, in which actors are motivated not by consequentialist notions but by a moral imperative to act in a particular manner—as one might characterize it, motivation by “good will,” not by “goodwill.” Sheff notes that under this theory, individuals are treated as ends, not means, which requires an acknowledgement of “the unique capacity of rational beings to choose what ends they will pursue and to settle on actions to achieve those ends” (P. 777) and a commitment not to interfere with those choices by lying or deception. In essence, this is a theory of “trademark as promise.”1

Sheff then turns to an initial application of a contractualist theory to trademark law. Significantly, his theory addresses a hypothetical with which a consequentialist theory has difficulty: What happens when a consumer is misled into buying a product from producer X because of X’s use of Y’s trademark, but X’s and Y’s goods are of identical quality, such that no harm is done? Under a contractualist theory, there is indeed still harm done: To the extent that the use of a trademark is a promise from the producer to the prospective consumer that the producer is who he says he is (and assuming that the consumer is purchasing goods based on this representation), the harm is the very breaking of that promise through deception because it deprives the consumer of autonomy in the marketplace.

To be sure, not every doctrine in trademark law can be explained this cleanly. Later in the article, Sheff turns to post-sale confusion, a more complicated application of his premise, in which the confused consumer is not the point-of-sale purchaser but some later observer, an application that raises the question of whether it is the defendant producer or the status-seeking consumer who is the breaching party. In other words, when Buyer A displays a fake Gucci handbag, thereby deceiving Bystander B into thinking it is genuine, Buyer A may be the morally questionable individual (at least in some circles). But it is the producer of the fake Gucci handbag that is the putative defendant in trademark law—arguably only an accessory to Buyer A’s deception—not Buyer A herself.

The fact that Sheff’s article leaves many questions not fully answered, however, is not a failing but rather a testament to the creativity it sparks, and Sheff assists the reader by highlighting many of these areas for further exploration. What, for example, would a contractualist theory have to say about trademark infringement cases that don’t involve deceptive behavior but instead involve an innocently adopted trademark that turns out to be somewhat similar to the mark of another producer? Does the same notion of a broken promise exist in that instance? Is it a moral violation for a company to change its trademark after a public relations disaster, or for a company to obscure the connections between its brands by using different trademarks for different products? Does a theory of “trademark as promise” limit the extent of any changes a company can make to its ingredients or formulation while maintaining the same brand? Put otherwise, do brand owners also have a claim to autonomy in their choices that is incompatible with a theory of trademark as a promise to a consumer?

Sheff does not purport to set forth an all-encompassing theory, but his proposal is highly compatible with the way we now talk about brands. We are ever more in a world in which consumers engage with many brands as personas. Brands are trusted confidants and comforting companions. They find allegiances with different social groups at different times in their development; they uplift us and betray us. These brands are not simply a way of finding goods in the marketplace; they are also a way of announcing or defining one’s identity, creating relationships with others, signaling wealth, or engaging in any one of a number of expressive functions. Companies respond in kind, by creating advertising or affinity groups that foster this type of engagement, and by aggressively using trademark law as a kind of corporate defamation law, pushing back at uses that offend their view of their brands. If these are our relationships with brands today, then perhaps we should be characterizing their relationships with us as ones of promise, representations, and trust. The difficulty will then be in determining which promises we truly expect brands to keep.



  1. Sheff later offers illustrations of a contractualist theory of markets by examining contract law and the prohibition of insider trading by securities laws, both of which regulate the flow of information among individuals in the market.
 
 

Creative Incentives

Katharina Eckartz, Oliver Kirchkamp, & Daniel Schunk, How Do Incentives Affect Creativity? (CESifo Working Paper Series, Paper No. 4049, 2012), available at SSRN.

The classic justification for intellectual property laws was perhaps stated best by Abraham Lincoln, who, in speaking of the patent system, characterized its function as “adding the fuel of interest to the fire of genius.” Put less poetically, IP aims to encourage creativity by granting creators exclusive property rights in their creations. That way, if a patented invention or copyrighted work turns out to be worth money, the creator will benefit, rather than a copyist.

That sounds entirely sensible in theory. We think that people generally respond to incentives. Make gasoline more expensive by taxing it, and people generally use less of it. Give people a tax break on home mortgages, and they build more and bigger houses. Make creativity a little less risky, and the payoff a bit more certain, and we’ll get more investment in creative labor.

But is creativity really like gas and houses? Can the amount that we get of creativity be raised by providing incentives? The answer is bound to differ a lot depending on what sort of creativity we’re talking about. Poetry and drug discovery are both creative endeavors, but they are more different than alike. It is difficult to imagine piracy leading to poets throwing down their pens. But we may more readily imagine a drug company cutting its research budget if others are permitted to copy the pills the company spends hundreds of millions to develop. In between poetry and pharma there is a lot of territory. Where in this spectrum of creativity do intrinsic incentives to create leave off? And where do we need to start to introduce extrinsic incentives—either through prizes, or IP laws, or other schemes like government funding of basic research? On these questions, we know surprisingly little. Do incentives work? If so, when? And what sort of incentives are best?

A new paper by three European economists sheds some light on those questions. In How Do Incentives Affect Creativity?, Katharina Eckartz and Oliver Kirchkamp of Friedrich-Schiller-University Jena, and Daniel Schunk of the University of Mainz present the results of a carefully designed and admirably creative experiment (hereinafter, the “E-K-S Experiment”).

The authors are interested in whether compensation schemes that pay people more when they perform better provoke creative effort, relative to payment of a flat fee that does not vary based on performance. That’s a narrow question, but, as so often is the case, one which is surprisingly difficult to test.

The E-K-S Experiment employs a word task as its measure of creativity. Participants were presented with a series of alphabetically ordered lettersets consisting of 12 letters—an example would be “accdeeeginst”—and asked to construct as many words as they could, using only the letters in the letterset, in five minutes. Longer words were deemed more valuable than shorter ones, with value rising disproportionately to the number of letters in the word.
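
To make the task concrete, here is a minimal sketch of how a submission might be validated against a letterset and scored; the superlinear length bonus is an assumption standing in for the paper’s “disproportionate” weighting, not the authors’ actual scoring rule.

```python
# Illustrative sketch: check that a word uses only letters available in the
# letterset (with multiplicity), and reward longer words more than proportionally.
from collections import Counter

def is_valid(word, letterset):
    """True if the word can be assembled from the letterset's letters."""
    need, have = Counter(word), Counter(letterset)
    return all(have[ch] >= n for ch, n in need.items())

def score(word):
    """Hypothetical superlinear scoring: longer words are disproportionately valuable."""
    return len(word) ** 2

letterset = "accdeeeginst"  # the example letterset quoted above
for word in ["dance", "scent", "distance", "zebra"]:
    if is_valid(word, letterset):
        print(f"{word}: {score(word)} points")
    else:
        print(f"{word}: cannot be formed from this letterset")
```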

Is assembling words from a letterset really a “creativity task”? The authors contend that the task measures a specific sort of creativity; namely, the ability to combine known elements within a set of pre-determined rules. Of course this is far from the only form of creativity, and it is possible that different sorts of creators might respond to incentives differently. But that’s not really a criticism of the E-K-S Experiment. No one experiment can capture all the facets of the huge range of activities we group as “creativity”. To be a valuable contribution, the E-K-S Experiment need only describe how incentives affect subjects’ performance in a credible model of one sort of creativity; later papers, or other researchers, can then expand the findings to model many different types of creative work.

Back to the design of the E-K-S Experiment. The experiment was run with 216 student subjects. Subjects were first required to pass a German-language pre-test. Subjects were also asked to self-report their interest in participating in a range of creative tasks. This was done to assess subjects’ intrinsic motivation to engage in creativity.

Once these preliminaries were completed, subjects performed the creativity task three times, in “rounds” lasting five minutes. Each time the subject performed the task, the payment scheme shifted. The experimenters tested a flat fee scheme, a pay-for-performance scheme—where the amount of money earned increased along with performance—and a “tournament” scheme, in which subjects were divided into groups of four and the highest-scoring subject in the group earned the lion’s share (but not all) of the prize. The order in which the conditions were presented was varied to make sure that performance did not depend on which condition the subjects completed first (that is, before they either got better at the task or became tired and bored with it). After performing the creativity task under one of the payment conditions, the subjects were asked to perform a “control” task—i.e., a task, such as a number-adding test, that involves work but not creativity—under the same payment condition. In the final stage of the E-K-S Experiment, subjects were asked to pick one of the payment schemes they wished to repeat. This last stage was included to see whether there would be any significant differences among subjects regarding their choice of “favorite” payment scheme.
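
As a rough sketch of how the three compensation conditions differ, the following Python functions model the payment rules. The specific numbers (the flat fee, the piece rate, the group pot, and the winner’s share) are hypothetical assumptions chosen for illustration, not the amounts used in the actual experiment.

```python
def flat_fee(own_score: int, fee: float = 10.0) -> float:
    """Flat payment: the same amount regardless of performance."""
    return fee

def pay_for_performance(own_score: int, rate: float = 0.5) -> float:
    """Linear piece rate: earnings rise with the subject's own score."""
    return rate * own_score

def tournament(group_scores: list[int], pot: float = 40.0,
               winner_share: float = 0.7) -> list[float]:
    """Tournament within a group of four: the top scorer takes the lion's
    share of the pot (but not all of it); the remainder is split evenly
    among the other group members."""
    n = len(group_scores)
    payouts = [pot * (1 - winner_share) / (n - 1)] * n
    payouts[group_scores.index(max(group_scores))] = pot * winner_share
    return payouts

print(flat_fee(25))                 # 10.0, regardless of score
print(pay_for_performance(25))      # 12.5, rising with score
print(tournament([12, 30, 7, 19]))  # [4.0, 28.0, 4.0, 4.0]
```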

So, what were the results? They can be stated very simply: Variation in subjects’ performance was driven almost entirely by subjects’ individual characteristics—i.e., how good they were at the task. The payment incentive schemes had almost no effect on performance in either the creativity or control tasks. Subjects in the flat fee payment condition performed about the same as when they were in the pay-for-performance or tournament schemes. This was true whether subjects’ performance was assessed for complexity of the words they produced, or their originality. Aside from a small, positive effect on word length produced by the pay-for-performance scheme, all effects were too small to be statistically significant.

Subjects’ general indifference to incentive payment schemes carried through, moreover, into the final “self-selection” round. Flat payment schemes—i.e., those not related to performance—were the most popular, with 40.74% of subjects choosing this scheme. (Females favored flat payment more than males, who actually favored pay-for-performance very slightly.) The tournament condition was the least popular, with approximately 27% of subjects choosing it overall, and with women and men choosing tournament at roughly the same rate. This cuts against previous findings that male subjects, supposedly more risk-seeking than their female counterparts, favor tournament settings where most if not all of the rewards go to a winner, leaving the losers with little or nothing. So if the conventional wisdom says that men are from Mars and women from Venus, the results of this experiment suggest that, at least as far as creativity incentive preferences go, the sexes occupy pretty much the same territory right here on Earth.

What did stand out in the E-K-S Experiment was the tendency of higher-performing subjects to sort themselves, in the “self-selection” stage, into either the pay-for-performance or tournament scheme. As a result, average performance was marginally higher in these two conditions than under the flat fee. This was not because the conditions created some differential incentive effect, but because subjects had a relatively accurate sense of their own relative performance (even though the experiment was not designed to give them information about relative performance) and chose performance-based schemes when they were confident of their relative ability. But there is very little normative significance to higher-performing subjects’ self-selection. When these subjects were confined to a flat payment condition, they performed about the same. In short, this subgroup’s preference has little effect on their creative output.

So, what’s the payoff of the E-K-S Experiment for our thinking about IP law? Well, it’s complicated, but here’s a take. You can think about both patent and copyright as analogous to the E-K-S Experiment’s “tournament” payment condition. That is, if you set aside for a moment other compensation options—in particular the prospect of being paid a salary to engage in creative work—an inventor who patents his new machine and a writer who copyrights her screenplay are both entering a market with a few (potentially big) winners and a lot of losers. Put differently, only a few patented inventions and copyrighted works are worth real money. The vast majority are worthless. Most people probably realize this. Does it blunt their willingness to enter tournament markets?

There is a good deal of research into these “tournament” markets, and most of it suggests that people are quite optimistic about their chance of success, and are attracted to the prospect of a big payoff, even if any individual competitor is very unlikely to actually win it. The E-K-S Experiment, however, suggests otherwise. Subjects were no more creative in the tournament market setting. Nor, in general, did subjects favor that form of compensation over flat fee or linear performance-based incentives.

Which leads to the following hypothesis—and it is only a hypothesis, because the E-K-S Experiment suggests it but does not test it directly: In at least some settings—and at least for individual creators (the behavior of corporate creators is not modeled in the E-K-S Experiment)—getting paid a flat salary to do creative work is every bit as effective an incentive to perform as the prospect of striking it rich via a valuable patent or copyright. The hard part is figuring out when this is true. And why.

 
 

The Interpretation-Construction Distinction in Patent Law: Is It Just a Matter of Semantics?

Tun-Jen Chiang & Lawrence B. Solum, The Interpretation-Construction Distinction in Patent Law, Yale L. J. (forthcoming), available at SSRN.

Claim construction is the meat and potatoes of a patent litigator’s diet: it is performed early and often in patent infringement litigation, and it is often outcome determinative. Claim construction’s notoriously uncertain and unpredictable nature is therefore highly problematic. In The Interpretation-Construction Distinction in Patent Law, Tun-Jen Chiang and Lawrence B. Solum argue that courts and commentators have misdiagnosed the root cause of this problematic unpredictability, and they lay out a new route forward for courts seeking to make claim construction more predictable. At the end of the day, I am unconvinced that the patent community should follow this route. Nonetheless, I think that The Interpretation-Construction Distinction is a provocative read that forces the reader to clarify what are sometimes implicit, unarticulated assumptions about the nature of claim construction in order to mount an effective rebuttal.

Drawing on a literature that explains how courts give legal effect to other legal documents (including the Constitution and contracts), Chiang and Solum offer a new perspective on claim construction. They argue that many courts that perform claim construction are not employing a single process but, rather, are in fact employing two distinct processes. First, there is interpretation, or the process of determining the linguistic meanings of words. For Chiang and Solum, linguistic meaning is entirely determined by the understandings of an audience, and it is therefore factual and objective. “The ideas and concepts that the intended audience will comprehend from a certain text is simply a fact of the world.” (P. 15.) Second, there is construction, or the process of imbuing claim language with legal import in order to achieve particular policy outcomes. For Chiang and Solum, any process in which courts consider the policy of optimal claim scope cannot be interpretation and must be construction. “[L]inguistic meaning is the domain of interpretation, and it is factual, and there is no ‘should’ in that question.” (P. 22.) “[L]inguistic meaning is beyond the control of, and thus not dependent upon, the normative preferences of a third-party interpreter such as a judge.” (P. 15.) For example, any attempt by a court “to tailor patent scope to the real invention” is an act of construction because it requires a court to consider normative, patent-policy concerns to identify the level of abstraction at which the “real invention” should be identified. (P. 4.)

With the interpretation-construction distinction in place, Chiang and Solum draw a sharp distinction between what conventional wisdom identifies as the source of uncertainty in claim construction and what they identify as its source. The conventional wisdom, they argue, is that the uncertainty derives predominantly from interpretation, and thus from uncertainty in the descriptive breadth of words’ meanings. In contrast, Chiang and Solum argue that the uncertainty derives primarily from disagreement among judges concerning the legal import with which courts should imbue claim language in order to achieve particular policy outcomes. They argue that courts frequently use claim construction to override the legitimate range of interpretation of a word’s meaning and to give a legal effect to a word that is beyond the bounds of what the word actually means as a linguistic matter: “[T]he cause of uncertainty in claim analysis is typically not a linguistic defect, but rather normative disagreement.” (P. 11.)

I do not want to suggest that Chiang and Solum are per se wrong when they draw a categorical distinction between interpretation and construction. In fact, I believe that they offer one theoretically plausible way of describing what goes on when a court engages in claim construction. However, I worry that they oversimplify the conventional wisdom on claim construction and that the distinction between the extant debate and the Chiang/Solum thesis is itself just a matter of semantics. That is, I wonder if Chiang’s and Solum’s overarching thesis is novel only because it uses newly crafted concepts to describe the same old controversy that has been percolating in the courts and law reviews for the last fifteen years or so.

Chiang and Solum assume that the existing debate about claim construction is focused on uncertainties in “linguistic meaning” as they themselves define the concept. That is, they assume that scholars and judges who talk about meaning are talking about an unmediated phenomenon governed solely by the understandings of the intended audience—namely, the actual, real-world PHOSITAs. Yet, the linguistic meaning of a term in a patent claim need not be defined in this manner, and I am doubtful that the extant literature on claim construction has routinely defined it in this manner, either. Why can’t linguistic meaning be mediated by courts? What is wrong with saying that courts have authority to establish the rules that must be followed to identify the meanings of the words in a patent claim? There is an Académie Française that in some sense determines what French means as a linguistic matter, so why can’t there be courts that establish a claim construction methodology, i.e., the rules that one must follow to determine what patent claims mean as a linguistic matter?1 As soon as patent claims are recognized as a realm in which there are special rules for linguistic meaning and as soon as the courts are given an instrumental role in determining these rules, the interpretation-construction distinction blurs into a continuum. There are certainly cases in which the meanings of claim terms according to the court-determined rules will be closer to or farther from the meanings of those claim terms to actual, real-world PHOSITAs without knowledge of those legal rules, but there is no dichotomy.

Chiang and Solum reject the possibility of a court influencing the linguistic meanings of claim terms by citing a concern about “circular, confused, or misleading argumentation”:

“[T]he standard doctrine…says that courts follow the “meaning” of text. If, for this purpose, the ‘meaning’ referred to the legal effect, then we have a circularity: the court follows the meaning of text, but the meaning is the legal outcome, which is entirely within the control of the court. Under this standard, a court could do anything and it would always be right, which would make clear thinking impossible.” (Pp. 29–30.)

But this argument itself is confused. If courts were to take an ad hoc approach to claim construction, then Chiang and Solum’s concern might have some merit. However, claim construction disputes often center on a dispute over methodology: the set of rules for assessing the meaning of claim terms should be established ex ante, prior to and independent of any given patent claim. The courts’ determination of claim scope is a two-stage game. First, courts establish the rules that govern how meaning is to be determined. Second, courts use those rules to determine the meaning of the words in any given claim. The meaning of a text can be its legal effect, and the rules for determining meaning can be both judicially established and steeped in patent policy, yet courts cannot do whatever they please in any given case. Courts must follow the rules, even if the rules were established by courts in earlier cases.2

Chiang and Solum argue that the payoff of recognizing that construction, not interpretation, is the primary source of the uncertainty in contemporary claim construction lies in the transparency of intention that follows from calling a spade a spade. They posit that, if courts and commentators recognize that the disagreements between judges and litigators in claim construction disputes are in fact disagreements over the policy of optimal claim scope rather than over the actual meaning of language, we are more likely to identify potential solutions and to have rational arguments about which of those solutions will maximize the social-welfare gains of the patent regime. I agree entirely with Chiang and Solum that overt recognition of the role that patent policy does and should play in debates over the optimal claim construction methodology should be encouraged. For any judges or patent practitioners out there who truly believe that the debate over the best claim construction methodology is and should be devoid of policy concerns, Chiang and Solum have provided a sorely needed wake-up call. Yet, I’m not convinced that the distinction between interpretation and construction is either necessary or even helpful to facilitate this recognition. Although it may well just be the rose-colored reading glasses that I wear, I have always read the judicial and scholarly debate over the best methodology for determining the linguistic meanings of words in claims (i.e., the conventional wisdom on claim construction) as a deeply normative debate that is steeped in patent policy. Because I understand word meaning to be a mediated phenomenon in patent law, I read the conventional wisdom to suggest that the courts should establish rules to imbue the words of a patent claim with linguistic meaning that best achieves policy goals (e.g., predictability, optimal claim scope, or something else). Thus, there is no need to recognize a distinction between interpretation and construction to have the conversation about patent policy that we should be having. Furthermore, my suspicion—and it would concededly take much more space than I have here to flesh this out in a convincing manner—is that switching over to Chiang’s and Solum’s way of talking about claim construction would do more to muddy the waters than it would to make our policy goals more readily attainable.

So, in the end, my concern is that the difference between the conventional way of describing what courts do, on the one hand, and Chiang’s and Solum’s way, on the other hand, is itself just a matter of semantics. Chiang and Solum say the rules for determining the meanings of words in a patent claim are exogenous to both the courts and patent policy, so courts need to engage in policy-driven construction in addition to interpretation. Both I and at least some purveyors of the conventional wisdom on claim construction say the rules for determining the meanings of the words in a patent claim should themselves be created by courts in light of patent policy. That is, they say po-TAY-to, and I say po-TAH-to. So long as we agree that the choice of a set of rules for performing claim construction is a choice that should be steeped in patent policy, maybe we should call the whole thing off. If we don’t, each side must at least recognize how the other side defines the term “meaning” so that we don’t just talk past each other.



  1. While it would be an interesting philosophical debate to question whether a phrase in French means what the Académie says it means or what the average francophone thinks it means (to the extent that the two meanings diverge), I’m not seeking a single, dispositive answer. My point is that it is entirely imaginable to say that a third-party intermediary establishes the rules that fix the meanings of words, even when those words are articulated by a first group and understood by a second group.
  2. Given the persistent nature of the debate over claim construction in the Federal Circuit, one could argue that the belief in the possibility of a set of clear rules for giving meanings to claim terms is a pie-in-the-sky fantasy. I am sympathetic to this argument, but it does not undermine the point I seek to make here. It undermines the possibility of greater certainty in claim construction regardless of whether one embraces the interpretation/construction distinction; recognizing the interpretation/construction distinction does not turn the fantasy of public notice into reality, as we still need a set of clear rules for establishing when and how courts should engage in construction.