Why FISH:FISHES :: INFORMATION:INFORMATIONS

Tim Wu, Properties of Information and the Legal Implications of Same (Columbia Law and Economics Working Paper No. 482, 2014), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2446577

Tim Wu’s new essay, Properties of Information and the Legal Implications of Same, offers both a survey of recent legal and economic scholarship on information and a provocative reconceptualization of it. Wu notes that information is commonly described as an unusual resource because, by its very nature, it possesses the twin properties of being non-excludable and non-rival. If these properties are taken as givens, information can readily be pegged as a public good, and a strong case can be made for government intervention to foster the production and/or dissemination of the information (whether in the form of IP rights, subsidies, or something else). However, Professor Wu’s reading of the literature, combined with his sprinkling of original comments on the intrinsic nature of information, suggests that the story is not quite this simple. (Although the review sweeps broadly, discussing securities regulation, contract theory, consumer protection, communications, and free speech, the bulk of it addresses intellectual property.) Professor Wu argues that information is not by its very nature non-excludable or non-rival. Rather, the subject matter of the information, the context in which the information exists, and the structure of the industry that employs the information all matter. They all affect the extent to which information is a public good. As a consequence, Professor Wu counsels against a single policy prescription for problems concerning the underproduction or under-dissemination of information and in favor of context-specific, dynamic laws governing information.

Professor Wu argues that non-excludability per se is not what makes information prone to free-riding problems (and thus the problem of underproduction without government intervention). Positing that information “consists of patterns, which must subsist in some form, whether ink on paper, stored magnetic charges, or whatever else” and that information is only valuable if a human mind perceives it, he takes it to be self-evident that people can in fact be readily excluded from information. “If you don’t have a ticket, you won’t see the movie, and we are all excluded from the text of a book locked in a vault for which the key is lost, or from the particular information contained in an engraving written in a lost language, like hieroglyphs before the discovery of the Rosetta stone.” Instead, Professor Wu suggests that information raises a free-riding concern because, among other things, it can often be copied at a relatively low cost. Because copying costs vary, this shift in the conceptual framework for understanding what enables free-riding is significant: the argument in favor of government intervention no longer rests on the essential nature of information but is instead contingent on the particularities of the context in which information exists.

In a parallel argument, Professor Wu suggests that whether information is non-rival (and thus whether under-dissemination is a normative problem) may not be an intrinsic property of information, but may instead depend upon the type of information at issue or the structure of the industry in which the information is useful. Landes and Posner famously argued that characters like Mickey Mouse may be subject to “overgrazing” or congestion externalities; Kitch similarly argued that patents can be socially beneficial because they reduce the number of follow-on inventors who can use newly discovered technological information without the authorization of a coordinating entity. In sum, non-rivalry should not be taken as an intrinsic property of information as a resource. Rather, it is a variable that takes different values in different contexts.

In a short passage from his conclusion, Professor Wu nicely sums up the arguments discussed above and pivots to his take-home lesson:

It is … curious that, given the myriad properties of information, nonexcludability and non-rivalry have received so much attention. One may be suspicious that the attention may be prompted by its neatness of fit into the pre-existing concept of a public good more than the underlying realities of what information is. In any event, it is worth suggesting that lawyers’ or economists’ understanding of information’s properties might be broader, and begin to draw less on just anecdotal examples, but some study of the science of information. Indeed, it may turn out that information’s other properties, less studied, will be equally important for public policy.

In gross, Professor Wu taps into an important theme in the zeitgeist of contemporary intellectual property scholarship: information policy should not be constructed based on the presumption that there are universal truths about the intrinsic qualities of “information” (whatever it is), but rather on context-specific interventions to address particular problems.

I believe that legal scholarship would greatly benefit from greater precision when it discusses information. When we talk about information, we often talk in generalities that stand in the way of the more nuanced conversations that I think Professor Wu would like us to have. Does copyright prevent others from copying “information,” or is “information” what lies beyond the reach of copyright law because of the idea/expression and fact/expression dichotomies? Patent law is routinely described as an intellectual property regime that prevents others from copying “information” and “ideas,” yet “information” and “ideas” are also routinely described as the very resources that patent disclosures publicize and make freely available to all.1 Until we can pin down precisely what we mean when we say “information,” we cannot begin to identify “information’s other properties” that are “less studied” yet that may well “be equally important for public policy” according to Professor Wu’s thesis. For example, Professor Wu himself distinguishes “knowledge” and “wisdom” from “information” at one point, without offering guidance as to how to draw the distinction.

I do have what I see as a friendly amendment to Professor Wu’s suggestion of a route forward. He suggests that we might draw on “some study of the science of information” to move beyond our myopic focus on non-excludability and non-rivalry. However, there is no single science of information. Scholars who study information are a diverse lot. Information theorists who follow Shannon, computer scientists, semanticists, semioticians, bioinformaticists, philosophers of technology, and cognitive scientists, among others, all point to different things when they pick out information in the world.2 Each discipline defines information in the manner that is useful to its own goals, and the disciplines rarely talk to one another. This balkanization complicates the mining of these disciplines for insights that are useful for economic and legal theorists: we will need to sort through the many taxonomies of information that have been proposed to find the ones which, given our goals, provide us with useful tools for identifying and understanding the information that concerns us.

In the English language, there is today no distinct word for the plural of “information.” Yet, I believe that “informations” would be a useful word to coin in order to highlight the true nature of the resource at issue and to facilitate the discussion that Prof. Wu would like to initiate. The point of the plural is not to make “information” operate like other count nouns; I’m perfectly happy with the notion that I buy information from you when I buy ten tips on ten distinct stocks. Rather, the model here is “fish.” The plural of “fish” is “fish” when all of the fish are the same species: “There are fish in the goldfish bowl.” However, the plural of “fish” is “fishes” when the speaker refers collectively to multiple species: “There are three fishes in that tank,” if the tank contains tuna, snapper, and flounder. To have intelligent discussions about information policy, we must do more than realize that information exists in many different contexts; we must do more than seek context-sensitive policies to address the free flow of a unitary phenomenon called information. We must instead recognize that there are many different informations, each of which is studied by its own group of scholars and each of which may merit its own information policy, sometimes regardless of context.



  1. Kevin Emerson Collins, The Knowledge/Embodiment Dichotomy, 47 U.C. Davis L. Rev. 1279, 1307–14 (2014).
  2. For a wide-angle view on information, see James Gleick, The Information: A History, a Theory, a Flood (2011).
 
 

An Information-Cost Critique of Chattel Property Servitudes

Christina Mulligan, The Cost of Personal Property Servitudes: Lessons for the Internet of Things (July 14, 2014), available at SSRN.

Property scholars have long noted a peculiar inconsistency between real and chattel property. While law increasingly tolerates different forms of ownership in and servitudes limiting the use of land, it has remained steadfastly resistant to such restrictions in the context of personal property. In her sharp new paper, “The Cost of Personal Property Servitudes: Lessons for the Internet of Things,” Christina Mulligan shows that this long-lamented inconsistency isn’t a problem at all, but rather a sensible distinction that flows naturally from the core differences between real and chattel property. This insight not only helps explain a longstanding puzzle in property law, but sheds new light on the increasing practice of content owners using license agreements to restrict the use of digital goods.

From a purely formal perspective, one might reasonably wonder why courts allow increasing complexity in real property ownership—from historical forms like contingent remainders and fees simple subject to executory limitation to modern innovations like condominiums and time-shares—while insisting that no such variation is permitted with respect to chattels. If I can have a defeasible fee interest or a time-share in a vacation home in Boca Raton, why not also in a Rolex or a refrigerator? This seeming inconsistency has engaged scholars since Coke. Most recently, Molly Van Houweling investigated contract-based restrictions on personal property from the perspective of physical property, suggesting that the same concerns that warrant skepticism about servitudes on real property may be used to govern servitudes in the context of personal property as well.

Mulligan takes a different approach to this issue altogether. Instead of proposing a way to render uniform law’s treatment of real and personal property servitudes, she seeks to show that what other scholars have treated as a formal inconsistency actually rests on solid practical grounds. Mulligan deploys Merrill and Smith’s information costs defense of limited forms of property rights. Because chattels, unlike land, tend to be small, mobile and fungible, it will prove very difficult for a buyer to determine whether those chattels are heavily encumbered by restrictions or free for any sort of use. And since personalty tends to be worth less than land, these information costs will loom as an even greater burden to purchasers of chattel property than they would to purchasers of real property. After all, it would be well worth the money to pay $1000 to clear title to a $200,000 vacant lot you’re thinking of purchasing, but you’d be a fool to shell out that amount to make sure there are no encumbrances on a $200 watch. And finally, Mulligan aptly observes that while one needs to investigate the state of title in real property relatively rarely, we purchase and interact with things all the time, so that in a world where we had to worry about use restrictions constantly, we would have to overcome information costs multiple times on a daily basis.

In answer to the longstanding question why law is conflicted in its treatment of servitudes on land and things, Christina Mulligan has shown us that the question itself is misconceived. There is not a conflict at all, but rather a perfectly plausible approach to managing the different information costs associated with real and personal property. Mulligan’s elegant information cost solution to this problem also suggests even more reasons that law may wisely choose to disfavor use restrictions on chattel property. Consider, for example, how information costs may play out in light of knowledge asymmetries between buyers and sellers of chattel property. Sellers’ greater knowledge and expertise would allow them to exploit information costs to systematically disadvantage end users. Imagine, for example, that an unscrupulous seller wants to sell books subject to a long, complex series of use restrictions, including the requirement that buyers return the copy after six months of use. There’s a pretty good chance that many consumers will fail to smoke out this trap embedded in the Sargasso Sea of boilerplate—after all, it’s just a book, and hardly worth all the trouble of wading through a long agreement. But if law simply sidelines the whole idea of use restrictions on chattel property for a rule that says “when you buy it, it’s yours, no exceptions,” then we avoid this risk of consumer exploitation in addition to lowering information costs.

The nefarious bookseller in my example is hardly fictional. The Supreme Court’s 2013 decision in Kirtsaeng v. John Wiley & Sons involved the legality of resale restrictions on textbooks sold in foreign markets, and most law professors have received at least a few desk copies plastered in big, ugly stickers reading “for professor use only.” The notion of imposing servitudes on chattel property may have been historically disfavored, but this apparently hasn’t prevented contemporary content owners from pushing the practice. In fact, as Mulligan correctly observes, use restrictions on personal property threaten to become ever more pervasive with the advent of the “internet of things”—the increasing enmeshment of everyday items with copyrighted software. Given courts’ peculiar toleration for use limits on software via licensing, and the ever-growing presence of computer programs embedded in cars, watches, and even refrigerators, long-rejected chattel servitudes may be approaching faster than we realize.

And here is where Mulligan’s focus on chattel property hits a normative wall. Information costs tell us a lot about why personal property servitudes are a bad idea, or at least a worse idea than they are in the context of real property. But the threat of pervasive, cost-prohibitive use restrictions on the countless things we own and use in our daily life isn’t merely, or even primarily, a property problem, but a contract problem. Software companies don’t regard themselves as selling copies of their programs that are subject to limited use rights. Rather, they portray these transfers as mere licenses, so that users are not owners of copies at all, but mere licensees, with the limits on their use—however draconian—flowing from the limited nature of licensure itself. So even if we agree—as Mulligan convincingly argues we should—that chattel property doctrine wisely disfavors use restrictions, this doesn’t answer the much harder question of what this means for owners’ ability to contract. And while there is no shortage of proposals for limiting how owners can license uses of their works of authorship, it bears remembering that owners are under no obligation to permit uses of their works at all, so a regime that excessively burdened their freedom of contract might lead owners to simply not contract—or engage in creative production—at all.

Scholarship should be measured by what it does, not by what it fails to do, and by that standard, Christina Mulligan’s “The Cost of Personal Property Servitudes” succeeds admirably. Her information-cost critique of chattel property servitudes not only helps to illuminate a longstanding puzzle in property law, but also moves the ball forward on the socially important and legally difficult issues raised by the increasing ubiquity of computer software in the objects that populate our daily lives. Mulligan’s insightful article also provides more evidence that there is more to be gained by investigating, rather than reflexively resisting, the essential commonalities between physical and intellectual property.

 
 

Seeing Like a Copyright Lawyer: Judging Similarity in Copyright Cases

There are many ways to use empirical research in intellectual property scholarship. Work can be qualitative or quantitative, interdisciplinary or highly focused on the law. One of the most intriguing questions I’ve seen investigated empirically of late is “what makes us think that one work is similar enough to a previous one to infringe?” Given the significant expansion in the scope of copyright from pure reproduction to derivative works and substantially similar works, this is an important issue.

The two articles I highlight here approach the question from very different, but complementary, perspectives. The authors of Judging Similarity start with legal scenarios adapted from real cases and ask survey respondents whether the works are similar enough to infringe. Kate Klonick, like David Morrison, examines similarity from a cognitive science perspective, starting with what researchers already know about human judgments of similarity and difference in nonlegal contexts. (Disclosure: I advised Klonick, now pursuing a Ph.D. at Yale, on this project.)

Judging Similarity begins by noting, as others have, that substantial similarity doctrine is a mess. Its authors hypothesize that factfinders are more likely to find substantial similarity—and thus liability—in the context of “a narrative that not only describes the intentional act of making one thing look like another but also identifies a wrongdoer.”

In their control group, subjects had minimal information and were asked to judge the similarity of images; in the test group, subjects had additional information about “the act of copying, the creation of the work or the consequences of the copying.” Information about the fact of copying or that significant effort was required to create the first work increased similarity judgments, “despite the fact that the works themselves remained the same and even though the subjects were consistently told that they had to base their assessments entirely on the works themselves.” (Subjects weren’t making the on-off determination of substantial similarity that would be required in an actual copyright case; they were rating similarity on a scale; further work is needed to see how much this kind of information affects a threshold determination that could lead to liability.)

Morality, the authors suggest, affects factual judgments, fitting into a larger literature on motivated cognition. They argue that this isn’t just a question of framing—increased attention to similarity caused by the emphasis on the fact of copying—because information about the amount of labor involved in creating the original also increases similarity judgments. Given this effect, if we’re serious about keeping “labor = right” arguments like that rejected in Feist out of the copyright system, they suggest, the law might need to change further—or accept that labor-related considerations will come back in at the infringement stage, once certain works are deemed unprotectable. Copyright law’s claim to impose strict liability might be overstated, if we see respondents’ reactions as fault-based.

Intriguingly, however, information about market substitution, or the lack thereof, did not affect similarity judgments. The authors don’t have much to say about that result, but someone like me is inclined to read this as a moral intuition consistent with current fair use doctrine: if the copying is otherwise ok—if the work is transformative, and doesn’t displace a separate and robust derivative market like the market for film adaptations of novels—then the fact that it causes market harm is irrelevant. Reverse engineering cases and the idea/expression distinction that allows copying of ideas also follow this pattern. There’s nothing inherently wrong with giving people options to choose their favorite variations on a theme, even if one seller inspired others to enter the market.

Klonick’s article takes a different approach, focusing on other features of human thought, independent of moral principles. She notes that three significant areas of copyright law involve variations on judging similarity to and divergence from an existing work: (1) originality, where there is a public domain (or even copyrighted) referent and the question is whether enough has been added to create an original work; (2) infringement due to substantial similarity; and (3) transformativeness for fair use purposes.

She explains that cognitive scientists know some interesting things about similarity judgments. For one thing, there’s a difference between similarity judgments and difference judgments—depending on how a question is framed, respondents’ answers may diverge a lot. When asked to judge similarity, people give greater weight to common features; when asked to judge difference, they give greater weight to distinctive features of each item. For another, similarity judgments are not always transitive or symmetric. If you start with A and ask about its similarity to B, you can get different results than if you start with B and ask about its similarity to A. So, Americans asymmetrically found greater similarity in the phrase “Mexico is like the United States” than in the phrase “the United States is like Mexico.” Consistent with this result, people are also more likely to find similarity when the more “prominent” idea or object is the referent—the B in “A is like B.” This phenomenon, she suggests, tilts the field in favor of owners of very successful works: in strike suits brought by owners of unsuccessful works, the accused work will be more prominent and will look less similar in comparison to the accusing work, while when the successful copyright owner sues, the accused work will look more similar.

In addition, depending on the situation, feature similarity may matter less than “relational” similarity—a bear isn’t a lot like a bird, but a mother bear with a cub is a lot like a bird with a chick. Consider the following images:

[Figure: two panels of comparison stimuli, each pairing a top image with a relational match (A) and a component/feature match (B).]

Most observers consider B (component or “primitive” feature match) more similar to the top stimulus in the panel on the left, but A (relational match) more similar to the top stimulus in the panel on the right. But this “relational weighting” happens in difference judgments as well—so they are also likely to consider A more different from the stimulus than B is in the panel on the right.

Depending on how issues in copyright cases are presented, then, factfinders may be more inclined to find important differences or important similarities. She suggests that the Roth “total concept and feel” test might help moderate this effect, though I don’t really see how that would work. Klonick also suggests that transformativeness analysis’s shift to “purpose” can be explained in relational terms: when the question is how the new work fits into the universe of expression/what social role it plays, it may be easier to see relational differences, such as the difference between a single expressive work and a component of a large database (Perfect 10, etc.).

All of these features have implications for copyright cases, though they may be hard to tease out. At the very least, they offer guidance for savvy copyright lawyers trying to frame a case in a favorable way.

What these articles give us, especially taken together, are significant building blocks. They can be combined with other empirical work, such as Jamie Lund’s study showing that lay listeners simply can’t perform the task of judging infringement of musical works by listening to sound recordings, to suggest reforms to make practice fit theory. Jessica Litman’s recent article about the history of movie infringement cases also reminds us that we haven’t paid as much attention as we should to the fact that several foundational infringement cases involved moves from one medium to another—plays to silent movies, in her examples. What makes a dialogue-laden play seem similar or dissimilar to a mostly gestural movie is a question worth asking. As we gain a better idea of how people react to different comparisons, framing, and evidence, we can try to make both jury instructions and substantive law support the outcomes we believe to be the best for copyright policy overall.

 

 

 
 

From Crowd-Sourcing to Crowd-Enforcing: An Empirical Analysis of Threadless’s Community IP Norms

Empirical studies of creative communities continue to provide scholars and policymakers with useful evidence for assessing intellectual property regimes. In Seven Commandments, we find yet another excellent example of the kind of evidence we need and, perhaps even more importantly, of robust methods for gathering it.

The article reports on a study of Threadless, an online community that crowd-sources t-shirt designs. As with many such communities, it uses a combination of collaborative and competitive elements, allowing users to work together on certain projects while also competing with each other for approval, funding, and ultimately production and distribution of the designed apparel. The authors of the paper seek to study the IP norms of the Threadless community in order to understand what makes it succeed in terms of incentives to create. In particular, they note that because formal enforcement of copyright law is generally difficult if not impossible on such sites, normative systems are presumed to play the major role in protecting the investment of creators.

In order to discover these norms, the authors gather data in three ways: netnography (observation of online communications and interactions), a survey, and a field experiment in which the authors intentionally violated various designers’ IP by copying and posting some or all of their designs as new submissions on the Threadless site.

What they discover are seven “commandments” that appear to dominate the Threadless culture when it comes to IP norms. These include: (1) You must not make an unauthorized copy of a design; (2) If you copy, you have to ask the original designer for permission; (3) If you copy, you must provide attribution; (4) If you are suspicious of a design, you must investigate before accusing it of being a copy; (5) If you find that a design was copied, you have to make the copy case public; (6) The public trial must be fair; and (7) If someone is caught copying a design, you have to join in a collective sanctioning of the copier.

These results are somewhat surprising given the general skepticism that most researchers have had for crowdsourcing as a means of generating and enforcing norms, especially IP norms. Yet Seven Commandments shows that a large percentage of the active Threadless community has come to at least a rough consensus against unauthorized copying and for attribution. More interesting still, they have chosen to adopt some of the core elements of procedural due process from the court system as part of their approach to enforcement. For example, the notion that accusations of copying should be made public and that there should be some version of a public trial with evidence and the opportunity for the accused to contest and challenge the case against them goes back as far as the Magna Carta in Anglo-American law. Ironically, such elements are nearly absent from formal online IP enforcement/resolution regimes, such as the Digital Millennium Copyright Act’s “notice-and-takedown” scheme or the recent private agreement among United States Internet Service Providers to adopt a “six strikes” system of copyright complaints.

So what makes Threadless so special? How are they able to succeed in this regard? The authors are understandably cautious about pinpointing any one factor, but their findings suggest that there is a strong moral core to the approach of the community on these issues, and that the community’s commitment to this core allows them to have a more robust, participatory, and honest dialog about how IP should be handled. Moreover, there is a strong emphasis on the transparency of the community’s behavior. The accusations, evidence, trial, and sanctions are public and thus themselves open to challenge for bias or abuse. This reinforces not only fairness but also rationality in outcomes.

Of course, Threadless is but one community, so one hopes that the authors of Seven Commandments will expand their project to provide comparative results and analysis, especially across different modes and means of creativity. Even standing alone, however, the study serves as a beacon to help guide sites that want to avoid both unauthorized appropriation and excessive or draconian legal battles between community members.

 
 

Are Patent Damages Uncertain?

Michael Mazzeo, Jonathan Hillel, & Samantha Zyontz, Explaining the “Unpredictable”: An Empirical Analysis of Patent Infringement Awards, 35 Int’l Rev. of L. & Econ. 58 (2013).

A recent American Bar Association “Corporate Counsel” seminar styled itself as “The Uncertain Arena: Claims for Damages and Injunctive Relief in the Unpredictable World of IP Litigation.” The seminar began by recounting the seemingly surprising $1 billion-plus damage awards in the patent infringement actions Carnegie Mellon v. Marvell Technology, Apple v. Samsung, and Monsanto v. DuPont. These blockbuster awards stand in stark contrast to the awards of $20 million or less typical of most cases.

By and large, in-house counsel have chalked up much of this variation to the luck of the draw. Such a sentiment is all-too-common not only among practitioners, but also among policymakers and academics. No less an authority than the eminent IP scholar Mark Lemley has remarked, “Patent damages are unpredictable because the criteria most commonly used are imprecise and difficult to apply.”

Mazzeo, Hillel, and Zyontz make an impressive contribution to the literature by casting substantial doubt on such views. Specifically, in their recent empirical study of district court patent infringement judgments between 1995 and 2008, they show that patent damages can be explained in a large part by a fairly small number of patent-, litigant-, and court-related factors.

The authors assembled a set of over 1300 case outcomes from the PricewaterhouseCoopers database, which they boiled down to 340 judgments in favor of the patentholder in which award details were available. Although this number of judgments may seem low, based on independent work of my own for a study on the duration of patent infringement actions, these counts represent a high percentage of the total number of actions and judgments. Thus, it is unlikely that including the unavailable judgments and awards in the dataset would substantially change their results.

Mazzeo, Hillel, and Zyontz begin their exposition by noting—contrary to the widespread view that patent damages awards are shockingly high—that the median damage award has remained fairly constant from 1995 through 2008, at a relatively low $5 million or so. The billion-dollar damage awards in Apple v. Samsung and other cases are thus extreme outliers. Indeed, during the time period at issue, only eight awards came in over $200 million, yet those awards accounted for 47.6% of the collective damages across all cases (other than generic-branded pharmaceutical disputes under the Hatch-Waxman Act). So, outside of a small number of highly publicized, blockbuster cases, damages awards are (perhaps shockingly) low – a fact that flies in the face of current rhetoric about outsized awards in patent cases.

The most impressive aspect of the article is the authors’ empirical models explaining roughly 75% of the variation among damages awards. In particular, they assemble various factors—including the number of patents asserted, the age of the patents, the number of citations to the patents, whether the defendant is publicly traded, and whether a jury or judge assessed damages—and construct a regression model that shows statistically significant relationships between these factors and the amount of damages awarded.

For example, in one model, if the defendant was publicly traded, damages were roughly 1.5 times higher than when the defendant was privately held, controlling for other factors. What is particularly striking is that the outlier awards—namely, those above $200 million—fall squarely within the model’s explanatory power. Thus, rather than being the random results of rogue juries, these large damage awards likely reflect a variety of measurable factors that point in favor of larger awards across the large number of cases confronted by the courts.
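To make the modeling approach concrete, here is a minimal sketch, in Python with statsmodels, of the kind of regression described above. The data file, column names, and exact specification are hypothetical placeholders rather than the authors' actual model; note that with a log-transformed award, a coefficient of about 0.4 on a publicly-traded-defendant indicator would correspond to the roughly 1.5x difference reported in the paper (exp(0.4) ≈ 1.5).

```python
# Illustrative sketch only: regress (log) damages awards on patent-, litigant-,
# and court-related factors, in the spirit of Mazzeo, Hillel, and Zyontz.
# The file name, column names, and specification are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

awards = pd.read_csv("patent_awards.csv")            # one row per judgment with damages
awards["log_award"] = np.log(awards["damages_usd"])  # log scale tames the skewed awards

model = smf.ols(
    "log_award ~ n_patents_asserted + patent_age + forward_citations"
    " + C(defendant_public) + C(jury_trial)",
    data=awards,
).fit()

print(model.summary())   # coefficients and standard errors for each factor
print(model.rsquared)    # share of variation explained (the paper reports ~75%)
```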

These findings have important public policy implications. As the authors point out, stable, predictable damage awards are essential for a properly functioning patent system. Otherwise, the careful balance of incentives to patentees to innovate and incentives to third parties either to acquire licenses to patented inventions or invent around would be thwarted.

On the other hand, Mazzeo, Hillel, and Zyontz overreach by concluding that their “findings thus bolster the core tenets of the patent system” that exclusive patent rights are an appropriate means for protecting inventions. Specifically, the authors’ argument that “several of the driving factors correspond to accepted indicators of patent quality” is insufficient to support such an assertion, because these factors—such as forward citations, number of claims, and number of patents—are accepted indicators of a patent’s economic “value,” not a patent’s “quality,” which concerns its validity. (Although there is very likely a relationship between the two notions, no study has resoundingly linked patent value to patent quality.) And typically these value indicators have been derived from studies of patent litigation. Thus, to argue that high damages in litigation justify the patent system on the basis of such metrics is essentially circular. Indeed, as I have argued elsewhere, it is very likely that patent damages as they stand should be reengineered to provide more optimal innovation incentives.

Nonetheless, despite this study’s inability to “bolster the core tenets of the patent system,” its result that damages awards are fairly predictable is a very important contribution to the literature. Moreover, this work provides the starting point for more comprehensive investigations of damages in patent cases, such as the follow-on study the authors recently undertook regarding non-practicing entity (NPE) and non-NPE suits. Additionally, their explanatory approach could be extended to the more basic win/loss determinations on infringement and validity. One cannot ask for much more in any empirical study, and Mazzeo, Hillel, and Zyontz deserve kudos for their exacting labors and notable insights.

 
 

Intermediary Trademark Liability: A Comparative Lens

Graeme B. Dinwoodie, Secondary Liability for Online Trademark Infringement: The International Landscape, 36 Colum. J.L. & Arts (forthcoming 2014), available at SSRN.

Although we live in a global, interconnected world, legal scholarship – even scholarship about the Internet – often focuses on domestic law with little more than a nod to developments in other jurisdictions.  That’s not necessarily a bad thing; after all, theoretically robust or historically thorough works can rarely achieve their goals while surveying the landscape across multiple countries with disparate traditions and laws.  But as a student of U.S. law, I appreciate articles that explain how other legal systems are addressing issues that perplex or divide our scholars and courts.  Given the tumult over intermediary liability in recent years, comparative commentary on that topic has special salience.

In this brief (draft) article, Graeme Dinwoodie explores both structural and substantive differences in how the United States and Europe approach intermediary trademark liability in the Internet context.  To an outsider, the European web of private agreements, Community Directives, CJEU opinions, and sundry domestic laws can appear daunting and sometimes self-contradictory.  Dinwoodie puts them all into context, offering a coherent explanation of the interaction between Community law, member state law, and private ordering, and situating the overall picture within a broad normative framework.  And he contrasts that picture with the one emerging through common law in the United States.  The result is a readable, informative study of two related but distinct approaches to intermediary trademark law.

Dinwoodie begins by framing the core normative question:  how should the law balance trademark holders’ interest in enforcing their marks against society’s interest in “legitimate development of innovative technologies that allow new ways of trading in goods”?  This tension is a familiar one: from Sony through Grokster, from Inwood through eBay, courts and lawmakers have struggled with how to allocate responsibility between intellectual property holders, those who infringe their rights, and those whose behavior, product, or technology plays some role in that infringement.  Dinwoodie identifies the tension but does not resolve it, purporting to have the more modest goal of exposing the differences between the American and European approaches and discussing their relative virtues.  But the article barely conceals Dinwoodie’s preference for rules that give intermediaries at least some of the burden of policing trademark infringement online.

Structurally, there are some significant differences between the European and American approaches.  Whereas courts have shaped the U.S. law primarily through common law development, Europe has a set of Directives that offer guidance to member states in developing intermediary trademark liability rules.  Europe has also experimented with private ordering as a partial solution, with stakeholders recently entering a Memorandum of Understanding (MOU) that addresses the role of brand owners and intermediaries in combating counterfeiting online.  In other words, rather than relying exclusively on judge-made standards of intermediary liability, European policymakers and market actors have crafted rules and norms of intermediary responsibility for trademark enforcement.

Whether as a result of these structural differences or as a byproduct of Europe’s tradition of stronger unfair competition laws, the substantive rules that have emerged in Europe reflect more solicitude for trademark owners than is provided by United States law.  Doctrinally, intermediaries have a superficial advantage in Europe, because the Court of Justice limits direct infringement to those who have used the mark in connection with their own advertising or sales practices.  They also benefit from Europe’s horizontal approach to Internet safe harbors.  Unlike the United States, Europe includes trademark infringement, unfair competition, and other torts in the “notice-and-takedown” system, offering service providers the same kind of immunity for these infractions as they receive under copyright law.  The safe harbor law explicitly provides that intermediaries need not actively root out infringement.

Other features of European law, however, temper the effects of these protections.  Most significantly, Article 11 of the European Enforcement Directive requires member states to ensure that “rights holders are in a position to apply for an injunction against intermediaries whose services are used by third parties to infringe an intellectual property right.”  In other words, even if they fall within the Internet safe harbor (and thus are immune from damages), intermediaries may face an injunction requiring affirmative efforts to reduce infringement on their service.  In Germany, at least, courts have ordered intermediaries to adopt technical measures such as filtering to minimize future infringement.  The threat of such an injunction no doubt played a role in bringing intermediaries to the table in negotiating the MOU, which requires them to take “appropriate, commercially reasonable and technically feasible measures” to reduce counterfeiting online.

This explicit authority to mandate filtering or other proactive enforcement efforts finds no counterpart in U.S. law.  On its face, U.S. contributory infringement law requires specific knowledge of particular acts of infringement before an intermediary has an obligation to act.  And while scholars (including myself) have argued that intermediaries’ efforts to reduce infringement have played an implicit role in case outcomes, the letter of the law requires nothing but a reactive response to notifications of infringement.  Dinwoodie suggests that this “wooden” approach to intermediary liability may miss an opportunity to place enforcement responsibility with the party best suited to enforce.

In the end, while professing neutrality, Dinwoodie clearly sees virtues in the European model.  He applauds the horizontal approach to safe harbors, welcomes the combination of legal standards and private ordering, and praises the flexibility and transparency of Europe’s largely least-cost-avoider model.  Whether the reader agrees with him or prefers the United States’ more technology-protective standard, she will come away with a better understanding of the structure and content of intermediary trademark law in both the United States and Europe.

 
 

Open and Closed Innovation Regimes: An Experiment on Institutional and Organizational Design

Kevin J. Boudreau & Karim R. Lakhani, How Disclosure Policies Impact Search in Open Innovation, Harv. Bus. Sch. Tech. & Operations Mgmt. Working Paper (2013).

One of the central tensions in the institutional design of innovation regimes is the trade-off between incentives and disclosure. Innovation systems, including intellectual property systems, are created to optimize creative output by balancing ex ante incentives for initial creators with ex post disclosure of the innovation to follow-on creators and the public. According to accepted theory, the more rigorous the disclosure—in terms of when and how it occurs—the weaker the incentives. But a fascinating new experiment by Kevin Boudreau and Karim Lakhani suggests that differences in disclosure regimes can affect not just the amount of innovation but also the kind of innovation that takes place.

Boudreau and Lakhani set up a tournament on the TopCoder programming platform that involved solving a complicated algorithmic task over the course of two weeks. All members of the community were invited to participate in the tournament, and contest winners would receive cash prizes (up to $500) and reputational enhancement within the TopCoder community. The coding problem was provided by Harvard Medical School, and solutions were scored according to accuracy and speed. Notably, the top solutions in the tournament significantly outperformed those produced within the medical school, but that’s a different paper.

Boudreau and Lakhani randomly assigned participants into different conditions based on varying disclosure regimes and tracked their behavior. The three disclosure conditions were:

  • Intermediate Disclosure – Subjects could submit solutions to the contest, and, when they did, the solutions and their scores were immediately available for other subjects in the same condition to view and copy.
  • No Disclosure – Subjects’ solutions to the contest were not disclosed to other subjects until the end of the two-week contest.
  • Mixed – During the first week of the contest, submissions were concealed from other subjects, but, during the last week of the contest, they were open and free to copy.

For the Intermediate and Mixed conditions, subjects were asked to provide attribution to other subjects whose code they copied.

Cash prizes were given out at the end of the first and second weeks based on the top-scoring solutions. For the Intermediate condition, the prizes were split evenly between the subject who had the highest scoring solution and the subject who received the highest degree of attribution.

The subjects were about equally split between professional and student programmers, and they represented a broad range of skill levels. 733 subjects began the task. Of them, 124 submitted a total of 654 intermediate and final solutions. The solutions were determined to represent 56 unique combinations of programming techniques.

The authors predicted that mandatory disclosure in the Intermediate condition would reduce incentives to participate because other subjects could free-ride on the solutions of initial inventors. The data are consistent with this hypothesis: Fewer people submitted answers in the Intermediate condition than in the No Disclosure condition, and the average number of submissions and the number of self-reported hours worked were also lower by significant margins. The Mixed condition generally produced data that were between the other two conditions. Ultimately, scores in the Intermediate condition were better than those in the other conditions because subjects could borrow from high-performing solutions.
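As a rough illustration of the comparison being described, here is a short Python sketch of how one might tabulate participation and effort by condition and test a difference between two of them. The data file, column names, and the particular test are hypothetical stand-ins, not the authors' actual analysis.

```python
# Illustrative sketch only: compare participation across disclosure conditions.
# The file and column names are hypothetical stand-ins for the experimental data.
import pandas as pd
from scipy import stats

subjects = pd.read_csv("tournament_subjects.csv")   # one row per registered subject

summary = subjects.groupby("condition").agg(
    n_submitters=("submitted_any", "sum"),
    mean_submissions=("n_submissions", "mean"),
    mean_hours=("self_reported_hours", "mean"),
    best_score=("final_score", "max"),
)
print(summary)   # e.g., fewer submitters and fewer hours in the Intermediate condition

# Simple two-sample comparison of self-reported hours,
# Intermediate Disclosure vs. No Disclosure.
inter = subjects.loc[subjects["condition"] == "intermediate", "self_reported_hours"].dropna()
nodis = subjects.loc[subjects["condition"] == "no_disclosure", "self_reported_hours"].dropna()
print(stats.ttest_ind(inter, nodis, equal_var=False))
```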

More importantly, the data also disclosed differences in how subjects solved the problem. Consistent with the authors’ hypotheses, subjects in the Intermediate condition tried fewer technical approaches and seemed to experiment less than did those in the No Disclosure condition. Once significant improvements were disclosed, other subjects in the Intermediate condition tended to borrow the successful code, leading to a relatively smooth improvement curve. In the No Disclosure condition, by contrast, although new submissions were generally better than those the subjects had submitted before, they were more variable and less consistent in their improvement.

In summary, when subjects can view each other’s code, innovation tends to be more path-dependent and to happen more rapidly and successfully than when there is no disclosure. But when innovation systems are closed, people tend to participate more, and they tend to try a wider variety of solution strategies.

In previous research, these authors have explained how open-access innovation systems succeed in the face of diminished extrinsic incentives. This experiment provides valuable insight into the relative merits of open- and closed-access systems. Open-access systems will, all else equal, have advantages when creators have significant intrinsic incentives and when the innovation problem has one or few optimal solutions.

Closed-access systems, by contrast, will prove comparatively beneficial when the system must provide independent innovation incentives and when the problem involves a wide variety of successful solutions. The experiment’s contribution, then, is not to resolve the debate about open versus closed innovation but rather to help policymakers and organizations predict which kind of system will tend to be most beneficial.

The experiment also suggests helpful ways of thinking about the scope of intellectual property rights in terms of follow-on innovation. For example, strong derivative-works rights in copyright law create a relatively closed innovation system compared to patent law’s regime of blocking patents. If we think of the areas of copyright creativity as exhibiting a large variety of optimal solutions, then the closed-innovation system may help prevent path-dependence and encourage innovation (evidence from the movie industry notwithstanding). Future research could test this hypothesis.

As with any experiment, many questions remain. Boudreau and Lakhani’s incentives manipulation is not as clean as could be hoped, both because payouts in the Intermediate condition are lower and because attribution in the No Disclosure condition is effectively unavailable. Accordingly, it is difficult to make causal arguments about the relationship between the disclosure regime and incentives. In addition, although the Intermediate condition produces lower participation incentives for subjects who expect to be high performing, it creates higher participation incentives for subjects who expect to be low performing because they can simply borrow from high-scoring submissions at the end of the game.

Interestingly, there seems to be surprisingly little borrowing, which could suggest a number of curious features about the experiment: perhaps only high-skill subjects are capable of borrowing, and/or there may be social norms against certain kinds of borrowing even though it is technically allowed. And, as always, there are questions about the representativeness of the sample. Subjects were likely disproportionately men, and many likely had significant open-source experience and may have internalized the norms of that community. On the other hand, TopCoder bills itself as “A Place to Compete,” which may have primed competitive behaviors rather than sharing behaviors.

Ultimately, Boudreau and Lakhani have produced an exciting new contribution to intellectual property and innovation research.

 
 

How Copyright Prevents Us From Getting the Books We Want

Paul Heald, The Demand for Out-of-Print Works and Their (Un)Availability in Alternative Markets (2014), available at SSRN.

Back in mid-2013, Paul Heald posted to SSRN a short paper that has already had far more impact on the public debate over copyright policy than academic papers usually do. That paper, How Copyright Makes Books and Music Disappear (and How Secondary Liability Rules Help Resurrect Old Songs), employed a clever methodology to see whether copyright facilitates the continued availability and distribution of books and music. Encouraging the production of new works is, of course, copyright’s principal justification. But some have contended that copyright is also necessary to encourage continued exploitation and maintenance of older works. We find an example in the late Jack Valenti, who, as head of the Motion Picture Association of America, argued before the Senate Judiciary Committee in 1995 that it was necessary to extend the copyright term in part to provide continued incentives for the exploitation of older works. “A public domain work is an orphan,” Valenti testified. “No one is responsible for its life.” And of course if no one is responsible for keeping a creative work alive, it will, Valenti suggested, die.

Is that argument right? Enter Paul Heald. Heald’s 2013 article employs a set of clever methodologies to test whether copyright did, indeed, facilitate the continued availability of creative works—here, books and music.  With respect to books, Heald constructed a random sample of 2300 books on Amazon, arranged them in groups according to the decade in which they were published, and counted them. Here are his findings:

 

© 2013 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.

If you hadn’t already seen Heald’s article, the shape of this graph should surprise you.  You would probably expect that the number of books on Amazon would be highest in the most recent decade, 2000–2010, and would decline continuously as one moves to the left in the graph—i.e., further into the past. On average, and all else equal, books are less valuable as they age, so we should expect to see fewer older books on Amazon relative to newer ones.

But that’s not what we see.  Instead, we see a period from roughly 1930 to 1990, where books just seem to disappear. And we see a large number of quite old books on Amazon. There are many from the late-19th century and the first two decades of the 20th century.  Indeed, there are far more new editions from the 1880s on Amazon than from the 1980s.
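The basic counting exercise is easy to describe in code. The sketch below assumes a hypothetical sample file with a publication year and an availability flag for each title; Heald's actual sampling and coding procedure is, of course, more involved and is described in his paper.

```python
# Illustrative sketch only: count in-print editions by decade of original publication.
# The file and column names are hypothetical stand-ins for Heald's Amazon sample.
import pandas as pd

books = pd.read_csv("amazon_sample.csv")            # columns: title, pub_year, available_new
books["decade"] = (books["pub_year"] // 10) * 10    # e.g., 1987 -> 1980

editions_by_decade = (
    books[books["available_new"]]
    .groupby("decade")
    .size()
    .sort_index()
)
print(editions_by_decade)   # Heald's version of this count dips sharply for mid-century decades
```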

What on earth is causing this odd pattern? In a word: copyright. All books published before 1923 are out of copyright and in the public domain. And a variety of publishers are engaging in a thriving business of publishing these out-of-copyright works—and so they’re available on Amazon. In contrast, a large fraction of the more recent works—the ones under copyright—simply disappear. Maybe they’ll spring back to life when (or if?) their copyright expires. But for now, copyright doesn’t seem to be doing anything to facilitate the continued availability of these books. In fact, copyright seems to be causing some books to disappear.

Heald does a similar analysis for music, and this analysis too shows that copyright causes music to disappear, relative to music in the public domain. The effect is less pronounced than in the case of books, but it is still there.

In short, Heald’s paper placed a big question mark after the “continued availability” justification for copyright. If we care about works remaining available, then copyright, in fact, seems to be hurting and not helping.

Now Heald is back with a follow-up paper, The Demand for Out-of-Print Works and Their (Un)Availability in Alternative Markets, that takes on the most important question raised by his first: Should we be concerned that copyright appears to make works disappear? If there is no consumer demand for these disappeared works, then possibly not. But if there is consumer demand for the works that copyright kills, then we should care because that demand is not being met.

Heald employs a number of tests to determine whether there is consumer demand for the books that copyright makes disappear. Read the article if you want a full account, but it is worthwhile to give a couple of highlights.  In a particularly nifty part of the paper, Heald compares books available on Amazon with those available on the biggest used books website. The graph is instructive:

 

© 2014 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.

That gap between the red (Amazon) and blue (used book) curves suggests that used book sellers take advantage of a market in many books that copyright has made disappear from new book shelves, which in turn indicates that there is consumer demand for these books.

Heald then examines other possible ways that the market may provide access to works that copyright has made disappear. For music, Heald looks to see whether copyright owners are digitizing out-of-print records and either selling them on iTunes or posting them on YouTube. The answer, hearteningly, appears to be yes. Unfortunately, the picture for books is much less reassuring. As usual, Heald’s chart speaks more clearly than words:

 

© 2014 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.

Look at the number of popular songs from 1923–32 that are on iTunes—almost all of them. But then look at the number of popular books from the same period that are offered as eBooks—less than 40%. Many of these books are not available on Amazon in paper form. Nor are they distributed digitally.

So why the difference between the music and book publishing industries when it comes to the availability of older titles still under copyright? I’ll leave that as a mystery—and I hope your unslaked curiosity will lead you to read Heald’s article. It is well worth your time.

 
 

Copyright as a Chill on Innovation

Michael A. Carrier, Copyright and Innovation: The Untold Story, 2012 Wis. L. Rev. 891.

Sony’s Betamax was the first reprography technology to attract a copyright infringement lawsuit. Little did copyright experts back then realize how much of a harbinger of the future the Betamax would turn out to be. Countless technologies since then designed, like the Betamax, to enable personal use copying of in-copyright works have come to market. Had the Supreme Court outlawed the Betamax, few of these technologies would have seen the light of day.

The most significant pro-innovation decision was the Supreme Court’s Sony Betamax decision. It created a safe harbor for technologies capable of substantial non-infringing uses. Entrepreneurs and venture capitalists have heavily relied on this safe harbor as a shield against copyright owner lawsuits. Yet, notwithstanding this safe harbor, copyright owners have had some successes in shutting down some systems, most notably, the peer-to-peer file-sharing platform Napster.

It stands to reason that decisions such as Napster would have some chilling effect on the development of copy-facilitating technologies. But how much of a chilling effect has there been? Some would point to products and services such as SlingBox and Cablevision’s remote DVR feature and say “not much.”

Antitrust and innovation scholar Michael Carrier decided to do some empirical research to investigate whether technological innovation has, in fact, been chilled by decisions such as Napster. He conducted qualitative interviews with 31 CEOs, co-founders and vice presidents of technology firms, venture capitalists (VCs), and recording industry executives. The results of his research are reported in this Wisconsin article, which I like a lot.

One reason I liked the article is that it confirmed my longstanding suspicion that the prospect of extremely large awards of statutory damages does have a chilling effect on the development of some edgy technologies. Because statutory damages can be awarded in any amount between $750 and $150,000 per infringed work, and because copy-facilitating technologies can generally be used to interact with millions of works, copyright lawsuits put technology firms at risk for billions and sometimes trillions of dollars in statutory damages. For instance, when Viacom charged YouTube with infringing 160,000 works, it exposed YouTube and its corporate parent Google to up to $24 billion in damages. While a company such as Google has the financial resources to fight this kind of claim, small startups are more likely to fold than to let themselves become distracted by litigation and spend precious VC resources on lawyers.
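
The arithmetic behind that exposure figure is straightforward; at the statutory maximum of $150,000 per willfully infringed work,

\[ 160{,}000 \text{ works} \times \$150{,}000 \text{ per work} = \$24{,}000{,}000{,}000. \]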

But a better reason to like the article is the fascinating story Carrier and his interviewees tell about the record labels’ mindset toward Napster and about the technology “wasteland” caused by the Napster decision.

The lesson that the record labels should have learned from Napster’s phenomenal (if short-lived) success was that consumers wanted choice—to be able to buy a single song instead of a whole album—and that if it was easy and convenient to get what they wanted, they would become customers for a whole new way of doing business. Had the record labels settled with Napster, they would have benefited from the new digital market and earned billions from the centralized peer-to-peer service that Napster wanted to offer.

The labels were used to treating record stores as their customers, not the people who actually buy and play music. Radio play, record clubs, and retail were the focus of the labels’ attention. They thought that the Internet was a fad, or a problem to be eradicated. They were unwilling to allow anyone to create a business on the back of their content. They believed that if they didn’t like a distribution technology, it would go away because they wouldn’t license it. They coveted control above all. When the labels began to venture into the digital music space themselves, they wanted to charge $3.25 a track, which was completely unrealistic.

Some of Carrier’s interviewees thought that the courts had reached the right decision in the Napster case, but questioned the breadth of the injunction, which required 100% effectiveness in filtering out infringing content and not just the use of best efforts, thereby making it impossible to do anything in the digital music space. One interviewee asserted that in the ten years after the Napster decision, iTunes was the only innovation in the digital music marketplace. Many more innovations would have occurred but for the rigidity of the Napster ruling and the risk of personal liability for infringement by tech company executives and VCs.

The role of copyright in promoting innovation was recently highlighted in the Department of Commerce’s Green Paper on “Copyright Policy, Creativity and Innovation in the Digital Economy” (July 2013). It aspires to present a balanced agenda of copyright reform ideas that will promote innovation. It is an encouraging sign that the Green Paper identifies statutory damage risks in secondary liability cases as a policy issue that should be addressed. Reforming statutory damages would not entirely eliminate the risk that copyright will chill innovation, but it would go a long way toward that goal.

 
 

Quantifying the Copyright-Innovation Interference

Joel Waldfogel, Copyright Protection, Technological Change, and the Quality of New Products: Evidence from Recorded Music Since Napster, 55 J.L. & Econ. 715 (2012), available on the University of Minnesota’s website.

The Constitution empowers Congress to promote the useful and the expressive arts, which Congress does through the laws governing patents and copyrights. But promoting one may sometimes retard the other. The tension arises in the context of new copying and dissemination technologies, such as the photocopier, the VTR, the MP3 player, and file-sharing networks. Imposing copyright liability on the makers and users of these technologies encourages copyright owners but may discourage innovators. Shielding such makers and users from liability encourages technological innovation but may retard expressive creativity. How should we strike this trade-off, either in general or in particular cases?

This question has long been a major issue in copyright law and scholarship.1 To know what the right policy is, we should have some sense of the degree to which incentives to create content are diminished, if at all, in the face of the new technology. Indeed, much empirical work surrounding the file-sharing litigation has studied the effect file-sharing had on music sales. This body of literature contains diverse views, and the debate on the empirics is sometimes as heated as the one on the theory and policy side.2

Joel Waldfogel’s paper is a recent and valuable contribution to the empirical literature. I like it lots because it takes a new approach to quantifying the digital age’s net effect on incentives to create expressive works. Waldfogel does not believe that quantifying the reduction in copyright owners’ revenue is the most important question. Technological change may reduce the amount that copyright owners can receive for their content, but technological change may also reduce copyright owners’ cost of producing and disseminating content. If the latter effect is greater than the former, technological change may actually enhance incentives to create. To know whether and what kind of legal intervention is needed, we need to determine which effect is greater. The paper tries to quantify the technology’s net effect on incentives to create by looking at what has happened to the quality of sound recordings produced since the advent of online file-sharing in 1999.
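
One stylized way to put the point (the notation is my shorthand, not the paper’s): write a creator’s expected payoff as \( \pi = R - C \), where \( R \) is the revenue a work brings in and \( C \) is the cost of producing and disseminating it. A technological shock then changes the incentive to create by

\[ \Delta\pi = \Delta R - \Delta C, \]

so even if revenue falls (\( \Delta R < 0 \)), the incentive to create rises whenever costs fall by more, that is, whenever \( \Delta C < \Delta R < 0 \).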

The paper does so by constructing three original measures of the quality of music over time. One is an index of the volume of high-quality music since the 1960s. It is based on critics’ retrospective lists of the best works over various time periods (e.g., Rolling Stone’s 2004 list of the 500 best albums, based on the opinions of 273 critics and expert musicians, or Pitchfork Media’s 200 best albums of the 2000s). It contains 88 rankings of either songs or albums from Anglophone countries. Two additional indices track the quality of music from different vintages using record sales data (RIAA gold and platinum certifications awarded between 1958 and 2010) and airplay data (airplay during 2004–2008 of songs originally released in earlier years). Here, the assumption is that higher-quality music should generate more sales and more airplay over time, surviving longer in the market. These two indices evaluate vintages of music by the extent to which records from a particular year continue to sell, or to be played, years later.
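
To make the survival logic behind the certification and airplay indices concrete, here is a minimal sketch in Python. The function name, the input format, and the toy numbers are hypothetical illustrations of my own, not the paper’s actual specification.

    # A minimal sketch (not Waldfogel's specification) of a survival-style
    # quality index: for each release year ("vintage"), compute the share
    # of later airplay that the vintage captures.

    from collections import defaultdict

    def vintage_airplay_index(observations):
        # observations: iterable of (vintage_year, plays) pairs, where
        # plays counts airplay logged years after the song's release.
        # Returns a dict mapping each vintage year to its share of total
        # plays -- a crude proxy for how well that vintage has survived.
        plays_by_vintage = defaultdict(int)
        for vintage_year, plays in observations:
            plays_by_vintage[vintage_year] += plays
        total = sum(plays_by_vintage.values())
        if total == 0:
            return {}
        return {year: count / total for year, count in plays_by_vintage.items()}

    # Hypothetical toy data: airplay during 2004-2008 attributed to songs
    # released in 1998, 1999, and 2000.
    sample = [(1998, 1200), (1999, 950), (2000, 1800), (2000, 400)]
    print(vintage_airplay_index(sample))
    # {1998: 0.2758..., 1999: 0.2183..., 2000: 0.5057...}

On such an index, a vintage whose recordings keep getting played years after release scores higher, which is the sense in which continued airplay is treated as a signal of quality.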

Below are graphs of two of the music quality indices over time (the certifications graph is very similar to the airplay one, both being measures of public enjoyment):

© 2012 by The University of Chicago & Joel Waldfogel. All rights reserved. Reprinted with permission of the University of Chicago and Joel Waldfogel.

The paper finds that these indices—measures of music quality—are consistent with each other and that there is no evidence that the quality of music has declined in the years since Napster. The paper’s certifications and airplay indices suggest that music quality has actually increased substantially since 1999 (as shown in the last graph above). The paper concludes by suggesting that its findings are relevant to policymakers setting the strength of copyright protection.

Several assumptions need to be made before using the paper for policymaking, and I would like to note two. First, one would have to accept the indices as good measures of industry output. One could question, however, whether this is so. Clearly, altering the measure of quality may alter the findings of what happened to that quality over time, and thus may alter the policy implications. For example, the finding that music quality increased significantly post-Napster is borne out by two of the paper’s indices (the airplay and the certification indices) but not by the third (the experts’ index). It would thus be interesting to see whether the finding of non-decreasing music quality is robust to the use of other measures of quality.

But even assuming that the paper’s findings are robust to alternative measures of quality, another issue remains: What policy implications is one to draw from the paper’s findings? Here, the paper provides qualified guidance. One possibility, implicitly suggested at the conclusion of the paper, is that policymakers should not strengthen copyright protections since copyright owners’ reduced revenue (and potentially profit) was not associated with a decline in the quality of the music industry’s output. The paper’s findings, however, cannot support such a conclusion unequivocally. The years just prior to Napster’s advent and since were characterized not only by technological change, but also by legal change. By and large, copyrights have been strengthened over the time period. We cannot know whether incentives would have remained the same but for the legal change. Perhaps enhancing copyright owners’ rights was necessary to keep production levels constant. It is also possible that a lesser, or a greater, invigoration of copyrights would have enhanced music quality. Largely abstracting away from legal change, the paper leaves these possibilities open.

But these two remarks should not take away from Waldfogel’s substantial contribution to the literature. Rather, they are meant to acknowledge how much he has pushed the ball forward and how much more insight can be gained into the intersection of copyright law and technological innovation if more research is conducted following his approach.



  1. For a few notable treatments of the topic, see Paul Goldstein, Copyright’s Highway: From Gutenberg to the Celestial Jukebox (2d ed. 2003); Jessica Litman, Digital Copyright (2d ed. 2006); William Fisher, Promises to Keep: Technology, Law, and the Future of Entertainment (2004); Jane C. Ginsburg, Copyright and Control Over New Technologies of Dissemination, 101 Colum. L. Rev. 1613 (2001); Tim Wu, Copyright’s Communications Policy, 103 Mich. L. Rev. 278 (2004), available at SSRN. For my treatment of the topic, see Dotan Oliar, The Copyright-Innovation Tradeoff: Property Rules, Liability Rules, and Intentional Infliction of Harm, 64 Stan. L. Rev. 951 (2012).
  2. There is a growing scholarly empirical literature on the effect of file sharing on the sale of sound recordings. For two summaries of the literature, compare Stan Liebowitz, The Metric is the Message: How Much of the Decline in Sound Recording Sales is due to File-Sharing?, CAPRI Publication 11-02 (2011) (suggesting that file sharing has caused the entire enormous decline in record sales over the past decade), with Felix Oberholzer-Gee & Koleman Strumpf, File Sharing and Copyright, Harvard Bus. Sch. Working Paper 09-132 (2009) (suggesting that file sharing has caused no more than 20% of the decline in music sales).