Internet Payment Blockades: SOPA and PIPA in Disguise? Or Worse?

Annemarie Bridy, Internet Payment Blockades, Fla. L. Rev. (forthcoming), available at SSRN.

The law of intermediary liability in intellectual property reflects a constant struggle for balance. On the one hand, rights owners frustrated by the game of whack-a-mole have good reason to look for more efficient ways to stanch the flow of infringement. While this concern is not a new one, the global reach and decentralization of the Internet have exacerbated it. On the flipside, consumers, technology developers, and others fret about the impact of broad liability: it can impede speech, limit competition, and impose a drag on economic sectors with only a peripheral relationship to infringement. As the Supreme Court put it thirty years ago in the seminal Sony case, the law must seek a “balance between a [rights] holder’s legitimate demand for effective – not merely symbolic – protection of the statutory monopoly, and the rights of others freely to engage in substantially unrelated areas of commerce.”

For the most part, the battle of these competing interests has played out in litigation, legislation, and deals involving online intermediaries whose services are used to infringe. The Digital Millennium Copyright Act’s notice-and-takedown procedures, the peer-to-peer copyright battles, keyword advertising suits, and lawsuits against websites like eBay are giving shape to the relative rights and responsibilities of IP owners and intermediaries.

While few would defend the existing structure as perfect, it has a number of features – such as the (usual) requirement of actual knowledge – that are designed to minimize IP enforcement’s collateral effects. More generally, except in the case of inducement, the obligation to stop infringement has been limited to those with a close technological relationship to it – parties that may not have initiated the infringing act, but have the tools to stop it surgically, with minimal collateral effects.

These standards of secondary liability, however, have not satisfied rights-holders. In addition to substantive objections to the rules, IP owners point out that many online intermediaries lie beyond the reach of United States courts. In the last several years, they have tried to widen the net of responsibility to include not just technological intermediaries, but also payment intermediaries that process Internet transactions.

The efforts began in the courts, which held that, absent evidence of collusion or inducement, financial intermediaries were too far removed from infringement to justify imposing liability on them. Attention then turned to the legislature, which showed initial enthusiasm but backed down in response to a wave of protest that reflected both substantive and procedural objections to PIPA, SOPA, and related legislation.

In this article, Annemarie Bridy (who has written insightfully about the lead-up to, and fallout from, the PIPA and SOPA debacle) explores the latest front in this battle over responsibility for online enforcement: “voluntary” Internet payment blockades.

The picture she paints is not pretty. If SOPA and PIPA raised transparency concerns, and if the prospect of broad liability cast a pall over financial intermediaries’ engagement in “substantially unrelated areas of commerce,” the latest chapter will make the winners of earlier battles wonder what they fought for. As Bridy explains, in the wake of the SOPA and PIPA collapse, rights-holders enlisted the executive branch to pressure payment intermediaries to “voluntarily” take on obligations that mirrored those in the failed legislation. Under the agreement, financial intermediaries commit to a “notice-and-terminate protocol,” in which merchants selling infringing products are cut off from access to payment services.

Even if the agreement were truly voluntary, it would raise troubling questions of substance and process. The notice-and-terminate system does not appear to require any inquiry into the relative volume of infringing and non-infringing sales by the offending merchant. The agreement, moreover, arose out of a closed and non-transparent process, as evidenced by the government’s failure to produce a single document in response to a FOIA request about the negotiations. Equally distressing, the “voluntary,” non-mandated and non-contractual nature of the agreement means that no court will adjudicate its meaning and scope.

Bridy makes a persuasive case that this form of “private ordering” is really regulation in disguise. After years on the defensive, and in the shadow of prospective regulation, the payment intermediaries acceded to pressure and joined the ranks of intellectual property enforcers. Without transparency in either design or implementation, the public has no way to know whether this quasi-regulatory system reflects any of the balance that’s so critical in the Internet context – between freedom and responsibility, and between infringing and non-infringing use.

I found Bridy’s article engaging, informative, and disturbing. She does a terrific job of introducing readers to the past and present of payment blockades. She lays the historical foundation, from initial case law through PIPA and SOPA, and offers a clear, simple, and informative explanation of how the best practices protocol works. And her critique is powerful. For those of us focused primarily on developments in litigation, legislation, and formal administrative law, the article offers a sobering reminder of the limitations of these formal fora, and the doggedness and resourcefulness of rights-holder advocates.

The article left me eager for more details about the agreement’s content and effects – will it impede access to websites offering plenty of non-infringing stuff? Where are the incentives for financial intermediaries in that scenario? Market forces may well substitute for legal doctrine and lead payment intermediaries to avoid cutting off legitimate trade. But investigation is expensive, and it may prove expedient to terminate in the questionable case.

Ultimately, whether this system will cause more harm than good is an empirical – and probably untestable – question. It may well be that, with a narrowly tailored notice-and-terminate system, payment intermediaries offer an efficient way to reduce rampant counterfeiting and infringement. But Bridy’s article gives us reason to question the jump to that conclusion, especially in a process as muddy as this one.

Cite as: Stacey Dogan, Internet Payment Blockades: SOPA and PIPA in Disguise? Or Worse?, JOTWELL (March 25, 2015) (reviewing Annemarie Bridy, Internet Payment Blockades, Fla. L. Rev. (forthcoming), available at SSRN), http://ip.jotwell.com/internet-payment-blockades-sopa-and-pipa-in-disguise-or-worse/.
 
 

Surveying the Field: The Role of Surveys in Trademark Litigation

Shari Seidman Diamond & David J. Franklyn, Trademark Surveys: An Undulating Path, 92 Tex. L. Rev. 2029 (2014).

Trademark surveys have traditionally been seen as a core element of any trademark infringement or dilution dispute. How else would we discover, the theory goes, whether the typical consumer is confused about the source of a particular product, believes the prestige of a famous mark to have been diluted, or considers a once valid mark to have become generic?

Recent empirical work, focusing on published judicial opinions, has debated whether surveys have indeed played as significant a role as some have asserted or whether they are generally disregarded by courts, perhaps in favor of judges’ own intuitions. In a recent symposium contribution published in the Texas Law Review, Shari Seidman Diamond and David J. Franklyn help to expand the field. Because published opinions tell only part of the story, Profs. Diamond and Franklyn surveyed trademark practitioners in an attempt to discover how surveys are used in early stages of legal disputes. The results provide some useful food for thought both for trademark practitioners and for empirical legal scholars.

Some background for readers who are unfamiliar with trademark doctrine: Trademarks function, in the now prevalent description, as source identifiers — they tell consumers who is responsible for (or, perhaps, who sponsors or authorizes) the good or service at hand. Unlawful uses of trademarks disrupt this function in impermissible ways, either by confusing consumers into thinking that two parties are related when they are not or by diluting the strength of a famous trademark by using it on unrelated goods or services. When a trademark no longer performs its source-identifying function — that is, when it becomes the common word for a good or service rather than identifying a particular producer of that good or service — the mark is said to have become generic and so can no longer be claimed by anyone as a mark for that good or service.

The conventional wisdom was that courts trying to determine how consumers interpreted a particular trademark would want to see evidence of that interpretation in the form of a survey. Consumers would be intercepted in a mall, or contacted via the telephone, and asked certain questions about the mark. There are, of course, limits to survey evidence, since any results must be assessed with the knowledge that the artificial nature of the survey environment is likely to encourage focus on small similarities or differences that might otherwise go unnoticed. (Likewise, there are equally troublesome instances of judges believing that their own views, arrived at in the quiet of chambers, are equivalent to those of the reasonable consumer.)

In a careful and influential 2006 study of the multifactor tests for trademark infringement, Prof. Barton Beebe concluded that courts did not ultimately place as much weight on surveys as litigants might have assumed. Given that only 20 percent of the 331 opinions he studied discussed survey evidence, and only 10 percent credited the survey evidence, Prof. Beebe concluded that survey evidence was “in practice of little importance.”

Prof. Beebe’s conclusions were challenged to some extent by Profs. Sarel and Marmorstein in their 2009 study. Profs. Sarel and Marmorstein narrowed the field to 126 opinions in which the plaintiff’s trademark rights were undisputed, in an attempt to focus their assessment on surveys regarding likelihood of confusion. They found that in about one-third of their cases, the plaintiff offered a likelihood-of-confusion survey, with a significant impact on the results of decided cases in which the survey was admitted.

A third study was published by Profs. Bird and Steckel in 2012. Like the previous two studies, the Bird/Steckel study focused on published opinions, expanding Prof. Beebe’s data set with an additional 202 later cases. With this larger dataset, Profs. Bird and Steckel found that only 16.6 percent of their 533 cases discussed survey evidence, and so their results supported Prof. Beebe’s finding that surveys were not as broadly influential as assumed. They also found, however, consistent with Profs. Sarel and Marmorstein, that surveys could be influential in particular kinds of cases (such as where the marks were similar but the products dissimilar).

In their contribution to this line of research, Profs. Diamond and Franklyn note — as the previous authors acknowledge — that all of these studies focus on published opinions, and so tend to show us what judges and juries think about trademark surveys. To learn more about how litigants assess the value of surveys, Profs. Diamond and Franklyn obtained the permission of INTA (the International Trademark Association) to survey its members to determine when and under what circumstances attorneys chose to commission a survey in a trademark case and what effect the survey had over the course of the litigation.

The survey included questions assessing the respondents’ experience with surveys, the factors they considered in deciding whether to commission a survey, and the effect the survey had in the respondents’ most recent case, such as leading one party to drop or settle a claim. The survey also asked respondents to assess the effect of any surveys conducted by the opposing party in the case. (It should be noted that, inevitably, the dataset for this study had its own limitations, in that it asked respondents about surveys in trademark and deceptive advertising litigation, which would not include the use of surveys in proceedings before the Trademark Trial and Appeal Board, where they may well play a different role.)

Although, as they acknowledge, the overall response rate to the survey was fairly low (333 practicing attorneys), Profs. Diamond and Franklyn conclude that “not only are surveys widely used in pretrial stages, but that the attorneys who commission them generally perceive their impact as quite influential on the outcome of the case.” (P. 2052.) Indeed, although only 19.2 percent of the surveys that respondents reported were eventually presented at trial, more than half of the 333 attorneys reported having commissioned at least one survey.

Profs. Diamond and Franklyn also asked about the factors used in deciding to commission a survey, and here, the results are not particularly surprising: the cost to the client; an assessment of how close the case was; how high-stakes the litigation was; and the likelihood that the survey would return a favorable result. Finally, the responses indicated that surveys played a substantial role in early stages of litigation: 54 percent reported that the survey persuaded one of the parties to drop the claim or settle, while 36 percent reported that the survey was presented at a preliminary injunction hearing and/or at trial (which, as Profs. Diamond and Franklyn note, could also have motivated settlement of the case).

The authors do not purport to offer a normative assessment on when and whether trademark surveys should be conducted or proffered, but their results can’t be fully interpreted without taking such questions into account. The effect of a survey in a trademark case is as much about which party has the resources to fully commit to the survey process as it is about a search for the truth about consumer perception. A party with the resources to commission a carefully designed survey, conduct pilots or pretests that yield useful information about predicted results (perhaps via a nontestifying expert in an attempt to shield the results of the pilot from discovery), and have an expert on hand to question the methodology of the opposing survey will be in a better position in settlement negotiations than the party without such resources. (When six percent of survey respondents believe that a parody advertisement featuring “Michelob Oily” was an actual Anheuser-Busch advertisement, one can appreciate the benefit of an expert survey team.) So, relatedly, a survey may well be a type of signaling device that communicates a belief in the strength of one’s case quite separate from the results of the survey itself. We should not, therefore, draw broad conclusions about the value of surveys in litigation without considering how the survey came to be in the first place.

All of this, however, is fodder for future inquiry. For now, this article provides a very helpful reminder to all scholars embarking on empirical projects that when we focus only on published opinions, we can only say so much about the world.

Cite as: Laura Heymann, Surveying the Field: The Role of Surveys in Trademark Litigation, JOTWELL (February 23, 2015) (reviewing Shari Seidman Diamond & David J. Franklyn, Trademark Surveys: An Undulating Path, 92 Tex. L. Rev. 2029 (2014)), http://ip.jotwell.com/surveying-the-field-the-role-of-surveys-in-trademark-litigation/.
 
 

Measuring Serendipity

Bhaven N. Sampat, Serendipity (Mar. 8, 2014), available at SSRN.

“Serendipity, the notion that research in one area often leads to advances in another, has been a central idea in the economics of innovation and science and technology policy.” Yet, Bhaven Sampat argues, “serendipity is a hypothesis, not a fact,” and it therefore needs to be tested. So Sampat set out to do just that, designing an impressive study to measure serendipity in pharmaceutical research. In this context, Sampat defines serendipity in terms of cross-disease spillover: results are serendipitous when research that was originally funded for one purpose (to target a certain disease) turns out to be useful for another purpose (in the treatment of different diseases).

Sampat tests the serendipity hypothesis by comparing the disease foci of NIH grants to the research output from those grants. Because most of the NIH Institutes and Centers are explicitly disease-oriented, Sampat is able to compare the disease foci of the granting Institutes to the foci of publications that result from the grants, to citations to these publications in patents, and then to marketed drugs associated with those patents. Finally, Sampat focuses on the subset of drugs held by NIH researchers themselves (the Bayh Dole drugs). Publications, patents, and drugs with different disease foci than the granting Institute are deemed evidence in support of the serendipity hypothesis.

This approach to measuring serendipity is useful because, despite good evidence that spillovers in the innovation context have significant overall economic benefits, evidence of the kind of serendipity to which Sampat refers has been mostly anecdotal. Viagra, for example, is a well-known example of a drug originally developed as a blood pressure medication but later discovered to be useful for treating erectile dysfunction.

Sampat ultimately finds strong evidence of serendipity by all of his measures. In particular, 50-60% of the publications resulting from grants were outside the main disease area and nearly 50% of drugs link back to a grant in a different disease area. The story is similar for drugs based on patents owned by NIH researchers. About 30% of those drugs link back to “basic” grants with no disease focus, and almost a quarter (23%) of the drugs which can be traced back to a disease-focused grant relate to a different disease area than the drug’s first indication.

It’s worth noting that, as Sampat acknowledges, this kind of cross-disease spillover isn’t necessarily a result of serendipity, which obviously implies an unintended outcome. Some of the NIH Institutes explicitly fund basic research, which makes it harder to know that research funded by those Institutes was actually intended for any specific purpose, let alone one that differed from the outcome. Some Institutes also fund research across several diseases, so it should not be surprising to find more diverse research outputs from those Institutes. It seems unlikely, however, that these factors explain the magnitude of cross-disease spillover that Sampat finds. Some significant amount of that spillover is very likely serendipitous.

Sampat’s research is quite timely, as there recently has been a renewed interest in innovation institutions beyond intellectual property, and government grants play an important role in the overall innovation ecosystem. But grants can be structured in a variety of ways, and differently structured grants will interact with other innovation institutions (including patent law) in different ways. Certain combinations of institutions will take us down one innovation path; other combinations will take us down a different path. One important consideration in choosing the mix of institutions is the extent to which we believe we can determine, ex ante, which specific research outputs we want, or whether instead we are better off choosing policies that leave open the greatest number of potential outcomes.

One way to read Sampat’s research is as evidence that policymakers simply can’t fully direct innovation because the research process is inherently unpredictable. On that view, less may be at stake in structuring innovation institutions than previously assumed – even when government grants are intended for particular ends, we can still expect to see a broader range of outputs simply because the research process is not linear. In this respect, those concerned about innovation policy being managed in an excessively top-down manner might have less to worry about than they feared.

But whether that is right depends a lot on whether these research outputs were in fact serendipitous and don’t simply look that way because researchers had to find a way to get their projects funded. To put it somewhat crudely, some applications for disease-focused grants might simply have been directed to particular Institutes because that is where the money was. The extent to which we should still worry about focusing research grants also depends on the range of serendipity that might have resulted under some less restrictive institutional arrangements – whether, for example, more research would have been undertaken that did not have a clear objective at all, or whether the research would have generated an even wider range of outputs simply because the researchers were aiming at a more diverse range of targets. In this sense, the range of targets might anchor the outputs such that, even if the results turn out to be broader than the focus of the grants, they are still more limited than they might otherwise have been.

Of course, none of this answers the question of how much we should want to focus innovation. For while it might be true that less restrictive institutional arrangements would allow for more serendipity, it might also mean that we are less likely to get specific research outputs that we can be confident we want. We might, for example, get a wider range of cancer treatments, but less effective treatments for particular types of cancer. Which kind of system we should prefer is an important normative question that Sampat doesn’t try to answer. But we need to have a meaningful and empirically-grounded sense of the tradeoffs involved in order to think clearly about our priorities, and Sampat has done us a great service by beginning to fill the empirical vacuum.

Cite as: Mark McKenna, Measuring Serendipity, JOTWELL (January 20, 2015) (reviewing Bhaven N. Sampat, Serendipity (Mar. 8, 2014), available at SSRN), http://ip.jotwell.com/measuring-serendipity/.
 
 

The Past and Future of Copyright Politics

In The Fight Over Digital Rights, Bill Herman, a professor in the Department of Film and Media Studies at Hunter College, explores the changing landscape of political debate over digital rights management between 1980 and 2012. This is a book about copyright, but it is also a book about political science and legislation.

You don’t need to know anything about copyright to read this book and learn new things about how the process by which legislation is formed and amended dramatically affects substantive and procedural rights. Herman’s book is about the rights and opportunities that copyright laws provide. And it traces the shape of those rights and opportunities as the legislative process has been affected by the rise of Internet activism since 1980.

One of the most refreshing aspects of Herman’s book is that despite its subject being copyright, its framework is not economic or social welfare theories that claim to unify or explain copyright protection. Herman’s book is about political history – the real nitty-gritty of coalitions, debates and compromises of copyright reform. Indeed, it is both a history and a road map of the copyright sausage factory. As such, it is a welcome departure from theories and reform proposals for what many consider to be a broken copyright system.

Herman gets real with this book. He demonstrates that a way forward for copyright reform, one that will enable the creative practices of both users and authors (the dichotomy being a false one, in any case), requires both a comprehensive look at the path that produced the current Copyright Act and an understanding of how the digital tools the Copyright Act regulates are harnessed in a participatory democracy.

Herman begins with one of the first skirmishes between copyright and digital access that took place in the 1980s over the “digital audio tape” (or DAT) in the context of the passage of the Audio Home Recording Act (AHRA). He ends the book with a play-by-play analysis of the demise of SOPA (“Stop Online Piracy Act”) and PIPA (“Protect IP Act”) in early 2012. If the number of acronyms in this paragraph isn’t already a clue, this book is candy for the gloriously geeky in the ways of technology and politics. It is also uplifting for those who worry that the legislative factory makes only one kind of sausage.

Herman employs both quantitative and qualitative analysis of political activity, in the form of media coverage, legislative filings, and online communication, to demonstrate a trend in copyright reform that reflects the growing influence of the Internet on advocacy and policy choices. Herman concludes that the demise of SOPA and PIPA is “the best possible evidence of the profound shift in the politics of copyright – an exclamation point on the internet-fueled [strong fair use] coalition” (203) brought about by “a potent group of political actors” and their “political messages communicated through new media.” (206)

Herman’s conclusion is that the “strong fair use” coalition (what he calls the SFU – as if we need more acronyms!) has become an effective political force in copyright legislative reform. This is in large part because of the Internet (the largest and most accessible copy machine in the world), which the Copyright Act has tried, but largely failed, to regulate.

In telling the story of the turning tides of legislative power over copyright law reform, Herman owes a profound debt to Jessica Litman’s Digital Copyright.1 Litman was right in 2001 that copyright legislative reform has been the work of only the strong-copyright advocates (the big six movie studios (the “MPAA”), the music recording industry (the “RIAA”), and the text publishing industry (the Authors Guild)). But Litman was writing only about the past – the 1976 Copyright Act and the 1998 Digital Millennium Copyright Act (“DMCA”). And her argument, although convincing and trustworthy, was also a battle cry for change.

Litman explained (and predicted) how captured legislative process concerning copyright results in benefits for an elite group of copyright holders and harms the everyday audience of copyright users and creators. (In today’s parlance, we might call the beneficiaries of these legislative reforms “the 1%”). Litman forewarned that if the past legislative process were predictive of the future, regulation of digital copyright (the dominant form of expression going forward) would suffocate the constitutional mandate for “progress” that requires distribution and access.

Herman’s book is less of a battle cry. Indeed, it is exhaustively descriptive of the legislative debates and media coverage of copyright reform over the past thirty years. Through the thicket of details about hearings, witnesses, publication venues, coalitions and media debates, Herman tells a normative story about the value of participatory democracy and his belief in its inevitability in the digital age. And that is because he believes the SFU coalition has begun to win some debates. (I question whether sinking SOPA and PIPA is a legislative “win” absent the enactment of new laws that embody the reasons those proposals were bad, but I will accept that their defeat is a step in the right direction for the SFU coalition).

Reading The Fight Over Digital Rights, I imagined Herman (metaphorically) taking Jessica Litman’s hand and following her back in time, retracing the legislative hearings of the AHRA in the late 1980s, the DMCA in the late 1990s, and then doing a deep dive into the media coverage surrounding both as well as substantial online communication concerning the latter. He does so not to make an argument about the normative benefits of strong-copyright or strong-fair-use (although as I said, it is hard not to read into his book a preference for the latter). His comprehensive discourse analysis, historical case studies, and quantitative measurements of media coverage are primarily about the value of evidence – evidence of the legislative capture Litman describes. And the value of his evidence reads like an antidote to Litman’s warning cry.

Herman shows how fights over copyright are between people, not only corporations, and this matters for how we mobilize on one side or the other and how successful our mobilization will be. And, because the Internet is the ultimate crowd-sourcing tool (in addition to a copy-and-distribution machine), the Internet has and will continue to affect politics and legislative change in the voice of the people using it.

Herman chronicles how the historic insularity of copyright reform has given way to public and political forums on the Internet. In the past, groups with the greatest funding and political access won the policy outcomes of their choice by maintaining insularity and elite access; groups with less capital tended to take their case to the street, an approach that lacked the continuity and strength of digital communication because it depended on the everyday dedication of time and bodies. However, Herman says, the “internet reshapes policy advocacy … [and] mitigate[s] the problem of collective action.” (13) Because Internet communication is cheap, facilitates the aggregation and identification of communities of interest, and is durable and repetitious, it can even out the fight for access and rupture the insular spaces where legislative reform occurs. Herman shows how, in the debate over SOPA and PIPA, by 2012, when the gears of the Internet were fully harnessed, an “underfunded, diffuse group of citizens and nongovernmental organizations scored a victory against … concentrated well-funded industry group[s], highlighting the potential for online communication to shape policy outcomes.” (14)

Herman is not subtle about what he thinks the internet means for copyright reform and political processes generally. It is “nothing less than a fundamental reordering of the copyright policy subsystem.” (19)

As a dense political history, the book offers much that is new, featuring industry leaders and legislators as battling protagonists. But there is also a lot that is left implicit and could be made clearer (with evidence and analysis) about the changing nature of the substance of the debate as the breadth of its participants grows. As the medium in which the message circulates changes, how will the substance of the enacted policies also change? As the internet combines qualities of inestimable diffuseness with precision focus, Washington’s power-centers are destabilized and the influences legislators experience are more diverse. But the language of property entitlements and financial incentives is still very much present. These were the baselines of all past copyright reform. Will we hear something different this time around? Will the testimony on behalf of individual people or loosely-affiliated groups, as opposed to well-established organizations and companies, change what we understand copyright law to be for and how it works?

Certainly, the interests of the new and dominant technology companies that make devices that facilitate the making and dissemination of creativity are being heard (hello Google, Apple and Microsoft). But what about the creative people who are both hobbyists and professionals, who are artists, engineers, scientists, videographers, writers? Is Internet advocacy going to help them too? To answer this question, Herman’s book also traces the rise in public discussions of transparency in access to information, equality of opportunity to access intangible goods, and the questionable benefits of exclusive entitlements. But his analysis on this score is thinner than his treatment of the expanding access to the legislative process more generally. Does his mention of these qualities mean that the individuals participating en masse through the Internet in debates over copyright care more about these issues than about other copyright interests (such as rights of excludability and maximalist revenue)? There is more research to be done on this score.

What do people who engage in creative practices and produce the work that companies sell need to continue their work? How do the existing laws (or hypothetical future laws) help or hurt their creative practices? Herman’s research leaves the reader with the impression – left to be proven in subsequent analyses and legal reform battles – that the change in access to law reform over copyright has changed the debate over the scope and nature of the rights copyright law should affect. If Herman is right, and the growing discussions about substantive equality and distributive justice will be durable features of IP policy going forward, we can say with some certainty that copyright law and policy is experiencing a transformation of both its substance and practice.



  1. Jessica Litman, Digital Copyright (2001).
Cite as: Jessica Silbey, The Past and Future of Copyright Politics, JOTWELL (December 15, 2014) (reviewing Bill Herman, The Fight Over Digital Rights: The Politics of Copyright and Technology (2013)), http://ip.jotwell.com/the-past-and-future-of-copyright-politics/.
 
 

Why FISH:FISHES :: INFORMATION:INFORMATIONS

Tim Wu, Properties of Information and the Legal Implications of Same (Columbia Law and Economics Working Paper No. 482, 2014), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2446577

Tim Wu’s new essay, Properties of Information and the Legal Implications of Same, offers both a survey of recent legal and economic scholarship on information and a provocative reconceptualization of it. Wu posits that information is commonly described as an unusual resource because its very nature means that it possesses the twin properties of being non-excludable and non-rival. Taking these properties as givens, information can be readily pegged as a public good, and a strong case can be made out for government intervention to foster the production and/or dissemination of the information (whether in the form of IP rights, subsidies, or something else). However, Professor Wu’s reading of the literature, combined with his sprinkling of original comments on the intrinsic nature of information, suggests that the story is not quite this simple. (Although the review sweeps broadly, discussing securities regulation, contract theory, consumer protection, communications, and free speech, the bulk of it addresses intellectual property.) Professor Wu argues that information is not by its very nature non-excludable or non-rival. Rather, the subject matter of the information, the context in which the information exists, and the structure of the industry that employs the information all matter. They all affect the extent to which information is a public good. As a consequence, Professor Wu counsels against a single policy prescription for problems concerning the underproduction or under-dissemination of information and in favor of context-specific, dynamic laws governing information.

Professor Wu argues that non-excludability per se is not what makes information prone to free-riding problems (and thus the problem of underproduction without government intervention). Positing that information “consists of patterns, which must subsist in some form, whether ink on paper, stored magnetic charges, or whatever else” and that information is only valuable if a human mind perceives it, he takes it to be self-evident that people can in fact be readily excluded from information. “If you don’t have a ticket, you won’t see the movie, and we are all excluded from the text of a book locked in a vault for which the key is lost, or from the particular information contained in an engraving written in a lost language, like hieroglyphs before the discovery of the Rosetta stone.” Instead, Professor Wu suggests that information raises a free-riding concern because, among other things, it can often be copied at a relatively low cost. Presuming that copying costs vary, this shift in the conceptual framework for understanding what enables free-riding is significant because the argument in favor of government intervention shifts from being inherent in the essential nature of information to contingent on the particularities of the context in which information exists.

In a parallel argument, Professor Wu suggests that whether information is non-rival (and thus whether under-dissemination is a normative problem) may not be an intrinsic property of information, but may instead depend upon the type of information at issue or the structure of the industry in which the information is useful. Landes and Posner famously argued that characters like Mickey Mouse may be subject to “overgrazing” or congestion externalities; Kitch similarly argued that patents can be socially beneficial because they reduce the number of follow-on inventors who can use newly discovered technological information without the authorization of a coordinating entity. In sum, non-rivalry should not be taken as an intrinsic property of information as a resource. Rather, it is a variable with different values in different contexts.

In a short passage from his conclusion, Professor Wu nicely sums up the arguments discussed above and pivots to his take-home lesson:

It is … curious that, given the myriad properties of information, nonexcludability and non-rivalry have received so much attention. One may be suspicious that the attention may be prompted by its neatness of fit into the pre-existing concept of a public good more than the underlying realities of what information is. In any event, it is worth suggesting that lawyers’ or economists’ understanding of information’s properties might be broader, and begin to draw less on just anecdotal examples, but some study of the science of information. Indeed, it may turn out that information’s other properties, less studied, will be equally important for public policy.

In gross, Professor Wu taps into an important theme in the zeitgeist of contemporary intellectual property scholarship: information policy should not be constructed based on the presumption that there are universal truths about the intrinsic qualities of “information” (whatever it is), but rather on context-specific interventions to address particular problems.

I believe that legal scholarship would greatly benefit from greater precision when it discusses information. When we talk about information, we often talk in generalities that stand in the way of the more nuanced conversations that I think Professor Wu would like us to have. Does copyright prevent others from copying “information,” or is “information” what lies beyond the reach of copyright law because of the idea/expression and fact/expression dichotomies? Patent law is routinely described as an intellectual property regime that prevents others from copying “information” and “ideas,” yet “information” and “ideas” are also routinely described as the very resources that patent disclosures publicize and make freely available to all.1 Until we can pin down precisely what we mean when we say information we cannot begin to identify “information’s other properties” that are “less studied” yet that may well “be equally important for public policy” according to Professor Wu’s thesis. For example, Professor Wu himself distinguishes “knowledge” and “wisdom” from “information” at one point, without offering guidance as to how to draw the distinction.

I do have what I see as a friendly amendment to Professor Wu’s suggestion of a route forward. He suggests that we might draw on “some study of the science of information” to move beyond our myopic focus on non-excludability and non-rivalry. However, there is no single science of information. Scholars who study information are a diverse lot. Information theorists who follow Shannon, computer scientists, semanticists, semioticians, bioinformaticists, philosophers of technology, and cognitive scientists, among others, all point to different things when they pick out information in the world.2 Each discipline defines information in the manner that is useful to its own goals, and the disciplines rarely talk to one another. This balkanization complicates the mining of these disciplines for insights that are useful for economic and legal theorists: we will need to sort through the many taxonomies of information that have been proposed to find the ones which, given our goals, provide us with useful tools for identifying and understanding the information that concerns us.

In the English language, there is today no distinct word for the plural of “information.” Yet, I believe that “informations” would be a useful word to coin in order to highlight the true nature of the resource at issue and to facilitate the discussion that Prof. Wu would like to initiate. The point of the plural is not to make “information” operate like other count nouns; I’m perfectly happy with the notion that I buy information from you when I buy ten tips on ten distinct stocks. Rather, the model here is “fish.” The plural of “fish” is “fish” when all of the fish are the same species: “There are fish in the goldfish bowl.” However, the plural of “fish” is “fishes” when the speaker refers collectively to multiple species: “There are three fishes in that tank,” to the extent that the tank contains tuna, snapper, and flounder. To have intelligent discussions about information policy, we must do more than realize that information exists in many different contexts; we must do more than seek context-sensitive policies to address the free flow of a unitary phenomenon called information. We must instead recognize that there are many different informations, each of which is studied by its own group of scholars and each of which may merit its own information policy, sometimes regardless of context.



  1. Kevin Emerson Collins, The Knowledge/Embodiment Dichotomy, 47 UC Davis L. Rev. 1279, 1307–14 (2014).
  2. For a wide-angle view on information, see James Gleick, The Information: A History, a Theory, a Flood (2011).
Cite as: Kevin E. Collins, Why FISH:FISHES :: INFORMATION:INFORMATIONS, JOTWELL (November 11, 2014) (reviewing Tim Wu, Properties of Information and the Legal Implications of Same (Columbia Law and Economics Working Paper No. 482, 2014), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2446577), http://ip.jotwell.com/why-fishfishes-informationinformations/.
 
 

An Information-Cost Critique of Chattel Property Servitudes

Christina Mulligan, The Cost of Personal Property Servitudes: Lessons for the Internet of Things (July 14, 2014), available at SSRN.

Property scholars have long noted a peculiar inconsistency between real and chattel property. While law increasingly tolerates different forms of ownership in and servitudes limiting the use of land, it has remained steadfastly resistant to such restrictions in the context of personal property. In her sharp new paper, “The Cost of Personal Property Servitudes: Lessons for the Internet of Things,” Christina Mulligan shows that this long-lamented inconsistency isn’t a problem at all, but rather a sensible distinction that flows naturally from the core differences between real and chattel property. This insight not only helps explain a longstanding puzzle in property law, but sheds new light on the increasing practice of content owners using license agreements to restrict the use of digital goods.

From a purely formal perspective, one might reasonably wonder why courts allow increasing complexity in real property ownership—from historical forms like contingent remainders and fees simple subject to executory limitation to modern innovations like condominiums and time-shares—while insisting that no such variation is permitted with respect to chattels. If I can have a defeasible fee interest or a time-share in a vacation home in Boca Raton, why not also in a Rolex or a refrigerator? This seeming inconsistency has engaged scholars since Coke. Most recently, Molly Van Houweling investigated contract-based restrictions on personal property from the perspective of physical property, suggesting that the same concerns that warrant skepticism about servitudes on real property may be used to govern servitudes in the context of personal property as well.

Mulligan takes a different approach to this issue altogether. Instead of proposing a way to render uniform law’s treatment of real and personal property servitudes, she instead seeks a way to show that what other scholars have treated as a formal inconsistency actually rests on solid practical grounds. Mulligan deploys Merrill and Smith’s information costs defense of limited forms of property rights. Because chattels, unlike land, tend to be small, mobile and fungible, it will prove very difficult for a buyer to determine whether those chattels are heavily encumbered by restrictions or free for any sort of use. And since personalty tends to be worth less than land, these information costs will loom as an even greater burden to purchasers of chattel property than they would to real property. After all, it would be well worth the money to pay $1000 to clear title to a $200,000 vacant lot you’re thinking of purchasing, but you’d be a fool to shell out that amount to make sure there are no encumbrances on a $200 watch. And finally, Mulligan aptly observes that while one needs to investigate the state of title in real property relatively rarely, we purchase and interact with things all the time, so that in a world where we had to worry about use restrictions constantly, we would have to overcome information costs multiple times on a daily basis.

In answer to the longstanding question of why law is conflicted in its treatment of servitudes on land and things, Christina Mulligan has shown us that the question itself is misconceived. There is not a conflict at all, but rather a perfectly plausible approach to managing the different information costs associated with real and personal property. Mulligan’s elegant information cost solution to this problem invites even more reasons that law may wisely choose to disfavor use restrictions on chattel property. Consider, for example, how information costs may play out in light of knowledge asymmetries between buyers and sellers of chattel property. Sellers’ greater knowledge and expertise would allow them to exploit information costs to systematically disadvantage end users. Imagine, for example, that an unscrupulous seller wants to sell books subject to a long, complex series of use restrictions, including the requirement that buyers will return the copy after six months of use. There’s a pretty good chance that many consumers will fail to smoke out this trap embedded in the Sargasso Sea of boilerplate—after all, it’s just a book, and hardly worth all the trouble of wading through a long agreement. But if law simply sidelines the whole idea of use restrictions on chattel property for a rule that says “when you buy it, it’s yours, no exceptions,” then we avoid this risk of consumer exploitation in addition to lowering information costs.

The nefarious bookseller in my example is hardly fictional. The Supreme Court’s 2013 decision in Kirtsaeng v. John Wiley & Sons involved the legality of resale restrictions on textbooks sold in foreign markets, and most law professors have received at least a few desk copies plastered in big, ugly stickers reading “for professor use only.” The notion of imposing servitudes on chattel property may have been historically disfavored, but this apparently hasn’t prevented contemporary content owners from pushing the practice. In fact, as Mulligan correctly observes, use restrictions on personal property threaten to become ever more pervasive with the advent of the “internet of things”—the increasing enmeshment of everyday items with copyrighted software. Given courts’ peculiar toleration for use limits on software via licensing, and the ever-growing presence of computer programs embedded in cars, watches, and even refrigerators, long-rejected chattel servitudes may be approaching faster than we realize.

And here is where Mulligan’s focus on chattel property hits a normative wall. Information costs tell us a lot about why personal property servitudes are a bad idea, or at least a worse idea than they are in the context of real property. But the threat of pervasive, cost-prohibitive use restrictions on the countless things we own and use in our daily life isn’t merely, or even primarily, a property problem, but a contract problem. Software companies don’t regard themselves as selling copies of their programs that are subject to limited use rights. Rather, they portray these transfers as mere licenses, so that users are not owners of copies at all, but mere licensees, with the limits on their use—however draconian—flowing from the limited nature of licensure itself. So even if we agree—as Mulligan convincingly argues we should—that chattel property doctrine wisely disfavors use restrictions, this doesn’t answer the much harder question of what this means for owners’ ability to contract. And while there is no shortage of proposals for limiting how owners can license uses of their works of authorship, it bears remembering that owners are under no obligation to permit uses of their works at all, so a regime that excessively burdened their freedom of contract might lead owners to simply not contract—or engage in creative production—at all.

Scholarship should be measured by what it does, not by what it fails to do, and by that standard, Christina Mulligan’s “The Cost of Personal Property Servitudes” succeeds admirably. Her information-cost critique of chattel property servitudes not only helps to illuminate a longstanding puzzle in property law, but also moves the ball forward on the socially important and legally difficult issues raised by the increasing ubiquity of computer software in the objects that populate our daily lives. Mulligan’s insightful article also provides more evidence that there is more to be gained by investigating, rather than reflexively resisting, the essential commonalities between physical and intellectual property.

Cite as: David Fagundes, An Information-Cost Critique of Chattel Property Servitudes, JOTWELL (October 10, 2014) (reviewing Christina Mulligan, The Cost of Personal Property Servitudes: Lessons for the Internet of Things (July 14, 2014), available at SSRN), http://ip.jotwell.com/an-information-cost-critique-of-chattel-property-servitudes/.
 
 

Seeing Like a Copyright Lawyer: Judging Similarity in Copyright Cases

There are many ways to use empirical research in intellectual property scholarship. Work can be qualitative or quantitative, interdisciplinary or highly focused on the law. One of the most intriguing questions I’ve seen investigated empirically of late is “what makes us think that one work is similar enough to a previous one to infringe?” Given the significant expansion in the scope of copyright from pure reproduction to derivative works and substantially similar works, this is an important issue.

The two articles I highlight here approach the question from very different, but complementary, perspectives. The authors of Judging Similarity start with legal scenarios adapted from real cases and ask survey respondents whether the works are similar enough to infringe. Kate Klonick, like David Morrison, examines similarity from a cognitive science perspective, starting with what researchers already know about human judgments of similarity and difference in nonlegal contexts. (Disclosure: I advised Klonick, now pursuing a Ph.D. at Yale, on this project.)

Judging Similarity begins by noting, as others have, that substantial similarity doctrine is a mess. Its authors hypothesize that factfinders are more likely to find substantial similarity—and thus liability—in the context of “a narrative that not only describes the intentional act of making one thing look like another but also identifies a wrongdoer.”

In their control group, subjects had minimal information and were asked to judge the similarity of images; in the test group, subjects had additional information about “the act of copying, the creation of the work or the consequences of the copying.” Information about the fact of copying or that significant effort was required to create the first work increased similarity judgments, “despite the fact that the works themselves remained the same and even though the subjects were consistently told that they had to base their assessments entirely on the works themselves.” (Subjects weren’t making the on-off determination of substantial similarity that would be required in an actual copyright case; they were rating similarity on a scale; further work is needed to see how much this kind of information affects a threshold determination that could lead to liability.)

Morality, the authors suggest, affects factual judgments, fitting into a larger literature on motivated cognition. They argue that this isn’t just a question of framing—increased attention to similarity caused by the emphasis on the fact of copying—because information about the amount of labor involved in creating the original also increases similarity judgments. Given this effect, if we’re serious about keeping “labor = right” arguments like that rejected in Feist out of the copyright system, they suggest, the law might need to change further—or accept that labor-related considerations will come back in at the infringement stage, once certain works are deemed unprotectable. Copyright law’s claim to impose strict liability might be overstated, if we see respondents’ reactions as fault-based.

Intriguingly, however, information about market substitution, or the lack thereof, did not affect similarity judgments. The authors don’t have much to say about that result, but someone like me is inclined to read this as a moral intuition consistent with current fair use doctrine: if the copying is otherwise ok—if the work is transformative, and doesn’t displace a separate and robust derivative market like the market for film adaptations of novels—then the fact that it causes market harm is irrelevant. Reverse engineering cases and the idea/expression distinction that allows copying of ideas also follow this pattern. There’s nothing inherently wrong with giving people options to choose their favorite variations on a theme, even if one seller inspired others to enter the market.

Klonick’s article takes a different approach, focusing on other features of human thought, independent of moral principles. She notes that three significant areas of copyright law involve variations on judging similarity to and divergence from an existing work: (1) originality, where there is a public domain (or even copyrighted) referent and the question is whether enough has been added to create an original work; (2) infringement due to substantial similarity; and (3) transformativeness for fair use purposes.

She explains that cognitive scientists know some interesting things about similarity judgments. For one thing, there’s a difference between similarity judgments and difference judgments—depending on how a question is framed, respondents’ answers may diverge a lot. When asked to judge similarity, people give greater weight to common features; when asked to judge difference, they give greater weight to distinctive features of each item. For another, similarity judgments are not always transitive or symmetric. If you start with A and ask about its similarity to B, you can get different results than if you start with B and ask about its similarity to A. So, Americans asymmetrically found greater similarity in the phrase “Mexico is like the United States” than in the phrase “the United States is like Mexico.” Consistent with this result, people are also more likely to find similarity when the more “prominent” idea or object is the referent—the B in “A is like B.” This phenomenon, she suggests, tilts the field in favor of owners of very successful works: in strike suits brought by owners of unsuccessful works, the accused work will be more prominent and will look less similar in comparison to the accusing work, while when the successful copyright owner sues, the accused work will look more similar.

In addition, depending on the situation, feature similarity may matter less than “relational” similarity—a bear isn’t a lot like a bird, but a mother bear with a cub is a lot like a bird with a chick. Consider the following images:

[Image: two side-by-side panels, each showing a top stimulus above two comparison figures, A (a relational match) and B (a feature match).]

Most observers consider B (component or “primitive” feature match) more similar to the top stimulus in the panel on the left, but A (relational match) more similar to the top stimulus in the panel on the right. But this “relational weighting” happens in difference judgments as well—so observers are also likely to consider A more different from the stimulus than B is in the panel on the right.

Depending on how issues in copyright cases are presented, then, factfinders may be more inclined to find important differences or important similarities. She suggests that the Roth look and feel test might help moderate this effect, though I don’t really see how that would work. Klonick also suggests that transformativeness analysis’s shift to “purpose” can be explained in relational terms: when the question is how the new work fits into the universe of expression/what social role it plays, it may be easier to see relational differences, such as the difference between a single expressive work and a component of a large database (Perfect 10, etc.).

All of these features have implications for copyright cases, though they may be hard to tease out. At the very least, they offer guidance for savvy copyright lawyers trying to frame a case in a favorable way.

What these articles give us, especially taken together, are significant building blocks. They can be combined with other empirical work, such as Jamie Lund’s study showing that lay listeners simply can’t perform the task of judging infringement of musical works by listening to sound recordings, to suggest reforms to make practice fit theory. Jessica Litman’s recent article about the history of movie infringement cases also reminds us that we haven’t paid as much attention as we should to the fact that several foundational infringement cases involved moves from one medium to another—plays to silent movies, in her examples. What makes a dialogue-laden play seem similar or dissimilar to a mostly gestural movie is a question worth asking. As we gain a better idea of how people react to different comparisons, framing, and evidence, we can try to make both jury instructions and substantive law support the outcomes we believe to be the best for copyright policy overall.

Cite as: Rebecca Tushnet, Seeing Like a Copyright Lawyer: Judging Similarity in Copyright Cases, JOTWELL (September 9, 2014) (reviewing Shyamkrishna Balganesh, Irina D. Manta, & Tess Wilkinson‐Ryan, Judging Similarity, 100 Iowa L. Rev. (forthcoming 2014) and Kate Klonick, Comparing Apples to Applejacks: Cognitive Science Concepts of Similarity Judgment and Derivative Works, 64 J. Copyright Soc’y USA 365 (2013)), http://ip.jotwell.com/seeing-like-a-copyright-lawyer-judging-similarity-in-copyright-cases/.
 
 

From Crowd-Sourcing to Crowd-Enforcing: An Empirical Analysis of Threadless’s Community IP Norms

Empirical studies of creative communities continue to provide scholars and policymakers with useful evidence for assessing intellectual property regimes. In Seven Commandments, we find yet another excellent example of the kind of evidence we need and, perhaps even more importantly, of robust methods for gathering it.

The article reports on a study of Threadless, an online community that crowd-sources t-shirt designs. As with many such communities, it uses a combination of collaborative and competitive elements, allowing users to work together on certain projects while also competing with each other for approval, funding, and ultimately production and distribution of the designed apparel. The authors of the paper seek to study the IP norms of the Threadless community in order to understand what makes it succeed in terms of incentives to create. In particular, they note that because formal enforcement of copyright law is generally difficult if not impossible on such sites, normative systems are presumed to play the major role in protecting the investment of creators.

In order to discover these norms, the authors gather data in three ways: netnography (observation of online communications and interactions), a survey, and a field experiment in which the authors intentionally violated various designers’ IP by copying and posting some or all of their designs as new submissions on the Threadless site.

What they discover are seven “commandments” that appear to dominate the Threadless culture when it comes to IP norms. These include: (1) You must not make an unauthorized copy of a design; (2) If you copy, you have to ask the original designer for permission; (3) If you copy, you must provide attribution; (4) If you are suspicious of a design, you must investigate before accusing it of being a copy; (5) If you find that a design was copied, you have to make the case of copying public; (6) The public trial must be fair; and (7) If someone is caught copying a design, you have to join in a collective sanctioning of the copier.

These results are somewhat surprising given the general skepticism that most researchers have had about crowdsourcing as a means of generating and enforcing norms, especially IP norms. Yet Seven Commandments shows that a large percentage of the active Threadless community has come to at least a rough consensus against unauthorized copying and for attribution. More interesting still, its members have chosen to adopt some of the core elements of procedural due process from the court system as part of their approach to enforcement. For example, the notion that accusations of copying should be made public and that there should be some version of a public trial with evidence and the opportunity for the accused to contest and challenge the case against them goes back as far as the Magna Carta in Anglo-American law. Ironically, such elements are nearly absent from formal online IP enforcement/resolution regimes, such as the Digital Millennium Copyright Act’s “notice-and-takedown” scheme or the recent private agreement among United States Internet Service Providers to adopt a “six strikes” system of copyright complaints.

So what makes Threadless so special? How are they able to succeed in this regard? The authors are understandably cautious about pinpointing any one factor, but their findings suggest that there is a strong moral core to the community’s approach to these issues, and that the community’s commitment to this core allows it to have a more robust, participatory, and honest dialog about how IP should be handled. Moreover, there is a strong emphasis on the transparency of the community’s behavior. The accusations, evidence, trial, and sanctions are all public and thus themselves open to challenge for bias or abuse. This reinforces not only fairness but also rationality in outcomes.

Of course, Threadless is but one community, so one hopes that the authors of Seven Commandments will expand their project to provide comparative results and analysis, especially across different modes and means of creativity. But even standing alone, the study serves as a beacon to help guide sites that want to avoid both unauthorized appropriation and excessive or draconian legal battles between community members.

Cite as: Jason Schultz, From Crowd-Sourcing to Crowd-Enforcing: An Empirical Analysis of Threadless’s Community IP Norms, JOTWELL (July 28, 2014) (reviewing Julia Bauer, Nikolaus Franke & Philipp Tuertscher, The Seven IP Commandments of a Crowdsourcing Community: How Self-Organized Norms-Based IP Systems Overcome Imitation Problems (forthcoming 2014)), http://ip.jotwell.com/from-crowd-sourcing-to-crowd-enforcing-an-empirical-analysis-of-threadlesss-community-ip-norms/.
 
 

Are Patent Damages Uncertain?

Michael Mazzeo, Jonathan Hillel, & Samantha Zyontz, Explaining the “Unpredictable”: An Empirical Analysis of Patent Infringement Awards, 35 Int’l Rev. of L. & Econ. 58 (2013).

A recent American Bar Association “Corporate Counsel” seminar styled itself as “The Uncertain Arena: Claims for Damages and Injunctive Relief in the Unpredictable World of IP Litigation.” The seminar began by recounting the seemingly surprising $1 billion-plus damage awards in the patent infringement actions Carnegie Mellon v. Marvell Technology, Apple v. Samsung, and Monsanto v. DuPont. These blockbuster awards stand in stark contrast to the $20 million or less awarded in a typical case.

By and large, in-house counsel have chalked these differences up largely to the luck of the draw. Such a sentiment is all too common not only among practitioners, but also among policymakers and academics. No less an authority than the eminent IP scholar Mark Lemley has remarked, “Patent damages are unpredictable because the criteria most commonly used are imprecise and difficult to apply.”

Mazzeo, Hillel, and Zyontz make an impressive contribution to the literature by casting substantial doubt on such views. Specifically, in their recent empirical study of district court patent infringement judgments between 1995 and 2008, they show that patent damages can be explained in large part by a fairly small number of patent-, litigant-, and court-related factors.

The authors assembled a set of over 1300 case outcomes from the PricewaterhouseCoopers database, which they boiled down to 340 judgments in favor of the patentholder in which award details were available. Although this number of judgments may seem low, based on independent work of my own for a study on the duration of patent infringement actions, these counts represent a high percentage of the total number of actions and judgments. Thus, it is unlikely that including the unavailable judgments and awards in the dataset would substantially change their results.

Mazzeo, Hillel, and Zyontz begin their exposition by noting—contrary to the widespread view that patent damages awards are shockingly high—that the median damage award has remained fairly constant from 1995 through 2008, at a relatively low $5 million or so. The billion-dollar damage awards in Apple v. Samsung and other cases are thus extreme outliers. Indeed, during the time period at issue, only eight awards came in over $200 million, yet those awards accounted for 47.6% of the collective damages across all cases (other than generic-branded pharmaceutical disputes under the Hatch-Waxman Act). So, outside of a small number of highly publicized, blockbuster cases, damages awards are (perhaps shockingly) low – a fact that flies in the face of current rhetoric about outsized awards in patent cases.

The most impressive aspect of the article is the authors’ set of empirical models, which explain roughly 75% of the variation among damages awards. In particular, they assemble various factors—including the number of patents asserted, the age of the patents, the number of citations to the patents, whether the defendant is publicly traded, and whether a jury or judge assessed damages—and construct a regression model that shows statistically significant relationships between these factors and the amount of damages awarded.

For example, in one model, if the defendant was publicly traded, damages were roughly 1.5 times higher than when the defendant was privately held, controlling for other factors. What is particularly striking is that the outlier awards—namely, those above $200 million—fall squarely within the model’s explanatory power. Thus, rather than being the random results of rogue juries, these large damage awards likely reflect a variety of measurable factors that point in favor of larger awards across the large number of cases confronted by the courts.
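
To make the flavor of this analysis concrete, the sketch below shows the kind of log-linear regression that could relate such factors to award size. It is purely illustrative: the variable names, synthetic data, and coefficients are my own assumptions, not the authors’ actual specification or estimates. The point is simply that a coefficient of about 0.4 on a publicly-traded-defendant indicator in a log-damages model corresponds to awards roughly 1.5 times as large, holding other factors constant.

# Illustrative only: a log-linear damages regression on synthetic data,
# not the authors' dataset or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 340  # roughly the number of judgments in the study's final sample

df = pd.DataFrame({
    "n_patents": rng.integers(1, 6, n),          # patents asserted (assumed range)
    "patent_age": rng.integers(1, 18, n),        # years since issuance (assumed)
    "fwd_citations": rng.poisson(12, n),         # forward citations (assumed)
    "public_defendant": rng.integers(0, 2, n),   # 1 if defendant is publicly traded
    "jury": rng.integers(0, 2, n),               # 1 if damages assessed by a jury
})

# Generate synthetic log awards so that public defendants receive ~1.5x damages
# (exp(0.405) is about 1.5), holding the other covariates fixed.
df["log_award"] = (
    13.0
    + 0.30 * df["n_patents"]
    + 0.02 * df["fwd_citations"]
    + 0.405 * df["public_defendant"]
    + 0.25 * df["jury"]
    + rng.normal(0, 1.0, n)
)

model = smf.ols(
    "log_award ~ n_patents + patent_age + fwd_citations + public_defendant + jury",
    data=df,
).fit()

print(model.summary())
# A coefficient b on public_defendant implies awards about exp(b) times as large
# for publicly traded defendants, e.g. exp(0.405) is roughly 1.5.
print("implied multiplier:", np.exp(model.params["public_defendant"]))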

These findings have important public policy implications. As the authors point out, stable, predictable damage awards are essential for a properly functioning patent system. Otherwise, the careful balance between incentives for patentees to innovate and incentives for third parties either to acquire licenses to patented inventions or to invent around them would be thwarted.

On the other hand, Mazzeo, Hillel, and Zyontz overreach by concluding that their “findings thus bolster the core tenets of the patent system,” namely that exclusive patent rights are an appropriate means of protecting inventions. Specifically, the authors’ argument that “several of the driving factors correspond to accepted indicators of patent quality” is insufficient to support such an assertion, because these factors—such as forward citations, number of claims, and number of patents—are accepted indicators of a patent’s economic “value,” not a patent’s “quality,” which concerns its validity. (Although there is very likely a relationship between the two notions, no study has resoundingly linked patent value to patent quality.) And typically these value indicators have been derived from studies of patent litigation. Thus, to argue that high damages in litigation justify the patent system on the basis of such metrics is essentially circular. Indeed, as I have argued elsewhere, it is very likely that patent damages as they stand should be reengineered to provide more optimal innovation incentives.

Nonetheless, despite this study’s inability to “bolster the core tenets of the patent system,” its result that damages awards are fairly predictable is a very important contribution to the literature. Moreover, this work provides the starting point for more comprehensive investigations of damages in patent cases, such as the follow-on study the authors recently undertook regarding non-practicing entity (NPE) and non-NPE suits. Additionally, their explanatory approach could be extended to the more basic win/loss determinations on infringement and validity. One cannot ask for much more in any empirical study, and Mazzeo, Hillel, and Zyontz deserve kudos for their exacting labors and notable insights.

Cite as: Ted Sichelman, Are Patent Damages Uncertain?, JOTWELL (June 30, 2014) (reviewing Michael Mazzeo, Jonathan Hillel, & Samantha Zyontz, Explaining the “Unpredictable”: An Empirical Analysis of Patent Infringement Awards, 35 Int’l Rev. of L. & Econ. 58 (2013)), http://ip.jotwell.com/are-patent-damages-uncertain/.
 
 

Intermediary Trademark Liability: A Comparative Lens

Graeme B. Dinwoodie, Secondary Liability for Online Trademark Infringement: The International Landscape, 36 Colum. J.L. & Arts (forthcoming 2014), available at SSRN.

Although we live in a global, interconnected world, legal scholarship – even scholarship about the Internet – often focuses on domestic law with little more than a nod to developments in other jurisdictions.  That’s not necessarily a bad thing; after all, theoretically robust or historically thorough works can rarely achieve their goals while surveying the landscape across multiple countries with disparate traditions and laws.  But as a student of U.S. law, I appreciate articles that explain how other legal systems are addressing issues that perplex or divide our scholars and courts.  Given the tumult over intermediary liability in recent years, comparative commentary on that topic has special salience.

In this brief (draft) article, Graeme Dinwoodie explores both structural and substantive differences in how the United States and Europe approach intermediary trademark liability in the Internet context.  To an outsider, the European web of private agreements, Community Directives, CJEU opinions, and sundry domestic laws can appear daunting and sometimes self-contradictory.  Dinwoodie puts them all into context, offering a coherent explanation of the interaction between Community law, member state law, and private ordering, and situating the overall picture within a broad normative framework.  And he contrasts that picture with the one emerging through common law in the United States.  The result is a readable, informative study of two related but distinct approaches to intermediary trademark law.

Dinwoodie begins by framing the core normative question:  how should the law balance trademark holders’ interest in enforcing their marks against society’s interest in “legitimate development of innovative technologies that allow new ways of trading in goods”?  This tension is a familiar one: from Sony through Grokster, from Inwood through eBay, courts and lawmakers have struggled with how to allocate responsibility between intellectual property holders, those who infringe their rights, and those whose behavior, product, or technology plays some role in that infringement.  Dinwoodie identifies the tension but does not resolve it, purporting to have the more modest goal of exposing the differences between the American and European approaches and discussing their relative virtues.  But the article barely conceals Dinwoodie’s preference for rules that give intermediaries at least some of the burden of policing trademark infringement online.

Structurally, there are some significant differences between the European and American approaches.  Whereas courts have shaped the U.S. law primarily through common law development, Europe has a set of Directives that offer guidance to member states in developing intermediary trademark liability rules.  Europe has also experimented with private ordering as a partial solution, with stakeholders recently entering a Memorandum of Understanding (MOU) that addresses the role of brand owners and intermediaries in combating counterfeiting online.  In other words, rather than relying exclusively on judge-made standards of intermediary liability, European policymakers and market actors have crafted rules and norms of intermediary responsibility for trademark enforcement.

Whether as a result of these structural differences or as a byproduct of Europe’s tradition of stronger unfair competition laws, the substantive rules that have emerged in Europe reflect more solicitude for trademark owners than is provided by United States law.  Doctrinally, intermediaries have a superficial advantage in Europe, because the Court of Justice limits direct infringement to those who have used the mark in connection with their own advertising or sales practices.  They also benefit from Europe’s horizontal approach to Internet safe harbors.  Unlike the United States, Europe includes trademark infringement, unfair competition, and other torts in the “notice-and-takedown” system, offering service providers the same kind of immunity for these infractions as they receive under copyright law.  The safe harbor law explicitly provides that intermediaries need not actively root out infringement.

Other features of European law, however, temper the effects of these protections.  Most significantly, Article 11 of the European Enforcement Directive requires member states to ensure that “rights holders are in a position to apply for an injunction against intermediaries whose services are used by third parties to infringe an intellectual property right.”  In other words, even if they fall within the Internet safe harbor (and thus are immune from damages), intermediaries may face an injunction requiring affirmative efforts to reduce infringement on their service.  In Germany, at least, courts have ordered intermediaries to adopt technical measures such as filtering to minimize future infringement.  The threat of such an injunction no doubt played a role in bringing intermediaries to the table in negotiating the MOU, which requires them to take “appropriate, commercially reasonable and technically feasible measures” to reduce counterfeiting online.

This explicit authority to mandate filtering or other proactive enforcement efforts finds no counterpart in U.S. law.  On its face, U.S. contributory infringement law requires specific knowledge of particular acts of infringement before an intermediary has an obligation to act.  And while scholars (including myself) have argued that intermediaries’ efforts to reduce infringement have played an implicit role in case outcomes, the letter of the law requires nothing but a reactive response to notifications of infringement.  Dinwoodie suggests that this “wooden” approach to intermediary liability may miss an opportunity to place enforcement responsibility with the party best suited to enforce.

In the end, while professing neutrality, Dinwoodie clearly sees virtues in the European model.  He applauds the horizontal approach to safe harbors, welcomes the combination of legal standards and private ordering, and praises the flexibility and transparency of Europe’s largely least-cost-avoider model.  Whether the reader agrees with him or prefers the United States’ more technology-protective standard, she will come away with a better understanding of the structure and content of intermediary trademark law in both the United States and Europe.

Cite as: Stacey Dogan, Intermediary Trademark Liability: A Comparative Lens, JOTWELL (May 28, 2014) (reviewing Graeme B. Dinwoodie, Secondary Liability for Online Trademark Infringement: The International Landscape, 36 Colum. J.L. & Arts (forthcoming 2014), available at SSRN), http://ip.jotwell.com/intermediary-trademark-liability-a-comparative-lens/.