How Copyright Prevents Us From Getting the Books We Want

Paul Heald, The Demand for Out-of-Print Works and Their (Un)Availability in Alternative Markets (2014), available at SSRN.

Back in mid-2013, Paul Heald posted to SSRN a short paper that has already had far more impact than academic papers usually have on the public debate over copyright policy. That paper, How Copyright Makes Books and Music Disappear (and How Secondary Liability Rules Help Resurrect Old Songs), employed a clever methodology to see whether copyright facilitates the continued availability and distribution of books and music. Encouraging the production of new works is, of course, copyright’s principal justification. But some have contended that copyright is also necessary to encourage continued exploitation and maintenance of older works. The late Jack Valenti, then head of the Motion Picture Association of America, made this argument before the Senate Judiciary Committee in 1995, contending that extending the copyright term was necessary in part to provide continued incentives for the exploitation of older works. “A public domain work is an orphan,” Valenti testified. “No one is responsible for its life.” And of course if no one is responsible for keeping a creative work alive, it will, Valenti suggested, die.

Is that argument right? Enter Paul Heald. Heald’s 2013 article employs a set of clever methodologies to test whether copyright does, indeed, facilitate the continued availability of creative works—in that article, books and music. With respect to books, Heald constructed a random sample of 2,300 books on Amazon, arranged them in groups according to the decade in which they were published, and counted them. Here are his findings:


© 2012 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.


If you hadn’t already seen Heald’s article, the shape of this graph should surprise you. You would probably expect the number of books available on Amazon to be highest in the most recent decade, 2000–2010, and to decline continuously as one moves leftward in the graph—i.e., further into the past. On average, books are, all else equal, less valuable as they age, so we should expect to see fewer older books on Amazon relative to newer ones.

But that’s not what we see.  Instead, we see a period from roughly 1930 to 1990, where books just seem to disappear. And we see a large number of quite old books on Amazon. There are many from the late-19th century and the first two decades of the 20th century.  Indeed, there are far more new editions from the 1880s on Amazon than from the 1980s.

What on earth is causing this odd pattern? In a word: copyright. All books published before 1923 are out of copyright and in the public domain. And a variety of publishers are engaging in a thriving business of publishing these out-of-copyright works—and so they’re available on Amazon. In contrast, a large fraction of the more recent works—the ones under copyright—simply disappear. Maybe they’ll spring back to life when (or if?) their copyright expires. But for now, copyright doesn’t seem to be doing anything to facilitate the continued availability of these books. In fact, copyright seems to be causing some books to disappear.

Heald does a similar analysis for music, and this analysis too shows that copyright causes music to disappear, relative to music in the public domain. The effect is less pronounced than in the case of books, but it is still there.

In short, Heald’s paper placed a big question mark after the “continued availability” justification for copyright. If we care about works remaining available, then copyright, in fact, seems to be hurting and not helping.

Now Heald is back with a follow-up paper, The Demand for Out-of-Print Works and Their (Un)Availability in Alternative Markets, that takes on the most important question raised by his first: Should we be concerned that copyright appears to make works disappear? If there is no consumer demand for these disappeared works, then possibly not. But if there is consumer demand for the works that copyright kills, then we should care because that demand is not being met.

Heald employs a number of tests to determine whether there is consumer demand for the books that copyright makes disappear. Read the article if you want a full account, but a couple of highlights are worth giving here. In a particularly nifty part of the paper, Heald compares books available on Amazon with those available on the largest used-book website. The graph is instructive:


© 2014 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.


The gap between the red (Amazon) and blue (used book) curves suggests that used-book sellers are doing a thriving business in many of the books that copyright has made disappear from new-book shelves—which in turn indicates that there is consumer demand for these books.

Heald then examines other possible ways that the market may provide access to works that copyright has made disappear. For music, Heald looks to see whether copyright owners are digitizing out-of-print records and either selling them on iTunes or posting them on YouTube. The answer, hearteningly, appears to be yes. Unfortunately, the picture for books is much less reassuring. As usual, Heald’s chart speaks more clearly than words:


© 2014 by Paul Heald. All rights reserved. Reprinted with permission of Paul Heald.


Look at the number of popular songs from 1923–32 that are on iTunes—almost all of them. But then look at the number of popular books from the same period that are offered as eBooks—less than 40%. Many of these books are not available on Amazon in paper form. Nor are they distributed digitally.

So why the difference between the music and book publishing industries when it comes to the availability of older titles still under copyright? I’ll leave that as a mystery—and I hope your unslaked curiosity will lead you to read Heald’s article. It is well worth your time.


Copyright as a Chill on Innovation

Michael A. Carrier, Copyright and Innovation: The Untold Story, 2012 Wis. L. Rev. 891.

Sony’s Betamax was the first personal copying technology to attract a copyright infringement lawsuit. Little did copyright experts back then realize how much of a harbinger of the future the Betamax would turn out to be. Countless technologies designed, like the Betamax, to enable personal-use copying of in-copyright works have since come to market. Had the Supreme Court outlawed the Betamax, few of these technologies would have seen the light of day.

The most significant pro-innovation decision was the Supreme Court’s Sony Betamax ruling, which created a safe harbor for technologies with substantial non-infringing uses. Entrepreneurs and venture capitalists have relied heavily on this safe harbor as a shield against copyright owner lawsuits. Yet, notwithstanding the safe harbor, copyright owners have had some success in shutting down particular systems—most notably, the peer-to-peer file-sharing platform Napster.

It stands to reason that decisions such as Napster would have some chilling effect on the development of copy-facilitating technologies. But how much of a chilling effect has there been? Some would point to products and services such as SlingBox and Cablevision’s remote DVR feature and say “not much.”

Antitrust and innovation scholar Michael Carrier decided to do some empirical research to investigate whether technological innovation has, in fact, been chilled by decisions such as Napster. He conducted qualitative interviews with 31 CEOs, co-founders and vice presidents of technology firms, venture capitalists (VCs), and recording industry executives. The results of his research are reported in this Wisconsin article, which I like a lot.

One reason I liked the article was because it confirmed my longstanding suspicion that the prospect of extremely large awards of statutory damages does have a chilling effect on the development of some edgy technologies. Because statutory damages can be awarded in any amount between $750 and $150,000 per infringed work and because copy-facilitating technologies can generally be used to interact with millions of works, copyright lawsuits put technology firms at risk for billions and sometimes trillions of dollars in statutory damages. For instance, when Viacom charged YouTube with infringing 160,000 works, it exposed YouTube and its corporate parent Google to up to $24 billion in damages. While a company such as Google has the financial resources to fight this kind of claim, small startups are more likely to fold than to let themselves become distracted by litigation and spend precious VC resources on lawyers.
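The exposure figures quoted above are easy to verify. A quick back-of-the-envelope sketch, using only the numbers given in the text (the $750–$150,000 statutory range and Viacom’s allegation of 160,000 infringed works):

```python
# Back-of-the-envelope statutory-damages exposure, using only the figures
# quoted in the text: $750 to $150,000 per infringed work, and Viacom's
# allegation of 160,000 infringed works against YouTube.

PER_WORK_MIN = 750
PER_WORK_MAX = 150_000
WORKS_ALLEGED = 160_000

low = PER_WORK_MIN * WORKS_ALLEGED    # floor of statutory exposure
high = PER_WORK_MAX * WORKS_ALLEGED   # ceiling of statutory exposure

print(f"${low:,} to ${high:,}")  # $120,000,000 to $24,000,000,000
```

Even the statutory floor yields nine-figure exposure, which is why a startup facing such a claim tends to fold rather than litigate.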

But a better reason to like the article is the fascinating story Carrier and his interviewees tell about the mindset of the record labels about Napster and the technology “wasteland” caused by the Napster decision.

The lesson that record labels should have learned from Napster’s phenomenal (if short-lived) success was that consumers wanted choice—to be able to buy a single song instead of a whole album—and if it was easy and convenient to get what they wanted, they would become customers for a whole new way of doing business. Had the record labels settled with Napster, they would have benefited from the new digital market and earned billions from the centralized peer-to-peer service that Napster wanted to offer.

The labels were used to treating record stores as their customers, not the people who actually buy and play music. Radio play, record clubs, and retail were the focus of the labels’ attention. They thought that the Internet was a fad, or a problem to be eradicated. They were unwilling to allow anyone to create a business on the back of their content. They believed that if they didn’t like a distribution technology, it would go away because they wouldn’t license it. They coveted control above all. When the labels began to venture into the digital music space themselves, they wanted to charge $3.25 a track, which was completely unrealistic.

Some of Carrier’s interviewees thought that the courts had reached the right decision in the Napster case, but questioned the breadth of the injunction, which required 100% effectiveness in filtering out infringing content and not just the use of best efforts, thereby making it impossible to do anything in the digital music space. One interviewee asserted that in the ten years after the Napster decision, iTunes was the only innovation in the digital music marketplace. Many more innovations would have occurred but for the rigidity of the Napster ruling and the risk of personal liability for infringement by tech company executives and VCs.

The role of copyright in promoting innovation was recently highlighted in the Department of Commerce’s Green Paper on “Copyright Policy, Creativity and Innovation in the Digital Economy” (July 2013). It aspires to present a balanced agenda of copyright reform ideas that will promote innovation. It is an encouraging sign that the Green Paper identifies statutory damage risks in secondary liability cases as a policy issue that should be addressed. Reforming statutory damages would not entirely eliminate the risks that copyright would chill innovation, but it would go a long way toward that goal.


Quantifying the Copyright-Innovation Interference

Joel Waldfogel, Copyright Protection, Technological Change, and the Quality of New Products: Evidence from Recorded Music Since Napster, 55 J.L. & Econ. 715 (2012), available at the University of Minnesota.

The Constitution empowers Congress to promote both the useful and the expressive arts, which Congress does through the laws governing patents and copyrights. But promoting one may sometimes retard the other. This happens in the context of new technologies of copying and dissemination, such as the photocopier, the VTR, the MP3 player, and file-sharing networks. Imposing copyright liability on the makers and users of these technologies encourages copyright owners but may discourage innovators. Shielding such makers and users from liability encourages technological innovation but may retard expressive creativity. How should we strike this trade-off, either in general or in particular cases?

This question has long been a major issue in copyright law and scholarship.1 To know what the right policy is, we should have some sense of the degree to which incentives to create content are diminished, if at all, in the face of the new technology. Indeed, much empirical work surrounding the file-sharing litigation has studied the effect file-sharing had on music sales. This body of literature contains diverse views, and the debate on the empirics is sometimes as heated as the one on the theory and policy side.2

Joel Waldfogel’s paper is a recent and valuable contribution to the empirical literature. I like it lots because it takes a new approach to quantifying the digital age’s net effect on incentives to create expressive works. Waldfogel does not believe that quantifying the reduction in copyright owners’ revenue is the most important question. Technological change may reduce the amount that copyright owners can receive for their content, but it may also reduce their cost of producing and disseminating content. If the latter effect is greater than the former, technological change may actually enhance incentives to create. To know whether and what kind of legal intervention is needed, we need to determine which effect is greater. The paper tries to quantify technology’s net effect on incentives to create by looking at what has happened to the quality of sound recordings produced since the advent of online file-sharing in 1999.

The paper does so by constructing three original measures of the quality of music over time. One is an index of the volume of high-quality music since the 1960s. It is based on critics’ retrospective lists of the best works over various time periods (e.g., Rolling Stone’s 2004 list of the 500 best albums, based on the opinions of 273 critics and expert musicians, or Pitchfork Media’s 200 best albums of the 2000s), and it contains 88 rankings of either songs or albums from Anglophone countries. Two additional indices track the quality of music from different vintages using data on record sales (RIAA gold and platinum certifications awarded between 1958 and 2010) and on airplay (during 2004–2008, of songs originally released in previous years). Here, the assumption is that higher-quality music should generate more sales and airplay over time, surviving longer in the market. These two indices evaluate vintages of music by the extent to which records from a particular year continue to sell, or to be played, years later.
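The survival intuition behind the sales and airplay indices can be illustrated with a toy computation. This is only a sketch of the idea, not Waldfogel’s actual estimation (which also adjusts for factors such as a vintage’s age at the time of observation); all of the numbers below are invented for illustration:

```python
# Toy illustration of a survival-style vintage-quality index: score each
# release year ("vintage") by how much airplay its songs still receive in
# a later observation window. Not Waldfogel's actual estimation; the data
# below are made-up numbers for illustration only.

from collections import defaultdict

# (vintage_year, plays_observed_later) for a hypothetical song sample
airplay = [
    (1995, 120), (1995, 80),
    (1999, 200), (1999, 150), (1999, 50),
    (2003, 400), (2003, 100),
]

def vintage_index(observations):
    """Average later-period plays per song, grouped by release vintage."""
    totals, counts = defaultdict(int), defaultdict(int)
    for vintage, plays in observations:
        totals[vintage] += plays
        counts[vintage] += 1
    return {v: totals[v] / counts[v] for v in totals}

print(vintage_index(airplay))
# {1995: 100.0, 1999: 133.33333333333334, 2003: 250.0}
```

On this toy measure, a vintage whose songs keep getting played long after release scores higher, which is the sense in which enduring airplay proxies for quality.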

Below are graphs of two of the music quality indices over time (the certifications graph is very similar to the airplay one, both being measures of public enjoyment):

© 2012 by The University of Chicago & Joel Waldfogel. All rights reserved. Reprinted with permission of the University of Chicago and Joel Waldfogel.

© 2012 by The University of Chicago & Joel Waldfogel. All rights reserved. Reprinted with permission of the University of Chicago and Joel Waldfogel.


The paper finds that these indices—measures of music quality—are consistent with one another, and that there is no evidence that the quality of music has declined in the years since Napster. The paper’s certification and airplay indices suggest that music quality has in fact increased substantially since 1999 (as the last graph above shows). The paper concludes by suggesting that its findings are relevant to policymakers setting the strength of copyright protection.

Several assumptions need to be made before using the paper for policymaking, and I would like to note two. First, one would have to accept the indices as good measures of industry output. One could question, however, whether this is so. Clearly, altering the measure of quality may alter the findings about what happened to that quality over time, and thus may alter the policy implications. For example, the finding that music quality increased significantly post-Napster is borne out by two of the paper’s indices (the airplay and certification indices) but not by the third (the experts’ index). It would thus be interesting to see whether the finding of non-decreasing music quality is robust to the use of other measures of quality.

But even assuming that the paper’s findings are robust to alternative measures of quality, another issue remains: What policy implications is one to draw from the paper’s findings? Here, the paper provides qualified guidance. One possibility, implicitly suggested at the conclusion of the paper, is that policymakers should not strengthen copyright protections since copyright owners’ reduced revenue (and potentially profit) was not associated with a decline in the quality of the music industry’s output. The paper’s findings, however, cannot support such a conclusion unequivocally. The years just prior to Napster’s advent and since were characterized not only by technological change, but also by legal change. By and large, copyrights have been strengthened over the time period. We cannot know whether incentives would have remained the same but for the legal change. Perhaps enhancing copyright owners’ rights was necessary to keep production levels constant. It is also possible that a lesser, or a greater, invigoration of copyrights would have enhanced music quality. Largely abstracting away from legal change, the paper leaves these possibilities open.

But these two remarks should not take away from Waldfogel’s substantial contribution to the literature. Rather, they are to acknowledge how much he has pushed the ball forward and how much more insight can be gained on the intersection of copyright law and technological innovation if more research is conducted following his approach.

  1. For a few notable treatments of the topic, see Paul Goldstein, Copyright’s Highway: From Gutenberg to the Celestial Jukebox (2d ed. 2003); Jessica Litman, Digital Copyright (2d ed. 2006); William Fisher, Promises to Keep: Technology, Law, and the Future of Entertainment (2004); Jane C. Ginsburg, Copyright and Control Over New Technologies of Dissemination, 101 Colum. L. Rev. 1613 (2001); Tim Wu, Copyright’s Communications Policy, 103 Mich. L. Rev. 278 (2004), available at SSRN. For my treatment of the topic, see Dotan Oliar, The Copyright-Innovation Tradeoff: Property Rules, Liability Rules, and Intentional Infliction of Harm, 64 Stan. L. Rev. 951 (2012).
  2. There is a growing scholarly empirical literature on the effect of file sharing on the sale of sound recordings. For two summaries of the literature, compare Stan Liebowitz, The Metric is the Message: How Much of the Decline in Sound Recording Sales is due to File-Sharing?, CAPRI Publication 11-02 (2011) (suggesting that file sharing has caused the entire enormous decline in record sales over the past decade), with Felix Oberholzer-Gee & Koleman Strumpf, File Sharing and Copyright, Harvard Bus. Sch. Working Paper 09-132 (2009) (suggesting that file sharing has caused no more than 20% of the decline in music sales).

Fair Use in Context

Michael C. Donaldson, Refuge From The Storm: A Fair Use Safe Harbor For Non-Fiction Works, 59 J. Copyright Soc’y U.S.A. 477 (2012), available at SSRN.

When is a use of a copyrighted work a fair use? This issue has grown in significance with the increase in the economic value of copyrighted works and in the ways in which users can distribute, rework, or otherwise borrow from copyrighted works. The fair-use inquiry is contextual, formally focusing on the nature and purpose of a use, the creative nature of the work, the amount of the work used, and the effect of the use on the copyright owner’s ability to economically exploit the work. For some, fair use’s attention to context renders it an unreliable ally for the diligent user.

However, a number of commentators, including this one,1 have argued that the multifactor inquiry does not lead truly to “case-by-case” adjudication. Instead, the principles of fair use protect certain identifiable patterns or bundles of uses with soft rules while remaining sufficiently open textured to balance interests implicated by new or emerging patterns of use. Others have gone further. My colleagues Peter Jaszi and Patricia Aufderheide have worked with creative communities to identify and articulate best practices in fair use in the context of their patterns of use as described in their recent book Reclaiming Fair Use.

Comes now Michael Donaldson to articulate the soft rule—or in his words the “safe harbor”—that applies when one seeks to make a fair use of a copyrighted work in a new work of non-fiction. Donaldson’s analysis flows not only from his reading of judicial opinions but also from his practice counseling clients and providing formal opinions of counsel that make reliance on fair use an insurable risk for documentary filmmakers, among others.

His article is a worthwhile read for many reasons. On its own terms, the article yields an important and useful insight into the unstated rules of decision that courts use when applying fair use in this context. Donaldson helpfully, and in my view, correctly, identifies the real concerns that animate decision-making. He argues that a fair-use decision-maker is likely to ask the following three questions about the use of a copyrighted work or “asset” in a new work of non-fiction:

  1. Does the asset illustrate or support a point that the creator is trying to make in the new work?
  2. Does the creator of the new work use only as much of the asset as is reasonably appropriate to illustrate or support the point being made?
  3. Is the connection between the point being made and the asset being used to illustrate or support the point clear to the average viewer?

Donaldson argues that when the answer to all three questions is affirmative, the use is within the fair use “safe harbor.” He is careful to also argue that a use may still be a fair use even when the answer to one or more of the questions is “no,” but then the fair-use analysis becomes more context-specific. Additionally, he addresses a number of issues that frequently arise in fair-use decision-making—such as whether the user or copyright owner is acting in good faith or whether the parties discussed a possible license for the use—and argues that these usually serve as distractions. Finally, Donaldson provides an extensive appendix that identifies the cases on which he relies and summarizes how they fare under his three-question test. This is a thoughtful and thought-provoking piece, and scholars and practitioners would do well to engage with Donaldson’s arguments even if they disagree with his particular reformulation of the fair-use inquiry.

This article also is useful to scholars and teachers who seek to better understand the real decision-making process masked by the mechanical jurisprudence that multifactor tests like Section 107 of the Copyright Act or the likelihood-of-confusion test under the Lanham Act sometimes produces (particularly in the latter case). Donaldson presents a model for translating a test that purports to weigh and balance a range of considerations into a more directive inquiry that focuses attention on the facts that really matter in the analysis.

Finally, this article demonstrates how open-ended standards designed to allocate entitlements between private and public interests in intellectual property law can be, and have been, tailored through interpretation to provide a more fine-grained balance than could be readily achieved through legislation. As a result, this article should have appeal both for those interested in the specific application of fair use in the context of non-fiction adaptations and for those who may be inspired to adapt this mode of analysis for other multi-factor legal tests.

  1. See Michael W. Carroll, Fixing Fair Use, 85 N.C. L. Rev. 1087, 1106 (2007) (“There are a range of cases in which the question of fair use recurs. In a few settings, litigation has provided ex ante certainty through the emergence of soft fair use rules.”).

Trademark As Promise

Jeremy N. Sheff, Marks, Morals, and Markets, 65 Stan. L. Rev. 761 (2013).

The primary theory of trademark law in the academic literature is an economic one. Trademarks are a shorthand, the theory goes, for a number of observable and unobservable qualities of products. The trademark PEPSI, to take one example, is an easy way for a consumer to identify the cola that she enjoys without having to investigate other types of information, such as the location or corporate identity of the manufacturer. Indeed, some types of investigation in this regard—tasting the cola before purchase to confirm that it is the preferred drink—are frowned upon, to say the least. So the law regulates the use of trademarks in order to reduce search costs for consumers and, relatedly, to encourage producer investment in goodwill. When an unauthorized producer uses another’s trademark, the consumer is deceived into purchasing an unwanted product or forced to engage in additional efforts to find the product that she desires, both of which are inefficient. Although economic theory may not map neatly onto all areas into which trademark law extends (dilution law being one such example), it appears to be fairly well accepted in the scholarly literature that economic theory provides the predominant justification for trademark law’s existence.

But consumers obviously do not always act in ways consistent with economic theory. The relationships that some consumers have with some brands transcend a mere economic transaction; they involve identity construction and signaling motivated not by a product’s objective qualities but by intangible, emotional responses to the brand. The fact that some consumers are willing to pay many hundreds of dollars for a designer handbag or watch beyond the price that could be justified by the item’s materials or workmanship is a testament to the limits of economic theory.

This suggests that alternate theories of trademark law are required, and Jeremy Sheff, in his thoughtful and sophisticated article, aims to provide one. Sheff begins by noting that although a deontological framework in the Lockean tradition is typically the intellectual property counterpart to the law-and-economics framework, the Lockean justification cannot tell the whole story in trademark as it might for its adherents in copyright law or patent law. Lockean labor theory, to the extent one favors it, maps best onto intellectual property schemes where the goal is to incentivize the production of intellectual property, which trademark law does not. Indeed, although early trademark doctrine focused, as Mark McKenna has detailed, on the moral notion of unfair competition, modern trademark doctrine is primarily concerned with consumer confusion, which Lockean labor theory, with its focus on harms committed by one producer to another, doesn’t address. Thus, the economic or consequentialist justification can identify both a producer-side wrong (free riding) and a consumer-side wrong (enhanced search costs), but a deontological justification typically relates only to a producer-side wrong (misappropriation of the fruits of one’s labor).

Sheff therefore proposes a Kantian contractualist theory to fill this gap, in which actors are motivated not by consequentialist notions but by a moral imperative to act in a particular manner—as one might characterize it, motivation by “good will,” not by “goodwill.” Sheff notes that under this theory, individuals are treated as ends, not means, which requires an acknowledgement of “the unique capacity of rational beings to choose what ends they will pursue and to settle on actions to achieve those ends” (P. 777) and a commitment not to interfere with those choices by lying or deception. In essence, this is a theory of “trademark as promise.”1

Sheff then turns to an initial application of a contractualist theory to trademark law. Significantly, his theory addresses a hypothetical with which a consequentialist theory has difficulty: What happens when a consumer is misled into buying a product from producer X because of X’s use of Y’s trademark, but X’s and Y’s goods are of identical quality, such that no harm is done? Under a contractualist theory, there is indeed still harm done: To the extent that the use of a trademark is a promise from the producer to the prospective consumer that the producer is who he says he is (and assuming that the consumer is purchasing goods based on this representation), the harm is the very breaking of that promise through deception because it deprives the consumer of autonomy in the marketplace.

To be sure, not every doctrine in trademark law can be explained this cleanly. Later in the article, Sheff turns to post-sale confusion, a more complicated application of his premise, in which the confused consumer is not the point-of-sale purchaser but some later observer, an application that raises the question of whether it is the defendant producer or the status-seeking consumer who is the breaching party. In other words, when Buyer A displays a fake Gucci handbag, thereby deceiving Bystander B into thinking it is genuine, Buyer A may be the morally questionable individual (at least in some circles). But it is the producer of the fake Gucci handbag that is the putative defendant in trademark law—arguably only an accessory to Buyer A’s deception—not Buyer A herself.

The fact that Sheff’s article leaves many questions not fully answered, however, is not a failing but rather a testament to the creativity it sparks, and Sheff assists the reader by highlighting many of these areas for further exploration. What, for example, would a contractualist theory have to say about trademark infringement cases that don’t involve deceptive behavior but instead involve an innocently adopted trademark that turns out to be somewhat similar to the mark of another producer? Does the same notion of a broken promise exist in that instance? Is it a moral violation for a company to change its trademark after a public relations disaster, or for a company to obscure the connections between its brands by using different trademarks for different products? Does a theory of “trademark as promise” limit the extent of any changes a company can make to its ingredients or formulation while maintaining the same brand? Put otherwise, do brand owners also have a claim to autonomy in their choices that is incompatible with a theory of trademark as a promise to a consumer?

Sheff does not purport to set forth an all-encompassing theory, but his proposal is highly compatible with the way we now talk about brands. We are ever more in a world in which consumers engage with many brands as personas. Brands are trusted confidants and comforting companions. They find allegiances with different social groups at different times in their development; they uplift us and betray us. These brands are not simply a way of finding goods in the marketplace; they are also a way of announcing or defining one’s identity, creating relationships with others, signaling wealth, or engaging in any one of a number of expressive functions. Companies respond in kind, by creating advertising or affinity groups that foster this type of engagement, and by aggressively using trademark law as a kind of corporate defamation law, pushing back at uses that offend their view of their brands. If these are our relationships with brands today, then perhaps we should be characterizing their relationships with us as ones of promise, representations, and trust. The difficulty will then be in determining which promises we truly expect brands to keep.

  1. Sheff later offers illustrations of a contractualist theory of markets by examining contract law and the prohibition of insider trading by securities laws, both of which regulate the flow of information among individuals in the market. []

Creative Incentives

Katharina Eckartz, Oliver Kirchkamp, & Daniel Schunk, How Do Incentives Affect Creativity? (CESifo Working Paper Series, Paper No. 4049, 2012), available at SSRN.

The classic justification for intellectual property laws was perhaps stated best by Abraham Lincoln, who, in speaking of the patent system, characterized its function as “adding the fuel of interest to the fire of genius.” Put less poetically, IP aims to encourage creativity by granting creators exclusive property rights in their creations. That way, if a patented invention or copyrighted work turns out to be worth money, the creator will benefit, rather than a copyist.

That sounds entirely sensible in theory. We think that people generally respond to incentives. Make gasoline more expensive by taxing it, and people generally use less of it. Give people a tax break on home mortgages, and they build more and bigger houses. Make creativity a little less risky, and the payoff a bit more certain, and we’ll get more investment in creative labor.

But is creativity really like gas and houses? Can we get more of it by providing incentives? The answer is bound to differ a lot depending on what sort of creativity we’re talking about. Poetry and drug discovery are both creative endeavors, but they are more different than alike. It is difficult to imagine piracy leading poets to throw down their pens. But we may more readily imagine a drug company cutting its research budget if others are permitted to copy the pills the company spends hundreds of millions to develop. In between poetry and pharma there is a lot of territory. Where in this spectrum of creativity do intrinsic incentives to create leave off? And where do we need to introduce extrinsic incentives—whether through prizes, IP laws, or other schemes like government funding of basic research? On these questions, we know surprisingly little. Do incentives work? If so, when? And what sort of incentives are best?

A new paper by three European economists sheds some light on those questions. In How Do Incentives Affect Creativity?, Katharina Eckartz and Oliver Kirchkamp of Friedrich-Schiller-University Jena, and Daniel Schunk of the University of Mainz present the results of a carefully designed and admirably creative experiment (hereinafter, the “E-K-S Experiment”).

The authors are interested in whether compensation schemes that pay people more when they perform better provoke creative effort, relative to payment of a flat fee that does not vary based on performance. That’s a narrow question but, as is so often the case, one that is surprisingly difficult to test.

The E-K-S Experiment employs a word task as its measure of creativity. Participants were presented with a series of alphabetically-ordered lettersets consisting of 12 letters—an example would be “accdeeeginst”—and asked to construct as many words as they could, in five minutes, using only the letters in the letterset. Longer words were worth more than shorter ones, with a word’s value increasing more than proportionally with its length.
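The mechanics of the word task are easy to sketch. Here is a minimal Python illustration of checking whether a candidate word can be built from a letterset (respecting letter multiplicities) and scoring it superlinearly by length. The quadratic point schedule is an assumption for illustration; the paper uses its own payoff table.

```python
from collections import Counter

def can_build(word: str, letterset: str) -> bool:
    """True if `word` uses only letters available in `letterset`,
    respecting how many times each letter appears."""
    need = Counter(word)
    have = Counter(letterset)
    return all(have[ch] >= n for ch, n in need.items())

def score(word: str) -> int:
    """Illustrative superlinear scoring: value grows faster than
    word length. (The exponent is an assumption, not the paper's.)"""
    return len(word) ** 2

letterset = "accdeeeginst"  # the example letterset from the article
print(can_build("dance", letterset))   # True: d, a, n, c, e all available
print(can_build("settee", letterset))  # False: needs two t's, only one present
```

A subject's round score is then just the sum of `score(w)` over every distinct valid word submitted.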

Is assembling words from a letterset really a “creativity task”? The authors contend that the task measures a specific sort of creativity: the ability to combine known elements within a set of pre-determined rules. Of course this is far from the only form of creativity, and it is possible that different sorts of creators might respond to incentives differently. But that’s not really a criticism of the E-K-S Experiment. No one experiment can capture all the facets of the huge range of activities we group as “creativity.” To be a valuable contribution, the E-K-S Experiment need only describe how incentives affect subjects’ performance in a credible model of one sort of creativity; later papers and other researchers can then extend the findings to other types of creative work.

Back to the design of the E-K-S Experiment. The experiment was run with 216 student subjects. Subjects were first required to pass a German-language pre-test. Subjects were also asked to self-report their interest in participating in a range of creative tasks. This was done to assess subjects’ intrinsic motivation to engage in creativity.

Once these preliminaries were completed, subjects performed the creativity task three times, in “rounds” lasting five minutes. Each time the subject performed the task, the payment scheme shifted. The experimenters tested a flat fee scheme, a pay-for-performance scheme—where the amount of money earned increased along with performance—and a “tournament” scheme, in which subjects were divided into groups of four and the highest-scoring in the group earned the lion’s share (but not all) of the prize. Subjects undertook the creativity task within each of these conditions. The order in which the conditions were presented was varied to make sure that performance was not dependent on which condition the subjects completed first (that is, before they either got better at the task, or became tired and bored with it). After performing the creativity task under one of the payment conditions, the subjects were asked to perform a “control” task—i.e., a task, such as a number-adding test, that involves work but not creativity—under the same payment condition. In the final stage of the E-K-S Experiment, subjects were asked to pick one of the payment schemes they wished to repeat. This last stage was included to see whether there would be any significant differences among subjects regarding their choice of “favorite” payment scheme.
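As a rough sketch, the three payment conditions can be modeled as simple payoff functions. Every parameter below (the flat fee, the per-point rate, the prize, and the winner’s share) is invented for illustration—the paper specifies its own amounts—and ties at the top score are handled naively here.

```python
def flat_fee(scores, fee=10.0):
    """Flat scheme: everyone earns the same fee regardless of score."""
    return [fee for _ in scores]

def pay_for_performance(scores, rate=0.5):
    """Linear scheme: pay rises with the subject's own score."""
    return [rate * s for s in scores]

def tournament(scores, prize=40.0, winner_share=0.7):
    """Tournament scheme: within a group, the top scorer takes the
    lion's share (but not all) of the prize; the rest split the
    remainder. The exact split is an assumption for illustration."""
    best = max(scores)
    loser_pay = prize * (1 - winner_share) / (len(scores) - 1)
    return [prize * winner_share if s == best else loser_pay
            for s in scores]

group = [10, 4, 6, 2]          # one group of four subjects
print(flat_fee(group))          # identical payouts
print(pay_for_performance(group))
print(tournament(group))        # top scorer takes most of the prize
```

The experiment’s finding, in these terms, is that subjects’ scores barely moved across the three functions: the scheme mattered far less than the subject.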

So, what were the results? They can be stated very simply: Variation in subjects’ performance was driven almost entirely by subjects’ individual characteristics—i.e., how good they were at the task. The payment incentive schemes had almost no effect on performance in either the creativity or control tasks. Subjects in the flat fee payment condition performed about the same as when they were in the pay-for-performance or tournament schemes. This was true whether subjects’ performance was assessed for complexity of the words they produced, or their originality. Aside from a small, positive effect on word length produced by the pay-for-performance scheme, all effects were too small to be statistically significant.

Subjects’ general indifference to incentive payment schemes carried through, moreover, into the final “self-selection” round. Flat payment schemes—i.e., those not related to performance—were the most popular, with 40.74% of subjects choosing this scheme. (Females favored flat payment more than males, who actually favored pay-for-performance very slightly.) The tournament condition was the least popular, with approximately 27% of subjects choosing it overall, and with women and men choosing tournament at roughly the same rate. This cuts against previous findings that male subjects, supposedly more risk-seeking than their female counterparts, favor tournament settings where most if not all of the rewards go to a winner, leaving the losers with little or nothing. So if the conventional wisdom says that men are from Mars and women from Venus, the results of this experiment suggest that, at least as far as creativity incentive preferences go, the sexes occupy pretty much the same territory right here on Earth.

What did stand out in the E-K-S Experiment was the tendency of higher-performing subjects to sort themselves, in the “self-selection” stage, into either the pay-for-performance or the tournament scheme. As a result, average performance was marginally higher in these two conditions than under the flat fee. This was not, however, because the conditions created some differential incentive effect. Rather, subjects had a relatively accurate sense of their own relative performance (even though the experiment was not designed to give them information about it) and chose performance-based schemes when they were confident of their relative ability. But there is very little normative significance to higher-performing subjects’ self-selection. When these subjects were confined to a flat payment condition, they performed about the same. In short, this subgroup’s preference has little effect on their creative output.

So, what’s the payoff of the E-K-S Experiment for our thinking about IP law? Well, it’s complicated, but here’s a take. You can think about both patent and copyright as analogous to the E-K-S Experiment’s “tournament” payment condition. That is, if you set aside for a moment other compensation options—in particular the prospect of being paid a salary to engage in creative work—an inventor who patents his new machine and a writer who copyrights her screenplay are both entering a market where there are a few (potentially big) winners and a lot of losers. Put differently, only a few patented inventions and copyrighted works are worth real money. The vast majority are worthless. Most people probably realize this. Does it blunt their willingness to enter into tournament markets?

There is a good deal of research into these “tournament” markets, and most of it suggests that people are quite optimistic about their chance of success, and are attracted to the prospect of a big payoff, even if any individual competitor is very unlikely to actually win it. The E-K-S Experiment, however, suggests otherwise. Subjects were no more creative in the tournament market setting. Nor, in general, did subjects favor that form of compensation over flat fee or linear performance-based incentives.

Which leads to the following hypothesis—and it is only a hypothesis, because the E-K-S Experiment suggests it but does not test it directly: In at least some settings—and at least for individual creators (the behavior of corporate creators is not modeled in the E-K-S Experiment)—getting paid a flat salary to do creative work is every bit as effective an incentive to perform as the prospect of striking it rich via a valuable patent or copyright. The hard part is figuring out when this is true. And why.


The Interpretation-Construction Distinction in Patent Law: Is It Just a Matter of Semantics?

Tun-Jen Chiang & Lawrence B. Solum, The Interpretation-Construction Distinction in Patent Law, Yale L. J. (forthcoming), available at SSRN.

Claim construction is the meat and potatoes of a patent litigator’s diet: it is performed early and often in patent infringement litigation, and it is often outcome determinative. Claim construction’s notoriously uncertain and unpredictable nature is therefore highly problematic. In The Interpretation-Construction Distinction in Patent Law, Tun-Jen Chiang and Lawrence B. Solum argue that courts and commentators have misdiagnosed the root cause of this problematic unpredictability, and they lay out a new route forward for courts seeking to make claim construction more predictable. At the end of the day, I am unconvinced that the patent community should follow this route. Nonetheless, I think that The Interpretation-Construction Distinction is a provocative read that forces the reader to clarify what are sometimes implicit, unarticulated assumptions about the nature of claim construction in order to mount an effective rebuttal.

Drawing on a literature that explains how courts give legal effect to other legal documents (including the Constitution and contracts), Chiang and Solum offer a new perspective on claim construction. They argue that many courts that perform claim construction are not employing a single process but, rather, are in fact employing two distinct processes. First, there is interpretation, or the process of determining the linguistic meanings of words. For Chiang and Solum, linguistic meaning is entirely determined by the understandings of an audience, and it is therefore factual and objective. “The ideas and concepts that the intended audience will comprehend from a certain text is simply a fact of the world.” (P. 15.) Second, there is construction, or the process of imbuing claim language with legal import in order to achieve particular policy outcomes. For Chiang and Solum, any process in which courts consider the policy of optimal claim scope cannot be interpretation and must be construction. “[L]inguistic meaning is the domain of interpretation, and it is factual, and there is no ‘should’ in that question.” (P. 22.) “[L]inguistic meaning is beyond the control of, and thus not dependent upon, the normative preferences of a third-party interpreter such as a judge.” (P. 15.) For example, any attempt of a court “to tailor patent scope to the real invention” is an act of construction because it requires a court to consider normative, patent-policy concerns to identify the level of abstraction at which the “real invention” should be identified. (P. 4.)

With the interpretation-construction distinction in place, Chiang and Solum draw a sharp distinction between what conventional wisdom identifies as the source of uncertainty of claim construction and what they identify as its source. Conventional wisdom, they argue, is that the uncertainty derives predominantly from interpretation and thus uncertainty in the descriptive breadth of words’ meanings. In contrast, Chiang and Solum argue that the uncertainty derives primarily from a disagreement among judges concerning the legal import with which courts should imbue claim language in order to achieve particular policy outcomes. They argue that courts frequently use claim construction to override the legitimate range of interpretation of a word’s meaning and to give a legal effect to a word that is beyond the bounds of what the word actually means as a linguistic matter:  “[T]he cause of uncertainty in claim analysis is typically not a linguistic defect, but rather normative disagreement.” (P. 11.)

I do not want to suggest that Chiang and Solum are per se wrong when they draw a categorical distinction between interpretation and construction. In fact, I believe that they offer one theoretically plausible way of describing what goes on when a court engages in claim construction. However, I worry that they oversimplify the conventional wisdom on claim construction and that the distinction between the extant debate and the Chiang/Solum thesis is itself just a matter of semantics. That is, I wonder if Chiang’s and Solum’s overarching thesis is novel only because it uses newly crafted concepts to describe the same old controversy that has been percolating in the courts and law reviews for the last fifteen years or so.

Chiang and Solum assume that the existing debate about claim construction is focused on uncertainties in “linguistic meaning” as they themselves define the concept. That is, they assume that scholars and judges who talk about meaning are talking about an unmediated phenomenon governed solely by the understandings of the intended audience—namely, the actual, real-world PHOSITAs. Yet, the linguistic meaning of a term in a patent claim need not be defined in this manner, and I am doubtful that the extant literature on claim construction has routinely defined it in this manner, either. Why can’t linguistic meaning be mediated by courts? What is wrong with saying that courts have authority to establish the rules that must be followed to identify the meanings of the words in a patent claim? There is an Academie Francaise that in some sense determines what French means as a linguistic matter, so why can’t there be courts that establish a claim construction methodology, i.e., the rules that one must follow to determine what patent claims mean as a linguistic matter?1 As soon as patent claims are recognized as a realm in which there are special rules for linguistic meaning and as soon as the courts are given an instrumental role in determining these rules, the interpretation-construction distinction blurs into a continuum. There are certainly cases in which the meanings of claim terms according to the court-determined rules will be closer to and farther away from the meanings of those claim terms to actual, real-world PHOSITAs without knowledge of those legal rules, but there is no dichotomy.

Chiang and Solum reject the possibility of a court influencing the linguistic meanings of claim terms by citing a concern about “circular, confused, or misleading argumentation”:

“[T]he standard doctrine…says that courts follow the “meaning” of text. If, for this purpose, the ‘meaning’ referred to the legal effect, then we have a circularity: the court follows the meaning of text, but the meaning is the legal outcome, which is entirely within the control of the court. Under this standard, a court could do anything and it would always be right, which would make clear thinking impossible.” (Pp. 29–30.)

But this argument itself is confused. If courts were to take an ad hoc approach to claim construction, then Chiang and Solum’s concern might have some merit. However, claim construction disputes often center on methodology: the set of rules for assessing the meaning of claim terms should be established ex ante, prior to and independent of any given patent claim. The courts’ determination of claim scope is a two-stage game. First, courts establish rules that determine how meaning is to be determined. Second, courts use those rules to determine the meaning of the words in any given claim. The meaning of a text can be its legal effect, and the rules for determining meaning can be both judicially established and steeped in patent policy, yet courts cannot do whatever they please in any given case. Courts must follow the rules, even if the rules were established by courts in earlier cases.2

Chiang and Solum argue that the payoff of recognizing that construction, not interpretation, is the primary source of the uncertainty in contemporary claim construction lies in the transparency of intention that follows from calling a spade a spade. If courts and commentators recognize that the disagreements between judges and litigators in claim construction disputes are in fact disagreements over the policy of optimal claim scope rather than disagreements over the actual meaning of language, they posit that we are more likely to identify potential solutions and have rational arguments about which of those solutions will maximize the social-welfare gains of the patent regime. I agree entirely with Chiang and Solum that overt recognition of the role that patent policy does and should play in debates over the optimal claim construction methodology should be encouraged. For any judges or patent practitioners out there who truly believe that the debate over the best claim construction methodology is and should be devoid of policy concerns, Chiang and Solum have provided a sorely needed wake-up call. Yet, I’m not convinced that the distinction between interpretation and construction is either necessary or even helpful to facilitate this recognition. Although it may well just be the rose-colored reading glasses that I wear, I have always read the judicial and scholarly debate over the best methodology for determining the linguistic meanings of words in claims (i.e., the conventional wisdom on claim construction) as a deeply normative debate that is steeped in patent policy. Because I understand word meaning to be a mediated phenomenon in patent law, I read the conventional wisdom to suggest that the courts should establish rules to imbue the words of a patent claim with linguistic meaning that best achieves policy goals (e.g., predictability, optimal claim scope, or something else). 
Thus, there is no need to recognize a distinction between interpretation and construction to have the conversation about patent policy that we should be having. Furthermore, my suspicion—and it would concededly take much more space than I have here to flesh this out in a convincing manner—is that switching over to Chiang’s and Solum’s way of talking about claim construction would do more to muddy the waters than to make our policy goals more readily obtainable.

So, in the end, my concern is that the difference between the conventional way of describing what courts do, on the one hand, and Chiang’s and Solum’s way, on the other hand, is itself just a matter of semantics. Chiang and Solum say the rules for determining the meanings of words in a patent claim are exogenous to both the courts and patent policy, so courts need to engage in policy-driven construction in addition to interpretation. Both I and at least some purveyors of the conventional wisdom on claim construction say the rules for determining the meanings of the words in a patent claim should themselves be created by courts in light of patent policy. That is, they say po-TAY-to, and I say po-TAH-to. So long as we agree that the choice of a set of rules for performing claim construction is a choice that should be steeped in patent policy, maybe we should call the whole thing off. If we don’t, each side must at least recognize how the other side defines the term “meaning” so that we don’t just talk past each other.

  1. While it would be an interesting philosophical debate to question whether a phrase in French means what the Academie says it means or what the average francophone thinks it means (to the extent that the two meanings diverge), I’m not seeking a single, dispositive answer. My point is that it is entirely imaginable to say that a third-party intermediary establishes the rules that fix the meanings of words, even when those words are articulated by a first group and understood by a second group. []
  2. Given the persistent nature of the debate over claim construction in the Federal Circuit, one could argue that the belief in the possibility of a set of clear rules for giving meanings to claim terms is a pie-in-the-sky fantasy. I am sympathetic to this argument, but it does not undermine the point I seek to make here. It undermines the possibility of greater certainty in claim construction regardless of whether or not one embraces the interpretation/construction distinction; recognizing the interpretation/construction distinction does not turn the fantasy of public notice into reality as we still need a set of clear rules for establishing when and how courts should engage in construction. []

Unauthorized Improvement Across Property Law

Deepa Varadarajan, Improvement Doctrines, 21 Geo. Mason L. Rev. (forthcoming 2014), available at SSRN.

Scholars often debate whether intellectual property really is property at all. This is far more than just a descriptive inquiry. Asking how law regulates tangible and intangible goods differently can deliver valuable insights about the optimal governance of real property and chattels, patents and copyrights. Deepa Varadarajan’s engaging piece, Improvement Doctrines, forthcoming in the George Mason Law Review, represents an important contribution to the growing literature about what property and IP can help us learn about one another. Improvement Doctrines focuses on the fascinating but underappreciated body of doctrines in physical property law that favor—and sometimes entirely excuse—trespass and conversion that is done in good faith and that adds significant value to the res. Professor Varadarajan’s article then uses these improvement doctrines as a lens through which to analyze intellectual property’s relatively anemic attempts to take account of improving but unauthorized uses of patented inventions and copyrighted works of authorship. In so doing, Improvement Doctrines identifies and illuminates a series of fascinating problems that span both physical and intellectual property law.

Professor Varadarajan’s article begins with physical property’s improvement doctrines and moves into an investigation of unauthorized improvement in intellectual property law. In so doing, though, it neither assumes that tangible and intangible property law must mechanically mirror each other, nor insists that the differences between corporeal and incorporeal goods render any such extrapolations useless. Rather, Improvement Doctrines’ primary analytical driver is its thoughtful synthesis of the efficiency and equity rationales underlying ameliorative waste, accession, adverse possession, and mistaken improvement of land. This discussion alone yields a number of valuable insights, such as the creative point that adverse possession warrants categorization as an improvement doctrine even though its substantive law does not require enhancements to land.

Professor Varadarajan’s piece then leverages this discussion of the structure of property’s improvement doctrines to draw attention to a perplexing problem that has largely escaped the attention of scholars: Property law is in many respects more forgiving of unauthorized improving use than intellectual property law, even though both patent and copyright exist only pursuant to a constitutional mandate to advance the progress of science and the useful arts. The piece concludes with a series of suggestions for how to make IP law more closely approximate the kind of toleration toward improvers that physical property law exhibits. These final prescriptive claims, though, rest on the undefended assumption that more toleration for improvement in IP law would foster more creation and invention. It is at least worth pondering the possibility that greater leeway for unauthorized improvement of others’ inventions and works of authorship would undermine authors’ and inventors’ incentives to create them in the first place.

Like all good work, Improvement Doctrines not only makes a strong case for its central claims but also raises a host of other provocative points to contemplate. Among these, perhaps the most interesting flow from Professor Varadarajan’s apt observation that property law’s improvement doctrines reflect the capacity of trespass and conversion to create social welfare. This point carries a pair of intriguing implications: First, law has traditionally assumed that the best arbiter of a productive use of property is the owner of the res. But Improvement Doctrines reminds us that this is by no means always the case. Landowners may let productive space go unused, so that we end up cutting a break for the good faith improver who mistakenly builds a house on a vacant lot. And copyright owners may guard their works too jealously, so that we need law to favor the user who makes an unauthorized but socially valuable derivative work.

Second, and related, it’s not only unauthorized improvement of property that can create substantial social value, but unauthorized use. Professor Varadarajan’s analysis elucidates the case of the unauthorized use that transforms property, like building a house on the wrong lot or making a remix of sound recordings without the owners’ permission. But not all socially productive unauthorized use of property necessarily creates something new or improved in the process. Trespass over an unimproved plot of land may provide a helpful means of access at no cost to the owner. Making an unauthorized digital version of an obscure, out-of-print book may provide the only way for a student to acquire a copy of that text for class. The improvement doctrines on which Professor Varadarajan focuses may be only a part of a larger picture that situates in doctrine and theory the unappreciated upsides of unauthorized use of property, physical and intellectual alike.


Knowing a New Business Model When You See It: Adult Entertainment Innovations And Adaptations in a Post-Napster Era

Kate Darling, What Drives IP without IP? A Study of the Online Adult Entertainment Industry (February 1, 2013), available at SSRN.

Throughout media history, purveyors of illicit content have always had to think on their feet when faced with economic or legal dilemmas. Never the darling of regulators, law enforcement, or public representatives, adult entertainment companies have pretty much been left to their own devices for dealing with new challenges. It comes as no surprise, then, that in a post-Napster era of expansive networks, easy copying, and free content, porn producers must rethink how they approach their products and profitability.

In her excellent article, What Drives IP without IP? A Study of the Online Adult Entertainment Industry, MIT Research Specialist Kate Darling investigates the current state of the American sexual expression industry to sort out the various perspectives and approaches its players are taking to the shifting ground they currently tread. What she finds is both an affirmation of numerous common sense intuitions (e.g., that porn producers rely less on copyright enforcement and more on lower costs, higher video quality, content curation, and targeted marketing to maintain market share), as well as some surprising discoveries (e.g., that “experience goods” such as live interaction, gaming, and 3D are becoming a staple for building solid online brands).

More fundamentally, though, Darling’s article brings home a classic post-Napster lesson that all content industries have had to learn at some point: consumer expectations have changed. No matter what you think about copyright, “content theft,” BitTorrent, or digital rights management, the majority of consumers will no longer tolerate significant friction between them and what they want online. Darling presents several credible testimonials of this realization within the adult entertainment industry, each bringing a different emotion to the narrative as if they were part of the Kübler-Ross model: shock, disbelief, anger, tolerance, and, finally, acceptance. In terms of intellectual property in particular, Darling shows that this shift has, in fact, led to some reduction in productivity. Yet she makes a convincing case that it has also forced the industry to adapt in new ways, building new and better distribution platforms, and serving new markets with niche products and services.

Darling’s article also fits nicely into a growing body of empirical research on industries and individuals and their relationship to intellectual property. Through her interviews, she provides yet another narrative where content producers (the group one might assume is most in favor of IP) are deeply uncomfortable with copyright. On the one hand, they do want people to pay for what they produce. On the other hand, they seem less concerned about both exclusivity and enforcement. As long as they can make some money, leaky copyrights aren’t a big problem. Part of this attitude may derive from some characteristics of adult entertainment producers—for example, the quantity (large) and quality (varied) of their products—but more likely, it is further evidence of how difficult and expensive the current copyright system is to use. Given the dramatic shifts in production timelines, distribution modes, and consumption rates, the practical cost of registering one’s copyright and hiring a lawyer to enforce it probably seems antiquated, especially compared to spending the money on producing a new line of content.

Darling’s article also highlights the way in which adult entertainment producers have capitalized on the market for private viewing. Unlike office workers who may want to share funny YouTube videos or emails for comic relief, pornography consumers often want sanctuary and security for their viewings. Darling’s interviews show that the adult industry is using its understanding of this preference to construct products and services that explicitly cater to those conditions—something the movie, music, and newspaper industries continue to struggle to provide, even though their customers also often desire privacy to consume their products.

As the Copyright Office and members of Congress reimagine “The Next Great Copyright Act” in the coming months and years, one hopes that studies such as this one will help form the foundation for a set of policies that are evidence-based and appropriate for a networked society.


Out of the Mouths of Babes: Studying Children’s Judgments about Creativity, Ideas, and Ownership

Alex Shaw, Vivian Li & Kristina R. Olson, Children Apply Principles of Physical Ownership to Ideas, 36 Cognitive Science 1383 (2012), available at Yale University.

Legal scholars who study intellectual property rarely think about children.  Children are almost never inventors of patented technologies, and, although children’s drawings technically receive copyright protection the moment they are created, children rarely appear as litigants in disputes.

But recent research coming from psychologist Kristina Olson’s lab suggests that we should be thinking more about children.  In their new paper, Olson and her graduate students Alex Shaw and Vivian Li, report the results of experiments testing children’s intuitions about the ownership of ideas.  This paper provides an interesting insight into the development of our ideas about intellectual property and creativity, and it should be widely read by IP scholars.

Olson is interested in learning how and when children’s judgments about creativity, ideas, and ownership emerge.  Research by other developmental psychologists had shown that very young children (2+ years old) apply concepts of ownership to physical objects. In earlier work, Olson showed that 6-year-old, but not 4-year-old, children respond negatively to those who plagiarize others’ ideas.  That research suggested the possibility that children of a certain age think about ideas as things that can be owned.

In this project, Olson and her colleagues set out to test whether children apply principles of physical ownership to ideas.  In a series of studies, 6- to 8-year-old children were presented with vignettes that were intended to test whether they applied fundamental principles of ownership—first possession, non-transfer of ownership via theft, and control of permission—to ideas.  For example, children were given the following stimulus:

Steven has been trying to come up with a song about a dragon but has not come up with a song. Zack sees this and comes up with a song about a dragon. Who owns the song about the dragon?

The authors found that children apply the first possession heuristic—that the first possessor of a thing, not the first pursuer, is its owner—to intangible ideas in much the same way that adults do.  In addition, when told of a situation in which one person steals an idea from the person who initially had the idea, children declare that the original thinker, and not the thief, is the idea’s owner.  Children also seem to pay attention to whether an individual controls access to an idea in determining its owner—a finding that is striking considering the important role that control and exclusion play in theories of ownership of property and intellectual property.  Importantly, however, the authors show that children do not apply ownership indiscriminately.  For example, children tend not to say that someone owns a mere word, as opposed to a complete idea.

Olson and colleagues speculate on the origins of children’s sense that ideas can be owned. They suggest that children develop concepts of ownership applied to physical objects early in life, and, as they develop, children learn to apply those same concepts to ideas.  This learning may come from explicit socialization about rules or through implicit cues related to the value that is placed on idea creation.  Interestingly, they suggest that these effects may be weakened in non-Western cultures that are less individualist and more prone to see ideas as public goods. William Alford’s work on Chinese IP law is certainly relevant here.

This research raises a number of interesting issues for IP scholars.  Most interesting to me is what these findings tell us about the emergence and strength of IP metaphors and the moral psychology of creativity and copying.  Much recent IP scholarship has focused on (and often critiqued) the use of real property metaphors in IP debates.  Concepts like “property rights” and “theft,” it is argued, are inappropriate to copyrights and patents.  Bill Patry, for example, has noted how property and birth metaphors are often used in IP debates to promote stronger rights and tougher enforcement, and he has suggested that we should instead understand copyright law as establishing “social relations” between people.

But the work of Olson and her colleagues suggests that getting people to think differently about IP may be incredibly difficult.  Although their research does not suggest that treating ideas like physical property that can be owned is evolutionarily hard-wired into our brains, it does suggest how easily our minds seem to take to these concepts.  Even without explicit socialization, it seems, 6- to 8-year-old children begin to think of ideas the way they do physical property.  In combination with Olson’s work on children’s plagiarism judgments, this implies that, at a very early age, children are developing moral judgments about the kinds of issues—creativity, copying, and ownership—that are at the heart of IP law.

In other recent and forthcoming research, Olson’s lab is investigating the extent to which children value the contribution of ideas vs. labor in artistic creation and children’s ideas about reputation and attribution.  IP scholars should keep a close eye on all of this work.