Listing entries tagged with Copyright and Copyleft
iTunes U: more read/write than you'd think 02.22.2006, 8:02 AM
In Ben's recent post, he noted that Larry Lessig worries about the trend toward a read-only internet, the harbinger of which is iTunes. Apple's latest (academic) venture is iTunes U, a project begun at Duke and piloted by seven universities -- Stanford, it appears, has been most active. Since Apple is looking for a large-scale rollout of iTunes U for 2006-07, and since we have many podcasting faculty here at USC, a group of us met with Apple reps yesterday.
Initially I was very skeptical about Apple's further insinuation into the academy, and yet what iTunes U offers is a repository where instructors can store podcasts, with several components similar to courseware such as Blackboard. Apple stores the content on its servers but the university retains ownership. The service is fairly customizable -- you can store audio, video with audio, slides with audio (aka enhanced podcasts), and text (though only in PDF). You then populate the class via university course rosters, which are password protected.
There are also open access levels through which the university (or, say, the alumni association) can add podcasts or vodcasts of events. And it is free. At least for now -- the rep got a little cagey when asked how long this would be the case.
The point is to allow students to capture lectures and such on their iPods (or MP3 players) for the purposes of study and review. The rationale is that students are already extremely familiar with the technology so there is less of a learning curve (well, at least privileged students such as those at my institution are familiar).
What seems particularly interesting is that students can either speed up the lecture audio without changing pitch (and lord knows there are some whose speaking I would love to accelerate) or, say, in the case of an ESL student, slow it down for better comprehension. Finally, there is space for students to upload their own work -- podcasting has already been assigned to some of our students.
Part of me is concerned about further academic incorporation, but a lot more parts of me are thinking that this is not only a chance to help less tech-savvy profs employ the technology (the ease of collecting and distributing assets is germane here) but also a chance to really push the envelope in terms of copyright, educational use, fair use, etc. Apple wants to use only materials that are in the public domain or Creative Commons initially, but undoubtedly some of the muddier digital use issues will arise, and it would be nice to have academics involved in the process.
Posted by virginia kuhn at 08:02 AM
| Comments (0)
tags: Copyright and Copyleft , Education , Publishing, Broadcast, and the Press , apple , copyright , elearning , fair_use , ipod , itunes , itunes_u , podcast , read/write_web , stanford , university
lessig: read/write internet under threat 02.17.2006, 1:51 PM
In an important speech to the Open Source Business Conference in San Francisco, Lawrence Lessig warned that decreased regulation of network infrastructure could fundamentally throw off the balance of the "read/write" internet, gearing the medium toward commercial consumption and away from creative production by everyday people. Interestingly, he cites Apple's iTunes music store, generally praised as the shining example of enlightened digital media commerce, as an example of what a "read-only" internet might look like: a site where you load up your plate and then go off to eat alone.
Lessig is drawing an important connection between the question of regulation and the question of copyright. Initially, copyright was conceived as a way to stimulate creative expression -- for the immediate benefit of the author, but for the overall benefit of society. But over the past few decades, copyright has been twisted by powerful interests to mean the protection of media industry business models, which are now treated like a sacred, inviolable trust. Lessig argues that it's time for a values check -- time to return to the original spirit of copyright:
It's never been the policy of the U.S. government to choose business models, but to protect the authors and artists... I'm sure there is a way for [new models to emerge] that will let artists succeed. I'm not sure we should care if the record companies survive. They care, but I don't think the government should.
Big media have always lobbied for more control over how people use culture, but until now, it's largely been through changes to the copyright statutes. The distribution apparatus -- record stores, booksellers, movie theaters etc. -- was not a concern since it was secure and pretty much by definition "read-only." But when we're dealing with digital media, the distribution apparatus becomes a central concern, and that's because the apparatus is the internet, which at present, no single entity controls.
Which is where the issue of regulation comes in. The cable and phone companies believe that since the culture flows through their physical infrastructure, they should be able to control how it flows. They want the right to shape the flow of culture to best fit their ideal architecture of revenue. You can see, then, how if they had it their way, the internet would come to look much more like an on-demand broadcast service than the vibrant two-way medium we have today: simply because it's easier to make money from read-only than from read/write -- from broadcast than from public access.
Control over culture goes hand in hand with control over bandwidth -- one monopoly supporting the other. And unless more moderates like Lessig start lobbying for the public interest, I'm afraid our government will be seduced by this fanatical philosophy of control, which when aired among business-minded people, does have a certain logic: "It's our content! Our pipes! Why should we be bled dry?" It's time to remind the media industries that their business models are not synonymous with culture. To remind the phone and cable companies that they are nothing more than utility companies and that they should behave accordingly. And to remind the government who copyright and regulation are really meant to serve: the actual creators -- and the public.
Posted by ben vershbow at 01:51 PM
| Comments (6)
tags: Copyright and Copyleft , DRM , Network_Freedom , broadband , copyleft , copyright , internet , lessig , media , network_freedom , network_neutrality , policy , read/write_web
can there be a compromise on copyright? 02.08.2006, 7:19 AM
The following is a response to a comment made by Karen Schneider on my Monday post on libraries and DRM. I originally wrote this as just another comment, but as you can see, it's kind of taken on a life of its own. At any rate, it seemed to make sense to give it its own space, if for no other reason than that it temporarily sidelined something else I was writing for today. It also has a few good quotes that might be of interest. So, Karen said:
I would turn back to you and ask how authors and publishers can continue to be compensated for their work if a library that would buy ten copies of a book could now buy one. I'm not being reactive, just asking the question--as a librarian, and as a writer.
This is a big question, perhaps the biggest, since economics will define the parameters of much that is being discussed here. How do we move from an old economy of knowledge based on the trafficking of intellectual commodities to a new economy where value is placed not on individual copies of things that, as a result of new technologies, are effortlessly copyable, but rather on access to networks of content and the quality of those networks? The question is brought into particularly stark relief when we talk about libraries, which (correct me if I'm wrong) have always been more concerned with the pure pursuit and dissemination of knowledge than with the economics of publishing.
Consider, as an example, the photocopier -- in many ways a predecessor of the world wide web in that it is designed to deconstruct and multiply documents. Photocopiers have been unbundling books in libraries long before there was any such thing as Google Book Search, helping users break through the commodified shell to get at the fruit within.
I know there are some countries in Europe that funnel a share of proceeds from library photocopiers back to the publishers, and this seems to be a reasonably fair compromise. But the role of the photocopier in most libraries of the world is more subversive, gently repudiating, with its low hum, sweeping light, and clackety trays, the idea that there can really be such a thing as intellectual property.
That being said, few would dispute the right of an author to benefit economically from his or her intellectual labor; we just have to ask whether the current system is really serving the authors' interest, let alone the public interest. New technologies have released intellectual works from the restraints of tangible property, making them easily accessible, eminently exchangeable and never out of print. This should, in principle, elicit a hallelujah from authors, or at least the many who have written works that, while possessed of intrinsic value, have not succeeded in their role as commodities.
But utopian visions of an intellectual gift economy will ultimately fail to nourish writers who must survive in the here and now of a commercial market. Though peer-to-peer gift economies might turn out in the long run to be financially lucrative, and in unexpected ways, we can't realistically expect everyone to hold their breath and wait for that to happen. So we find ourselves at a crossroads where we must soon choose as a society either to clamp down (to preserve existing business models), liberalize (to clear the field for new ones), or compromise.
In her essay "Books in Time," Berkeley historian Carla Hesse gives a wonderful overview of a similar debate over intellectual property that took place in 18th Century France, when liberal-minded philosophes -- most notably Condorcet -- railed against the state-sanctioned Paris printing monopolies, demanding universal access to knowledge for all humanity. To Condorcet, freedom of the press meant not only freedom from censorship but freedom from commerce, since ideas arise not from men but through men from nature (how can you sell something that is universally owned?). Things finally settled down in France after the revolution and the country (and the West) embarked on a historic compromise that laid the foundations for what Hesse calls "the modern literary system":
The modern "civilization of the book" that emerged from the democratic revolutions of the eighteenth century was in effect a regulatory compromise among competing social ideals: the notion of the right-bearing and accountable individual author, the value of democratic access to useful knowledge, and faith in free market competition as the most effective mechanism of public exchange.
Barriers to knowledge were lowered. A system of limited intellectual property rights was put in place that incentivized production and elevated the status of writers. And by and large, the world of ideas flourished within a commercial market. But the question remains: can we reach an equivalent compromise today? And if so, what would it look like? Creative Commons has begun to nibble around the edges of the problem, but love it as we may, it does not fundamentally alter the status quo, focusing as it does primarily on giving creators more options within the existing copyright system.
Which is why free software guru Richard Stallman announced in an interview the other day his unqualified opposition to the Creative Commons movement, explaining that while some of its licenses meet the standards of open source, others are overly conservative, rendering the project bunk as a whole. For Stallman, ever the iconoclast, it's all or nothing.
But returning to our theme of compromise, I'm struck again by this idea of a tax on photocopiers, which suggests a kind of micro-economy where payments are made automatically and seamlessly in proportion to a work's use. Someone who has done a great deal of thinking about such a solution (though on a much more ambitious scale than library photocopiers) is Terry Fisher, an intellectual property scholar at Harvard who has written extensively on practicable alternative copyright models for the music and film industries (Ray and I first encountered Fisher's work when we heard him speak at the Economics of Open Content Symposium at MIT last month).
The following is an excerpt from Fisher's 2004 book, "Promises to Keep: Technology, Law, and the Future of Entertainment", that paints a relatively detailed picture of what one alternative copyright scheme might look like. It's a bit long, and as I mentioned, deals specifically with the recording and movie industries, but it's worth reading in light of this discussion since it seems it could just as easily apply to electronic books:
....we should consider a fundamental change in approach.... replace major portions of the copyright and encryption-reinforcement models with a variant of....a governmentally administered reward system. In brief, here’s how such a system would work. A creator who wished to collect revenue when his or her song or film was heard or watched would register it with the Copyright Office. With registration would come a unique file name, which would be used to track transmissions of digital copies of the work. The government would raise, through taxes, sufficient money to compensate registrants for making their works available to the public. Using techniques pioneered by American and European performing rights organizations and television rating services, a government agency would estimate the frequency with which each song and film was heard or watched by consumers. Each registrant would then periodically be paid by the agency a share of the tax revenues proportional to the relative popularity of his or her creation. Once this system were in place, we would modify copyright law to eliminate most of the current prohibitions on unauthorized reproduction, distribution, adaptation, and performance of audio and video recordings. Music and films would thus be readily available, legally, for free.
Painting with a very broad brush...., here would be the advantages of such a system. Consumers would pay less for more entertainment. Artists would be fairly compensated. The set of artists who made their creations available to the world at large--and consequently the range of entertainment products available to consumers--would increase. Musicians would be less dependent on record companies, and filmmakers would be less dependent on studios, for the distribution of their creations. Both consumers and artists would enjoy greater freedom to modify and redistribute audio and video recordings. Although the prices of consumer electronic equipment and broadband access would increase somewhat, demand for them would rise, thus benefiting the suppliers of those goods and services. Finally, society at large would benefit from a sharp reduction in litigation and other transaction costs.
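At bottom, Fisher's reward system is a proportional-allocation formula: each registrant receives the tax pool multiplied by their work's share of total measured consumption. A minimal sketch of that arithmetic, with all figures and work names hypothetical:

```python
def allocate_rewards(tax_pool, play_counts):
    """Split a fixed tax pool among registered works in proportion
    to their estimated plays/views (the core of Fisher's scheme)."""
    total_plays = sum(play_counts.values())
    if total_plays == 0:
        return {work: 0.0 for work in play_counts}
    return {work: tax_pool * plays / total_plays
            for work, plays in play_counts.items()}

# Hypothetical quarter: a $1,000,000 pool and three registered works,
# with play counts estimated by sampling (as ratings services do).
payouts = allocate_rewards(1_000_000, {
    "song_A": 600_000,
    "film_B": 300_000,
    "song_C": 100_000,
})
# song_A receives 60% of the pool, film_B 30%, song_C 10%
```

The hard problems Fisher addresses are, of course, outside this sketch: estimating the play counts honestly and setting the size of the pool.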
While I'm uncomfortable with the idea of any top-down, governmental solution, this certainly provides food for thought.
Posted by ben vershbow at 07:19 AM
| Comments (8)
tags: Copyright and Copyleft , DRM , IP , Libraries, Search and the Web , Publishing, Broadcast, and the Press , condorcet , copyleft , copyright , creative_commons , enlightenment , france , free_software , intellectual_property , libraries , music , open_source , photocopy , printing , richard_stallman , xerox
DRM and the damage done to libraries 02.06.2006, 7:51 AM
A recent BBC article draws attention to widespread concerns among UK librarians (concerns I know are shared by librarians and educators on this side of the Atlantic) regarding the potentially disastrous impact of digital rights management on the long-term viability of electronic collections. At present, when downloads represent only a tiny fraction of most libraries' circulation, DRM is more of a nuisance than a threat. At the New York Public Library, for instance, only one "copy" of each downloadable ebook or audio book title can be "checked out" at a time -- a frustrating policy that all but cancels out the value of its modest digital collection. But the implications further down the road, when an increasing portion of library holdings will be non-physical, are far more grave.
What these restrictions in effect do is place locks on books, journals and other publications -- locks for which there are generally no keys. What happens, for example, when a work passes into the public domain but its code restrictions remain intact? Or when materials must be converted to newer formats but can't be extracted from their original files? The question we must ask is: how can librarians, now or in the future, be expected to effectively manage, preserve and update their collections in such straitjacketed conditions?
This is another example of how the prevailing copyright fundamentalism threatens to constrict the flow and preservation of knowledge for future generations. I say "fundamentalism" because the current copyright regime in this country is radical and unprecedented in its scope, yet traces its roots back to the initially sound concept of limited intellectual property rights as an incentive to production, which, in turn, stemmed from the Enlightenment idea of an author's natural rights. What was originally granted (hesitantly) as a temporary, statutory limitation on the public domain has spun out of control into a full-blown culture of intellectual control that chokes the flow of ideas through society -- the very thing copyright was supposed to promote in the first place.
If we don't come to our senses, we seem destined for a new dark age where every utterance must be sanctioned by some rights holder or licensing agent. Free thought isn't possible, after all, when every thought is taxed. In his "An Answer to the Question: What is Enlightenment?" Kant condemns as criminal any contract that compromises the potential of future generations to advance their knowledge. He's talking about the church, but this can just as easily be applied to the information monopolists of our times and their new tool, DRM, which, in its insidious way, is a kind of contract (though one that is by definition non-negotiable since enforced by a machine):
But would a society of pastors, perhaps a church assembly or venerable presbytery (as those among the Dutch call themselves), not be justified in binding itself by oath to a certain unalterable symbol in order to secure a constant guardianship over each of its members and through them over the people, and this for all time: I say that this is wholly impossible. Such a contract, whose intention is to preclude forever all further enlightenment of the human race, is absolutely null and void, even if it should be ratified by the supreme power, by parliaments, and by the most solemn peace treaties. One age cannot bind itself, and thus conspire, to place a succeeding one in a condition whereby it would be impossible for the later age to expand its knowledge (particularly where it is so very important), to rid itself of errors, and generally to increase its enlightenment. That would be a crime against human nature, whose essential destiny lies precisely in such progress; subsequent generations are thus completely justified in dismissing such agreements as unauthorized and criminal.
We can only hope that subsequent generations prove more enlightened than those presently in charge.
Posted by ben vershbow at 07:51 AM
| Comments (4)
tags: Copyright and Copyleft , DRM , IP , Libraries, Search and the Web , books , copyright , digital , digitization , ebooks , enlightenment , fundamentalism , intellectual_property , kant , libraries , library , philosophy , public_domain , scholarship
rethinking copyright: learning from the pro sports? 01.27.2006, 1:10 AM
As Ben has reported, the Economics of Open Content conference spent a good deal of time discussing issues of copyright and fair use. During a presentation, David Pierce from Copyright Services noted that the major media companies are mainly concerned with protecting their most valuable assets. The obvious example is Disney's extreme vested interest in keeping Mickey Mouse, now 78 years old, from entering the public domain. Further, Pierce mentioned that these media companies fight to extend the copyright protection of everything they own in order to protect their most valuable assets. Finally, he stated that only a small portion of their total film libraries is available to consumers. Many people in attendance, including myself and Paul Courant from the University of Michigan, were intrigued by these ideas. Earlier in the conference, Courant explained that 90-95% of UM's library is out of print, and presumably much of that is under copyright protection.
If this is true, then staggering amounts of media are being kept from the public domain, or closed off from licensing, for little or no reason. A little further thinking quickly leads to alternative structures of copyright that would move media into the public domain, or at least increase its availability, while appeasing the media conglomerates' economic concerns.
Rules controlling the protection of assets are nothing new. In US professional sports, for instance, fairly elaborate structures are in place to determine how players can be traded. Common sense dictates that teams cannot stockpile players from other teams. In the free agency era of the National Football League, teams have limited rights to keep players from signing with other teams. Each NFL team can designate a single athlete as a "franchise" player, according to the current Collective Bargaining Agreement with the players' union. This designation gives the team exclusive rights to retain that player against competing offers. Similarly, in the National Basketball Association, when the league adds a new team, existing teams are allowed to protect eight players from being drafted and signed by the expansion team(s). What can we learn from these institutions? The examples show that hoarding players is not good for sports; similarly, hoarding assets is not in the best interest of the public good either.
The sports example has obvious limitations. In the NBA, team rosters are limited to fifteen players; a media company, on the other hand, can hold an unlimited number of assets. Still, applying this model would allow companies to seek extensions on only a portion of their copyright assets. Defining that proportion would certainly be difficult. For instance, it is still unclear to me how this might apply to owners of a single copyrighted property.
Another variant of this model would move the burden of responsibility back to the copyright holder. Here, copyright holders must show active economic use of, and value from, these properties. This strategy would force media companies to make their archives available or put the media into the public domain. These copyright holders need to overcome their fears of flooding the markets and their dated claims of limited shelf space, which are simply not relevant in the digital media / e-commerce age. Further, media companies would be encouraged to license their holdings for derivative works, which would in fact lead to more profits. These implementations would increase revenue by challenging the current shortsighted marketing decisions, which fail to account for the long tail economic value of their holdings. Although these materials would not enter the public domain, they would become accessible.
Would this block innovation? Creators of content would still be able to profit from their work for decades. When copyright existed in its original, limited form, creative innovation was certainly not hindered. Therefore, the argument that declining to protect all of a media company's assets in perpetuity would slow innovation is baseless. Within the current copyright term, holders have ample time to extract value from those assets. In fact, infinite copyright protection slows innovation by removing incentives to create new intellectual property.
Finally, a few last comments are worth noting. These models are, at best, compromises. I present them because the current state of copyright protection and extension seems headed toward former Motion Picture Association of America President Jack Valenti's now infamous suggestion of extending copyright to "forever less a day." Although these media companies have a huge financial stake in controlling these copyrights, I cannot overemphasize the Constitutional basis for eventually placing these materials in the public domain. Article I, Section 8, Clause 8 of the United States Constitution grants Congress the power:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
Under these proposed schemes, fair use becomes even more crucial. Conceding that the extraordinary preciousness of intellectual properties such as Mickey Mouse and Bugs Bunny supersedes rights found in our Constitution implies a similarly extraordinary importance of these properties to our culture and society. Thus, democratic access to these properties for use in education and critical discourse must be equally imperative to the progress of culture and society. In the end, the choice, as a society, is ours. We do not need to concede anything.
what I heard at MIT 01.26.2006, 9:47 AM
Over the next few days I'll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions -- things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I'll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger's "What I Heard in Iraq" series (here and here).
Naturally, I heard a lot about "open content."
I heard that there are two kinds of "open." Open as in open access -- to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process -- work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that "content" is actually a demeaning term, treating works of authorship as filler for slots -- a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control -- open content is often still controlled content.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy's core mission -- education, research and public service -- makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright...
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for "underlying" rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no "fair use" space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an "inter-operability" problem between alternative licensing schemes -- that, for instance, Wikipedia's GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a "totalitarian" intellectual property regime -- a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination -- i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users' privacy: tracking users' consumption patterns in other markets (right down to their local grocery store), pinpointing of users' geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable -- is in fact already happening.
I heard that in an "information economy," user data is a major asset of companies -- an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives -- e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
And I heard a lot about peer-to-peer...
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result -- in the space of that single decade -- the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of "remix culture" are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like "The French Democracy" or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model -- that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be clung to tenaciously, as though they were sacred.
I heard that this will be painful.
Posted by ben vershbow at 09:47 AM
| Comments (5)
tags: Copyright and Copyleft , Education , Network_Freedom , Publishing, Broadcast, and the Press , Remix , academia , academy , broadband , conferences_and_excursions , copyleft , copyright , creative_commons , cyberlaw , democracy , economics , economics_of_open_content , film , freedom , internet , media , monopoly , music , network , open_content , open_spectrum , p2p , politics , publishing , scholarship , technology , wikipedia
letters from second life 01.25.2006, 4:10 PM
Last week, Bob mentioned that Larry Lessig, law professor and intellectual property scholar, was being interviewed in Second Life, the virtual world created by Linden Lab. Having heard a lot about Second Life before, I was pleased to have a reason and an opportunity to create an account and explore it. I quickly learned that it's the Metaverse, as described in Neal Stephenson's Snow Crash, in operation today, and that I'm now a part of it too.
I already covered the actual interview. Here are a few observations from my introduction to SL.
Second Life is a humbling place, especially for beginners. Everything, even the simplest things, must be relearned. It took me five minutes to learn how to sit down, another five to read something, and on and on. Traveling to the site of the Lessig event was an even more daunting task. I was given the location of the event, a name and coordinates, without any idea of what to do with them. Second Life is a vast space, and it wasn't clear to me how to get from one point to another. I had no idea how to travel in SL, and had to ask someone.
I presume it is evident, from my constant trampling over people and inanimate objects, that I'm very new to SL. I continue walking into trees and rocks until I come across someone whose title contains "Mentor," and figure that this is a good person to ask for help. Not knowing how to strike up a private conversation, I start talking out loud, not sure if anyone is going to pay attention.
(I will come to learn that you travel from place to place via teleportation.)
I am relieved to discover that people are basically nice in SL, maybe even nicer than in New York. This fellow avatar is happy to chat and answer questions. Second Life has a feature called "Friends," which operates like Buddies in Instant Messaging. However, I'm not sure what the social protocol for making friends is, so I make no assumptions. As I type "can we be friends?" I sigh with the realization that I am, in fact, back in fourth grade.
People around me have much more sophisticated outfits than I do, so I try out the free clothing features. I darken my pants to a deep blue and my shoes to black. Then my default shirt gets turned into a loose white t-shirt. Somehow I end up looking a bit like a GAP model crossed with Max Headroom. After making my first "friend," another complete stranger comes up to me and just starts giving me clothes. Apparently, my clothes still need a little work. I try on the cowboy boots and faded jeans. Happy that I've moved beyond the standard-issue clothes, I thank my benefactor and begin to make my way to the event.
The builders of Second Life force people to rely on other people within the virtual world, though assistance in the real world certainly helps too. Entering Second Life, the feeling of displacement is acute, as if I had arrived in a new city in the real world knowing a single address, but not a soul, and with no idea how to get around. The virtual world often mimics the real world, yet it still surprises me each time I discover this. It definitely helps to know people, both for learning where to go that's interesting and for learning how to do things.
After teleporting to the event, I found myself among people with common interests, which was great, and similar to attending a lecture in the real world. At different times, I struck up a conversation with an avatar who is a publisher on the West Coast, and then talked to an academic who runs a media center. In both cases, I was talking to the person literally "next" to me.
When I first heard about the interview, I learned that there was limited seating, which seemed strange to me, as it was taking place in a virtual space. When I arrived at the event, I saw the amphitheater with video screens that would show a live web stream of Lessig. The limited seating made more sense once I saw the seats of the theater. I also suspect that the SL servers have a finite capacity for the number of people located within a small area, because movement was jerky around concentrated groups of people. I guess I'll have to wait for the Second Life Woodstock.
The space was crowded with people walking around, chatting, and picking up their free digital copy of Lessig's book, "Free Culture." (I've included a picture of me reading Free Culture in Second Life. You can actually read the text.) The interview is about to begin as an avatar with large red wings walks by me. I say out loud, "I knew she was going to sit in front of me," adding "Just kidding" in case I might be offending someone; who knows who this person could be. Fortunately, she found a seat outside my sight line without incident, and the introductory remarks began.
There was a strange duality: I had to learn not only what was being said, but also how to navigate the environment of a lecture. The interview proceeds within the social norms of a lecture: people are mostly quiet, clap at the right moments, and let the moderator run the question and answer session. Afterwards, I line up to get Lessig to "sign" my virtual book at the virtual book signing, my first virtual public event. I finally stumble my way through the line, all the while asking many questions about what I'm supposed to do. With my signed book in hand, I look at the sky, which is quite dark. I log out and return to the real world.
Posted by ray cha at 04:10 PM
| Comments (0)
tags: Copyright and Copyleft , Games , Mediated Existence , VR , free_culture , gaming , larry_lessig , metaverse , neal_stephenson , second_life , video_games , virtual , virtual_reality
fair use and the networked book 01.23.2006, 3:29 PM
I just finished reading the Brennan Center for Justice's report on fair use. This public policy report was funded in part by the Free Expression Policy Project and describes, in frightening detail, the state of public knowledge regarding fair use today. The problem is that the legal definition of fair use is hard to pin down. Here are the four factors that the courts use to determine fair use:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
Unfortunately, these criteria are open to interpretation at every turn, and have provided little with which to predict any judicial ruling on fair use. In a lawsuit, no one is sure of the outcome of their claim. This causes confusion and fear for individuals and publishers, academics and their institutions. In many cases where there is a clear fair use argument, the target of a copyright infringement action (a cease and desist letter, a lawsuit) does not challenge it, usually for financial reasons. It's just as clear that copyright owners often overreach in pursuing copyright protection, with plenty of misapprehension about what qualifies as fair use. The current copyright law, as it has been written and upheld, is fraught with opportunities for mistakes by both parties, which has led to an underutilization of cultural assets for critical, educational, and artistic purposes.
This restrictive atmosphere is even more prevalent in the film and music industries. The RIAA lawsuits are a well-known example of the industry protecting its assets via heavy-handed lawsuits. The culture of shared use in the movie industry is even more stifling. This combination of aggressive control by the studio and equally aggressive piracy is causing a legislative backlash that favors copyright holders at the expense of consumer value. The Brennan report points to several examples where the erosion of fair use has limited the ability of scholars and critics to comment on these audio/visual materials, even though they are part of the landscape of our culture.
That's why Creative Commons and open-access content are so important. These licenses clearly state how and to what extent you can use copyrighted media. While fair use will always be needed (because some corporations and individuals will maintain complete control over their creative assets), open-access licensing for media will simultaneously invigorate cultural discourse and reduce the incentive to infringe on copyright.
the economics of open content 01.23.2006, 9:31 AM
For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA -- The Economics of Open Content -- co-hosted by Intelligent Television and MIT Open CourseWare.
This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free—and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.
They've assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come...
Posted by ben vershbow at 09:31 AM
| Comments (0)
tags: Copyright and Copyleft , Education , Libraries, Search and the Web , academia , conferences_and_excursions , copyleft , copyright , free_software , gift_economy , library , open_access , open_content , publishing , scholarship
lessig in second life 01.20.2006, 9:27 AM
Wednesday evening, I attended an interview with Larry Lessig, which took place in the virtual world of Second Life. New World Notes announced the event and is posting coverage and transcripts of the interview. As it was my first experience in SL, I will post more on the experience of attending an interview/lecture in a virtual space. For now, I am going to comment on two quotes from Lessig as they relate to our work at the institute.
Lawrence Lessig: Because as life moves online we should have the SAME FREEDOMS (at least) that we had in real life. There's no doubt that in real life you could act out a movie or a different ending to a movie. There's no doubt that would have been "free" of copyright in real life. But as we move online things that were before were free now are regulated.
Yesterday, Bob made the point that our memories increasingly exist outside of ourselves. At the institute, we have discussed the mediated life, and a substantial part of that mediation occurs as we digitize more parts of our lives, from photo albums to diaries. Things we once created in the physical world now reside on the network, which means they are being published. Photo albums documenting our trips to Disneyland or the Space Needle (whose facade is trademarked and protected), which once rested within the home, are uploaded to flickr, potentially accessible to anyone browsing the Internet, a regulated space. This regulation has enormous influence on the creative outlets of everyone, not just professionals. Without trying to sound overly naive, my concern is that the speech and discourse of all people are being compromised. As companies become more litigious about copyright infringement (especially when their arguments are weak), the safeguards of the courts and legislation are not protecting their constituents.
Lawrence Lessig: Copyright is about creating incentives. Incentives are prospective. No matter what even the US Congress does, it will not give Elvis any more incentive to create in 1954. So whatever the length of copyright should be prospectively, we know it can make no sense of incentives to extend the term for work that is already created.
The increasing accessibility of digital technology allows people to become creators and distributors of content. Lessig notes that with each year, mounting evidence from cases such as the Google Book Search controversy shows the inadequacy of current copyright legislation. Further, he insightfully suggests learning from the creations that young people produce, such as anime music videos. Their completely different approach to intellectual property reflects a cultural shift that is running counter to the legal status quo. Lessig suggests that these creative works have the potential to show policy makers that these attitudes are moving toward the original intentions of copyright law. Then policy makers may begin to question why these works are currently considered illegal.
The courts' failure to clearly define an interpretation of fair use puts at risk the discourse that a functioning democracy requires. The stringent attitudes toward using copyrighted material go against the spirit of the original intentions of the law. Although it may not be the role of the government and the courts to actively encourage creativity, it is sad that bipartisan government actions and court rulings actively discourage innovation and creativity.
the book is reading you 01.19.2006, 1:42 PM
I just noticed that Google Book Search requires users to be logged in on a Google account to view pages of copyrighted works.
They provide the following explanation:
Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you're interested in.
So they're tracking how much we've looked at and capping our number of page views. Presumably a bone tossed to publishers, who I'm sure will continue suing Google all the same (more on this here). There's also the possibility that publishers have requested information on who's looking at their books -- geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that's valuable information they want to keep for themselves.
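The mechanics Google describes -- requiring a login, then metering how many pages each account has viewed of a given book -- amount to a per-user counter with a cap. A toy sketch, purely illustrative (Google's actual enforcement logic and limits are not public, and the class and cap below are invented):

```python
class PageViewLimiter:
    """Hypothetical sketch of per-account page-view metering.
    Each (user, book) pair may view at most `cap` distinct pages."""
    def __init__(self, cap=20):
        self.cap = cap
        self.viewed = {}  # (user, book) -> set of page numbers already granted

    def request_page(self, user, book, page):
        pages = self.viewed.setdefault((user, book), set())
        if page in pages:           # re-viewing an already-granted page is free
            return True
        if len(pages) >= self.cap:  # cap reached: deny any new page
            return False
        pages.add(page)
        return True

limiter = PageViewLimiter(cap=2)
limiter.request_page("alice", "book1", 1)  # granted
limiter.request_page("alice", "book1", 2)  # granted
limiter.request_page("alice", "book1", 3)  # denied: cap hit
```

Note that a scheme like this only works if identity is persistent, which is exactly why the login requirement matters: the cap is enforced against an account, not an anonymous browser.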
That's because "the aim of Google Book Search" is also to discover who you are. It's capturing your clickstreams, analyzing what you've searched for and the terms you've used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that's where we'll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon's -- you know, where the computer lists other titles that might interest you based on what you've read, bought or browsed in the past (a system that works only if you are logged in). It's possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
There are many ways Google could eventually capitalize on its books database -- that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world's biggest ebook store. It could start a print-on-demand service -- a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions -- a searchable text to go along with the physical book -- as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books -- individual pages, chapters etc. -- as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags) are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side Google, though a search system, could start feeling more like a retailer. In either case, you'll have to log in first.
Posted by ben vershbow at 01:42 PM
| Comments (5)
tags: Copyright and Copyleft , Libraries, Search and the Web , POD , amazon , books , e-commerce , e-publishing , ebooks , google , google_book_search , google_print , internet , print_on_demand , privacy , publishing , search , web
ESBNs and more thoughts on the end of cyberspace 01.12.2006, 7:31 AM
Anyone who's ever seen a book has seen ISBNs, or International Standard Book Numbers -- that string of ten digits, right above the bar code, that uniquely identifies a given title. Now come ESBNs, or Electronic Standard Book Numbers, which you'd expect would be just like ISBNs, only for electronic books. And you'd be right, but only partly. ESBNs, which just came into existence this year, uniquely identify not only an electronic title, but each individual copy, stream, or download of that title -- little tracking devices that publishers can embed in their content. And not just books, but music, video or any other discrete media form -- ESBNs are media-agnostic.
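The distinguishing feature -- one identifier per copy rather than per title -- can be illustrated with a toy serializer. The identifier format below is invented for illustration only; it is not the real ESBN scheme:

```python
class CopySerializer:
    """Toy issuer of per-copy identifiers: a shared base ID per title,
    plus a running serial number for every individual copy, stream,
    or download. The format is invented, not the actual ESBN layout."""
    def __init__(self):
        self.counters = {}  # title_id -> number of copies issued so far

    def issue(self, title_id):
        n = self.counters.get(title_id, 0) + 1
        self.counters[title_id] = n
        return f"{title_id}-{n:08d}"

s = CopySerializer()
a = s.issue("TITLE123")  # "TITLE123-00000001"
b = s.issue("TITLE123")  # "TITLE123-00000002" -- same title, distinct copy
```

An ISBN answers "which book is this?"; a per-copy identifier additionally answers "which download is this?", which is what makes it usable as a tracking device.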
"It's all part of the attempt to impose the restrictions of the physical on the digital, enforcing scarcity where there is none," David Weinberger rightly observes. On the net, it's not so much a matter of who has the book, but who is reading the book -- who is at the book. It's not a copy, it's more like a place. But cyberspace blurs that distinction. As Alex Pang explains, cyberspace is still a place to which we must travel. Going there has become much easier and much faster, but we are still visitors, not natives. We begin and end in the physical world, at a concrete terminal.
When I snap shut my laptop, I disconnect. I am back in the world. And it is that instantaneous moment of travel, that light-speed jump, that has unleashed the reams and decibels of anguished debate over intellectual property in the digital era. A sort of conceptual jetlag. Culture shock. The travel metaphors begin to falter, but the point is that we are talking about things confused during travel from one world to another. Discombobulation.
This jetlag creates a schism in how we treat and consume media. When we're connected to the net, we're not concerned with copies we may or may not own. What matters is access to the material. The copy is immaterial. It's here, there, and everywhere, as the poet said. But when you're offline, physical possession of copies, digital or otherwise, becomes important again. If you don't have it in your hand, or a local copy on your desktop then you cannot experience it. It's as simple as that. ESBNs are a byproduct of this jetlag. They seek to carry the guarantees of the physical world like luggage into the virtual world of cyberspace.
But when that distinction is erased, when connection to the network becomes ubiquitous and constant (as is generally predicted), a pervasive layer over all private and public space, keeping pace with all our movements, then the idea of digital "copies" will be effectively dead. As will the idea of cyberspace. The virtual world and the actual world will be one.
For publishers and IP lawyers, this will simplify matters greatly. Take, for example, webmail. For the past few years, I have relied exclusively on webmail with no local client on my machine. This means that when I'm offline, I have no mail (unless I go to the trouble of making copies of individual messages or printouts). As a consequence, I've stopped thinking of my correspondence in terms of copies. I think of it in terms of being there, of being "on my email" -- or not. Soon that will be the way I think of most, if not all, digital media -- in terms of access and services, not copies.
But in terms of perception, the end of cyberspace is not so simple. When the last actual-to-virtual transport service officially shuts down -- when the line between worlds is completely erased -- we will still be left, as human beings, with a desire to travel to places beyond our immediate perception. As Sol Gaitan describes it in a brilliant comment to yesterday's "end of cyberspace" post:
In the West, the desire to blur the line, the need to access the "other side," took artists to try opium, absinth, kef, and peyote. The symbolists crossed the line and brought back dada, surrealism, and other manifestations of worlds that until then had been held at bay but that were all there. The virtual is part of the actual, "we, or objects acting on our behalf are online all the time." Never though of that in such terms, but it's true, and very exciting. It potentially enriches my reality. As with a book, contents become alive through the reader/user, otherwise the book is a dead, or dormant, object. So, my e-mail, the blogs I read, the Web, are online all the time, but it's through me that they become concrete, a perceived reality. Yes, we read differently because texts grow, move, and evolve, while we are away and "the object" is closed. But, we still need to read them. Esse rerum est percipi.
Just the other night I saw a fantastic performance of Allen Ginsberg's Howl that took the poem -- which I'd always found alluring but ultimately remote on the page -- and, through the conjury of five actors, made it concrete, a perceived reality. I dug Ginsberg's words. I downloaded them, as if across time. I was in cyberspace, but with sweat and pheromones. The Beats, too, sought sublimity -- transport to a virtual world. So, too, did the cyberpunks in the net's early days. So, too, did early Christian monastics, an analogy that Pang draws:
...cyberspace expresses a desire to transcend the world; Web 2.0 is about engaging with it. The early inhabitants of cyberspace were like the early Church monastics, who sought to serve God by going into the desert and escaping the temptations and distractions of the world and the flesh. The vision of Web 2.0, in contrast, is more Franciscan: one of engagement with and improvement of the world, not escape from it.
The end of cyberspace may mean the fusion of real and virtual worlds, another layer of a massively mediated existence. And this raises many questions about what is real and how, or if, that matters. But the end of cyberspace, despite all the sweeping gospel of Web 2.0, continuous computing, urban computing etc., also signals the beginning of something terribly mundane. Networks of fiber and digits are still human networks, prone to corruption and virtue alike. A virtual environment is still a natural environment. The extraordinary, in time, becomes ordinary. And undoubtedly we will still search for lines to cross.
Posted by ben vershbow at 07:31 AM
| Comments (1)
tags: Copyright and Copyleft , DRM , ESBN , ISBN , Mediated Existence , Publishing, Broadcast, and the Press , Web2.0 , copyright , cyberspace , ebooks , ginsberg , media_consumption , poetry , publishing , reality
defending the creative commons license 12.30.2005, 3:47 PM
interesting question came up today in the office. there's a site, surferdiary.com, that reposts every entry on if:book. they do the same for several other sites, presumably as a way to generate traffic to their site and ultimately to gather clicks on their google-supplied ads. if:book entries are posted with a creative commons license which allows reuse with proper attribution but forbids commercial use. surferdiary's use seems to be thoroughly commercial. some of my colleagues think we should go after them as a way of defending the creative commons concept. we'd love to know what people think.
another view on the stacey/gamma flap 12.28.2005, 12:42 PM
For an alternative view of Lisa's earlier post ... i wonder if Gamma's submission of Adam Stacey's image with the "Adam Stacey/Gamma" attribution doesn't show the strength of the Creative Commons concept. As i see it, Stacey published his image without any restrictions beyond attribution. Gamma, a well-respected photo agency, started distributing the image attributed to Stacey. Isn't this exactly what the CC license was supposed to enable -- the free flow of information on the net? Perhaps Stacey chose the wrong license and didn't mean for his work to be distributed by a for-profit company. If so, that is a reminder to all of us to be careful about which Creative Commons license we choose. One thing i'm not clear on is whether Gamma referenced the CC license; they are supposed to do that, and if they didn't, they should have.
phone photo of london underground nominated for time best photo;
photo agency claims credit for creative commons work 12.28.2005, 10:26 AM
Moblog co-founder Alfie Dennen is furious that the photo agency Gamma has claimed credit for a well-known photo of last summer's London subway bombing — first circulated on Moblog under a Creative Commons license — that was chosen for Time's annual Best Photo contest. Dennen and others in the blogosphere are hoping that photographer Adam Stacey might take legal action against Gamma for what seems to be a breach of copyright.
We at the Institute are still trying to figure out what to make of this. Like everyone else who has been observing the increasing popularity of the Creative Commons license, we've been wondering when and how the license will be tested in court. However, this might not be the best possible test case. On one hand, it seems to be a somewhat imperious "claiming" of a photo widely celebrated for being produced by a citizen journalist who was committed to its free circulation. On the other hand, it seems unclear whether Dennen and/or Stacey are correct in their assertion that the CC license that was used really prohibits Gamma from attaching their name to the photo.
The photo in question, a shot of gasping passengers evacuating the London Underground in the moments after last summer's bombing (in the image above, it's the second photo clockwise), was snapped by Stacey using the camera on his cellphone. Time's nomination of the photo most likely reflects the fact that the photo itself — and Stacey — became something of a media phenomenon in the weeks following the bombing. The image was posted on Moblog about 15 minutes after the bombing, and then widely circulated in both print and online media venues. Stacey subsequently appeared on NPR's All Things Considered, and the photo was heralded as a signpost that citizen journalism had come into its own.
While writing about the photo's appearance in Time, Dennen noticed that Time had credited the photo to Adam Stacey/Gamma instead of Adam Stacey/Creative Commons. According to Dennen, Stacey had been contacted by Gamma and had turned down their offer to distribute the photo, so the attribution came as an unpleasant shock. He claims that the license chosen by Stacey clearly indicates that the photo be given Creative Commons attribution. But is this really clear? The photo is attributed to Stacey, but not to Creative Commons: does this create a grey area? The license does allow commercial use of Stacey's photo, so if Gamma was making a profit off the image, that would be legal as well.
Dennen writes on his weblog that he contacted Gamma for an explanation, arguing that after Stacey told the agency that he wanted to distribute the photo through Creative Commons, they should have understood that they could use it, but not claim it as their own. Gamma responded in an email that, "[we] had access to this pix on the web as well as anyone, therefore we downloaded it and released it under Gamma credit as all agencies did or could have done since there was no special requirement regarding the credit." They also claimed that in their conversation with Stacey, Creative Commons never came up, and that a "more complete answer" to the reason for the attribution would be available after January 3rd, when the agent who spoke with Stacey returned from Christmas vacation.
Until then, it's difficult to say whether Gamma's claim of credit for the photo reflects an accident or deliberate disregard. Dennen also says that he's contacting Time to urge them to issue a correction, but he hasn't gotten a response yet. I'll follow this story as it develops.
google book search debated at american bar association 12.15.2005, 3:50 PM
Last night I attended a fascinating panel discussion at the American Bar Association on the legality of Google Book Search. In many ways, this was the debate made flesh. Making the case against Google were high-level representatives from the two entities that have brought suit, the Authors' Guild (Executive Director Paul Aiken) and the Association of American Publishers (VP for legal counsel Allan Adler). It would have been exciting if Google, in turn, had sent representatives to make their case, but instead we had two independent commentators, law professor and blogger Susan Crawford and Cameron Stracher, also a law professor and writer. The discussion was vigorous, at times heated -- in many ways a preview of arguments that could eventually be aired (albeit under a much stricter clock) in front of federal judges.
The lawsuits in question center around whether Google's scanning of books and presenting tiny snippet quotations online for keyword searches is, as they claim, fair use. As I understand it, the use in question is the initial scanning of full texts of copyrighted books held in the collections of partner libraries. The fair use defense hinges on this initial full scan being the necessary first step before the "transformative" use of the texts, namely unbundling the book into snippets generated on the fly in response to user search queries.
...in case you were wondering what snippets look like
At first, the conversation remained focused on this question, and during that time it seemed that Google was winning the debate. The plaintiffs' arguments seemed weak and a little desperate. Aiken used carefully scripted language about not being against online book search, just wanting it to be licensed, quipping "we're just throwing a little gravel in the gearbox of progress." Adler was a little more strident, calling Google "the master of misdirection," using the promise of technological dazzlement to turn public opinion against the legitimate grievances of publishers (of course, this will be settled by judges, not by public opinion). He did score one good point, though, saying Google has betrayed the weakness of its fair use claim in the way it has continually revised its description of the program.
Almost exactly one year ago, Google unveiled its "library initiative" only to re-brand it several months later as a "publisher program" following a wave of negative press. This, however, did little to ease tensions and eventually Google decided to halt all book scanning (until this past November) while they tried to smooth things over with the publishers. Even so, lawsuits were filed, despite Google's offer of an "opt-out" option for publishers, allowing them to request that certain titles not be included in the search index. This more or less created an analog to the "implied consent" principle that legitimates search engines caching web pages with "spider" programs that crawl the net looking for new material.
In that case, there is a machine-to-machine communication taking place and web page owners are free to insert programs that instruct spiders not to cache, or can simply place certain content behind a firewall. By offering an "opt-out" option to publishers, Google enables essentially the same sort of communication. Adler's point (and this was echoed more succinctly by a smart question from the audience) was that if Google's fair use claim is so air-tight, then why offer this middle ground? Why all these efforts to mollify publishers without actually negotiating a license? (I am definitely concerned that Google's efforts to quell what probably should have been an anticipated negative reaction from the publishing industry will end up undercutting its legal position.)
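For reference, the web-caching analogy rests on the robots exclusion convention: a site owner opts out by placing a plain-text robots.txt file at the site root, which well-behaved spiders consult before crawling. A rough illustration (the paths and crawler name here are hypothetical):

```text
# http://example.com/robots.txt -- checked by compliant crawlers before indexing

User-agent: *            # rules for all crawlers
Disallow: /private/      # don't crawl or cache anything under /private/

User-agent: Googlebot    # rules for one specific crawler
Disallow: /              # exclude this crawler from the entire site
```

Individual pages can also opt out via a meta tag in the HTML head. Google's publisher opt-out offers book publishers essentially the same kind of signal, only by letter and web form rather than machine-to-machine.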
Crawford came back with some nice points, most significantly that the publishers were trying to make a pretty egregious "double dip" into the value of their books. Google, by creating a searchable digital index of book texts -- "a card catalogue on steroids," as she put it -- and even generating revenue by placing ads alongside search results, is making a transformative use of the published material and should not have to seek permission. Google had a good idea. And it is an eminently fair use.
And the idea is not Google's alone; they just had it first and are using it to gain a competitive advantage over their search engine rivals, who, in turn, have tried to get in on the game with the Open Content Alliance (which, incidentally, has decided not to make a stand on fair use as Google has, and is doing all its scanning and indexing in the context of license agreements). Publishers, too, are welcome to build their own databases and to make them crawl-able by search engines. Earlier this week, HarperCollins announced it would be doing exactly that with about 20,000 of its titles. Aiken and Adler say that if anyone can scan books and make a search engine, then all hell will break loose and millions of digital copies will be leaked onto the web. Crawford shot back that this lawsuit is not about net security issues; it is about fair use.
But once the security cat was let out of the bag, the room turned noticeably against Google (perhaps due to a preponderance of publishing lawyers in the audience). Aiken and Adler worked hard to stir up anxiety about rampant ebook piracy, even as Crawford repeatedly tried to keep the discussion on course. It was very interesting to hear, right from the horse's mouth, that the Authors' Guild and AAP both are convinced that the ebook market, tiny as it currently is, is within a few years of exploding, pending the release of some sort of ipod-like gadget for text. At that point, they say, Google will have gained a huge strategic advantage off the back of appropriated content.
Their argument hinges on the fourth determining factor in the fair use exception, which evaluates "the effect of the use upon the potential market for or value of the copyrighted work." So the publishers are suing because Google might be cornering a potential market!!! (Crawford goes further into this in her wrap-up) Of course, if Google wanted to go into the ebook business using the material in their database, there would have to be a licensing agreement, otherwise they really would be pirating. But the suits are not about a future market, they are about creating a search service, which should be ruled fair use. If publishers are so worried about the future ebook market, then they should start planning for business.
To echo Crawford, I sincerely hope these cases reach the court and are not settled beforehand. Larger concerns about Google's expansionist program aside, I think they have made a very brave stand on the principle of fair use, the essential breathing space carved out within our over-extended copyright laws. Crawford reminded the room that intellectual property is NOT like physical property, over which the owner has nearly unlimited rights. Copyright is a "temporary statutory monopoly" originally granted ("with hesitation," Crawford adds) in order to incentivize creative expression and the production of ideas. The internet scares the old-guard publishing industry because it poses so many threats to the security of their product. These threats are certainly significant, but they are not the subject of these lawsuits, nor are they Google's, or any search engine's, fault. The rise of the net should not become a pretext for limiting or abolishing fair use.
sober thoughts on google: privatization and privacy 11.30.2005, 8:18 AM
Siva Vaidhyanathan has written an excellent essay for the Chronicle of Higher Education on the "risky gamble" of Google's book-scanning project -- some of the most measured, carefully considered comments I've yet seen on the issue. His concerns are not so much for the authors and publishers that have filed suit (on the contrary, he believes they are likely to benefit from Google's service), but for the general public and the future of libraries. Outsourcing to a private company the vital task of digitizing collections may prove to have been a grave mistake on the part of Google's partner libraries. Siva:
The long-term risk of privatization is simple: Companies change and fail. Libraries and universities last... Libraries should not be relinquishing their core duties to private corporations for the sake of expediency. Whichever side wins in court, we as a culture have lost sight of the ways that human beings, archives, indexes, and institutions interact to generate, preserve, revise, and distribute knowledge. We have become obsessed with seeing everything in the universe as "information" to be linked and ranked. We have focused on quantity and convenience at the expense of the richness and serendipity of the full library experience. We are making a tremendous mistake.
This essay contains in abundance what has largely been missing from the Google books debate: intellectual courage. Vaidhyanathan, an intellectual property scholar and "avowed open-source, open-access advocate," easily could have gone the predictable route of scolding the copyright conservatives and spreading the Google gospel. But he manages to see the big picture beyond the intellectual property concerns. This is not just about economics, it's about knowledge and the public interest.
What irks me about the usual debate is that it forces you into a position of either resisting Google or being its apologist. But this fails to get at the real bind we all are in: the fact that Google provides invaluable services and yet is amassing too much power; that a private company is creating a monopoly on public information services. Sooner or later, there is bound to be a conflict of interest. That is where we, the Google-addicted public, are caught. It's more complicated than hip versus square, or good versus evil.
Here's another good piece on Google. On Monday, The New York Times ran an editorial by Adam Cohen that nicely lays out the privacy concerns:
Google says it needs the data it keeps to improve its technology, but it is doubtful it needs so much personally identifiable information. Of course, this sort of data is enormously valuable for marketing. The whole idea of "Don't be evil," though, is resisting lucrative business opportunities when they are wrong. Google should develop an overarching privacy theory that is as bold as its mission to make the world's information accessible - one that can become a model for the online world. Google is not necessarily worse than other Internet companies when it comes to privacy. But it should be doing better.
Two graduate students at Stanford in the mid-90s recognized that search engines would be the most important tools for dealing with the incredible flood of information that was then beginning to swell, so they started indexing web pages and working on algorithms. But as the company has grown, Google's admirable-sounding mission statement -- "to organize the world's information and make it universally accessible and useful" -- has become its manifest destiny, and "information" can now encompass the most private of territories.
At one point it simply meant search results -- the answers to our questions. But now it's the questions as well. Google is keeping a meticulous record of our clickstreams, piecing together an enormous database of queries, refining its search algorithms and, some say, even building a massive artificial brain (more on that later). What else might they do with all this personal information? To date, all of Google's services are free, but there may be a hidden cost.
"Don't be evil" may be the company motto, but with its IPO earlier this year, Google adopted a new ideology: they are now a public corporation. If web advertising (their sole source of revenue) levels off, then investors currently high on $400+ shares will start clamoring for Google to maintain profits. "Don't be evil to us!" they will cry. And what will Google do then?
Posted by ben vershbow at 08:18 AM
tags: Copyright and Copyleft , Libraries, Search and the Web , books , copyright , ethics , google , google_book_search , google_print , intellectual_property , libraries , library , literature , privacy , publishing , university
having browsed google print a bit more... 11.14.2005, 4:53 AM
...I realize I was over-hasty in dismissing the recent additions made since book scanning resumed earlier this month. True, many of the fine wines in the cellar are there only for the tasting, but the vintage stuff can be drunk freely, and there are already some wonderful 19th century titles, at this point mostly from Harvard. The surest way to find them is to search by date, or by title and date. Specify a date range in advanced search or simply enter, for example, "date: 1890" and a wealth of fully accessible texts comes up, any of which can be linked to from a syllabus. An astonishing resource for teachers and students.
The conclusion: Google Print really is shaping up to be a library, that is, of the world pre-1923 -- the current line of demarcation between copyright and the public domain. It's a stark reminder of how over-extended copyright is. Here's an 1899 English printing of The Mahabharata:
A charming detail found on the following page is this old Harvard library stamp that got scanned along with the rest:
Posted by ben vershbow at 04:53 AM
tags: Copyright and Copyleft , Libraries, Search and the Web , OCR , copyright , ebook , fair_use , google , google_print , library , mahabharata , scan
google print's not-so-public domain 11.03.2005, 4:16 PM
Google's first batch of public domain book scans is now online, representing a smattering of classics and curiosities from the collections of libraries participating in Google Print. Essentially snapshots of books, they're not particularly comfortable to read, but they are keyword-searchable and, since no copyright applies, fully accessible.
The problem is, there really isn't all that much there. Google's gotten a lot of bad press for its supposedly cavalier attitude toward copyright, but spend a few minutes browsing Google Print and you'll see just how publisher-centric the whole affair is. The idea of a text being in the public domain really doesn't amount to much if you're only talking about antique manuscripts, and these are the only books that they've made fully accessible. Daisy Miller's copyright expired long ago but, with the exception of Harvard's illustrated 1892 copy, all the available scanned editions are owned by modern publishers and are therefore only snippeted. This is not an online library, it's a marketing program. Google Print will undeniably have its uses, but we shouldn't confuse it with a library.
(An interesting offering from the stacks of the New York Public Library is this mid-19th century biographic registry of the wealthy burghers of New York: "Capitalists whose wealth is estimated at one hundred thousand dollars and upwards...")
Posted by ben vershbow at 04:16 PM
tags: Copyright and Copyleft , Libraries, Search and the Web , OCR , books , copyright , ebook , google , google_print , library , literature , public_domain , scan
the creeping (digital) death of fair use 11.02.2005, 1:13 PM
Meant to post about this last week but it got lost in the shuffle... In case anyone missed it, Tarleton Gillespie of Cornell has published a good piece in Inside Higher Ed about how sneaky settings in course management software are effectively eating away at fair use rights in the academy. Public debate tends to focus on the music and movie industries and the ever more fiendish anti-piracy restrictions they build into their products (the latest being the horrendous "analog hole"). But a similar thing is going on in education and it is decidedly under-discussed.
Gillespie draws our attention to the "Copyright Permissions Building Block," a new add-on for the Blackboard course management platform that automatically obtains copyright clearances for any materials a teacher puts into the system. It's billed as a time-saver, a friendly chauffeur to guide you through the confounding back alleys of copyright.
But is it necessary? Gillespie, for one, is concerned that this streamlining mechanism encourages permission-seeking that isn't really required, that teachers should just invoke fair use. To be sure, a good many instructors never bother with permissions anyway, but if they stop to think about it, they probably feel that they are doing something wrong. Blackboard, by sneakily making permissions-seeking the default, plays to this misplaced guilt, lulling teachers away from awareness of their essential rights. It's a disturbing trend, since a right not sufficiently exercised is likely to wither away.
Fair use is what oxygenates the bloodstream of education, allowing ideas to be ideas, not commodities. Universities, and their primary fair use organs, libraries, shouldn't be subjected to the same extortionist policies of the mainstream copyright regime, which, like some corrupt local construction authority, requires dozens of permits to set up a simple grocery store. Fair use was written explicitly into law in 1976 to guarantee protection. But the market tends to find a way, and code is its latest, and most insidious, weapon.
Amazingly, few academics are speaking out. John Holbo, writing on The Valve, wonders:
Why aren’t academics - in the humanities in particular - more exercised by recent developments in copyright law? Specifically, why aren’t they outraged by the prospect of indefinite copyright extension?...
...It seems to me odd, not because overextended copyright is the most pressing issue in 2005 but because it seems like a social/cultural/political/economic issue that recommends itself as well suited to be taken up by academics - starting with the fact that it is right here on their professional doorstep...
Most obviously on the doorstep is Google, currently mired in legal unpleasantness for its book-scanning ambitions and the controversial interpretation of fair use that undergirds them. Why aren't the universities making a clearer statement about this? In defense? In concern? Soon, when search engines move in earnest into video and sound, the shit will really hit the fan. The academy should be preparing for this, staking out ground for the healthy development of multimedia scholarship and literature that necessitates quotation from other "texts" such as film, television and music, and for which these searchable archives will be an essential resource.
Fair use seems to be shrinking at just the moment it should be expanding, yet few are speaking out.
google is sued... again 10.20.2005, 8:08 AM
This time by publishers. Penguin Group USA, McGraw-Hill, Pearson Education, Simon & Schuster and John Wiley & Sons. The gripe is the same as with the Authors' Guild, which filed suit last month alleging "massive copyright infringement." Publishers fear a dangerous precedent is set by Google's scanning of books to construct what amounts to a giant card catalogue on the web. Google claims "fair use" (see rationale), again pointing out that for copyrighted works only tiny "snippets" of text are displayed around keywords (though perhaps this is not yet fully in effect - I was searching around in this book and was able to look at quite a lot).
Google calls the publishers' suit "near-sighted." And it probably is. The benefit to readers and researchers will be tremendous, as will (Google is eager to point out) the exposure for authors and publishers. But Google Print is undoubtedly an earth-shaking program. Look at the reaction in Europe, where alarm bells rung by France warned of cultural imperialism and an English-drenched web. Heads of state and culture convened and initial plans for a European digital library have been drawn up.
What the transatlantic flap makes clear is that Google's book scanning touches a deep nerve, and the argument over intellectual property, significant though it is, distracts from a more profound human anxiety -- an anxiety about the form of culture and the shape of thoughts. If we grope back through the millennia, we can find an analogy in the invention of writing.
The shift from oral to written language froze speech into stable strings that could be transmitted and stored over distance and time. This change not only affected the modes of communication, it dramatically refigured the cognitive makeup of human beings (as McLuhan, Ong and others have described). We are currently going through another such shift. The digital takes the freezing medium of text and throws it back into fluidity. Like the melting of polar ice caps, it unsettles equilibriums, changes weather patterns. It is a lot to adjust to, and we wonder if our great-great-grandchildren will literally think differently from us.
But in spite of this disorienting new fluidity, we still have print, we still have the book. And actually, Google Print in many ways affirms this since its search returns will point to print retailers and brick-and-mortar libraries. Yet the fact remains that the canon is being scanned, with implications we can't fully perceive, and future uses we can't fully predict, and so it is understandable that many are unnerved. The ice is really beginning to melt.
In Phaedrus, Plato expresses a similar anxiety about the invention of writing. He tells the tale of Theuth, an Egyptian deity who goes around spreading the new technology, and one day encounters a skeptic in King Thamus:
...you who are the father of letters, from a paternal love of your own children have been led to attribute to them a power opposite to that which they in fact possess. For this discovery of yours will create forgetfulness in the minds of those who learn to use it; they will not exercise their memories, but, trusting in external, foreign marks, they will not bring things to remembrance from within themselves. You have discovered a remedy not for memory, but for reminding. You offer your students the appearance of wisdom, not true wisdom. They will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
As I type, I'm exhibiting wisdom without the reality. I've read Plato, but nowhere near exhaustively. Yet I can slash and weave texts on the web in seconds, throw together a blog entry and send it screeching into the commons. And with Google Print I can get the quote I need and let the rest of the book rot behind the security fence. This fluidity is dangerous because it makes connections so easy. Do we know what we are connecting?
Posted by ben vershbow at 08:08 AM
tags: Copyright and Copyleft , Libraries, Search and the Web , Transliteracies , copyright , google , literacy , mcluhan , ong , plato , publishing , search , web
copyright lawyers remain richest professionals 09.20.2005, 12:50 PM
Or so is the case in Korea, where the custodians of intellectual property law ranked first (apparently for the sixth straight year) in a recent personal income survey. An interesting nugget blown down the pipeline from Korean newspaper Chosun Ilbo, in an article barely longer than its headline. Though I am only able to explore the English-language edition, it seems to be a newspaper with no end of information, but little in the way of analysis. One has the feeling of reading oil, a lubricant for the economic wheels that have delivered a war-torn and psychologically divided nation into material prosperity. Korea is now a major regional power of the so-called global information economy.
The Chosun trifle nicely animates the highly abstract, but fascinating "A Hacker Manifesto" by McKenzie Wark, which I recently began reading. The manifesto is a Marxist tract for the information age, redefining the eternal class struggle in terms of intellectual property - the post-capital form of property - which is controlled by a new ruling class, the "vectoralists." The vectoralists - Bill Gates, Rupert Murdoch, or the big pharmaceutical companies would be the most obvious examples - control the vectors, or channels, of communication, and seek to subjugate the "hackers," who Wark defines as a newly coherent class of idea makers - programmers, inventors, artists and philosophers. It's an important book, and convincingly argues why the intellectual property debate is central in the struggle for liberty.
That the vectoralist class has replaced capital as the dominant exploiting class can be seen in the form that the leading corporations take. These firms divest themselves of their productive capacity, as this is no longer a source of power. They rely on a competing mass of capitalist contractors for the manufacture of their products. Their power lies in monopolizing intellectual property -- patents, copyrights and trademarks -- and the means of reproducing their value -- the vectors of communication. The privatization of information becomes the dominant, rather than a subsidiary, aspect of commodified life.
He goes on to quote from Naomi Klein:
"There is a certain logic to this progression: first, a select group of manufacturers transcend their connection to earthbound products, then, with marketing elevated as the pinnacle of their business, they attempt to alter marketing's social status as a commercial interruption and replace it with seamless integration."
Posted by ben vershbow at 12:50 PM
tags: Copyright and Copyleft , IT , capitalism , class , communism , copyleft , copyright , hacker , hacking , intellectualproperty , korea , law , lawyer , manifesto , marxism , naomiklein , patent , seoul , vector , vectoralist , wark
copyright 101 08.25.2005, 12:36 PM
Richard Lanham, the godfather of electronic text, has written a wonderful piece in Academic Commons calling for a course in copyright for all undergraduates. Lanham, a UCLA English professor who has had a significant second career as an expert witness in copyright cases, gives one of the more cogent summaries of the copyright morass we find ourselves in as the digital tide overwhelms previous notions of property and ownership.
the open source curriculum: MIT's opencourseware 08.16.2005, 12:20 PM
Jimmy Wales of Wikipedia dreams of a free curriculum - open, high quality course materials built by a grassroots movement of volunteers (much like the one that is building the web's largest encyclopedia). But Wales is not alone in his dreaming. The Massachusetts Institute of Technology also wants to spread the wealth - but not through a groundswell.
OpenCourseWare is all about the heights. OCW publishes syllabi, course calendars, readings, exams and other study materials from over 1,100 MIT classes - "a free and open educational resource for faculty, students, and self-learners around the world." Sounds good. And it is pretty good, but it's important to know one crucial fact: at this stage, many, if not most, course readings are only listed for reference. Anything in the public domain is available for download (or is linked to a free resource like Project Gutenberg), but most of the courseware is not, in effect, open.
OpenCourseWare is most powerful as an idea, the same idea trumpeted by Wales, though they are pushing from opposite sides. MIT dispenses manna from the ivory tower while Wiki Books rallies instructors from middle and lower-tier American universities and developing countries. Both movements are in their infancy - largely untested.
There is some evidence that the OCW model is beginning to spread. Tufts University has launched its own OpenCourseWare project, as has The Johns Hopkins Bloomberg School of Public Health, and several universities in Japan (see OCW Japan portal). But to say that MIT has more institutional heft than the Wikimedia Foundation would be a serious understatement. It's relatively easy for them to launch a project like this, with the MIT stamp, and to quickly generate a favorable buzz. But in the end, how valuable will OCW be if you can't get your hands on the bulk of the materials? As more content becomes freely available through public-spirited ventures like Wiki Books and Creative Commons, as well as a myriad of independent online textbooks, OCW might need to populate its courses with such materials in order to stay relevant and useful.
But will an elite institution like MIT be willing in the end to incorporate texts and materials forged in the far-flung suburbs of the academy? MIT syllabi are stocked with quality scholarship - expensive, well-bred stuff. It's difficult to imagine Wiki Books taking a seat among such high class company. And so it's equally difficult to tell, for an institution like MIT, whether OCW is a sign of healthy adaptation or inevitable erosion. Questions like these point to the profound changes that will rock the modern university as the web levels and obsolesces the old hierarchies - as profound as the upheavals in Europe around the dawn of moveable type.
electronic textbook program gets real (slightly) 08.15.2005, 1:18 PM
So, the pilot e-textbook program (see post) on trial this fall at Princeton, the University of Utah and nearly a dozen other universities is modifying initial plans to make digital textbooks expire after five months, extending terms to at least a year and, in some cases, scrapping the limit altogether. Congratulations to publishers for bravely pushing their program to the bare minimum.
See "Publishers loosen rules on e-textbooks" in CNET.
google halts book scans until november 08.12.2005, 2:28 PM
Faced with intense pressure from publishers since it announced its Print and Library projects, Google has decided to back down, at least somewhat, from its ambitious program to scan major library collections and make them searchable online. Until now, Google has defended its project as falling under "fair use," but publishers have not been convinced. From the Google blog:
We think most publishers and authors will choose to participate in the publisher program in order to introduce their work to countless readers around the world. But we know that not everyone agrees, and we want to do our best to respect their views too. So now, any and all copyright holders – both Google Print partners and non-partners – can tell us which books they’d prefer that we not scan if we find them in a library. To allow plenty of time to review these new options, we won’t scan any in-copyright books from now until this November.
MIT Technology Review homes in on Google's hubris:
Seems copyright owners have problems with the effort, and who can really blame them--copyright protection is, after all, one way publishers make their money. Somewhat amazingly, Google wants copyright owners to opt-out of their program, instead of Google having to do the work of contacting copyright owners to get them to opt-in.
tired of feeling so used, textbook publishers go digital 08.10.2005, 3:42 PM
CNET News reports that ten schools, including Princeton, the University of Oregon, and the University of Utah, are to participate this fall in a trial program in which college bookstores will offer digital editions of high-demand titles at a 33% mark-down from print prices. In exchange for these enormous savings, students get to download one intensely straitjacketed .pdf file - a book that is readable on only one machine, cannot be printed out in full, and will expire after 150 days.
Some of America's biggest textbook publishers, including McGraw-Hill, Houghton Mifflin, John Wiley & Sons, and Thomson Learning are offering digital titles in the program through wholesaler MBS Textbook Exchange. Their aim? To tempt cash-strapped students away from used textbooks, the bane of the textbook industry. All in all, it's a cynical move that implicitly acknowledges the absurdly inflated price of print textbooks, yet offers only token relief, trying to pass off self-destructing, digital facsimiles as a reasonable substitute for a perfectly durable, slightly dinged used book.
What the textbook publishers ought to be doing is cultivating a more creative vision of the digital textbook, and getting over their terror of online distribution, which they can only see as an intellectual property disaster. Textbook publishers should take a look around and see that there are ways to do good business online. Charge for the service, not the copy - explore syndicated content that students can subscribe to at reasonable rates. Develop new kinds of multimedia titles that can truly take advantage of the online environment. Stop spending millions on digital rights management, stop worrying about your precious copies getting stolen.
On the web, everything is a copy, and it's pointless trying to police this reality. What's meaningful is access, what's meaningful is staying up to date. Develop a good service, with consistently updated, valuable content, and students and professors will buy in. If the textbook industry does not wake up and adapt, they could find themselves in the ash heap. More on that to come.
publishers fire another volley at google library 07.18.2005, 12:57 PM
Last week, the Association for Learned and Professional Society Publishers (ALPSP) joined the escalating chorus of concern over the legality of Google's library project, echoing a letter from the Association of American University Presses in May warning that by digitizing library collections without the consent of publishers, Google was about to perpetrate a massive violation of copyright law. The library project has been a troublesome issue for the search king ever since it was announced last December. Resistance first came from across the Atlantic where French outrage led a unified European response to Google's perceived anglo-imperialism, resulting in plans to establish a European digital library. More recently, it has come from the anglos themselves, namely publishers, who, in the case of the ALPSP, "absolutely dispute" Google's claim that the project falls within the "fair use" section of the US Copyright Act. From the ALPSP statement (download PDF):
The Association of Learned and Professional Society Publishers calls on Google to cease unlicensed digitisation of copyright materials with immediate effect, and to enter into urgent discussions with representatives of the publishing industry in order to arrive at an appropriate licensing solution for ‘Google Print for Libraries’. We cannot believe that a business which prides itself on its cooperation with publishers could seriously wish to build part of its business on a basis of copyright infringement.
In the relatively brief history of intellectual property, libraries have functioned as a fair use zone - a haven for the cultivation of minds, insulated from the marketplace of ideas. As the web breaks down boundaries separating readers from remote collections, with Google stepping in as chief wrecking ball, the idea of fair use is being severely tested.
who owns ideas? 06.25.2005, 5:26 PM
It is the nature of digital technologies that every use produces a copy. Thus, it is the nature of a copyright regime like the United States', designed to regulate copies, that every use in the digital world produces a copyright question: Has this use been licensed? Is it permitted? And if not permitted, is it "fair"? Thus, reading a book in analog space may be an unregulated act. But reading an e-book is a licensed act, because reading an e-book produces a copy. Lending a book in analog space is an unregulated act. But lending an e-book is presumptively regulated. Selling a book in analog space is an unregulated act. Selling an e-book is not. In all these cases, and many more, ordinary uses that were once beyond the reach of the law now plainly fall within the scope of copyright regulation. The default in the analog world was freedom; the default in the digital world is regulation.
I'm going on a brief hiatus, so that'll be my last link for a little while. But keep checking back - Bob, Kim and Dan will be keeping the home fires burning.
pay for the service, not the copy 06.14.2005, 12:51 PM
The other day, I came across an interesting experiment with a new model of distribution and ownership on the web, something that writers, publishers and journalists should pay attention to. KeepMedia charges $4.95 a month for unlimited access to 200 mainstream periodicals (see list) spanning the last 12 years up to the present day. That's significantly less than what I pay annually for my handful of print periodical subscriptions, and gives me access to much more material (kind of like LexisNexis for the masses). Plus, you do get to "keep" - that's part of how it works (indeed, their logo is a kangaroo with a stack of magazines stuffed in her pouch). KeepMedia allows you to attach notes to articles and to store away "clippings." It also makes it easy to track subjects across publications, and has automated recommendations for related stories. I assume that stored articles will get caged off if you stop subscribing. That's what makes me nervous about the pay-for-the-service model. You don't actually get to keep anything for the long haul, unless you print it out. But KeepMedia suggests one way that newspapers and publishers might adapt to the digital age.
Right now, publishers are still stuck on the idea of individual "copies." The web - an enormous, interconnected copying machine - is inherently hostile to this idea. So publishers generally insist on digital rights management (DRM) - coded controls that restrict what you can do with a piece of media. This, almost invariably, is infuriating, and ends up unfairly punishing people who have willingly paid a fair price for an item. Pay-for-the-service models won't solve the problem entirely, but they do get away from the idea of "copies." On the web, copies are cheap, or free. But access to a library or database is valuable. It's not about how many copies are sold, it's about how many people are reading. So charge at the gate. Once people are inside, it's all you can eat. This is nothing new. People pay a flat rate for cable television, which is essentially a list of publications. You pay extra for premium channels, or pay-per-view special features, but your basic access is assured. What and how much you watch is up to you. Yahoo! is trying this right now for music. Why not do the same for newspapers, or for books? The web is combining publishing with broadcasting. Publishers and broadcasters need to adapt.
reading manga on Sony Librie 06.13.2005, 12:57 PM
Came across this Flickr photoset of Japanese comics on a Librie - Sony's electronic-ink ebook reader. Even in a photo, the reflective, print-like quality of the screen is striking. People have generally raved about the Librie's display, but are outraged by its senseless DRM policies: books self-destruct after 60 days. (discussed here and here)
Once E Ink enters the mainstream, people might flock to electronic books as rapidly and enthusiastically as they did to digital photography. Screen display technology will undoubtedly advance. The DRM problem is trickier.
(Incidentally, I found this image while browsing recent blog posts under the "ebook" tag on Technorati. Flickr images tagged with "ebook" are placed alongside. An example of how these social tagging systems are becoming interconnected.)
web news as gated community 06.10.2005, 10:25 AM
Just found out about this on diglet.. Launched in April, The National Digital Newspaper Program (NDNP) is a joint effort of the Library of Congress and the National Endowment for the Humanities to create a comprehensive web archive of the nation's public domain newspapers.
Ultimately, over a period of approximately 20 years, NDNP will create a national, digital resource of historically significant newspapers from all the states and U.S. territories published between 1836 and 1922. This searchable database will be permanently maintained at the Library of Congress (LC) and be freely accessible via the Internet.
(A similar project is getting underway in France.)
It's frustrating that this online collection will stop at 1922. Ordinary libraries maintain up-to-date periodical archives and make them available to anyone if they're willing to make the trip. But if they put those collections on the web, they'll be sued. Archives are one of the few ways newspapers have figured out to make money on the web, so they're not about to let libraries put their microfilm and periodical reading rooms online. The paradigm has flipped.. in print, you pay for the current day's edition, but the following day it ends up in the trash, or wrapping a fish. The passage of 24 hours makes it worthless. On the web, most news is free. It's the fish wrap that costs you.
The web has utterly changed what things are worth. For most people, when a news site asks them to pay, they hightail it out of there and never look back. Even being asked to register is enough to deter many readers. But come September, the New York Times will start charging a $50 annual fee for what it considers its most distinctive commodities - editorials, op-eds, and selected other features. Is a full subscription site not far off? With their prestige and vast readership, the Times might be able to pull it off. But smaller papers are afraid to start charging, even as they watch their print circulation numbers plummet. If one paper puts up a tollbooth, it instantly becomes irrelevant to millions of readers. There will always be a public highway somewhere nearby.
A friend at the Columbia School of Journalism told me that the only way newspapers can be profitable on the web is if they all join together in some sort of league and charge bulk subscription fees for universal access. If there's a wholesale move to the pay model, then readers will have no choice but to shell out. It will be like paying for cable service, where each newspaper is a separate channel. The only time you register is when you pay the initial fee. From then on, it's clear sailing.
It's a compelling idea, but could just be collective suicide for the newspapers. There will always be free news on offer somewhere. Indian and Chinese wire services might claim the market while the prestigious western press withers away. Or people will turn to state-funded media like the BBC or Xinhua. Then again, people might be willing to pay if it means unfettered access to high quality, independent journalism. And with newspapers finally making money on web subscriptions, maybe they'd start loosening up about their archives.
"an invaluable resource that they had an extremely limited role in creating" 06.09.2005, 2:11 PM
Good piece today in Wired on the transformation of scientific journals. There's a general feeling that commercial publishers like Reed Elsevier enjoy unreasonable control over an evolving body of research that should be freely available to the public. With exorbitant subscription fees, affordable only for large institutions, most journals are effectively inaccessible, and the authors retain few or no reproduction rights. Recently, however, free article databases have sprung up on the web - The Public Library of Science (PLoS), BioMed Central, and NIH's PubMed - some of which, like PLoS, have begun publishing their own journals. It's a welcome change, considering how much labor and treasure is poured into scientific publications (from funders, private and public, and from the scientists themselves), and yet how little is gotten in return. Shifting to a non-profit model, as PLoS has done, preserves much of the financial architecture that supports the production of journals, but totally revolutionizes the distribution.
PLoS journals are free and allow authors to retain their copyrights, as long as they allow their work to be freely shared and distributed (with full credit given, naturally). They also require that authors pay $1,500 from their grants, or directly from their sponsors or institutions, to have their work published. These groups pay the bulk of the $10 billion that goes to scientific and medical publishers each year, and what do they get in return? Limited access to the research they funded, and no right to reuse the information.
"It's ridiculous to give publishers complete control of an invaluable resource that they had an extremely limited role in creating," Eisen said (Eisen teaches genetics and is a founder of PLoS).
But what is in many ways the tougher question is how to shift the architecture of prestige - peer review - to these new kinds of journals.
self-destructing books 05.27.2005, 11:07 AM
In January I bought my first ebook (ISBN: B0000E68Z2), which is published by Wiley. I have one copy on my laptop and a backup on my external harddrive. Last week, I downloaded and installed Adobe Professional (writer 6.0) from our company network (Norwegian School of Management, BI) - during the installation some files from the Adobe version that I downloaded and installed when I bought the ebook (from Amazon.com UK) were deleted. Since then, I have not been able to access my ebook - I have tried to get help from our computer staff but they have not been able to help me.
Adobe thinks that I'm using another computer, while I'm not - and it didn't help to activate the computer through some Adobe DRM Activator stuff. Now I have spent at least 10 hours trying to access my ebook - hope you can help...
Boing Boing points to this story illustrating the fundamental flaws of digital rights management (DRM) - about a Norwegian prof who paid $172 for an ebook on Amazon UK only to have it turn to unreadable gibberish after updating his Acrobat software. He made several pleas for help - to Adobe, to Wiley (the publisher), and to Amazon. All were in vain. It turns out that after the story ran on Boing Boing (in the past 24 hours, I guess), Wiley finally sent a replacement copy. But the problem of built-in obsolescence in ebooks goes unaddressed.
I'm convinced that encrypting single "copies" is lunacy. For everything we gain with electronic texts - search, multimedia, connection to the network etc. - we lose much in the way of permanence and tactility. DRM software only makes the loss more painful. Publishers need to get away from the idea of selling "copies" and start experimenting with charging for access to a library of titles. You pay for the service, not for the copy. Digital books are immaterial - so the idea of the "copy" has to be revised.
Another example of old thinking with new media is the New York Public Library's ebook collection. That "copies" of electronic titles are set to expire after 21 days is not surprising. The "copy" is "returned" automatically and you sweep the expired file like a husk into the trash. What's incredible is that the library only allows one "copy" to be checked out at a time, entirely defeating one of the primary virtues of electronic books: they can always be in circulation. Clearly terrified by the implications of the new medium (or of the retribution of publishers), the NYPL keeps ebooks on an even tighter tether than they do their print books. As a result, they've set up a service that's too frustrating to use. They should rethink this idea of the single "copy" and save everyone the "quote" marks.
brush up your shakespeare 05.19.2005, 10:15 AM
"America's entertainment industry is committing slow, spectacular suicide, while one of Europe's biggest broadcasters -- the BBC -- is rushing headlong to the future, embracing innovation rather than fighting it. Unlike Hollywood, the BBC is eager and willing to work with a burgeoning group of content providers whose interests are aligned with its own: its audience."
Above is a clip from a 1913 silent film version of Hamlet, downloadable for free from the British Film Institute under the aegis of the Creative Archive - one of the few bits of free content made available so far. It feels good to make a video quotation with total impunity. Perhaps others will be inspired to take a page from BBC's book.
Google talks to the librarians 04.30.2005, 10:32 AM
Joy Weese Moll, a soon-to-be graduate of the School of Information Science and Learning Technologies at the University of Missouri, and author of the blog Wanderings of a Student Librarian, has written a useful overview of Google's Print and Scholar initiatives - actually a session report from the Association of College & Research Libraries conference earlier this month. Summarized by Moll are surprisingly harmonious remarks by Adam Smith, product manager for Google's library-related projects, and John Price Wilkin, a top librarian at the University of Michigan (and one of Google's pilot partners).
"Smith made it very clear that this project is in its infancy. Google considers itself to be an international company and intends to participate in digitization projects in other countries and other languages. Smith acknowledged that Google cannot digitize everything. Rather, Google wants to be a catalyst for digitization efforts, not the only game in town. Google’s digitization project will help them build tools that will improve the searching of digital libraries created by universities, governments, and other organizations."
Among other things, Wilkin points out that the mass digitization of library collections "has already proven to be a factor in driving clarification of intellectual property rights, including the orphan copyright issue."
find it rip it mix it share it 04.15.2005, 8:14 AM
That's the slogan for the just-launched Creative Archive License Group - a DRM-free audio/video/still image repository maintained by the BBC to provide "fuel for the creative nation." Other members include Channel 4, Open University, and the British Film Institute (bfi). Imagine if the big three US networks, PBS, NPR and the MOMA film archive were to do such a thing...
yahoo! launches creative commons search 03.24.2005, 10:54 AM
Yahoo! has unveiled a new Creative Commons search tool that makes it easier to find "some rights reserved," or flexible-copyrighted, content. This is a very progressive move on Yahoo's part, and a big boost for the alternative copyright movement. Three cheers for Yahoo! for endorsing a less restrictive model for creative work!
At the moment, Yahoo! allows you to search for CC material either on the web or in Creative Commons' own library. At least for now, it's not possible to search within different media types - i.e. video, image, etc - though you can distinguish in your search between content available "for commercial purposes," and content that you can "modify, adapt, or build upon."
UPDATE: Larry Lessig, chair of the Creative Commons project, comments on Yahoo!'s move:
"This is exciting news for us. It confirms great news about Yahoo!. I met their senior management last October. They had, imho, precisely the right vision of a future net. Not a platform for delivering whatever, but instead a platform for communities to develop. With the acquisition of Flickr, the step into blogging and now this tool to locate the welcome mats spread across the net, that vision begins to turn real."
"Yahoo adds search for 'flexible' copyright content".
your way of life could soon be illegal.. 03.22.2005, 4:17 PM
Between now and March 29, when oral arguments begin before the Supreme Court in MGM v. Grokster, the Electronic Frontier Foundation is assembling a list, one invention per day, of technologies that could be considered illegal if the movie and music industries prevail in redefining the scope of permissible copying. Email, blogs, VCRs, and xerox machines are among the gadgets listed so far.
"Ever since the Betamax ruling in 1984, inventors have been free to create new copying technologies as long as they are capable of substantial noninfringing (legal) uses. But by the end of this year, all that could change. In MGM v. Grokster, Hollywood and the recording industry are asking for the power to sue out of existence any technology that appears to be a threat, even if it passes the Betamax test. That puts at risk any copying technology that Betamax currently protects as well as any new technologies Hollywood doesn't like.
"To raise awareness about what's at stake in the Grokster case, EFF is profiling one Betamax-protected gadget every weekday until the oral arguments before the Supreme Court on March 29. Some of these examples are in fun, some more serious, but all represent general-purpose technologies that can be used for both infringing and noninfringing purposes. Check them out and pass the word along."
(via Boing Boing)
baking google's cookies 03.22.2005, 8:24 AM
Bibliotheke points to the recent adventures of Greg Duffy, a talented Texas college student who figured out how to read entire copyrighted books in Google Print by "baking" the cookies (data sent from a web server to your computer to store preferences for specific sites and pages) Google uses to impose search limits on protected material. Duffy took on the challenge largely out of curiosity, but doesn't deny that he fantasizes about his chutzpah landing him a job at Google. He hasn't been hired yet, but he did manage to attract a great deal of attention and over 10,000 hits to his site from more than 60 countries. And in the sudden commotion, he mysteriously disappeared from Google's web search results, only to reappear shortly after Google Print had been fixed to repel the hack. Any connection between the two events was cheerily denied by a Google representative writing in the comments on Duffy's blog under the nom de plume "Google Guy." Conspiracy theories abound, but Duffy has retained an excellent sense of humor throughout the whole affair, and still makes no secret of his hopes that sheer audacity and display of chops might yet get him hired by the juggernaut he so admires and loves to tease.
It's a bit tech-heavy, but it's worth reading his post and the updates that follow, if for no other reason than for his amusing riff on the cookie motif.
"So recently I wrote some software to grab and store up a bunch of cookies, keep them for more than 24 hours, and then automate searching for pages by this method. If I wanted to view page 100, the software would search for it and attempt to extract the image with a regular expression. If that doesn't work, it will search for page 99 and extract the "next page" link to get to page 100. It will continue doing this for page 101, 98, and 102 until it finds the correct page. Whenever a cookie would hit the hard limit, I'd replace it with a new cookie from the queue. By grabbing the "next" and "previous" links automatically in this "inductive" fashion and using the search for skipping, I could view an entire book on Google Print with one click every time. I later modified the software to spit out a PDF of the book. I used simple components like GoogleCookie (cookie with accessible properties), GoogleCookieOven (queue with "baking time", i.e. it only pops when the head of the queue is old enough to get the ability to search), and GoogleCookieBaker (thread that keeps the oven full of baking cookies by querying Google for new ones when the number drops below a certain threshold)."
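Stripped of the scraping details, the scheme Duffy describes is a rate-limit rotation trick: keep a queue of cookies, let each one "bake" until it's old enough to use, and swap in a fresh one whenever the current cookie hits its view limit. Here is a minimal sketch of that bookkeeping in Python. Every name in it (the classes, the tick-based clock, the per-cookie limit) is invented for illustration; it simulates only the rotation logic and makes no network requests, since the actual Google Print endpoints and limits he exploited are long gone.

```python
import collections

class CookieOven:
    """Queue that only releases cookies aged past `baking_time` ticks -
    a stand-in for Duffy's reported wait on Google's 24-hour limits."""
    def __init__(self, baking_time):
        self.baking_time = baking_time
        self.queue = collections.deque()  # (cookie_id, tick_added) pairs

    def add(self, cookie_id, tick):
        self.queue.append((cookie_id, tick))

    def pop_baked(self, tick):
        # Release the oldest cookie only if it has finished baking.
        if self.queue and tick - self.queue[0][1] >= self.baking_time:
            return self.queue.popleft()[0]
        return None  # nothing old enough yet

def view_pages(pages, oven, start_cookie, per_cookie_limit, tick):
    """Simulate walking a book page by page: each view spends one use of
    the current cookie; at the limit, swap in a freshly baked one."""
    cookie, uses, log = start_cookie, 0, []
    for page in pages:
        if uses == per_cookie_limit:
            fresh = oven.pop_baked(tick)
            if fresh is not None:  # if none is ready, keep retrying
                cookie, uses = fresh, 0
        log.append((page, cookie))
        uses += 1
    return log
```

Run against a hypothetical six-page book with a three-views-per-cookie limit, the log shows the hand-off from the first cookie to a baked replacement; the real hack wrapped this loop around Google Print's search and "next page" links rather than a list of page numbers.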
franco-googlian wars continue... 03.21.2005, 12:49 PM
"...news agency Agence France Press (AFP) is claiming damages of at least $17.5 million and a court order barring Google News from displaying AFP photographs, news headlines or story leads..." (story)
This recalls Virginia's post a couple months back on "the future of the news." Will news aggregators and headline-scouring robots be accused of copyright infringement? Will other news providers follow AFP's lead?
a book by Lawrence Lessig and you 03.16.2005, 3:00 PM
Lawrence Lessig is inviting everyone to help revise and update his landmark 1999 book Code and Other Laws of Cyberspace on a public wiki, as a way of drawing "upon the creativity and knowledge of the community." (story in Mercury News)
From the site: "This is an online, collaborative book update; a first of its kind. Once the project nears completion, Professor Lessig will take the contents of this wiki and ready it for publication. The resulting book, Code v.2, will be published in late 2005 by Basic Books. All royalties, including the book advance, will be donated to Creative Commons."
As an experiment with networked books, this has a couple of big things going for it. For one, it is a pre-existent work with a large reader community. Like a stone tossed in the water, it creates ripples. Version 2 might benefit by incorporating these ripples. Secondly, Lessig will retain ultimate editorial authority, so we can be pretty sure that the final revision will be focused and well-shaped. And lastly, Lessig's subject is so vast, so multi-dimensional, that the book will almost certainly benefit from broad reader/writer input. And for someone like Lessig, who is as much an activist as a scholar, constantly running around the world spreading his ideas, it is a nice way of asking for assistance in the time-consuming process of updating of a book that the world needs sooner rather than later.
Incidentally, Lessig will be appearing on April 7 at the New York Public Library with Wilco frontman Jeff Tweedy to discuss the question, "Who Owns Culture?" moderated by Steven Johnson. (thanks, NEWSgrist)
Tweedy says: "A piece of art is not a loaf of bread. When someone steals a loaf of bread from the store, that's it. The loaf of bread is gone. When someone downloads a piece of music, it's just data until the listener puts that music back together with their own ears, their mind, their subjective..."
another great brief in the fight for p2p 03.03.2005, 10:20 PM
There's a growing body of legal literature defending peer-to-peer file sharing in the lead-up to the Supreme Court showdown, MGM vs. Grokster. Here's one of the latest additions, an amicus brief filed today by the Free Software Foundation and New Yorkers For Fair Use. The following excerpt nicely skewers the petitioners (thanks again, Boing Boing):
"At the heart of Petitioners' argument is an arrogant and unreasonable claim--even if made to the legislature empowered to determine such a general issue of social policy--that the Internet must be designed for the convenience of their business model, and to the extent that its design reflects other concerns, the Internet should be illegal.
Petitioners' view of what constitutes the foundation of copyright law in the digital age is as notable for its carefully-assumed air of technical naivete as for the audacity with which it identifies their financial interest with the purpose of the entire legal regime.
Despite petitioners' apocalyptic rhetoric, this case follows a familiar pattern in the history of copyright: incumbent rights-holders have often objected to new technologies of distribution that force innovation on the understandably reluctant monopolist."
mgm vs. grokster: brief update 03.02.2005, 10:30 PM
With MGM vs. Grokster fast approaching (initial hearings have been set for March 29), several amicus briefs have recently been filed with the Supreme Court in impassioned and eloquent defense of peer-to-peer file sharing. Notable among them are a brief filed Tuesday by a group of 17 computer scientists, and another filed today by 22 media studies scholars. Each accuses both the court and the petitioners (MGM) of "fundamental misunderstanding." Of technology, in the view of the scientists. And in the view of the scholars, of "fair use" and the importance of p2p in the academy and in the construction of collective memory. To drive home this last point, the scholars direct our attention to the landmark 1984 Sony vs. Universal case in which the legality of VCRs (VTRs at the time) was challenged and ultimately upheld. There's no doubt that MGM vs. Grokster is the Sony vs. Universal for this generation.
From the media scholars:
"...the unambiguous declaration by the Ninth Circuit Court of Appeals in Grokster -- that the standards this Court set forth in Sony are alive and appropriate for this digital age -- does grant educators comfort and confidence. Nor do certain “compromise” positions outlined in briefs submitted in support of neither party in this case protect the interests of educators and researchers. Ultimately, we wish to encourage the Court to consider that Sony did more than legalize home taping and “time shifting.” It democratized participation in the project of recording the collective memory of this dynamic nation. Sony went beyond the traditional parameters of fair use and showed the potential for an emerging set of clearly articulated “users’ rights.” Teachers, scholars, critics, journalists, fans, and hobbyists would all benefit greatly under a regime that offered them clarity and confidence about how they interact with works and the copyright system that governs them."
little red book 02.09.2005, 2:41 PM
Very interesting review of McKenzie Wark's A Hacker Manifesto, recently published by Harvard University Press. In the manifesto (shorter version), Wark outlines a class struggle over "vectors" - the information channels of a society. In his words:
"With the commodification of information comes its vectoralisation. Extracting a surplus from information requires technologies capable of transporting information through space, but also through time. The archive is a vector through time just as communication is a vector that crosses space. The vectoral class comes into its own once it is in possession of powerful technologies for vectoralising information. The vectoral class may commodify information stocks, flows, or vectors themselves. A stock of information is an archive, a body of information maintained through time that has enduring value. A flow of information is the capacity to extract information of temporary value out of events and to distribute it widely and quickly. A vector is the means of achieving either the temporal distribution of a stock, or the spatial distribution of a flow of information. Vectoral power is generally sought through the ownership of all three aspects."
what's at stake 02.04.2005, 11:39 AM
High-definition TV pioneer and Dallas Mavericks owner Mark Cuban talks about what's at stake in the upcoming Supreme Court case MGM vs. Grokster in an article drawn from a recent post on his blog. Skip down to the section titled "Taking a Wrong Turn." There, Cuban describes what could be lost if entertainment industry giants are able to convince the court that peer-to-peer file sharing is first and foremost a tool for theft.
"In the MGM v. Grokster case, the fewer than 50 companies who control less than 1 percent of all digital information are trying to take control of innovation in the technology industry and pry it away from the rest of us. Everything our imagination creates and touches that can be made digital is at risk if Grokster loses.
"What innovations will be condemned by law before they have a chance to come to market, because they could have an impact on Hollywood and the music industry? We have no idea, and that is a very scary prospect."
sticking it to the gatekeepers 01.27.2005, 5:54 PM
Stranded in copyright limbo, the landmark civil rights documentary Eyes on the Prize cannot currently be released on DVD or broadcast on television. But music activism group Downhill Battle has recently taken matters into its own hands by digitizing the 14-part series and making it available for peer-to-peer distribution. In addition, they've launched the Eyes on the Screen initiative to help communities coordinate local screenings of the film in time for Black History Month.
This could go down in history as an important skirmish in the copyright wars - when the public began to act in blatant defiance of the copyright gatekeepers. Rarely have the absurdities of the modern intellectual property system been cast in such stark relief.
But the brave souls at Downhill Battle are wrong to call this act of civil disobedience "fair use" (see Wired article). Few would argue that taking a 14-part film, not in the public domain, and slapping it on public access television is fair use. The big battles over what is and isn't fair use are yet to come, and they will be crucial in defining the parameters of scholarly and artistic production in the digital era. Let's not give ammunition to those who would further tighten the screws by blurring the distinction between acts of protest and legitimate fair use. The Eyes on the Prize case is about the public interest plain and simple. About protesting a system that allows public treasures to languish in forced obscurity.
wheels of (in)justice 01.26.2005, 3:00 PM
A blizzard of court papers blew out of the entertainment industry yesterday in anticipation of peer-to-peer file sharing's big day in court. Shouts of "piracy!" and "stop thief!" were common themes in this choir of outrage, whose ranks ranged across the legal, political and entertainment worlds, from Kenneth Starr to Orrin Hatch to Avril Lavigne. The Supreme Court is set to begin hearing oral arguments on March 29 in the case that pits Grokster and StreamCast against such industry heavyweights as the RIAA and MPAA.
What's needed at such a critical juncture is an extended, nuanced discussion of the nature of intellectual property in the digital age, and of how these powerful new sharing technologies can be reasonably tempered to ensure that artists receive compensation for their work while preserving dynamic modes of exchange. But what seems more likely is a big judicial slugfest... brace yourself for a bloody spring.
the untold (until now) history of the russian web 01.23.2005, 8:10 PM
Great piece in this week's Context, the weekly arts and ideas section of The Moscow Times, about the first history of the Russian web yet written. Feeling the Elephant (Oshchupyvaya Slona), by writer, journalist, and core member of Russia's online literati Sergei Kuznetsov, was published late last year and has already engendered a small storm of controversy over alleged omissions, mischaracterizations and the like. But Kuznetsov says he never set out to write a "proper" history, simply an insider's account - bias, warp and all - of the literary web he played a central role in creating. This lack of propriety is not altogether unfitting, since there's much in Russia's neck of the web that, by our stricter standards, isn't at all proper, and this goes beyond mail order brides... Intellectual property is only a fledgling idea there, and you can easily find practically any text online, from Pushkin to Pelevin, including fresh-off-the-press, protected material. The most popular of these literary indexes is Maxim Moshkov's Lib.ru.
This loose, free-wheeling web culture has been both a boon and a curse to Russia's writers and readers. On the one hand, it is easy and free to publish, and likewise easy and free to read. But with the exception of the most popular authors - the churners out of mysteries and bodice rippers - it's damn hard to make a living writing in Russia (much harder than in the West, which is tough enough), and all this free literary trafficking, while rousing and diverse, bitterly emphasizes the underlying poverty. This raises a question just as relevant here as anywhere else: how can writers continue to make the web richer without becoming impoverished themselves?
(photo: Vladimir Filonov / Moscow Times)
writer invites remix of story 01.19.2005, 6:03 PM
Sci fi writer Benjamin Rosenbaum announced today that he has placed his short story Start the Clock under a Creative Commons Attribution-Noncommercial-Sharealike license, inviting readers and writers not only to share and reproduce (non-commercially) his work, but also to alter, rewrite, or remix it as they like.
This is not the first story Rosenbaum has made available under a CC license, but it is the first time he has explicitly welcomed derivative works and alteration of his material. Start the Clock began as part of Frank Wu's Exquisite Corpuscule project, a riff on the classic parlor game the Exquisite Corpse, in which phrases, even stories, are woven from the free associations of the players. Supposedly, the first phrase ever yielded by this method was "the exquisite corpse will drink the young wine," hence the name. So "unfreezing" this story is, in a way, only the most recent step in an ongoing experiment.
It also marks the latest stage of a writer's hard but fruitful struggle with the notions of sharing, permission, and piracy in a digital world. Writing on his blog last July, he ruminated on the evolution of his ideas vis-à-vis copyright:
"So I kept intending to write the piratical bloggers nice letters, full of appreciation, expressing how honored I was, while gently educating them on copyright law. And then magnanimously assigning them noncommercial reprint rights ex post facto, in return for a link to my site.
"It was never that inspiring a project though, and I never did it. Something felt weird about it. Like I was greeting a spontaneous expression of love with rules-lawyering. It would be a different matter if I firmly believed pirates were a scourge of artists, like Madonna and Harlan Ellison do. But I don't. I think there will be some ugly growing pains as antiquated business and revenue models adjust to cheap pervasive networking power and digitalization, but that ultimately freeloaders are useful. So it was like I'd be sending these letters on some kind of pedantic principle."
german library obtains "license to copy" 01.19.2005, 1:53 PM
Germany's national library, Deutsche Bibliothek, has been made exempt from key provisions of the European Union Copyright Directive, giving it the exclusive right "to crack and duplicate DRM-protected e-books and other digital media such as CD-Audio and CD-Roms" (check out post on mobileread).
Further in mobileread: "The Deutsche Bibliothek achieved an agreement with the German Federation of the Phonographic Industry and the German Booksellers and Publishers Association after it became obvious that copy protections would not only annoy teenage school boys, but also prohibit the library from fulfilling its legal mandate to collect, process and bibliographic index important German and German-language based works."
Heartening to see a top-down hack like this.
the chilling effect of copyright law 01.18.2005, 4:35 PM
Excellent article in the Toronto Globe and Mail today about the chilling effect of copyright law on documentary films. The initial subject is Eyes on the Prize, the superb documentary of the civil rights struggle in the U.S., which is no longer shown on TV or distributed because the rights to its historic footage have run out and the producers cannot afford to re-purchase them in today's hyper-inflated rights market.
p2p for profit 12.22.2004, 3:05 PM
The Washington Post reports that Mashboxx, the latest venture of Grokster president Wayne Rosso, intends to "clean up and legitimize" peer-to-peer music file sharing on the Internet, and to give record companies a piece of the pie (in spite of Rosso's past demonization of said companies). Mashboxx will employ SnoCap - a "copyright management interface" technology developed by Napster creator Shawn Fanning, enabling copyright owners to trace the movement of files containing their content, and to extract fees from the people sharing them. SnoCap essentially "fingerprints" files so that content owners can keep track of them and set the rules and rates of their trading.
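The basic idea of a fingerprint registry can be sketched in a few lines. This is only an illustration, not SnoCap's actual system: real audio fingerprinting uses proprietary acoustic analysis that survives re-encoding, whereas the toy version below uses a plain cryptographic hash (which only matches byte-identical files), and the registry, owner, and price fields are all hypothetical.

```python
import hashlib

# Hypothetical registry mapping fingerprints to trading rules set by
# content owners. A real service would hold this server-side.
REGISTRY = {}

def fingerprint(data: bytes) -> str:
    """Derive a stable identifier from a file's contents."""
    return hashlib.sha256(data).hexdigest()

def register(data: bytes, owner: str, price_cents: int) -> str:
    """An owner registers a file and sets the rate for sharing it."""
    fp = fingerprint(data)
    REGISTRY[fp] = {"owner": owner, "price_cents": price_cents}
    return fp

def lookup(data: bytes):
    """When a file moves on the network, look up its rules (or None)."""
    return REGISTRY.get(fingerprint(data))

song = b"...encoded audio bytes..."
register(song, owner="ExampleLabel", price_cents=5)
print(lookup(song))
```

Renaming the file changes nothing here, since the fingerprint is computed from the content itself; that content-addressing, rather than filename matching, is what makes the tracking scheme workable at all.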
I suspect that the rates will be too high, of the dollar-a-song variety, which seems downright exploitative given the ease and inherent cheapness of p2p networks. While I'm cautiously optimistic that the Mashboxx move presages an eventual overhaul of the music industry, and may be a small step toward reconciling copyright concerns with networked free culture, this seems more like a hostile move to squeeze music sharers. Given the scale of the p2p phenomenon, even a nickel a song could amount to sizeable profits. Remember: it's no longer about a single point of sale, but multiple points of exchange. But the recording industry is a rapacious animal, loath to believe that it can actually make a profit without extortion. Remember the long tail...
And this issue doesn't just pertain to music sharing. As ebooks become a more frequently trafficked commodity on p2p networks, we will see the same struggle arise. Already, questions abound about Google's library initiative and readers' access to copyrighted texts. And the New York Public Library seems to think that ebooks are a threat that must be subdued before all hell breaks loose.
Lawrence Lessig on "writing" 12.11.2004, 6:17 PM
Closing the USC conference "Scholarship in the Digital Age," Lessig spoke on "free culture" and the current legal/cultural crisis that in the next few years will define the constraints on creative production for decades to come. Due to obsessive fixation by a handful of powerful media industries on the issue of piracy, the massive potential of networked digital culture that has briefly flowered in the past decade could be destroyed by draconian laws and code controls embedded in new technologies. In Lessig's words: "never in our past have fewer exercised more legal control."
Lessig elegantly picked up one of the conference's many threads, multimedia literacy, referring to the bundle of new forms of cultural and scholarly production – remixing, reusing, networking peer-to-peer, working across multiple media – as simply "writing." This is an important step in thinking about these new modes of production, and a matter of real urgency given the legal changes currently underway. The ultimate question to ask (and this is how Lessig concluded his talk) is: are we producing a legal culture in which writing is not allowed?
Posted by ben vershbow at 06:17 PM
tags: Copyright and Copyleft , Education , Remix , USC , conference , copyleft , copyright , free_culture , free_thought , intellectual_property , lessig , p2p , writing