"people talk about 'the future' being tomorrow, 'the future' is now." 01.31.2006, 4:07 PM
The artist Nam June Paik passed away on Sunday. Paik's justifiably known as the first video artist, but thinking of him as "the guy who did things with TVs" does him the disservice of neglecting how visionary his thought was – and that goes beyond his coining of the term "electronic superhighway" (in a 1974 report for the Rockefeller Foundation) to describe the increasingly ubiquitous network that surrounds us. Consider as well his vision of Utopian Laser Television, a manifesto from 1962 that argued for
a new communications medium based on hundreds of television channels. Each channel would narrowcast its own program to an audience of those who wanted the program without regard to the size of the audience. It wouldn't make a difference whether the audience was made of two viewers or two billion. It wouldn't even matter whether the programs were intelligent or ridiculous, commonly comprehensible or perfectly eccentric. The medium would make it possible for all information to be transmitted and each member of each audience would be free to select or choose his own programming based on a menu of infinitely large possibilities.
(Described by Ken Friedman in "Twelve Fluxus Ideas".) Paik had some of the particulars wrong – always the bugbear of those who would describe the future – but in essence this is a spot-on description of the Web we know and use every day. The network was the subject of his art, both directly – in his closed-circuit television sculptures, for example – and indirectly, in the thought that informed them. In 1978, he considered the problem of networks of distribution:
Marx gave much thought about the dialectics of the production and the production medium. He had thought rather simply that if workers (producers) OWNED the production's medium, everything would be fine. He did not give creative room to the DISTRIBUTION system. The problem of the art world in the ’60s and ’70s is that although the artist owns the production's medium, such as paint or brush, even sometimes a printing press, they are excluded from the highly centralized DISTRIBUTION system of the art world.
George Maciunas' Genius is the early detection of this post-Marxistic situation and he tried to seize not only the production's medium but also the DISTRIBUTION SYSTEM of the art world.
(from "George Maciunas and Fluxus", Flash Art, quoted in Owen F. Smith's "Fluxus Praxis: an exploration of connections, creativity and community".) As it was for the artists, so it is now for the rest of us: the problems of art are now the problems of the Internet. This could very easily be part of the ongoing argument about "who owns the pipes".
Paik's questions haven't gone away, and they won't be going away any time soon. I suspect that he knew this would be the case: "People talk about 'the future' being tomorrow," he said in an interview with Artnews in 1995, "'the future' is now."
artist as blogger 01.31.2006, 11:50 AM
last spring we invited Alex Itin to be our first artist-in-residence at the institute. i first met Alex in the fall of 2000, during an art festival in Dumbo. he was set up in a gallery painting portraits on pages of used books. i quite liked the paintings and got the perverse idea that it would be interesting to encourage someone who was using books in this way to work on an electronic book. i was working at Night Kitchen at the time. we had just released the beta version of TK3, the software we made for authoring and reading media-rich electronic books. we lent Alex a Mac and he made his first electronic piece, Zoodoo – a series of paintings done on paperback pages which accompanied a beautiful Amiri Baraka poem. (if you first install the free TK3 Reader you can download Zoodoo from this page.) Alex kept experimenting and over time began animating the surface of his scanned-in paintings. while there has been a long history of filmmakers who painted on the surface of film, Alex was perhaps one of the first painters to integrate video into his paintings.
|From "Self Portrait" by Alex Itin|
as a condition of his artist-in-residency we asked Alex to keep a blog in which we hoped he would write about his work as he did it. we were amazed after a few days to realize that Alex was beginning to use the blog not as a way to talk about his work, but rather as just another venue for his work. at first Alex posted paintings, drawings and photos, sometimes with a text commentary. after a while he started to include animated gifs and sound. although the artist-in-residency ended almost a year ago, Alex has been keeping up the blog. in fact, he's been on a creative tear the past few weeks. check out the last two entries -- the "thousand year crane" (be sure to start the music track) and the Chinese new year tree.
(disclaimer: i've been collecting Alex's work for six years now, so my interest in his success is not purely altruistic.)
the commissar vanishes 01.31.2006, 11:17 AM
The Lowell Sun reports that staff members of Representative Marty Meehan (Democrat, Massachusetts) have been found editing the representative's Wikipedia entry. As has been noted in a number of places (see, for example, this Slashdot discussion), Meehan's staff edited out references to his campaign promise to leave the House after eight years, among other things, and considerably brightened the picture of Meehan painted by his biography there.
Meehan's staff editing the Wikipedia doesn't appear to be illegal, as far as I can tell, even if they're trying to distort his record. It does thrust some issues about how Wikipedia works into the spotlight – much as Beppe Grillo did in Italy last week. Sunlight disinfects; this has brought up the problem of political vandalism stemming from Washington, and Wikipedia has taken the step of banning the editing of Wikipedia by all IP addresses from Congress while they try to figure out what to do about it: see the discussion here.
This is the sort of problem that was bound to come up with Wikipedia: it will be interesting to see how they attempt to surmount it. In a broad sense, trying to forcibly stop political vandalism is as much of a political statement as anything anyone in the Capitol could write. Something in me recoils from the idea of the Wikipedia banning people from editing it, even if they are politicians. The most useful contribution of the Wikipedia isn't its networked search for a neutral portrait of truth, for this will always be flawed; it's the idea that the truth is inherently in flux. Just as we should approach the mass media with an incredulous eye, we should approach Wikipedia with an incredulous eye. With Wikipedia, however, we know that we need to – and this is an advance.
google gets mid-evil 01.30.2006, 3:46 PM
At the World Economic Forum in Davos last Friday, Google CEO Eric Schmidt assured a questioner in the audience that his company had in fact thoroughly searched its soul before deciding to roll out a politically sanitized search engine in China:
We concluded that although we weren't wild about the restrictions, it was even worse to not try to serve those users at all... We actually did an evil scale and decided not to serve at all was worse evil.
who owns this space? 01.30.2006, 1:05 PM
"The Onion neither publishes nor accepts letters from its readers. It is The Onion's editorial policy that the readers should have no voice whatsoever and that The Onion newspaper shall be solely a one-way conduit of information. The editorial page is reserved for the exclusive use of the newspaper staff to advance whatever opinion or agenda it sees fit, or, in certain cases, for paid advertorials by the business community."
—Passed by a majority of the editorial board, March 17, 1873.
They've had this policy for a long time, though perhaps not since 1873. I remember seeing it (or something very similar) in the first copies of The Onion I saw, picked up during high school trips to Madison in the early 1990s. I liked the text enough to crib it for my first webpage, which has (thankfully) long since dissipated into the mists of the Internet.
I thought it was funny then, and I still do. And at the risk of tearing roses to pieces to find what makes them smell that way: it's funny, I think, because it's true. Usually, the mission statement on a newspaper's editorial page bends over backward to declare that the editorial pages belong in some sense to the readers of the newspapers as well as the editors. But really, a newspaper's editorial page – or, for that matter, the newspaper – is a one-way conduit for information: the editors, not the reader, choose what appears on it. The Onion's statement is bluntly honest about who really controls the press: the owners.
Declaring a website in 1995 to be a "one-way conduit of information" was also true, by and large, although I certainly wasn't trying to make a grand statement about communication. At that point in time, a website was something that could be read; to make a website that readers could change, you needed to know something about scripting languages. Being, by and large, the same sort of dilettante I remain, I knew nothing about such things.
Ten years on, the web allows much more direct two-way communication. Anyone can start a blog, post things, and have readers comment on them. Nobody involved in the process needs even a cursory knowledge of HTML for this to happen – it helps, of course, but it's not strictly necessary. This is an advance, but I don't need to say that at this point in time: the year of the blog was 2004.
At the Institute, we've been talking with McKenzie Wark, author of A Hacker Manifesto, about doing a book-in-process blog, like the one we've been doing with Mitchell Stephens. Over lunch with Wark a couple months back, we asked him why he, very much a man of technology, didn't have a blog already – everybody else does. His answer was interesting: he prefers the give-and-take of discussions on a list server to the post and response of the blog format. But what most stuck in my mind was his qualification for this: blogs, he suggested, are too proprietary, as they always belong to someone. This inhibits equitable discussion: somebody's already in charge because they own the discussion forum.
There's something to Wark's idea. If I have a blog and post something on it, the text of my post resides somewhere on my server (it's probably somebody else's server, but it's still my account). In most blogs, visitors can post comments. But: usually comments have to be approved by a moderator, if only to block spam. And: successful blogs even tend to disable comments entirely, at which point discourse is functionally back at the level of The Onion's editorial page. (One might note the recent experience of The Washington Post.) The authority over who is allowed to speak, and the manner in which they speak, belongs to the blog owner, who is usually not a disinterested party, being (generally) part of the conversation.
When you think about this process in terms of conversation, you realize how strange it is. Imagine David and Freddy having a conversation: David speaks freely, but for Freddy to say anything, he has to write it down and submit it to David for his approval before he can actually say it. If anyone else wanted to join the conversation, they'd also have to submit to David's authority. David's policy of refusal might vary – he might refuse everything anyone else says, or he might allow anyone to say anything. But he's still in charge of the conversation.
A quick navel-gazing moment: you might imagine that our blog is an exception to this, as it's a group blog, and a number of us regularly post on it. We've also given people outside of the Institute posting authority – during our discussion of his book, for example, we let Steven Johnson post rather than just having him comment on our posts. But the problem of authority can't be avoided. You can see it in my words: we've "given", we "let". It's ours in a sense.(1) We control who's given a login. As much as we like you, dear readers, the form in which we're conversing enforces a distinction between you & us. Sorry.
The list server model, which Wark prefers, works differently. While there might still be a moderator, the moderator's usually not part of the conversation being moderated. If David and Freddy are having a conversation, they have to submit what they're saying to Linda before they can say it. It's still mediated – and a very odd way to have a conversation! – but it's not inherently weighted towards one party of the conversation, unless your moderator goes bad. And more importantly: the message is sent to everyone on the list. Everyone gets their own copy: the text can't be said to belong to any one recipient in particular.
List servers, however more democratic a form they might be than blogs, never took off like blogs.(2) There has never been a Year of the List Server, and one suspects there might never be one. The list server, being email based, tends to be somewhat private; some aren't even publicly accessible.
Blogs comparatively trumpet themselves: they're an easy way to announce yourself to the world. This is necessary, useful, and a good part of the reason that they've caught on. But what happens once you've announced yourself? One would like to believe that when we start blogs, we're aspiring to conversation, but the form itself would seem to discourage it.
The question remains: how can we have equitable conversations online?
* * * * *
1. This same sense of ownership is usefully articulated – if elaborated to the point of absurdity – in Donald Barthelme's short story "Some of Us Had Been Threatening Our Friend Colby" which is predicated on the idea that since Colby is the narrators' friend, he belongs to them, and they have the right to do with him as they like – in this particular case, hanging him: ". . . although hanging Colby was almost certainly against the law, we had a perfect moral right to do so because he was our friend, belonged to us in various important senses, and he had after all gone too far."
2. A similar argument might be made for the style of newsgroups, which largely flourished before blogs and even the WWW. I suspect at this point newsgroup usage is considerably below that of list servers; however, it might be useful to examine the success and failures of newsgroups as a venue for communication some other time.
illusions of a borderless world 01.27.2006, 3:57 PM
A number of influential folks around the blogosphere are reluctantly endorsing Google's decision to play by China's censorship rules on its new Google.cn service -- what one local commentator calls a "eunuch version" of Google.com. Here's a sampler of opinions:
Ethan Zuckerman ("Google in China: Cause For Any Hope?"):
It's a compromise that doesn't make me happy, that probably doesn't make most of the people who work for Google very happy, but which has been carefully thought through...
In launching Google.cn, Google made an interesting decision - they did not launch versions of Gmail or Blogger, both services where users create content. This helps Google escape situations like the one Yahoo faced when the Chinese government asked for information on Shi Tao, or when MSN pulled Michael Anti's blog. This suggests to me that Google's willing to sacrifice revenue and market share in exchange for minimizing situations where they're asked to put Chinese users at risk of arrest or detention... This, in turn, gives me some cause for hope.
Rebecca MacKinnon ("Google in China: Degrees of Evil"):
At the end of the day, this compromise puts Google a little lower on the evil scale than many other internet companies in China. But is this compromise something Google should be proud of? No. They have put a foot further into the mud. Now let's see whether they get sucked in deeper or whether they end up holding their ground.
David Weinberger ("Google in China"):
If forced to choose -- as Google has been -- I'd probably do what Google is doing. It sucks, it stinks, but how would an information embargo help? It wouldn't apply pressure on the Chinese government. Chinese citizens would not be any more likely to rise up against the government because they don't have access to Google. Staying out of China would not lead to a more free China.
Doc Searls ("Doing Less Evil, Possibly"):
I believe constant engagement -- conversation, if you will -- with the Chinese government, beats picking up one's very large marbles and going home. Which seems to be the alternative.
Much as I hate to say it, this does seem to be the sensible position -- not unlike opposing America's embargo of Cuba. The logic goes that isolating Castro only serves to further isolate the Cuban people, whereas exposure to the rest of the world -- even restricted and filtered -- might, over time, loosen the state's monopoly on civic life. Of course, you might say that trading Castro for globalization is merely an exchange of one tyranny for another. But what is perhaps more interesting to ponder right now, in the wake of Google's decision, is the palpable melancholy felt in the comments above. What does it reveal about what we assume -- or used to assume -- about the internet and its relationship to politics and geography?
A favorite "what if" of recent history is what might have happened in the Soviet Union had it lasted into the internet age. Would the Kremlin have managed to secure its virtual borders? Or censor and filter the net into a state-controlled intranet -- a Union of Soviet Socialist Networks? Or would the decentralized nature of the technology, mixed with the cultural stirrings of glasnost, have toppled the totalitarian state from beneath?
Ten years ago, in the heady early days of the internet, most would probably have placed their bets against the Soviets. The Cold War was over. Some even speculated that history itself had ended, that free-market capitalism and democracy, on the wings of the information revolution, would usher in a long era of prosperity and peace. No borders. No limits.
It's interesting now to see how exactly the opposite has occurred. Bubbles burst. Towers fell. History, as we now realize, did not end, it was merely on vacation; while the utopian vision of the internet -- as a placeless place removed from the inequities of the physical world -- has all but evaporated. We realize now that geography matters. Concrete features have begun to crystallize on this massive information plain: ports, gateways and customs houses erected, borders drawn. With each passing year, the internet comes more and more to resemble a map of the world.
Those of us tickled by the "what if" of the Soviet net now have ourselves a plausible answer in China, which, through a stunning feat of pipe control -- a combination of censoring filters, on-the-ground enforcement, and general peering over the shoulders of its citizens -- has managed to create a heavily restricted local net in its own image. Barely a decade after the fall of the Iron Curtain, we have the Great Firewall of China.
And as we've seen this week, and in several highly publicized instances over the past year, the virtual hand of the Chinese government has been substantially strengthened by Western technology companies willing to play by local rules so as not to be shut out of the explosive Chinese market. Tech giants like Google, Yahoo!, and Cisco Systems have proved only too willing to abide by China's censorship policies, blocking certain search returns and politically sensitive terms like "Taiwanese democracy," "multi-party elections" or "Falun Gong". They also specialize in precision bombing, sometimes removing the pages of specific users at the government's bidding. The most recent incident came just after New Year's, when Microsoft acquiesced to government requests to shut down the MSN Spaces blog of popular muckraking blogger Zhao Jing, aka Michael Anti.
We tend to forget that the virtual is built of physical stuff: wires, cable, fiber -- the pipes. Whoever controls those pipes, be it governments or telecoms, has the potential to control what passes through them. The result is that the internet comes in many flavors, depending in large part on where you are logging in. As Jack Goldsmith and Timothy Wu explain in an excellent article in Legal Affairs (adapted from their forthcoming book Who Controls the Internet?: Illusions of a Borderless World), China, far from being the boxed-in exception to an otherwise borderless net, is actually just the uglier side of a global reality. The net has been mapped out geographically into "a collection of nation-state networks," each with its own politics, social mores, and consumer appetites. The very same technology that enables Chinese authorities to write the rules of their local net enables companies around the world to target advertising and gear services toward local markets. Goldsmith and Wu:
...information does not want to be free. It wants to be labeled, organized, and filtered so that it can be searched, cross-referenced, and consumed....Geography turns out to be one of the most important ways to organize information on this medium that was supposed to destroy geography.
Who knows? When networked devices truly are ubiquitous and can pinpoint our location wherever we roam, the internet could be censored or tailored right down to the individual level (like the empire in Borges' fable that commissions a one-to-one map of its territory that upon completion perfectly covers every corresponding inch of land like a quilt).
The case of Google, while by no means unique, serves well to illustrate how threadbare the illusion of the borderless world has become. The company's famous credo, "don't be evil," just doesn't hold up in the messy, complicated real world. "Choose the lesser evil" might be more appropriate. Also crumbling upon contact with air is Google's famous mission, "to make the world's information universally accessible and useful," since, as we've learned, Google will actually vary the world's information depending on where in the world it operates.
Google may be behaving responsibly for a corporation, but it's still a corporation, and corporations, in spite of well-intentioned employees, some of whom may go to great lengths to steer their company onto the righteous path, are still ultimately built to do one thing: get ahead. Last week in the States, the get-ahead impulse happened to be consonant with our values. Not wanting to spook American users, Google chose to refuse a Dept. of Justice request for search records to aid its anti-pornography crackdown. But this week, not wanting to ruffle the Chinese government, Google compromised and became an agent of political repression. "Degrees of evil," as Rebecca MacKinnon put it.
The great irony is that technologies we romanticized as inherently anti-tyrannical have turned out to be powerful instruments of control, highly adaptable to local political realities, be they state or market-driven. Not only does the Chinese government use these technologies to suppress democracy, it does so with the help of its former Cold War adversary, America -- or rather, the corporations that in a globalized world are the de facto co-authors of American foreign policy. The internet is coming of age and with that comes the inevitable fall from innocence. Part of us desperately wanted to believe Google's silly slogans because they said something about the utopian promise of the net. But the net is part of the world, and the world is not so simple.
rethinking copyright: learning from the pro sports? 01.27.2006, 1:10 AM
As Ben has reported, the Economics of Open Content conference spent a good deal of time discussing issues of copyright and fair use. During a presentation, David Pierce from Copyright Services noted that the major media companies are mainly concerned with protecting their most valuable assets. The obvious example is Disney's extreme vested interest in keeping Mickey Mouse, now 78 years old, from entering the public domain. Further, Pierce mentioned that these media companies fight to extend the copyright protection of everything they own in order to protect their most valuable assets. Finally, he stated that only a small portion of their total film libraries is available to consumers. Many people in attendance were intrigued by these ideas, including myself and Paul Courant from the University of Michigan. Earlier in the conference, Courant explained that 90-95% of UM's library is out of print, and presumably much of that is under copyright protection.
If so, staggering amounts of media are being kept from the public domain, or closed to licensing, for little or no reason. A little further thinking quickly leads to alternative structures of copyright that would move media into the public domain, or at least increase its availability, while addressing the media conglomerates' economic concerns.
Rules controlling the protection of assets are nothing new. For instance, in US professional sports, fairly elaborate structures are in place to determine how players can be traded. Common sense dictates that teams cannot stockpile players from other teams. In the free-agency era of the National Football League, teams have limited rights to keep players from signing with other teams. Each NFL team can designate a single athlete as a "franchise" player, according to the current Collective Bargaining Agreement with the players' union. This designation gives the team exclusive rights to retain that player against competing offers. Similarly, in the National Basketball Association, when the league adds a new team, existing teams are allowed to protect eight players from being drafted and signed by the expansion team(s). What can we learn from these institutions? The examples show that hoarding players is not good for sports; similarly, hoarding assets is not in the best interest of the public good.
The sports example has obvious limitations. In the NBA, team rosters are limited to fifteen players; a media company, on the other hand, can hold an unlimited number of assets. Applying this model would therefore mean allowing companies to seek extensions for only a portion of their copyrighted assets. Defining that proportion would certainly be difficult. For instance, it is still unclear to me how this might adapt to owners of a single copyrighted property.
Another variant of this model would move the burden of responsibility back to the copyright holder. Here, copyright holders must show active economic use of, and value from, these properties. This strategy would force media companies to make their archives available or put the media into the public domain. These copyright holders need to overcome their fears of flooding the market and their dated claims of limited shelf space, which are simply not relevant in the age of digital media and e-commerce. Further, media companies would be encouraged to license their holdings for derivative works, which would in fact lead to more profits. These implementations would increase revenue by challenging current shortsighted marketing decisions that fail to account for the long-tail economic value of their holdings. Although these materials would not enter the public domain, they would become accessible.
Would this block innovation? Creators of content would still be able to profit from their work for decades. When limited copyright existed in its original implementation, creative innovation was certainly not hindered. Therefore, the argument that innovation requires protecting all of a media company's assets in perpetuity is baseless. Within the current copyright term, holders have ample time to extract value from those assets. In fact, infinite copyright protection slows innovation by removing incentives to create new intellectual property.
Finally, a few last comments are worth noting. These models are, at best, compromises. I present them because the current state of copyright protection and extension seems headed toward former Motion Picture Association of America President Jack Valenti's now-infamous suggestion of extending copyright to "forever less a day." Although these media companies have a huge financial stake in controlling these copyrights, I cannot overemphasize our Constitutional right to place these materials in the public domain. Article I, Section 8, Clause 8 of the United States Constitution states:
[The Congress shall have Power] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
Under these proposed schemes, fair use becomes even more crucial. Conceding that the extraordinary preciousness of such intellectual property as Mickey Mouse and Bugs Bunny supersedes rights found in our Constitution implies a similarly extraordinary importance of these properties to our culture and society. Thus, democratic access to these properties for use in education and critical discourse must be equally imperative to the progress of culture and society. In the end, the choice, as a society, is ours. We do not need to concede anything.
what I heard at MIT 01.26.2006, 9:47 AM
Over the next few days I'll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions -- things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I'll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger's "What I Heard about Iraq" series (here and here).
Naturally, I heard a lot about "open content."
I heard that there are two kinds of "open." Open as in open access -- to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process -- work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that "content" is actually a demeaning term, treating works of authorship as filler for slots -- a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control -- open content is often still controlled content.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy's core mission -- education, research and public service -- makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright...
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for "underlying" rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no "fair use" space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an "inter-operability" problem between alternative licensing schemes -- that, for instance, Wikipedia's GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a "totalitarian" intellectual property regime -- a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination -- i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users' privacy: tracking users' consumption patterns in other markets (right down to their local grocery store), pinpointing of users' geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable -- is in fact already happening.
I heard that in an "information economy," user data is a major asset of companies -- an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives -- e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
And I heard a lot about peer-to-peer...
I heard that p2p is not just a way to exchange files or information; it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result -- in the space of that single decade -- the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of "remix culture" are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like "The French Democracy" or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model -- that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be radically clung to, as though they are sacred.
I heard that this will be painful.
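The peer-to-peer content cooperative mentioned above -- a single low fee pooled and paid out to creators in proportion to the use of their work -- can be sketched in a few lines. Everything here is an invented illustration (the subscriber count, the fee, the titles and use counts are all hypothetical), not the mechanics of any actual proposal:

```python
# Hypothetical sketch: a flat-fee content cooperative that splits the
# subscription pool among creators in proportion to how often their
# works were used. All figures are invented for illustration.

def distribute_pool(subscribers, annual_fee, use_counts):
    """Split the total fee pool proportionally by use count."""
    pool = subscribers * annual_fee
    total_uses = sum(use_counts.values())
    if total_uses == 0:
        return {creator: 0.0 for creator in use_counts}
    return {creator: pool * uses / total_uses
            for creator, uses in use_counts.items()}

# e.g. 60 million citizens paying a $10/year media tax
payouts = distribute_pool(
    subscribers=60_000_000,
    annual_fee=10.0,
    use_counts={"documentary_a": 2_000_000,
                "album_b": 6_000_000,
                "novel_c": 2_000_000},
)
print(payouts)  # album_b draws 60% of the uses, so 60% of the $600M pool
```

The point of the sketch is only that the accounting is trivial; the hard questions (measuring "use," setting the fee, policing gaming) are political, not computational.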
letters from second life 01.25.2006, 4:10 PM
Last week, Bob mentioned that Larry Lessig, law professor and intellectual property scholar, was being interviewed in Second Life, the virtual world created by Linden Lab. Having heard a lot about Second Life, I was pleased to have a reason and an opportunity to create an account and explore it. I quickly learned that it's the Metaverse, as described in Neal Stephenson's Snow Crash, in operation today, and I'm now a part of it too.
I already covered the actual interview. Here are a few observations from my introduction to SL.
Second Life is a humbling place, especially for beginners. Everything, even the simplest things, must be relearned. It took me five minutes to learn how to sit down, another five to read something, and on and on. Traveling to the site of the Lessig event was an even more daunting task. I was given the location of the event, a name and coordinates, without any idea of what to do with them. Second Life is a vast space, and it wasn't clear to me how to get from one point to another. I had no idea how to travel in SL, and had to ask someone.
I presume it is evident that I'm very new to SL, given my constant trampling over people and inanimate objects. I keep walking into trees and rocks until I come across someone whose title contains "Mentor," and figure that this is a good person to ask for help. Not knowing how to strike up a private conversation, I start talking out loud, not even sure if anyone is going to pay attention.
(I will come to learn that you travel from place to place via teleportation.)
I am relieved to discover that people are basically nice in SL, maybe even nicer than in New York. This fellow avatar is happy to chat and answer questions. Second Life has a feature called "Friends" which operates like Buddies in Instant Messaging. However, I'm not sure what the social protocol for making friends is, so I make no assumptions. As I type "can we be friends?" I sigh with the realization that I am, in fact, back in fourth grade.
People around me have much more sophisticated outfits than I do, so I try out the free clothing features. I darken my pants to a deep blue and my shoes to black. Then my default shirt gets turned into a loose white t-shirt. Somehow I end up looking a bit like a GAP model crossed with Max Headroom. After making my first "friend," another complete stranger comes up to me and just starts giving me clothes. Apparently, my clothes still need a little work. I try on the cowboy boots and faded jeans. Happy that I've moved beyond the standard-issue clothes, I thank my benefactor and begin to make my way to the event.
The builders of Second Life force people to rely on other people within the virtual world, though assistance in the real world certainly helps too. Entering Second Life, the feeling of displacement is acute, as if I had arrived in a new city in the real world with a single address, knowing no one and having no idea how to navigate. The virtual world often mimics the real world, but I'm still surprised each time I'm reminded of this. It definitely helps to know people, both for finding interesting places to go and for learning how to do things.
After teleporting to the event, I found myself around people who had common interests, which was great and similar to attending a lecture in the real world. At different times, I struck up a conversation with an avatar who is a publisher on the West Coast and then talked to an academic who runs a media center. In both cases, I was talking to the person literally "next" to me.
When I first heard about the interview, I learned that seating was limited, which seemed strange to me, as it was taking place in a virtual space. When I arrived, I saw the amphitheater with video screens that would show a live web stream of Lessig. Seeing the seats of the theater, the limited seating made more sense. I also suspect that the SL servers have a finite capacity for the number of avatars within a small area, because movement was jerky around concentrated groups of people. I guess I'll have to wait for the Second Life Woodstock.
The space was crowded with people walking around, chatting, and picking up their free digital copy of Lessig's book, "Free Culture." (I've included a picture of me reading Free Culture in Second Life. You can actually read the text.) The interview is about to begin as an avatar with large red wings walks by me. I say out loud, "I knew she was going to sit in front of me," adding, "Just kidding," in case I might be offending someone; who knows who this person could be. Fortunately, she found a seat outside my sight line without incident, and the introductory remarks began.
There was a strange duality in which I had to follow what was being said while also learning how to navigate the environment of a lecture. The interview proceeds within the social norms of a lecture: people are mostly quiet and clap, and the moderator runs the question and answer session. Afterwards, I line up to have Lessig "sign" my virtual book at the virtual booksigning, just as at a real public event. I finally stumble my way through the line, all the while asking many questions about what I'm supposed to do. With my signed book in hand, I look at the sky, which is quite dark. I log out and return to the real world.
wikipedia as civic duty 01.25.2006, 10:34 AM
Number 14 on Technorati's listing of the most popular blogs is Beppe Grillo. Who's Beppe Grillo? He's an immensely popular Italian political satirist, roughly the Italian Jon Stewart. Grillo has been hellbent on exposing corruption in the Italian political system, and has emerged as a major force in the country's ongoing elections. While he happily and effectively skewers just about everybody involved in the Italian political process, Dario Fo, currently running for mayor of Milan under the refreshing slogan "I am not a moderate," manages to receive Grillo's endorsement.
Grillo's use of new media makes sense: he has effectively been banned from Italian television. While he performs around the country, his blog – which is also offered in English, just as deadpan and full of bold-faced phrases as the Italian – has become one of his major vehicles. It's proven astonishingly popular, as his Technorati ranking reveals.
His latest post (in English or Italian) is particularly interesting. (It's also drawn a great deal of debate: note the 1044 comments – at this writing – on the Italian version.) Grillo's been pushing the Wikipedia for a while; here, he suggests to his public that they should, in the name of transparency, have a go at revising the Wikipedia entry on Silvio Berlusconi.
Berlusconi is an apt target. He is, of course, the right-wing prime minister of Italy as well as its richest citizen, and at one point or another, he's had his fingers in a lot of pies of dubious legality. In the five years that he's been in power, he's been systematically rewriting Italian laws standing in his way – laws against, for example, media monopolies. Berlusconi effectively controls most Italian television: it's a fair guess that he has something to do with Grillo's ban from Italian television. Indirectly, he's probably responsible for Grillo turning to new media: Berlusconi doesn't yet own the internet.
Or the Wikipedia. Grillo brilliantly posits the editing of the Wikipedia as a civic duty. This is consonant with Grillo's general attitude: he's also been advocating environmental responsibility, for example. The public editing Berlusconi's biography seems apt: famously, during the 2001 election, Berlusconi sent out a 200-page biography to every mailbox in Italy which breathlessly chronicled his rise from a singer on cruise ships to the owner of most of Italy. This vanity press biography presented itself as being neutral and objective. Grillo doesn't buy it: history, he argues, should be written and rewritten by the masses. While Wikipedia claims to strive for a neutral point of view, its real value lies in its capacity to be rewritten by anyone.
How has Grillo's suggestion played out? Wikipedia has evidently been swamped by "BeppeGrillati" attempting to modify Berlusconi's biography. The Italian Wikipedia has declared "una edit war" and put a temporary lock on editing the page. From an administrative point of view, this seems understandable; for what it's worth, there's a similar, if less stringent, stricture on the English page for Mr. Bush. But this can't help but feel like a betrayal of Wikipedia's principles. Editing the Wikipedia should be a civic duty.
Contrary Motion 01.25.2006, 3:42 AM
X_Reloaded. 01.24.2006, 4:43 PM
This is a bilingual (English/Spanish) post. The Spanish version can be found below.
Santofile uses "meme" to allude to creative freedom in the digital world. Meme is mimesis and is self-generating. It refers to mediation in the sense of remix and appropriation: the mixing of works that circulate on the Internet in order to produce an original piece. Among Santofile's projects is X_Reloaded, an interpretation of the first chapter of Don Quixote, compiled from disparate works inspired by the fourth centennial of its publication.
They put together such diverse creators as William Burroughs and Adbusters, whose common context is precisely the idea of busting. Busting decontextualizes a piece (a work of art, an advertisement, a text), causing it to lose its character as a static icon by giving it a new life inside a new context.
To choose Don Quixote as the text for X_Reloaded is to allude to the concept of remix par excellence. Cervantes appropriated chivalry novels with the intention of subverting the genre, and his final remix, decontextualized, is a unique and original work. Printing in Cervantes' time required a highly legible copy, which wasn't necessarily the original manuscript. Thus, the "original" was a copy made by one or more amanuenses. And from this "original" corrected by the author, a sort of predecessor of proofreading, the book was put together by the typesetter, with its consequent errata. It is interesting to note that the Spanish Royal Academy's edition of Don Quixote, celebrating its fourth centennial, claims to be based on about a hundred editions, old and new. If this is not remix, what is?
Cervantes himself is absolutely aware of what he is doing, and of the subversive character of his action. When Don Quixote reads, we don't know who the madman is: he, or the one who wrote this:
The reason of the unreason with which my reason is afflicted so weakens my reason that with reason I murmur at your beauty.
Don Quixote changed forever the way novels were written, and three centuries later, Borges' "Pierre Menard, author of Don Quixote" would change forever the way one reads. Pierre Menard writes Don Quixote without ceasing to be Pierre Menard, demonstrating how it is possible to transform a text without altering a single word. Decontextualization was inaugurated.
Following this tradition, X_Reloaded presents us with jodi's map, images like Olia Lialina's, Jenny Holzer's conceptual text, and Rosa Llop's windmills. And with them we have to say, with Don Quixote, that the windmills are indeed giants.
Santofile usa el concepto de meme para aludir a la libertad de creación en el mundo digital. Meme es mimesis y es autogenerador. Se refiere a mediación, en el sentido de remix, de mezclar apropiándose de trabajos de otros, generalmente trabajo digital que circula por la red, para a la vez producir una nueva obra original. Entre sus proyectos está X_Reloaded, una interpretación del capítulo primero de El Quijote, que recoge obras dispares inspiradas por el cuarto centenario de su publicación.
Se reúnen creadores tan disímiles como William Burroughs y Adbusters, cuyo contexto común sería precisamente la idea de romper, de volver trizas, que está en el seno mismo del verbo "to bust". Al descontextualizar lo que se quiere romper, se le roba permanencia como ícono estático y se le confiere nueva vida dentro de un nuevo contexto.
El escoger precisamente El Quijote como texto para X_Reloaded es aludir al remix por excelencia. Cervantes se apropia de las novelas de caballería para subvertir el género, y su remix final, al descontextualizarlas, es una obra única y original. La impresión misma del texto en tiempos de Cervantes requería de una copia altamente legible, lo que no necesariamente era el manuscrito original. De ahí que el "original" era una copia hecha por uno o más amanuenses. Y de ese "original" corregido por el autor, salía el libro, armado por el cajista, con sus consiguientes errores. Es interesante notar que la edición de la Real Academia Española, con motivo del cuarto centenario de El Quijote, es un "texto crítico de la obra constituido sobre la consulta de cerca de un centenar de ediciones antiguas y modernas". Si esto no es remix, ¿qué es?
Cervantes mismo es absolutamente consciente de lo que está haciendo, y del carácter subversivo de su acción. Cuando Don Quijote lee no sabemos si es él el loco, o el que escribió esto:
La razón de la sinrazón que a mi razón se hace, de tal manera mi razón enflaquece, que con razón me quejo de la vuestra fermosura
El Quijote va a cambiar para siempre la manera como se escribe y tres siglos más tarde, "Pierre Menard autor del Quijote" de Borges, va a cambiar la manera como se lee. Pierre Menard escribe El Quijote sin dejar de ser Pierre Menard, demostrando cómo se transforma un texto sin cambiarlo, inaugurando la descontextualización.
Siguiendo esta tradición, X_Reloaded nos presenta el mapa de jodi, imágenes como la de Olia Lialina, el texto conceptual de Jenny Holzer, o los molinos de viento de Rosa Llop. Y con ellos, tenemos que decir con Don Quijote, los molinos son en verdad gigantes.
fair use and the networked book 01.23.2006, 3:29 PM
I just finished reading the Brennan Center for Justice's report on fair use. This public policy report was funded in part by the Free Expression Policy Project and describes, in frightening detail, the state of public knowledge regarding fair use today. The problem is that the legal definition of fair use is hard to pin down. Here are the four factors that the courts use to determine fair use:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
Unfortunately, these criteria are open to interpretation at every turn, and they provide little basis for predicting any judicial ruling on fair use. In a lawsuit, no one is sure of the outcome of their claim. This causes confusion and fear for individuals and publishers, academics and their institutions. In many cases where there is a clear fair use argument, the target of a copyright infringement action (a cease and desist letter, a lawsuit) does not challenge it, usually for financial reasons. It's just as clear that copyright owners often pursue copyright protection too broadly, with plenty of misapprehension about what qualifies as fair use. The current copyright law, as it has been written and upheld, is fraught with opportunities for mistakes by both parties, which has led to an underutilization of cultural assets for critical, educational, or artistic purposes.
This restrictive atmosphere is even more prevalent in the film and music industries. The RIAA lawsuits are a well-known example of the industry protecting its assets via heavy-handed lawsuits. The culture of shared use in the movie industry is even more stifling. This combination of aggressive control by the studio and equally aggressive piracy is causing a legislative backlash that favors copyright holders at the expense of consumer value. The Brennan report points to several examples where the erosion of fair use has limited the ability of scholars and critics to comment on these audio/visual materials, even though they are part of the landscape of our culture.
the economics of open content 01.23.2006, 9:31 AM
For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA -- The Economics of Open Content -- co-hosted by Intelligent Television and MIT Open CourseWare.
This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free--and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.
They've assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come...
cheney and google 01.21.2006, 6:27 PM
(this is a follow-up to ben's recent post "the book is reading you.")
i rarely read Maureen Dowd but the headline of her column in today's New York Times, "Googling Past the Graveyard," caught my attention. Dowd calls Dick Cheney on the carpet for asking Google to release the search records of U.S. citizens. while i'm horrified that the govt. would even consider asking for such information, i'm concerned that the way this particular issue is playing out, Google is being portrayed as the poor beleaguered neutral entity caught between an over-reaching bureaucracy and its citizens. Cheney will expire eventually. in the meantime Google will collect even more data. Google is a very big corporation whose power will grow over time. in the long run, why aren't people outraged that this information is in Google's hands in the first place? shouldn't we be?
lessig in second life 01.20.2006, 9:27 AM
Wednesday evening, I attended an interview with Larry Lessig, which took place in the virtual world of Second Life. New World Notes announced the event and is posting coverage and transcripts of the interview. As it was my first experience in SL, I will post more on the experience of attending an interview/lecture in a virtual space. For now, I am going to comment upon two quotes from Lessig as they relate to our work at the institute.
Lawrence Lessig: Because as life moves online we should have the SAME FREEDOMS (at least) that we had in real life. There's no doubt that in real life you could act out a movie or a different ending to a movie. There's no doubt that would have been "free" of copyright in real life. But as we move online things that were before were free now are regulated.
Yesterday, Bob made the point that our memories increasingly exist outside of ourselves. At the institute, we have discussed the mediated life, and a substantial part of that mediation occurs as we continue to digitize more parts of our lives, from photo albums to diaries. Things we once created in the physical world now reside on the network, which means they are effectively published. Photo albums documenting our trips to Disneyland or the Space Needle (whose facade is trademarked and protected) that once rested within the home are uploaded to flickr, potentially accessible to anyone browsing the Internet, a regulated space. This regulation has enormous influence on the creative outlets of everyone, not just professionals. Without trying to sound overly naive, my concern is that the speech and discourse of all people, not just professionals, are being compromised. As companies become more litigious toward copyright infringement (especially when their arguments are weak), the safeguards of the courts and legislation are not protecting their constituents.
Lawrence Lessig: Copyright is about creating incentives. Incentives are prospective. No matter what even the US Congress does, it will not give Elvis any more incentive to create in 1954. So whatever the length of copyright should be prospectively, we know it can make no sense of incentives to extend the term for work that is already created.
The increasing accessibility of digital technology allows people to become creators and distributors of content. Lessig notes that each year brings more evidence, from cases such as the Google Book Search controversy, of the inadequacy of current copyright legislation. Further, he insightfully suggests learning from the creations young people produce, such as anime music videos. Their completely different approach to intellectual property reflects a cultural shift that is running counter to the legal status quo. Lessig suggests that these creative works have the potential to show policy makers that these attitudes are moving toward the original intentions of copyright law. Then, policy makers may begin to question why such works are currently considered illegal.
The courts' failure to clearly define an interpretation of fair use puts at risk the discourse that a functioning democracy requires. The stringent attitude toward using copyrighted material goes against the spirit of the original intentions of the law. It may not be the role of government and the courts to actively encourage creativity, but it is sad that bipartisan government actions and court rulings actively discourage innovation and creativity.
the book is reading you 01.19.2006, 1:42 PM
I just noticed that Google Book Search requires users to be logged in on a Google account to view pages of copyrighted works.
They provide the following explanation:
Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you're interested in.
So they're tracking how much we've looked at and capping our number of page views. Presumably a bone tossed to publishers, who I'm sure will continue suing Google all the same (more on this here). There's also the possibility that publishers have requested information on who's looking at their books -- geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that's valuable information they want to keep for themselves.
That's because "the aim of Google Book Search" is also to discover who you are. It's capturing your clickstreams, analyzing what you've searched for and the terms you've used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that's where we'll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon's -- you know, where the computer lists other titles that might interest you based on what you've read, bought or browsed in the past (a system that works only if you are logged in). It's possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
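The capping described in Google's FAQ presumably boils down to a per-account counter checked against a per-title limit. What follows is a guess at that logic for illustration only, not Google's actual implementation; the class and names are made up:

```python
# Hypothetical sketch of per-account page-view capping, as a system like
# Google Book Search's limits might work in principle. Not the real thing.

class PageViewLimiter:
    def __init__(self, max_pages_per_book):
        self.max_pages = max_pages_per_book
        self.views = {}  # (account, book) -> set of page numbers seen

    def request_page(self, account, book, page):
        """Return True if the page may be shown to this account."""
        seen = self.views.setdefault((account, book), set())
        if page in seen:              # re-viewing a seen page doesn't count
            return True
        if len(seen) >= self.max_pages:
            return False              # cap reached: page withheld
        seen.add(page)
        return True

limiter = PageViewLimiter(max_pages_per_book=3)
for p in [1, 2, 3, 4]:
    print(p, limiter.request_page("alice", "copyrighted_title", p))
# pages 1-3 are served; page 4 is refused
```

Note that the state lives server-side, keyed by account: that is exactly why the login is required, and exactly why the same table doubles as a record of who read what.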
There are many ways Google could eventually capitalize on its books database -- that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world's biggest ebook store. It could start a print-on-demand service -- a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions -- a searchable text to go along with the physical book -- as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books -- individual pages, chapters etc. -- as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags) are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side Google, though a search system, could start feeling more like a retailer. In either case, you'll have to log in first.
more grist for the "pipes" debate 01.18.2006, 5:47 PM
A couple of interesting items:
Larry Lessig wrote an excellent post last week debunking certain myths circulating in the "to regulate or not to regulate" debate in Washington, namely that introducing "net neutrality" provisions in the new Telecom bill would impose unprecedented "common carriage" regulation on network infrastructure. Of course, the infrastructure was regulated before -- when the net was accessed primarily through phone lines. Lessig asks: if an unregulated market is so good for the consumer, then why is broadband service in this country so slow and so expensive?
Also worth noting is a rough sketch from internet entrepreneur Mark Cuban of the idea of "tiered" network service, which would entail prioritizing certain uses of bandwidth. For example, your grandma's web-delivered medical diagnostics would be prioritized over the teenager downloading music videos next door (if, that is, someone shells out for the priority service). This envisions for the consumer end what cable and telephone execs have dreamed of on the content-provider end -- i.e. charging certain web services more for faster page loads and speedier content delivery. Seems to me that either scenario would make the U.S. internet more like the U.S. healthcare system: abysmal except for those with cash.
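Stripped of the business model, "tiered" service is just a priority queue at the bottleneck: traffic in a paid class drains first, and best-effort traffic waits. Here is a minimal sketch of strict-priority scheduling; the traffic class names and priority numbers are invented for illustration, and real routers do this per-packet in hardware with far more nuance (weighted fairness, starvation protection):

```python
import heapq

# Hypothetical traffic classes: lower number = higher priority.
PRIORITY = {"medical_telemetry": 0, "paid_fast_lane": 1, "best_effort": 2}

def schedule(packets):
    """Return payloads in the order a strict-priority link would send them.

    `packets` is a list of (traffic_class, payload) pairs; the enqueue
    index breaks ties so same-class traffic stays first-come-first-served.
    """
    heap = [(PRIORITY[cls], i, payload)
            for i, (cls, payload) in enumerate(packets)]
    heapq.heapify(heap)
    out = []
    while heap:
        _, _, payload = heapq.heappop(heap)
        out.append(payload)
    return out

queue = [("best_effort", "music video chunk"),
         ("medical_telemetry", "grandma's diagnostics"),
         ("paid_fast_lane", "premium page load")]
print(schedule(queue))
# the diagnostics go first, the music video chunk last
```

The policy question in the post is precisely who gets to write the PRIORITY table, and at what price.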
who do you trust? 01.17.2006, 6:12 PM
Larry Sanger posted this comment to if:book's recent Digital Universe and expert review post. In the second paragraph Sanger suggests that experts should not have to constantly prove the value of their expertise. We think this is a crucial question. What do you think?
"In its first year or two it was very much not the case that Wikipedia "only looks at reputation that has been built up within Wikipedia." We used to show respect to well-qualified people as soon as they showed up. In fact, it's very sad that it has changed in that way, because that means that Wikipedia has become insular--and it has, too. (And in fact, I warned very specifically against this insularity. I knew it would rear its ugly head unless we worked against it.) Worse, Wikipedia's notion of expertise depends on how well you work within that system--which has nothing whatsoever to do with how well you know a subject.
"That's what expertise is, after all: knowing a lot about a subject. It seems that any project in which you have to "prove" that you know a lot about a subject, to people who don't know a lot about the subject, will endlessly struggle to attract society's knowledge leaders."
meta-wikipedia 01.17.2006, 7:48 AM
As a frequent consulter, but not an editor, of Wikipedia, I've often wondered about what exactly goes on among the core contributors. A few clues can be found in the revision histories, but on the whole these are hard-to-read internal work documents meant more for those actually getting their hands dirty in the business of writing and editing. Like choreographic notation, they may record the steps, but to the untrained reader they give little sense of the look or feeling of the dance.
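Under the hood, a revision history is essentially a sequence of diffs between successive versions of a page. For anyone curious what that raw "choreographic notation" looks like, here is a minimal sketch using Python's standard difflib module (the article text is invented for illustration):

```python
import difflib

def revision_diff(old: str, new: str) -> str:
    """Render a unified diff between two revisions of an article,
    roughly the way a wiki history page shows what changed."""
    return "".join(difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile="revision 1",
        tofile="revision 2",
    ))

# Two hypothetical revisions of an entry's opening sentence.
old = "Wikipedia is a free encyclopedia.\n"
new = "Wikipedia is a free, multilingual encyclopedia.\n"
print(revision_diff(old, new))
```

The output marks removed lines with `-` and added lines with `+` -- accurate, machine-friendly, and about as expressive as dance steps written in shorthand.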
But dig around elsewhere in Wikipedia's sprawl, turn over a few rocks, and you will find squirming in the soil a rich ecosystem of communities, organizing committees, and rival factions. Most of these -- the more formally organized ones at least -- can be found on the "Meta-Wiki," a site containing information and community plumbing for all Wikimedia Foundation projects, including Wikipedia.
I took a closer look at some of these so-called Metapedians and found them to be a varied, often contentious lot, representing a broad spectrum of philosophies asserting this or that truth about how Wikipedia should evolve, how it should be governed, and how its overall significance ought to be judged. The more prominent schools of thought are even championed by associations, complete with their own page, charter and loyal base of supporters. Although tending toward the tongue-in-cheek, these pages cannot help but convey how seriously the business of building the encyclopedia is taken, with three groups in particular providing, if not evidence of an emergent tri-party system, then at least a decent introduction to Wikipedia's political culture, and some idea of how different Wikipedians might formulate policies for the writing and editing of articles.
On one extreme is The Association of Deletionist Wikipedians, a cantankerous collective that dreams (with considerable ideological overlap with another group, the Exclusionists) of a "big, strong, garbage-free Wikipedia." These are the expungers, the pruners, the weeding-outers -- doggedly on the lookout for filth, vandalism and general extraneousness. Deletionists favor "clear and relatively rigorous standards for accepting articles to the encyclopedia." When you come across an article that has been flagged for cleanup or suspected inaccuracies, that may be the work of Deletionists. Some have even pushed for the development of Wiki Law that could provide clearly documented precedents to guide future vetting efforts. In addition, Deletionists see it as their job to "outpace rampant Inclusionism," a rival school of thought across the metaphorical aisle: The Association of Inclusionist Wikipedians.
This group's motto is "Salva veritate," or "with truth preserved," which in practice means: "change Wikipedia only when no knowledge would be lost as a result." These are Wikipedia's libertarians, its big-tenters, its stub-huggers. "Outpace and coordinate against rampant Deletionism" is one of their core directives.
A favorite phrase of inclusionists is "Wiki is not paper." Because Wikipedia does not have the same space limitations as a paper encyclopedia, there is no need to restrict content in the same way that a Britannica must. It has also been suggested that no performance problems result from having many articles. Inclusionists claim that authors should take a more open-minded look at content criteria. Articles on people, places, and concepts of little note may be perfectly acceptable for Wikipedia in this view. Some inclusionists do not see a problem with including pages which give a factual description of every last person on the planet.
(Even poor old Bob Aspromonte.)
Then along come the Mergist Wikipedians. The moderates, the middle-grounders, the bipartisans. The Mergists regard it as their mission to reconcile the two extremes -- to "outpace rampant Inclusionism and Deletionism." As their eminently sensible charter explains:
The AMW believes that while some information is notable and encyclopedic and therefore has a place on Wikipedia, much of it is not notable enough to warrant its own article and is therefore best merged. In this sense we are similar to Inclusionists, as we believe in the preservation of information and knowledge, but share traits with Deletionists as we disagree with the rampant creation of new articles for topics that could easily be covered elsewhere.
For some, however, there can be no middle ground. One is either a Deletionist or an Inclusionist, it's as simple as that. These hardliners dismiss the Mergists as "delusionists."
There are still other, less organized, ideological subdivisions. Immediatists focus on "the immediate value of Wikipedia," and so are terribly concerned with the quality -- today -- of its information, the neatness of its appearance, and its general level of professionalism and polish. When a story in the news draws public attention to some embarrassing error -- the Seigenthaler episode, for instance -- the Immediatists wince and immediately set about correcting it. Eventualism, by contrast, is more concerned with Wikipedia in the long run -- its grand destiny -- trusting that wrinkles will be ironed out, gaps repaired. All in good time.
How much impact these factions have on the overall growth and governance of Wikipedia is hard to say. But as a description of the major currents of thought that go into the building of this juggernaut, they are quite revealing. It's nice that people have taken the time to articulate these positions, and that they have done so with humor, lending texture and color to what at first glance might appear to be an undifferentiated mob.
an overview on the future of the book 01.16.2006, 4:24 PM
The peer-reviewed online journal First Monday has an interesting article entitled "The Processed Book." Joseph Esposito looks at how the book will change once it is placed in a network. He covers a lot of territory, from the future role of the author to the perceived ownership of text and ideas to new economic models for publishing this kind of content.
One great thing about the piece is that he uses the essay itself to demonstrate his ideas of a text in a network. That is, he encourages people to augment the reading of the article with the Internet, in this case, by looking up historic and literary references in his writing. Further, the article is an updating of an earlier article he wrote for First Monday. The end result is that we can witness the evolution of text within the network while we read about it. More posts on the details of his ideas are coming.
LONGPLAYER 01.16.2006, 12:08 PM
when i was growing up they started issuing LP albums which played at 33 1/3 rpm, vastly increasing the amount of playing time on one side of a record. before the LP, audio was recorded and distributed on brittle discs made of shellac, running at 78rpm. 78s had a capacity of about 12 minutes; LPs upped that to about 30 minutes which made it possible for classical music fans to listen to an entire movement without changing discs and enabled the development of the rock and roll album.
in 2000 Jem Finer, a UK-based artist, released Longplayer, a 1000-year musical composition that runs continuously and without repetition from its start on January 1, 2000 until its completion on December 31, 2999. Related conceptually to the Long Now project which seeks to build a ten-thousand year clock, Longplayer uses generative forms of music to make a piece that plays for ten to twelve human lifetimes. Longplayer challenges us to take a longer view which takes account of the generations that will come after us.
the longplayer also reminds me of an idea i've been intrigued by -- the possibility of (networked) books that never end because authors keep adding layers, tangents and new chapters.
Finer published a book about Longplayer which includes a vinyl disc (LP actually) with samples.
who owns the network? 01.12.2006, 5:15 PM
Susan Crawford recently floated the idea of the internet network (see comments 1 and 2) as a public trust that, like America's national parks or seashore, requires the protection of the state against the undue influence of private interests.
...it's fine to build special services and make them available online. But broadband access companies that cover the waterfront (literally -- are interfering with our navigation online) should be confronted with the power of the state to protect entry into this self-owned commons, the internet. And the state may not abdicate its duty to take on this battle.
Others argue that a strong government hand will create as many problems as it fixes, and that only true competition between private, municipal and grassroots parties -- across not just broadband, but multiple platforms like wireless mesh networks and satellite -- can guarantee a free net open to corporations and individuals in equal measure.
Discussing this around the table today, Ray raised the important issue of open content: freely available knowledge resources like textbooks, reference works, scholarly journals, media databases and archives. What are the implications of having these resources reside on a network that increasingly is subject to control by phone and cable companies -- companies that would like to transform the net from a many-to-many public square into a few-to-many entertainment distribution system? How open is the content when the network is in danger of becoming distinctly less open?
ESBNs and more thoughts on the end of cyberspace 01.12.2006, 7:31 AM
Anyone who's ever seen a book has seen ISBNs, or International Standard Book Numbers -- that string of ten digits, right above the bar code, that uniquely identifies a given title. Now come ESBNs, or Electronic Standard Book Numbers, which you'd expect would be just like ISBNs, only for electronic books. And you'd be right, but only partly. ESBNs, which just came into existence this year, uniquely identify not only an electronic title, but each individual copy, stream, or download of that title -- little tracking devices that publishers can embed in their content. And not just books, but music, video or any other discrete media form -- ESBNs are media-agnostic.
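For the curious, the ten digits of an ISBN aren't arbitrary: the last one is a check digit, chosen so that the weighted sum of all ten digits is divisible by 11 (with "X" standing in for the value 10). A small sketch of the validation rule (my own illustrative code, describing ISBN-10 only, not the ESBN scheme):

```python
def isbn10_valid(isbn: str) -> bool:
    """Check an ISBN-10: the digits, weighted 10 down to 1, must sum
    to a multiple of 11. A trailing 'X' represents the value 10."""
    digits = isbn.replace("-", "").replace(" ", "").upper()
    if len(digits) != 10:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), digits):
        if ch == "X" and weight == 1:
            value = 10  # 'X' is only legal as the final check digit
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0
```

So `isbn10_valid("0-306-40615-2")` passes (the weighted sum is 132, which is 12 × 11), while changing any single digit makes it fail -- the whole point of a check digit.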
"It's all part of the attempt to impose the restrictions of the physical on the digital, enforcing scarcity where there is none," David Weinberger rightly observes. On the net, it's not so much a matter of who has the book, but who is reading the book -- who is at the book. It's not a copy, it's more like a place. But cyberspace blurs that distinction. As Alex Pang explains, cyberspace is still a place to which we must travel. Going there has become much easier and much faster, but we are still visitors, not natives. We begin and end in the physical world, at a concrete terminal.
When I snap shut my laptop, I disconnect. I am back in the world. And it is that instantaneous moment of travel, that light-speed jump, that has unleashed the reams and decibels of anguished debate over intellectual property in the digital era. A sort of conceptual jetlag. Culture shock. The travel metaphors begin to falter, but the point is that we are talking about things confused during travel from one world to another. Discombobulation.
This jetlag creates a schism in how we treat and consume media. When we're connected to the net, we're not concerned with copies we may or may not own. What matters is access to the material. The copy is immaterial. It's here, there, and everywhere, as the poet said. But when you're offline, physical possession of copies, digital or otherwise, becomes important again. If you don't have it in your hand, or a local copy on your desktop, then you cannot experience it. It's as simple as that. ESBNs are a byproduct of this jetlag. They seek to carry the guarantees of the physical world like luggage into the virtual world of cyberspace.
But when that distinction is erased, when connection to the network becomes ubiquitous and constant (as is generally predicted), a pervasive layer over all private and public space, keeping pace with all our movements, then the idea of digital "copies" will be effectively dead. As will the idea of cyberspace. The virtual world and the actual world will be one.
For publishers and IP lawyers, this will simplify matters greatly. Take, for example, webmail. For the past few years, I have relied exclusively on webmail with no local client on my machine. This means that when I'm offline, I have no mail (unless I go to the trouble of making copies of individual messages or printouts). As a consequence, I've stopped thinking of my correspondence in terms of copies. I think of it in terms of being there, of being "on my email" -- or not. Soon that will be the way I think of most, if not all, digital media -- in terms of access and services, not copies.
But in terms of perception, the end of cyberspace is not so simple. When the last actual-to-virtual transport service officially shuts down -- when the line between worlds is completely erased -- we will still be left, as human beings, with a desire to travel to places beyond our immediate perception. As Sol Gaitan describes it in a brilliant comment to yesterday's "end of cyberspace" post:
In the West, the desire to blur the line, the need to access the "other side," took artists to try opium, absinth, kef, and peyote. The symbolists crossed the line and brought back dada, surrealism, and other manifestations of worlds that until then had been held at bay but that were all there. The virtual is part of the actual, "we, or objects acting on our behalf are online all the time." Never though of that in such terms, but it's true, and very exciting. It potentially enriches my reality. As with a book, contents become alive through the reader/user, otherwise the book is a dead, or dormant, object. So, my e-mail, the blogs I read, the Web, are online all the time, but it's through me that they become concrete, a perceived reality. Yes, we read differently because texts grow, move, and evolve, while we are away and "the object" is closed. But, we still need to read them. Esse rerum est percipi.
Just the other night I saw a fantastic performance of Allen Ginsberg's Howl that took the poem -- which I'd always found alluring but ultimately remote on the page -- and, through the conjury of five actors, made it concrete, a perceived reality. I dug Ginsberg's words. I downloaded them, as if across time. I was in cyberspace, but with sweat and pheromones. The Beats, too, sought sublimity -- transport to a virtual world. So, too, did the cyberpunks in the net's early days. So, too, did early Christian monastics, an analogy that Pang draws:
...cyberspace expresses a desire to transcend the world; Web 2.0 is about engaging with it. The early inhabitants of cyberspace were like the early Church monastics, who sought to serve God by going into the desert and escaping the temptations and distractions of the world and the flesh. The vision of Web 2.0, in contrast, is more Franciscan: one of engagement with and improvement of the world, not escape from it.
The end of cyberspace may mean the fusion of real and virtual worlds, another layer of a massively mediated existence. And this raises many questions about what is real and how, or if, that matters. But the end of cyberspace, despite all the sweeping gospel of Web 2.0, continuous computing, urban computing etc., also signals the beginning of something terribly mundane. Networks of fiber and digits are still human networks, prone to corruption and virtue alike. A virtual environment is still a natural environment. The extraordinary, in time, becomes ordinary. And undoubtedly we will still search for lines to cross.
end of cyberspace 01.11.2006, 7:26 AM
The End of Cyberspace is a brand-new blog by Alex Soojung-Kim Pang, former academic editor and print-to-digital overseer at Encyclopedia Britannica, and currently a research director at the Institute for the Future (no relation). Pang has been toying with this idea of the end of cyberspace for several years now, but just last week he set up this blog as "a public research notebook" where he can begin working through things more systematically. To what precise end, I'm not certain.
The end of cyberspace refers to the blurring, or outright erasure, of the line between the virtual and the actual world. With the proliferation of mobile devices that are always online, along with increasingly sophisticated social software and "Web 2.0" applications, we are moving steadily away from a conception of the virtual -- of cyberspace -- as a place one accesses exclusively through a computer console. Pang explains:
Our experience of interacting with digital information is changing. We're moving to a world in which we (or objects acting on our behalf) are online all the time, everywhere.
Designers and computer scientists are also trying hard to create a new generation of devices and interfaces that don't monopolize our attention, but ride on the edges of our awareness. We'll no longer have to choose between cyberspace and the world; we'll constantly access the first while being fully part of the second.
Because of this, the idea of cyberspace as separate from the real world will collapse.
If the future of the book, defined broadly, is about the book in the context of the network, then certainly we must examine how the network exists in relation to the world, and on what terms we engage with it. I'm not sure cyberspace has ever really been a home for the book, but it has, in a very short time, totally altered the way we read. Now, gradually, we return to the world. But changed. This could get interesting.
.tv 01.09.2006, 6:15 PM
People have been talking about internet television for a while now. But Google and Yahoo's unveiling of their new video search and subscription services last week at the Consumer Electronics Show in Las Vegas seemed to make it real.
Sifting through the predictions and prophecies that subsequently poured forth, I stumbled on something sort of interesting -- a small concrete discovery that helped put some of this in perspective. Over the weekend, Slate Magazine quietly announced its partnership with "meaningoflife.tv," a web-based interview series hosted by Robert Wright, author of Nonzero and The Moral Animal, dealing with big questions at the perilous intersection of science and religion.
Launched last fall (presumably in response to the intelligent design fracas), meaningoflife.tv is a web page featuring a playlist of video interviews with an intriguing roster of "cosmic thinkers" -- philosophers, scientists and religious types -- on such topics as "Direction in evolution," "Limits in science," and "The Godhead."
This is just one of several experiments in which Slate is fiddling with its text-to-media ratio. Today's Pictures, a collaboration with Magnum Photos, presents a daily gallery of images and audio-photo essays, recalling both the heyday of long-form photojournalism and a possible future of hybrid documentary forms. One problem is that it's not terribly easy to find these projects on Slate's site. The Magnum page has an ad tucked discreetly on the sidebar, but meaningoflife.tv seems to have disappeared from the front page after a brief splash this weekend. For a born-digital publication that has always thought of itself in terms of the web, Slate still suffers from a pretty appalling design, with its small headline area capping a more or less undifferentiated stream of headlines and teasers.
Still, I'm intrigued by these collaborations, especially in light of the forecast TV-net convergence. While internet TV seems to promise fragmentation, these projects provide a comforting dose of coherence -- a strong editorial hand and a conscious effort to grapple with big ideas and issues, like the reassuringly nutritious programming of PBS or the BBC. It's interesting to see text-based publications moving now into the realm of television. As Tivo, on demand, and now, the internet atomize TV beyond recognition, perhaps magazines and newspapers will fill part of the void left by channels.
Limited as it may now seem, traditional broadcast TV can provide us with valuable cultural touchstones, common frames of reference that help us speak a common language about our culture. That's one thing I worry we'll lose as the net blows broadcast media apart. Then again, even in the age of five gazillion cable channels, we still have our water-cooler shows, our mega-hits, our television "events." And we'll probably have them on the internet too, even when "by appointment" television is long gone. We'll just have more choice regarding where, when and how we get at them. Perhaps the difference is that in an age of fragmentation, we view these touchstone programs with a mildly ironic awareness of their mainstream status, through the multiple lenses of our more idiosyncratic and infinitely gratified niche affiliations. They are islands of commonality in seas of specialization, and maybe that makes them all the more refreshing: shows like "24" and "American Idol," a Ken Burns documentary, or major sporting events like the World Cup or the Olympics draw us like prairie dogs out of our niches, coming up for air from deep submersion in our self-tailored, optional worlds.
machinima: a call for papers and some thoughts on defining a form 01.07.2006, 10:22 AM
Grand Text Auto reports a call for proposals for essays to be included in a reader on machinima. Most often, machinima is the repurposing of video gameplay that is recorded and then re-edited with additional sound and voice-over.
People have been creating machinima with 3D video games, such as Quake, since the late 1990s. Even before that, in the late 80s, my friends and I would record our Nintendo victories on VHS, more in the spirit of DIY skate videos. However, in the last few years, the machinima community has seen tremendous growth, which coincided with the penetration of video editing equipment in the home. What started as ironic short movies have started to grow into fairly elaborate projects.
Until the last few years, social research on games in general was limited and sporadic. In the 1970s and 1980s, the University of Pennsylvania was the rare institution that supported a community of scholars investigating games and play. Now a vast proliferation of books and social research exists on gaming, and especially video games, as we have discussed here.
Although I love machinima, I am surprised at how quickly a reader is being produced. Machinima is still a rather fringe phenomenon, albeit a growing one. My first reaction is that machinima is not exactly ready for an entire reader on the subject. I look forward to being surprised by the final selection of essays.
Part of this reaction comes from the notion that machinima is a rather limited form. In my mind, machinima is the repurposing of video game output. However, machinima.org emphasizes capturing live-action, real-time digital animation as an essential part of the form, thereby removing the necessity of the video game. Most machinima is created within the virtual video gaming environment because that is where people are able to most readily control and capture 3D animation in real time. Real-time capture differs from traditional 3D animation tools (for instance Maya), where you program (and hence control) the motion of your object, background, and camera before you render (or record) the animation, rather than during it, as in machinima.
How broadly machinima is defined, as with any form, plays a role in the form's sustainability. Consider painting versus sculpture: "painting" seems confined to pigment on a 2D surface, while more expansive interpretations of the form get new labels such as mixed media or multimedia. Sculpture, on the other hand, has expanded beyond the traditional materials of wood, metal, and stone; thus the art of James Turrell, who works with landscape, light and interior space, can be called sculpture. I do not imply that painting is by any means dead. The 2004 Whitney Biennial had a surprisingly rich display of painting and drawing, as well as photography. Note, however, the distinction that photography is not considered painting, although photography is a 2D medium.
The word machinima comes from combining machine cinema, or machine animation. This foundation does pose limits on how far beyond repurposing video game output machinima can go. It is not convincing to try to include the repurposing of traditional film and animation under the label of machinima. Clearly, repurposing material such as Japanese movies or cartoons, as in Woody Allen's "What's Up, Tiger Lily?" and the Cartoon Network's "Sealab 2021," is not machinima. Furthermore, I am hesitant to call the repurposing of a digital animation machinima. I am not familiar with any examples, but I would not be surprised if they exist.
With the release of The Movies, people can use the game's 3D modeling engine to create wholly new movies. It is not readily clear to me whether The Movies allows for real-time control of its characters. If it does, then "The French Democracy" (the movie made by French teenagers about the Parisian riots in late 2005) should be considered machinima. However, if it does not, then I cannot differentiate "The French Democracy" from films made in Maya or Pixar's in-house applications. Clearly, Pixar's "Toy Story" is not machinima.
As digital forms emerge, the boundaries of our mental constructions guide our understanding of and discourse surrounding these forms. I'm realizing that how we define these constructions controls not only the relevance but also the sustainability of the forms themselves. Machinima defined solely as repurposed video game output is limiting, and ultimately less interesting than the potential of capturing real-time 3D modeling engines as a form of expression, whatever we end up calling it.
exploring the book-blog nexus 01.07.2006, 8:36 AM
It appears that Amazon is going to start hosting blogs for authors. Sort of. Amazon Connect, a new free service designed to boost sales and readership, will host what are essentially stripped-down blogs where registered authors can post announcements, news and general musings. Eventually, customers can keep track of individual writers by subscribing to bulletins that collect in an aggregated "plog" stream on their Amazon home page. But comments and RSS feeds -- two of the most popular features of blogs -- will not be supported. Engagement with readers will be strictly one-way, and connection to the larger blogosphere basically nil. A missed opportunity if you ask me.
Then again, Amazon probably figured it would be a misapplication of resources to establish a whole new province of blogland. This is more like the special events department of a book store -- arranging readings, book signings and the like. There has on occasion, however, been some entertaining author-public interaction in Amazon's reader reviews, most famously Anne Rice's lashing out at readers for their chilly reception of her novel Blood Canticle (link - scroll down to first review). But evidently Connect blogs are not aimed at sparking this sort of exchange. Genuine literary commotion will have to occur in the nooks and crannies of Amazon's architecture.
It's interesting, though, to see this happening just as our own book-blog experiment, Without Gods, is getting underway. Over the past few weeks, Mitchell Stephens has been writing a blog (hosted by the institute) as a way of publicly stoking the fire of his latest book project, a narrative history of atheism to be published next year by Carroll and Graf. While Amazon's blogs are mainly for PR purposes, our project seeks to foster a more substantive relationship between Mitch and his readers (though, naturally, Mitch and his publisher hope it will have a favorable effect on sales as well). We announced Without Gods a little over two weeks ago and already it has collected well over 100 comments, a high percentage of which are thoughtful and useful.
We are curious to learn how blogging will impact the process of writing the book. By working partially in the open, Mitch in effect raises the stakes of his research -- assumptions will be challenged and theses tested. Our hunch isn't so much that this procedure would be ideal for all books or authors, but that for certain ones it might yield some tangible benefit, whether due to the nature or breadth of their subject, the stage they're at in their thinking, or simply a desire to try something new.
An example. This past week, Mitch posted a very thinking-out-loud sort of entry on "a positive idea of atheism" in which he wrestles with Nietzsche and the concepts of void and nothingness. This led to a brief exchange in the comment stream where a reader recommended that Mitch investigate the writings of Gora, a self-avowed atheist and figure in the Indian independence movement in the 30s. Apparently, Gora wrote what sounds like a very intriguing memoir of his meeting with Gandhi (whom he greatly admired) and his various struggles with the religious component of the great leader's philosophy. Mitch had not previously been acquainted with Gora or his writings, but thanks to the blog and the community that has begun to form around it, he now knows to take a look.
What's more, Mitch is currently traveling in India, so this could not have come at a more appropriate time. It's possible that the commenter had noted this from a previous post, which may have helped trigger the Gora association in his mind. Regardless, these are the sorts of serendipitous discoveries one craves while writing a book. I'm thrilled to see the blog making connections where none previously existed.
digital universe and expert review 01.06.2006, 5:09 PM
The notion of expert review has been tossed around in the open-content community for a long time. Philosophically, those who lean towards openness tend to sneer at the idea of formalized expert review, trusting in the multiplied consciousness of the community to maintain high standards through less formal processes. Wikipedia is obviously the most successful project in this mode. The informal process has the benefit of speed, and avoids bureaucracy -- something which raises the barrier to entry and keeps out people who just don't have the time to deal with "process."
The other side of that coin is the belief that experts and editors encourage civil discourse at a high level; without them you'll end up with mob rule and lowest common denominator content. Editors encourage higher quality writing and thinking. Thinking and writing better than others is, in a way, the definition of expert. In addition, editors and experts tend to have a professional interest in the subject matter, as well as access to better resources. These are exactly the kind of people who are not discouraged by higher barriers to entry, and they are, by extension, the people that you want to create content on your site.
Larry Sanger thinks that, anyway. A Wikipedia co-founder, he gave an interview on news.com about a project that plans to create a better Wikipedia, using a combination of open content development and editorial review: The Digital Universe.
You can think of the Digital Universe as a set of portals, each defined by a topic, such as the planet Mars. And from each portal, there will be links to the best resources on the Web, including a lot of resources of different kinds that are prepared by experts and the general public under the management of experts. This will include an encyclopedia, as well as public domain books, participatory journalism, forums of various kinds and so forth. We'll build a community of experts and an online collaborative network of independent organizations, each of which has authority over its own discipline to select material and to build resources that are together displayed through a single free-information platform.
I have experience with the editor model from my time at About.com. The About.com model is based on 'guides'—nominal (and sometimes actual) experts on a chosen topic (say NASCAR, or anesthesiology)—who scour the internet, find good resources, and write articles and newsletters to facilitate understanding and keep communities up to date. The guides were overseen by a bevy of editors, who tended mostly to enforce the quotas for newsletters and set the line on quality. About.com has its problems, but it was novel and successful during its time.
The Digital Universe model is an improvement on the single guide model; it encourages a multitude of people to contribute to a reservoir of content. Measured by available resources, the Digital Universe model wins, hands down. As with all large, open systems, emergent behaviors will add even more to the system in ways that we cannot predict. The Digital Universe will have its own identity and quality, which, according to the blueprint, will be further enhanced by expert editors, shaping the development of a topic and polishing it to a high gloss.
Full disclosure: I find the idea of experts "managing the public" somehow distasteful, but I am compelled by the argument that this will bring about a better product. Sanger's essay on eliminating anti-elitism from Wikipedia clearly demonstrates his belief in the 'expert' methodology. I am willing to go along, mindful that we should be creating material that not only leads people to the best resources, but also allows them to engage more critically with the content. This is what experts do best. However, I'm pessimistic about experts mixing it up with the public. There are strong, and as I see it, opposing forces in play: an expert's reputation vs. public participation, industry cant vs. plain speech, and one expert opinion vs. another.
The difference between Wikipedia and the Digital Universe comes down, fundamentally, to the importance placed on authority. We'll see what shape the Digital Universe takes as the stresses of maintaining an authoritative process clash with the anarchy of the online public. I think we'll see that adopting authority as your rallying cry is a volatile position in a world of empowered authorship and a universe of alternative viewpoints.
the future of academic publishing, peer review, and tenure requirements 01.06.2006, 12:54 PM
There's a brilliant guest post today on the Valve by Kathleen Fitzpatrick, English and media studies professor/blogger, presenting "a sketch of the electronic publishing scheme of the future." Fitzpatrick, who recently launched ElectraPress, "a collaborative, open-access scholarly project intended to facilitate the reimagining of academic discourse in digital environments," argues convincingly why the embrace of digital forms and web-based methods of discourse is necessary to save scholarly publishing and bring the academy into the contemporary world.
In part, this would involve re-assessing our fetishization of the scholarly monograph as "the gold standard for scholarly production" and the principal ticket of entry for tenure. There is also the matter of re-thinking how scholarly texts are assessed and discussed, both prior to and following publication. Blogs, wikis and other emerging social software point to a potential future where scholarship evolves in a matrix of vigorous collaboration -- where peer review is not just a gate-keeping mechanism, but a transparent, unfolding process toward excellence.
There is also the question of academic culture, print snobbism and other entrenched attitudes. The post ends with an impassioned plea to the older generations of scholars, who, since tenured, can advocate change without the risk of being dashed on the rocks, as many younger professors fear.
...until the biases held by many senior faculty about the relative value of electronic and print publication are changed--but moreover, until our institutions come to understand peer-review as part of an ongoing conversation among scholars rather than a convenient means of determining "value" without all that inconvenient reading and discussion--the processes of evaluation for tenure and promotion are doomed to become a monster that eats its young, trapped in an early twentieth century model of scholarly production that simply no longer works.
I'll stop my summary there since this is something that absolutely merits a careful read. Take a look and join in the discussion.
questions about blog search and time 01.06.2006, 8:17 AM
Does anyone know of a good way to search for old blog entries on the web? I've just been looking at some of the available blog search resources and few of them appear to provide any serious advanced search options. The couple of major ones I've found that do (after an admittedly cursory look) are Google and Ice Rocket. Both, however, appear to be broken, at least when it comes to dates. I've tried them on three different browsers, on Mac and PC, and in each case the date menus seem to be frozen. It's very weird. They give you the option of entering a specific time range but won't accept the actual dates. Maybe I'm just having a bad tech day, but it's as if there's some conceptual glitch across the web vis-à-vis blogs and time.
Most blog search engines are geared toward searching the current blogosphere, but there should be a way to research older content. My first thought was that blog search engines crawl RSS feeds, most of which do not transmit the entirety of a blog's content, just the most recent entries. That would pose a problem for archival search.
Does anyone know what would be the best way to go about finding, say, old blog entries containing the keywords "new orleans superdome" from late August to late September 2005? Is it best to just stick with general web search and painstakingly comb through for blogs? If we agree that blogs have become an important kind of cultural document, then surely there should be a way to find them more than a month after they've been written.
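Absent a working archival blog search, one low-tech workaround is to filter a saved copy of a feed by date range yourself. Here's a minimal sketch in Python using only the standard library; the feed content is an inline sample, since a live feed would no longer carry entries from August 2005:

```python
# A minimal sketch of date-range filtering over an RSS feed, using only the
# standard library. The feed XML is an inline sample for illustration; in
# practice you'd need an archived copy of the feed.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>sample</title>
<item><title>Superdome conditions worsen</title>
<pubDate>Thu, 01 Sep 2005 12:00:00 GMT</pubDate></item>
<item><title>Unrelated October post</title>
<pubDate>Sat, 15 Oct 2005 12:00:00 GMT</pubDate></item>
</channel></rss>"""

def entries_between(rss_xml, start, end, keyword=None):
    """Return titles of items published in [start, end], optionally
    matching a keyword (case-insensitive)."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        published = parsedate_to_datetime(item.findtext("pubDate"))
        if start <= published <= end and (
            keyword is None or keyword.lower() in title.lower()
        ):
            hits.append(title)
    return hits

start = datetime(2005, 8, 25, tzinfo=timezone.utc)
end = datetime(2005, 9, 25, tzinfo=timezone.utc)
print(entries_between(SAMPLE_RSS, start, end, keyword="superdome"))
```

The point of the sketch is simply that the filtering itself is trivial; the hard part, as noted above, is that nobody seems to be keeping the archive to filter.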
video ipod 01.05.2006, 6:52 PM
Looks like it's hardware day at if:book. Just got a video iPod. Hmmm. this looks like another niche where Apple has handily beat Sony. the image is crisper and larger than i expected; the case is slimmer and lighter.
it's remarkably easy to convert video files to MP4 and load them onto the iPod. the experience is intimate. i'm experimenting with different genres: poetry, animation, family home video, short films. everything works. the iPod got handed around from ben to dan to ray to jesse. we came up with a bunch of ideas for projects we want to try. stay tuned.
first sighting of sony ebook reader 01.05.2006, 7:17 AM
this is a late addition to this post. i just realized that whatever the strengths and weaknesses of the Sony ebook reader, i think that most of the people writing about it, including me, have missed perhaps the most important aspect -- the device has Sony's name on it. correct me if i'm wrong, but this is the first time a major consumer electronics company has seen fit to put its name on an ebook reader in the US market. it's been a long time coming.
Reuters posted this image by Rick Wilking. every post i've seen so far is pessimistic about sony's chances. i'm doubtful myself, but will wait to see what kind of digital rights management they've installed. if it's easy to take
things off my desktop to read later, including pdfs and web pages, and if the MP3 player feature is any good they might be able to carve out a niche which they can expand over time if they keep developing the concept. i do wish it were a bit more stylish . . .
here's a link to ben's excellent post ipod for text.
useful rss 01.04.2006, 1:58 PM
Hi. I'm Jesse, the latest member to join the staff here at the Institute. I'm interested in network effects, online communities, and emergent behavior. Right now I'm interested in the tools we have available to control and manipulate RSS feeds. My goal is to collect a wide variety of feeds and tease out the threads that are important to me. In my experience, mechanical aggregation gives you quantity and diversity, but not quality and focus. So I did a quick investigation of the tools that exist to manage and manipulate feeds.
Sites like MetaFilter and Technorati skim the most popular topics in the blogosphere. But what sort of tools exist to help us narrow our focus? There are two tools that we can use right now: tag searches/filtering, and keyword searching. Tag searches (on Technorati) and tag filtering (on MetaFilter) drill down to specific areas, like "books" or "books and publishing." A casual search on MetaFilter was a complete failure, but Technorati, with its combination of tags and keyword search, produced good material.
There is also the Google Blog search. As Google puts it, you can 'find blogs on your favorite topics.' PageRank works, so PageRank applied to blogs should work too. Unfortunately it results in too many pages that, while higher ranked in the whole set of the Internet, either fail to be on topic or exist outside of the desired sub-spheres of a topic. For example, I searched for "gourmet food" and found one of the premier food blogs on the fourth page, just below Carpundit. Google blog search fails here because it can't get small enough to understand the relationships in the blogosphere, and relies more heavily on text retrieval algorithms that sabotage the results.
Finally, let's talk about aggregators. Here there is more human involvement in selecting the sites you're interested in reading. This creates a personalized network of sites that are related, if only by your personal interest. The problem is, you get what they want to write about. Managing a large collection of feeds can be tiresome when you're looking for specific information. Bloglines has a search function that allows you to find keywords inside your subscriptions, then treat the results as a feed. This neatly combines hand-picked sources with keyword or tag harvesting. The result: a slice from your trusted collection of authors about a specific topic.
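That slice idea is easy to sketch: run a keyword filter over a hand-picked set of feeds, so you get focus (your keyword) layered on quality (your chosen sources). The feed URLs and contents below are made up for illustration; a real version would fetch each URL rather than inline the XML:

```python
# A toy "slice" over hand-picked subscriptions: keyword-filter a small,
# trusted set of feeds. The subscription URLs and feed contents here are
# hypothetical; a real version would fetch each URL with urllib.
import xml.etree.ElementTree as ET

SUBSCRIPTIONS = {
    "http://example.com/bookblog.rss": """<rss version="2.0"><channel>
      <item><title>The future of the book</title>
        <description>Networked screens and publishing.</description></item>
      <item><title>Weekend links</title>
        <description>Miscellany.</description></item>
    </channel></rss>""",
    "http://example.com/techblog.rss": """<rss version="2.0"><channel>
      <item><title>RSS tricks</title>
        <description>Filtering feeds for book news.</description></item>
    </channel></rss>""",
}

def slice_feeds(subscriptions, keyword):
    """Return (source, title) pairs whose title or description
    mentions the keyword (case-insensitive)."""
    keyword = keyword.lower()
    matches = []
    for source, xml_text in subscriptions.items():
        for item in ET.fromstring(xml_text).iter("item"):
            text = item.findtext("title", "") + " " + item.findtext("description", "")
            if keyword in text.lower():
                matches.append((source, item.findtext("title")))
    return matches

for source, title in slice_feeds(SUBSCRIPTIONS, "book"):
    print(source, "->", title)
```

The design point is that the human does the source selection once, up front, and the machine does the topical filtering continuously afterward.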
What can we envision for the future of RSS? Affinity mapping and personalized recommendation systems could augment the tag/keyword search functionality to automatically generate a slice from a small network of trusted blogs. Automatic harvesting of whole swaths of linked entries for offline reading in a bounded hypertext environment. Reposting and remixing feed content on the fly based on text-processing algorithms. And we'll have to deal with the dissolving identity and trust relationships that are a natural consequence of these innovations.
two newspapers 01.04.2006, 11:21 AM
I picked up The New York Times from outside my door this morning knowing that the lead headline was going to be wrong. I still read the print paper every morning – I do read the electronic version, but I find that my reading there tends to be more self-selecting than I'd like it to be – but lately I find myself checking the Web before settling down to the paper and a cup of coffee. On the Web, I'd already seen the predictable gloating and hand-wringing. Because of some communication mixup, the papers went to press with the information that the trapped West Virginia coal miners were mostly alive; a few hours later it turned out that they were, in fact, mostly dead. A scrutiny of the front pages of the New York dailies at the bodega this morning revealed that just about all had the wrong news – only Hoy, a Spanish-language daily, didn't have the story, presumably because it went to press a bit earlier. At right is the front page of today's USA Today, the nation's most popular newspaper; click on the thumbnail for a more legible version. See also the gallery at their "newseum". (Note that this link won't show today's papers tomorrow – my apologies, readers of the future, there doesn't seem to be anything that can be done for you, copyright and all that.)
At left is another front page of a newspaper, The New York Times from April 20, 1950 (again, click to see a legible version). I found it last night at the start of Marshall McLuhan's The Mechanical Bride: Folklore of Industrial Man. Published in 1951, The Mechanical Bride is one of McLuhan's earliest works; in it, he primarily looks at the then-current world of print advertising, starting with the front page shown here. To my jaundiced eye, most of the book hasn't stood up that well; while it was undoubtedly very interesting at the time, being one of the first attempts to deal seriously with how people interact with advertisements from a critical perspective, fifty years and billions and billions of advertisements later it doesn't read as well as, say, Judith Williamson's Decoding Advertisements. But bits of it are still interesting: McLuhan presents this front page to talk about how Stéphane Mallarmé and the Symbolists found the newspaper to be the modern symbol of their day, with the different stories all jostling each other for prominence on the page.
But you don't – at least, I don't – immediately see that when you look at the front page that McLuhan exhibits. This was presumably an extremely ordinary front page when he was exhibiting it, just as the USA Today up top might be representative today. Looked at today, though, it's something else entirely, especially when you consider what newspapers look like now. You can notice this even in my thumbnails: when both papers are normalized to 200 pixels wide, you can't read anything in the old one, besides that it says "The New York Times" at the top, whereas you can make out the headlines of four stories in the USA Today. Newspapers have changed, not just from black & white to color, but in the way they present text and images. In the old paper there are only two photos, headshots of white men in the news – one a politician who's just given a speech, the other a doctor who's had his license revoked. The USA Today has perhaps an analogue to that photo in Jack Abramoff's perp walk; it also has five other photos: one of the miners' deluded family members (along with Abramoff, the only news photos), two sports-related photos – one of which seems to be stock footage of the Rose Bowl sign – a photo advertising television coverage inside, and a photo of two students for a human interest story. This being the USA Today, there's also a silly graph in the bottom left; the green strip across the bottom is an ad. Photos and graphics take up more than a third of the front page of today's paper.
What's overwhelming to me about the old Times cover is how much text there is. This was not a newspaper that was meant to be read at a glance – as you can do with the thumbnail of the USA Today. If you look at the Times more closely it looks like everything on the front page is serious news. You could make an argument here about the decline of journalism, but I'm not that interested in that. More interesting is how visual print culture has become. Technology has enabled this – a reasonably intelligent high-schooler could, I think, create a layout like the USA Today. But having this possibility available would also seem to have had an impact on the content – and whether McLuhan would have predicted that, I can't say.
wikipedia, lifelines, and the packaging of authority 01.04.2006, 7:37 AM
In a nice comment in yesterday's Times, "The Nitpicking of the Masses vs. the Authority of the Experts," George Johnson revisits last month's Seigenthaler smear episode and Nature magazine's Wikipedia-Britannica comparison, and decides to place his long term bets on the open-source encyclopedia:
It seems natural that over time, thousands, then millions of inexpert Wikipedians - even with an occasional saboteur in their midst - can produce a better product than a far smaller number of isolated experts ever could.
Reading it, a strange analogy popped into my mind: "Who Wants to Be a Millionaire." Yes, the game show. What does it have to do with encyclopedias, the internet and the re-mapping of intellectual authority? I'll try to explain. "Who Wants to Be a Millionaire" is a simple quiz show, very straightforward, like "Jeopardy" or "The $64,000 Question." A single contestant answers a series of multiple choice questions, and with each question the money stakes rise toward a million-dollar jackpot. The higher the stakes, the harder the questions (and some seriously overdone lighting and music is added for maximum stress). There is a recurring moment in the game when the contestant's knowledge fails and they have the option of using one of three "lifelines" that have been allotted to them for the show.
The first lifeline (and these can be used in any order) is the 50:50, which simply reduces the number of possible answers from four to two, thereby doubling your chances of selecting the correct one -- a simple jiggering of probabilities. The other two are more interesting. The second lifeline is a telephone call to a friend or relative at home who is given 30 seconds to come up with the answer to the stumper question. This is a more interesting kind of a probability, since it involves a personal relationship. It deals with who you trust, who you feel you can rely on. Last, and my favorite, is the "ask the audience" lifeline, in which the crowd in the studio is surveyed and hopefully musters a clear majority behind one of the four answers. Here, the probability issue gets even more intriguing. Your potential fortune is riding on the knowledge of a room full of strangers.
In most respects, "Who Wants to Be a Millionaire" is just another riff on the classic quiz show genre, but the lifeline option pegs it in time, providing a clue about its place in cultural history. The perceptive game show anthropologist would surely recognize that the lifeline is all about the network. It's what gives "Millionaire" away as a show from around the time of the tech bubble in the late 90s -- manifestly a network-era program. Had it been produced in the 50s, the lifeline option would have been more along the lines of "ask the professor!" Lights rise on a glass booth containing a mustached man in a tweed jacket sucking on a pipe. Our cliché of authority. But "Millionaire" turns not to the tweedy professor in the glass booth (substitute ivory tower) but rather to the swarming mound of ants in the crowd.
And that's precisely what we do when we consult Wikipedia. It isn't an authoritative source in the professor-in-the-booth sense. It's more lifeline number 3 -- hive mind, emergent intelligence, smart mobs, there is no shortage of colorful buzzwords to describe it. We've always had lifeline number 2. It's who you know. The friend or relative on the other end of the phone line. Or think of the whispered exchange between students in the college library reading room, or late-night study in the dorm. Suddenly you need a quick answer, an informal gloss on a subject. You turn to your friend across the table, or sprawled on the couch eating Twizzlers: When was the Glorious Revolution again? Remind me, what's the Uncertainty Principle?
With Wikipedia, this friend factor is multiplied by an order of millions -- the live studio audience of the web. This is the lifeline number 3, or network, model of knowledge. Individual transactions may be less authoritative, pound for pound, paragraph for paragraph, than individual transactions with the professors. But as an overall system to get you through a bit of reading, iron out a wrinkle in a conversation, or patch over a minor factual uncertainty, it works quite well. And being free and informal it's what we're more inclined to turn to first, much more about the process of inquiry than the polished result. As Danah Boyd puts it in an excellent, measured defense of Wikipedia, it "should be the first source of information, not the last. It should be a site for information exploration, not the definitive source of facts." Wikipedia advocates and critics alike ought to acknowledge this distinction.
So, having acknowledged it, can we then broker a truce between Wikipedia and Britannica? Can we just relax and have the best of both worlds? I'd like that, but in the long run it seems that only one can win, and if I were a betting man, I'd have to bet with Johnson. Britannica is bound for obsolescence. A couple of generations hence (or less), who will want it? How will it keep up with this larger, far more dynamic competitor that is already roughly equal in quality in certain crucial areas?
Just as the printing press eventually drove the monastic scriptoria out of business, Wikipedia's free market of knowledge, with all its abuses and irregularities, its palaces and slums, will outperform Britannica's centralized command economy, with its neat, cookie-cutter housing slabs, its fair, dependable, but ultimately less dynamic, system. But, to stretch the economic metaphor just a little further before it breaks, it's doubtful that the free market model will remain unregulated for long. At present, the world is beginning to take notice of Wikipedia. A growing number are championing it, but for most, it is more a grudging acknowledgment, a recognition that, for better or for worse, what's going on with Wikipedia is significant and shouldn't be ignored.
Eventually we'll pass from the current phase into widespread adoption. We'll realize that Wikipedia, being an open-source work, can be repackaged in any conceivable way, for profit even, with no legal strings attached (it already has been on sites like about.com and thousands -- probably millions -- of spam and link farms). As Lisa intimated in a recent post, Wikipedia will eventually come in many flavors. There will be commercial editions, vetted academic editions, handicap-accessible editions. Darwinist editions, creationist editions. Google, Yahoo and Amazon editions. Or, in the ultimate irony, Britannica editions! (If you can't beat 'em...)
All the while, the original Wikipedia site will carry on as the sprawling community garden that it is. The place where a dedicated minority take up their clippers and spades and tend the plots. Where material is cultivated for packaging. Right now Wikipedia serves best as an informal lifeline, but soon enough, people will begin to demand something more "authoritative," and so more will join in the effort to improve it. Some will even make fortunes repackaging it in clever ways for which people or institutions are willing to pay. In time, we'll likely all come to view Wikipedia, or its various spin-offs, as a resource every bit as authoritative as Britannica. But when this happens, it will no longer be Wikipedia.
Authority, after all, is a double-edged sword, essential in the pursuit of truth, but dangerous when it demands that we stop asking questions. What I find so thrilling about the Wikipedia enterprise is that it is so process-oriented, that its work is never done. The minute you stop questioning it, stop striving to improve it, it becomes a museum piece that tells the dangerous lie of authority. Even those of us who do not take part in the editorial gardening, who rely on it solely as lifeline number 3, feel the crowd rise up to answer our query; we take the knowledge it gives us, but not (unless we are lazy) without a grain of salt. The work is never done. Crowds can be wrong. But we were not asking for all doubts to be resolved, we wanted simply to keep moving, to keep working. Sometimes authority is just a matter of packaging, and the packaging bonanza will soon commence. But I hope we don't lose the original Wikipedia -- the rowdy community garden, lifeline number 3. A place that keeps you on your toes -- that resists tidy packages.
new mission statement 01.02.2006, 4:30 PM
the institute is a bit over a year old now. our understanding of what we're doing has deepened considerably during the year, so we thought it was time for a serious re-statement of our goals. here's a draft for a new mission statement. we're confident that your input can make it better, so please send your ideas and criticisms.
The Institute for the Future of the Book is a project of the Annenberg Center for Communication at USC. Starting with the assumption that the locus of intellectual discourse is shifting from printed page to networked screen, the primary goal of the Institute is to explore, understand and hopefully influence this evolution.
We use the word "book" metaphorically. For the past several hundred years, humans have used print to move big ideas across time and space for the purpose of carrying on conversations about important subjects. Radio, movies, and TV emerged in the last century, and now with the advent of computers we are combining media to forge new forms of expression. For now, we use "book" to convey the past, the present transformation, and a number of possible futures.
THE WORK & THE NETWORK
One major consequence of the shift to digital is the addition of graphical, audio, and video elements to the written word. More profound, however, are the consequences of the relocation of the book within the network. We are transforming books from bounded objects to documents that evolve over time, bringing about fundamental changes in our concepts of reading and writing, as well as the role of author and reader.
SHORT TERM/LONG TERM
The Institute values theory and practice equally. Part of our work involves doing what we can with the tools at hand (short term). Examples include last year's Gates Memory Project or the new author's thinking-out-loud blogging effort. Part of our work involves trying to build new tools and effect industry-wide change (medium term): see the Sophie Project and Next\Text. And a significant part of our work involves blue-sky thinking about what might be possible someday, somehow (long term). Our blog, if:book, covers the full range of our interests.
As part of the Mellon Foundation's project to develop an open-source digital infrastructure for higher education, the Institute is building Sophie, a set of high-end tools for writing and reading rich media electronic documents. Our goal is to enable anyone to assemble complex, elegant, and robust documents without the necessity of mastering overly complicated applications or the help of programmers.
NEW FORMS, NEW PROCESSES
Academic institutes arose in the age of print, which informed the structure and rhythm of their work. The Institute for the Future of the Book was born in the digital era, and we seek to conduct our work in ways appropriate to the emerging modes of communication and rhythms of the networked world. Freed from the traditional print publishing cycles and hierarchies of authority, the Institute seeks to conduct its activities as much as possible in the open and in real time.
HUMANISM & TECHNOLOGY
Although we are excited about the potential of digital technologies to amplify human potential in wondrous ways, we believe it is crucial to consciously consider the social impact of the long-term changes to society afforded by new technologies.
Although the institute is based in the U.S., we take seriously the potential of the internet and digital media to transcend borders. We think it's important to pay attention to developments all over the world, recognizing that the future of the book will likely be determined as much by Beijing, Buenos Aires, Cairo, Mumbai and Accra as by New York and Los Angeles.
the year in ideas 01.01.2006, 11:17 PM
In developed nations, and in the US in particular, high-speed wireless access to the Internet is a given for the affluent and an achievable possibility for most. In the rest of the world, owning a computer is a dream for a community, and a fantasy for the individual. At this moment, away in the central mountains of Colombia, I am virtually disconnected from the world, though quite connected to the splendor of nature. I'm writing this relying on uncertain electricity that, if it fails, will be backed up by a gas generator that will keep food fresh and beer cold, the hell with the laptop. Reading one of last week's Medellín newspapers, I was surprised to see news of the advent of the BlackBerry as a technological advance that will reach the city in early 2006. Medellín is a booming, sophisticated Third World city of more than 3.5 million people. This piece of news made clearer for me, more than ever, how in the US we take technology for granted when, in fact, it is the domain of only a small minority of the world. This doesn't mean that the rest don't need connectivity; it means that if they are being pushed to play in the global monopoly game, they must have it. From that perspective, I bring up the New York Times Magazine's fifth edition of "The Year in Ideas" (12/11/2005). As always, it examines a number of trends and fads that, in one way or another, were markers of the year. Considering the year at the Institute and its pursuit of the meaningful among myriad innovations, I'll review some of the ideas the Times chose that overlap with the ones the Institute brought to the fore throughout the year. Beyond the noteworthy technological inventions, it is the human contribution, the users' innovative ways of dealing with what already exists on the Internet, that makes them worth reflecting upon.
The political power of the blogosphere is an accepted fact, but it is the media infrastructure that passes on what is said on blogs that has given the conservatives the upper hand. Even though Howard Dean's campaign epitomized the power of the liberal blogosphere, the so-called "Net roots" continue to be regarded as the terrain of young people with time on their hands to participate in virtual dialogue. The liberals' approach to blogs, as a forum to air ideas and to criticize not only their opponents but also each other, differs greatly from that of the conservatives. They are not particularly interested in introspection and use the Web to support their issues and to induce emotional responses from their base. But it is their connection to a network of local and national talk-radio and TV shows that has given exposure and credibility to the conservative blogs. Here we have a sad, but true, example of how it is the coalescence of different media that matters, not their insular existence.
The news media increasingly have been using the Web both as an enhancer and as a way to achieve two-way communications with the public. An exciting example of the meeting of journalism and the blog is the New Orleans Times-Picayune. Before Katrina hit the city, they set up a page on their Web site called "Hurricane Katrina Weblog." Its original function was supplemental. However, when the flood came and the printed edition was shut down, the blog became the newspaper. Even though the paper's staff kept compiling a daily edition as a download, the blog was brimming with posts appearing throughout the day and readership grew exponentially, getting 20 to 30 million page views per day. The paper continued posting carefully edited stories interspersed with short dispatches phoned or e-mailed to the newspaper's new headquarters in Baton Rouge. In the words of Paul Tough, "what resulted was exciting and engrossing and new, a stream-of-consciousness hybrid that combined the immediacy and scattershot quality of a blog with the authority and on-the-scene journalism of a major daily newspaper."
Joshua M. Marshall, editor of the blog Talkingpointsmemo.com, decided to ask his readers to share their knowledge of the ever-spreading Washington scandals in an effort to keep abreast of the news. He called his experiment "open-source investigative reporting." Marshall's blog has about 100,000 readers a day, and he saw in them the potential to gather news on a nationwide basis. For instance, he relied on his readers' expertise in the Congressional ethics code to determine whether Jack Abramoff's gifts were violations. What Marshall has come up with is a very large news-gathering and fact-checking network, a healthy alternative to traditional journalism.
Podcasting has become another alternative to broadcasting, one that provides the ability to access audio and video programs as soon as they are delivered to your computer, or to pile them up as you do with written media. Now, through iTunes, we are experiencing the advent of homemade video podcasts. Some of them already have thousands of viewers. Potentially, this could become the next step of community access television.
The mash-up of data from different web sites has gained thousands of adherents. One of the first was Adrian Holovaty's Chicagocrime.org, a Google street map of Chicago overlaid with crime statistics from the Chicago Police online database. Following this, many people started to make annotated maps, organizing all sorts of information geographically, from real-estate listings to memory maps. The social possibilities of this personal cartography are enormous. The Times offers Matthew Haughey's "My Childhood, Seen by Google Maps" as an example of an elegant and evocative project. If we think of the illuminated maps that expanded the world and ignited the imagination of many explorers, this new form of cartography brings a similar human dimension to the perfect satellite maps.
Thomas Vander Wal has coined the term "folksonomy" for tagging taken to the level of taxonomy. The labeling of people's photos, on Flickr for instance, grows richer with the additions of others who tag the same photos for their own use. Daniel H. Pink claims, "The cumulative force of all the individual tags can produce a bottom-up, self-organized system for classifying mountains of digital material." In an interesting twist, several institutions that are part of the Art Museum Community Cataloging Project, including the San Francisco Museum of Modern Art and the Guggenheim, are taking a folksonomic approach to their online collections by allowing patrons to supplement the annotations done by curators, making them more accessible and useful to people.
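The bottom-up aggregation Pink describes can be sketched in a few lines of Python. This is only an illustrative toy, not how Flickr or the museums actually implement it; the users, photos, and tags below are invented:

```python
from collections import Counter

# Hypothetical personal tag sets: each user labels photos for their own use.
user_tags = {
    "alice": {"photo1": ["sunset", "beach"], "photo2": ["dog"]},
    "bob":   {"photo1": ["beach", "vacation"], "photo2": ["dog", "pet"]},
    "carol": {"photo1": ["sunset", "beach"]},
}

def folksonomy(tag_sets):
    """Merge everyone's private tags into per-photo tag frequencies."""
    merged = {}
    for tags_by_photo in tag_sets.values():
        for photo, tags in tags_by_photo.items():
            merged.setdefault(photo, Counter()).update(tags)
    return merged

counts = folksonomy(user_tags)
# With no curator involved, the consensus labels emerge from the tallies:
print(counts["photo1"].most_common(2))  # [('beach', 3), ('sunset', 2)]
```

No one agreed on a vocabulary in advance; the classification is simply the cumulative weight of individual acts of labeling, which is the whole point of the folksonomic approach.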
The effort of Nicholas Negroponte, chairman of MIT's Media Lab, to raise the funds for a group of his colleagues to design a no-frills, durable, and cheap computer for the children of the world is a terrific one. Laptops equipped with a hand crank, for use in the absence of electricity, and with wireless peer-to-peer connections to create a local network will make it easier to access the Internet from economically challenged areas of the world, notwithstanding the difficulties this presents. The detractors of Negroponte's effort claim that children in Africa, for instance, will not benefit from having access to the libraries of the world if they don't understand foreign languages; that children with little exposure to modern civilization will suddenly have access to pornography and commercialism; and that wealthy donors should concentrate on malaria eradication before giving an e-mail address to every child. Negroponte, like Jeffrey Sachs, Bono, Kofi Annan, and many others, knows that education, along with connectivity, is key to bringing the next generation out of the poverty cycle to which they have been condemned by foreign powers interested in the resources of their countries, and by every corrupt local regime that has worked along the lines of those powers. The $100 laptop, accompanied by a sound and humane program for its use, will bring enormous benefits.
A. O. Scott's review of the documentary as a genre that supplies satisfaction not from Hollywood formulas but from the real world reminded me of Bob Stein's quest for thrills beyond technologically enhanced reality. One factor of the postmodern condition is the unprecedented, immediate access that technology gives us to applied scientific knowledge, an access that has permeated our relationships with and toward everything. Knowledge has acquired an unsettling superficiality because it has become an economic product. Technology is used and abused, forced upon the consumer in all sorts of ways, and Hollywood's productions are the obvious example. 2005 was the year of the documentary, and I suspect this has to do with a yearning for the human, for the real, for the immediate, for the unmediated. A. O. Scott eloquently traces that line when he praises Luc Jacquet's "March of the Penguins" as the documentary that has it all: epic journey, humor, tenderness, and suspense, as well as "an occasion for culture-war skirmishing. In short it provided everything you'd want from a night at the movies, without stars or special effects. It's almost too good to be true." With that I greet 2006.