google and the myth of universal knowledge: a view from europe 06.30.2006, 2:33 AM
I just came across the pre-pub materials for a book, due out this November from the University of Chicago Press, by Jean-Noël Jeanneney, president of the Bibliothèque Nationale de France and famous critic of the Google Library Project. You'll remember that within months of Google's announcement of partnership with a high-powered library quintet (Oxford, Harvard, Michigan, Stanford and the New York Public), Jeanneney issued a battle cry across Europe, warning that Google, far from creating a universal world library, would end up cementing Anglo-American cultural hegemony across the internet, eroding European cultural heritages through the insidious linguistic uniformity of its database. The alarm woke Jacques Chirac, who, in turn, lit a fire under all the nations of the EU, leading them to draw up plans for a European Digital Library. A digitization space race had begun between the private enterprises of the US and the public bureaucracies of Europe.
Now Jeanneney has funneled his concerns into a 96-page treatise called Google and the Myth of Universal Knowledge: a View from Europe. The original French version is pictured above. From U. Chicago:
Jeanneney argues that Google's unsystematic digitization of books from a few partner libraries and its reliance on works written mostly in English constitute acts of selection that can only extend the dominance of American culture abroad. This danger is made evident by a Google book search the author discusses here--one run on Hugo, Cervantes, Dante, and Goethe that resulted in just one non-English edition, and a German translation of Hugo at that. An archive that can so easily slight the masters of European literature--and whose development is driven by commercial interests--cannot provide the foundation for a universal library.
Now I'm no big lover of Google, but there are a few problems with this critique, at least as summarized by the publisher. First of all, Google is just barely into its scanning efforts, so naturally, search results will often come up threadbare or poorly proportioned. But there's more that complicates Jeanneney's charges of cultural imperialism. Last October, when the copyright debate over Google's ambitions was heating up, I received an informative comment on one of my posts from a reader at the Online Computer Library Center (OCLC), which had recently completed a profile of the collections of the five Google partner libraries and found, among other things, that just under half of the books that could make their way into Google's database are in English:
More than 430 languages were identified in the Google 5 combined collection. English-language materials represent slightly less than half of the books in this collection; German-, French-, and Spanish-language materials account for about a quarter of the remaining books, with the rest scattered over a wide variety of languages. At first sight this seems a strange result: the distribution between English and non-English books would be more weighted to the former in any one of the library collections. However, as the collections are brought together there is greater redundancy among the English books.
Still, the "driven by commercial interests" part of Jeanneney's attack is important and on-target. I worry less about the dominance of any single language (I assume Google wants to get its scanners on all books in all tongues), and more about the distorting power of the market on the rankings and accessibility of future collections, not to mention the effect on the privacy of users, whose search profiles become company assets. France tends much further toward the enlightenment end of the cultural policy scale -- witness what they (almost) achieved with their anti-DRM iTunes interoperability legislation. Can you imagine James Billington, of our own Library of Congress, asserting such leadership on the future of digital collections? LOC's feeble World Digital Library effort is a mere afterthought to what Google and its commercial rivals are doing (they even receive private investment from Google). Most public debate in this country is also of the afterthought variety. The privatization of public knowledge plows ahead, and yet few complain. Good for Jeanneney and the French for piping up.
the commodification of news / the washingtonpost.com turns 10 06.29.2006, 8:06 AM
It began with what is still referred to within the Washington Post organization as the "Kaiser Memo." In 1992, Bob Kaiser, then managing editor, wrote a handwritten memo on the way back from a technology conference in Japan, positing the development of an electronic newspaper. In 1996, washingtonpost.com was launched. Last week, it marked its 10th year with three insightful articles. The first gives a brief overview of the effect of Kaiser's early vision, recounting some of the ups and downs, from losing millions in the heady dot.com bubble of the 90s to turning its first profit two years ago. Lessons were learned along the way, from the surge in traffic that came with coverage of the Clinton-Lewinsky scandal, to the bottlenecks of the 2000 US presidential election, to the vital role online news played during 9/11 and its aftermath. Ten years later, the online news landscape looks nothing like what people, including Kaiser, originally envisioned, which was basically a slight modification of traditional news forms.
The other two articles serve as counterpoints to each other. Jay Rosen, NYU journalism professor and blogger on PressThink, reflects on the Internet as a disruptive technology in the world of journalism. Washington Post staff writer Patricia Sullivan argues that traditional journalism and news organizations are still relevant and vital for democracy. Although both authors end up in the same place (having both traditional and new forms is good), their approaches play off each other in interesting ways.
There is a tension between the two articles, in that Sullivan and Rosen are focusing on different things. Sullivan seems to be defending the viability of the traditional media, in terms of business models and practices. She acknowledges that the huge profit margins are shrinking and revenues are stagnant. This is not surprising: the rise of citizen journalism and "arm chair" news analysts, as well as free online access to print and born-digital reporting, all contribute to making news a commodity rather than a scarce resource. Few cities still have more than one daily newspaper. Just as cable news channels took market share from the evening network news, people can now read online versions of newspapers from around the country and read feeds from web news aggregators.
With the increasing number of voices in print, network television and cable, news is becoming increasingly commodified. Commodified here means that one outlet's news coverage is becoming indistinguishable from another's. It is useful to note Sullivan's observation that broad major weekly magazines such as Time, Newsweek, and US Weekly are losing readers, while weeklies with specialized perspectives, such as The Economist and the New Yorker, have increasing circulation. If a reader cannot distinguish between the reporting of Time, Newsweek, or US Weekly, then it is easy to move among the three or to another commodified online news source. The examples of the Economist and the New Yorker thus show the importance of distinct voices, which readers come to expect, coupled with strong writing. Having an established perspective is becoming much more important to news readers.
If general news is becoming commodified, then news sources that differentiate their coverage will have an increased value, which people are willing to pay money to read. Rosen comes to a similar conclusion when he mentions that in 2004 he called for some major news organizations to take a strong left position with "oppositional (but relentlessly factual)" coverage of the White House. His proposal was decried by many, including staff at CNN, who claimed that it would destroy their credibility. Rosen asks why a major news organization cannot do for the left what Fox News has done for the right.
Rosen directly, and Sullivan indirectly, suggest that one key feature in the reshuffling of news will be the importance of voice and perspective. If a new publication can create a credible and distinct voice, they claim, it will attract a sustainable audience, even in the age of free, commodified news.
Sullivan closes by discussing how the kind of investigative reporting that reveals secret prisons and government eavesdropping is expensive and time-consuming, and requires subsidies from lighter news. However, history shows that the traditional newsroom is not infallible, as seen in the lack of rigor with which journalists examined claims of weapons of mass destruction in the run-up to the invasion of Iraq. When Sullivan cites that "almost no online news sites invest in original, in-depth and scrupulously edited news reporting," it is clear that her conceptualization of new journalism is still tied to the idea of the centralized news organization. In the distributed realm of the blogosphere and p2p, however, we have seen the kind of reporting Sullivan describes come not from single journalists but from collaborative and decentralized networks of concerned "amateurs." Rosen notes how the blogosphere was able to unravel the CBS report on President Bush's National Guard service. Likewise, technical problems with electronic voting machines in the 2004 election (an example Yochai Benkler often recounts) were revealed by using the network: people drew on their individual knowledge to do research, uncover facts, and report findings in a way that would be quite difficult for a news organization to replicate.
Rosen, meanwhile, finishes with a description of how, during the Indian Ocean tsunami, despite Reuters' 2,300 journalists and 1,000 stringers, no one was in the area to provide reporting as a concerned world waited for coverage. Tourists armed with amateur equipment provided the watching world with the best and only digital photographs and video from the devastated areas. For Reuters to report anything, it had to include amateur journalism until professional journalists could be deployed to supplement the coverage.
Not surprisingly, ten years on, washingtonpost.com, along with the rest of the news media industry, is still figuring out how to use and grow with the Internet. Nor is it surprising that their initial strategy was to re-purpose their print content for the web: we understand new media based on the conventions of old media. But the introduction of the Internet to newspapers was more than the addition of a new distribution channel. With increased access to information and the low cost of entry, news is no longer a scarce resource. In the age of commodified news, washingtonpost.com, the political blog network, major daily newspaper columnists, and the editors-in-chief of weekly news magazines are all striving to create credible and reliable points of view. Active news consumers are better for it.
on the future of peer review in electronic scholarly publishing 06.28.2006, 7:08 AM
Over the last several months, as I've met with the folks from if:book and with the quite impressive group of academics we pulled together to discuss the possibility of starting an all-electronic scholarly press, I've spent an awful lot of time thinking and talking about peer review -- how it currently functions, why we need it, and how it might be improved. Peer review is extremely important -- I want to acknowledge that right up front -- but it threatens to become the axle around which all conversations about the future of publishing get wrapped, like Isadora Duncan's scarf, strangling any possible innovations in scholarly communication before they can get launched. In order to move forward with any kind of innovative publishing process, we must solve the peer review problem, but in order to do so, we first have to separate the structure of peer review from the purposes it serves -- and we need to be a bit brutally honest with ourselves about those purposes, distinguishing between those purposes we'd ideally like peer review to serve and those functions it actually winds up fulfilling.
The issue of peer review has of course been brought back to the front of my consciousness by the experiment with open peer review currently being undertaken by the journal Nature, as well as by the debate about the future of peer review that the journal is currently hosting (both introduced last week here on if:book). The experiment is fairly simple: the editors of Nature have created an online open review system that will run parallel to its traditional anonymous review process.
From 5 June 2006, authors may opt to have their submitted manuscripts posted publicly for comment.
Any scientist may then post comments, provided they identify themselves. Once the usual confidential peer review process is complete, the public 'open peer review' process will be closed. Editors will then read all comments on the manuscript and invite authors to respond. At the end of the process, as part of the trial, editors will assess the value of the public comments.
As several entries in the web debate that is running alongside this trial make clear, though, this is not exactly a groundbreaking model; the editors of several other scientific journals that already use open review systems to varying extents have posted brief comments about their processes. Electronic Transactions in Artificial Intelligence, for instance, has a two-stage process: a three-month open review stage, followed by a speedy up-or-down refereeing stage (with some time for revisions, if desired, in between). This process, the editors acknowledge, has produced some complications in the notion of "publication," as the texts in the open review stage are already freely available online; in some sense, the journal itself has become a vehicle for re-publishing selected articles.
Peer review is, by this model, designed to serve two different purposes -- first, fostering discussion and feedback amongst scholars, with the aim of strengthening the work that they produce; second, filtering that work for quality, such that only the best is selected for final "publication." ETAI's dual-stage process makes this bifurcation in the purpose of peer review clear, and manages to serve both functions well. Moreover, by foregrounding the open stage of peer review -- by considering an article "published" during the three months of its open review, but then only "refereed" once anonymous scientists have held their up-or-down vote, a vote that comes only after the article has been read, discussed, and revised -- this kind of process seems to return the center of gravity in peer review to communication amongst peers.
I wonder, then, about the relatively conservative move that Nature has made with its open peer review trial. First, the journal is at great pains to reassure authors and readers that traditional, anonymous peer review will still take place alongside open discussion. Beyond this, however, there seems to be a relative lack of communication between those two forms of review: open review will take place at the same time as anonymous review, rather than as a preliminary phase, preventing authors from putting the public comments they receive to use in revision; and while the editors will "read" all such public comments, it appears that only the anonymous reviews will be considered in determining whether any given article is published. Is this caution about open review an attempt to avoid throwing out the baby of quality control with the bathwater of anonymity? In fact, the editors of Atmospheric Chemistry and Physics present evidence (based on their two-stage review process) that open review significantly increases the quality of articles a journal publishes:
Our statistics confirm that collaborative peer review facilitates and enhances quality assurance. The journal has a relatively low overall rejection rate of less than 20%, but only three years after its launch the ISI journal impact factor ranked Atmospheric Chemistry and Physics twelfth out of 169 journals in 'Meteorology and Atmospheric Sciences' and 'Environmental Sciences'.
These numbers support the idea that public peer review and interactive discussion deter authors from submitting low-quality manuscripts, and thus relieve editors and reviewers from spending too much time on deficient submissions.
By keeping anonymous review and open review separate, without allowing the open any precedence, Nature is allowing itself to avoid asking any risky questions about the purposes of its process, and is perhaps inadvertently maintaining the focus on peer review's gatekeeping function. The result of such a focus is that scholars are less able to learn from the review process, less able to put comments on their work to use, and less able to respond to those comments in kind.
If anonymous, closed peer review processes aren't facilitating scholarly discourse, what purposes do they serve? Gatekeeping, as I've suggested, is a primary one; as almost all of the folks I've talked with this spring have insisted, peer review is necessary to ensure that the work published by scholarly outlets is of sufficiently high quality, and anonymity is necessary in order to allow reviewers the freedom to say that an article should not be published. In fact, this question of anonymity is quite fraught for most of the academics with whom I've spoken; they have repeatedly responded with various degrees of alarm to suggestions that their review comments might in fact be more productive delivered publicly, as part of an ongoing conversation with the author, rather than as a backchannel, one-way communication mediated by an editor. Such a position may be justifiable if, again, the primary purpose of peer review is quality control, and if the process is reliably scrupulous. However, as other discussants in the Nature web debate point out, blind peer review is not a perfect process, subject as it is to all kinds of failures and abuses, ranging from flawed articles that nonetheless make it through the system to ideas that are appropriated by unethical reviewers, with all manner of cronyism and professional jealousy in between.
So, again, if closed peer review processes aren't serving scholars in their need for feedback and discussion, and if they can't be wholly relied upon for their quality-control functions, what's left? I'd argue that the primary purpose that anonymous peer review actually serves today, at least in the humanities (and that qualifier, and everything that follows from it, opens a whole other can of worms that needs further discussion -- what are the different needs with respect to peer review in the different disciplines?), is that of institutional warranting, of conveying to college and university administrations that the work their employees are doing is appropriate and well-thought-of in its field, and thus that these employees are deserving of ongoing appointments, tenure, promotions, raises, and whathaveyou.
Are these the functions that we really want peer review to serve? Vast amounts of scholars' time is poured into the peer review process each year; wouldn't it be better to put that time into open discussions that not only improve the individual texts under review but are also, potentially, productive of new work? Isn't it possible that scholars would all be better served by separating the question of credentialing from the publishing process, by allowing everything through the gate, by designing a post-publication peer review process that focuses on how a scholarly text should be received rather than whether it should be out there in the first place? Would the various credentialing bodies that currently rely on peer review's gatekeeping function be satisfied if we were to say to them, "no, anonymous reviewers did not determine whether my article was worthy of publication, but if you look at the comments that my article has received, you can see that ten of the top experts in my field had really positive, constructive things to say about it"?
Nature's experiment is an honorable one, and a step in the right direction. It is, however, a conservative step, one that foregrounds the institutional purposes of peer review rather than the ways that such review might be made to better serve the scholarly community. We've been working this spring on what we imagine to be a more progressive possibility, the scholarly press reimagined not as a disseminator of discrete electronic texts, but instead as a network that brings scholars together, allowing them to publish everything from blogs to books in formats that allow for productive connections, discussions, and discoveries. I'll be writing more about this network soon; in the meantime, however, if we really want to energize scholarly discourse through this new mode of networked publishing, we're going to have to design, from the ground up, a productive new peer review process, one that makes more fruitful interaction among authors and readers a primary goal.
the least interesting conversation in the world continues 06.27.2006, 1:47 AM
Much as I hate to dredge up Updike and his crusty rejoinder to Kevin Kelly's "Scan this Book" at last month's Book Expo, The New York Times has refused to let it die, re-printing his speech in the Sunday Book Review under the headline, "The End of Authorship." We should all thank the Times for perpetuating this most uninteresting war of words about the publishing future. Here, once again, is Updike:
Books traditionally have edges: some are rough-cut, some are smooth-cut, and a few, at least at my extravagant publishing house, are even top-stained. In the electronic anthill, where are the edges? The book revolution, which, from the Renaissance on, taught men and women to cherish and cultivate their individuality, threatens to end in a sparkling cloud of snippets.
I was reading Christine Boese's response to this (always an exhilarating antidote to the usual muck), where she wonders about Updike's use of history:
The part of this that is the most peculiar to me is the invoking of the Renaissance. I'd characterize that period as a time of explosive artistic and intellectual growth unleashed largely by social unrest due to structural and technological changes.
....swung the tipping point against the entrenched power arteries of the Church and Aristocracy, toward the rising merchant class and new ways of thinking, learning, and making, the end result was that the "fruit basket upset" of turning the known world's power structures upside down opened the way to new kinds of art and literature and science.
So I believe we are (or were) in a similar entrenched period like that now. Except that there is a similar revolution underway. It unsettles many people. Many are brittle and want to fight it. I'm no determinist. I don't see it as an inevitability. It looks to me more like a shift in the prevailing winds. The wind does not deterministically affect all who are buffeted the same way. Some resist, some bend, some spread their wings and fly off to wherever the wind will take them, for good or ill.
Normally, I'd hope the leading edge of our best artists and writers would understand such a shift, would be excited to be present at the birth of a new Renaissance. So it puzzles me that John Updike is sounding so much like those entrenched powers of the First and Second Estate who faced the Enlightenment and wondered why anyone would want a mass-printed book when clearly monk-copied manuscripts from the scriptoria are so much better?!
I say it again, it's a shame that Kelly, the uncritical commercialist, and Updike, the nostalgic elitist, have been the ones framing the public debate. For most of us, Google is neither the eclipse nor dawn of authorship, but just a single feature of a shifting landscape. Search is merely a tool, a means: the books themselves are the end. Yet, neither Google Book Search, which is simply an apparatus for extracting new profits from the transmission and search of books, nor the present-day publishing industry, dominated as it is by mega-conglomerates with their penchant for blockbusters (our culture haunted by vast legions of the out-of-print), serves those ends very well. And yet these are the competing futures of the book: lonely forts and sparkling clouds. Or so we're told.
a girl goes to work (infographic video) 06.26.2006, 10:24 AM
It's not often that you see infographics with soul. Even though visuals are significantly more fun to look at than actual data tables, the oversimplification of infographics tends to suck out the interest in favor of making things quickly comprehensible (often to the detriment of the true data points, like the 2000 election map). This Röyksopp video, on the other hand, a delightful crossover between games, illustration, and infographic, is all about the storyline, relegating data to a secondary role. This is not pure data visualization along the lines of the front page feature in USA Today. It is, instead, a touching story encased in the traditional visual language and iconography of infographics. The video's currency belies its age: it won the 2002 MTV Europe music video award for best video.
Our information environment is growing both more dispersed and more saturated. Infographics serve as a filter, distilling hundreds of data points down into comprehensible form. They help us peer into the impenetrable data pools in our day to day life, and, in the best case, provide an alternative way to reevaluate our surroundings and make better decisions. (Tufte has also famously argued that infographics can be used to make incredibly poor decisions--caveat lector.)
But infographics do something else; more than visual representations of data, they are beautiful renderings of the invisible and obscured. They stylishly separate signal from noise, bringing a sense of comprehensive simplicity to an overstimulating environment. That's what makes the video so wonderful. In the non-physical space of the animation, the datasphere is made visible. The ambient informatics reflect the information saturation that we navigate every day (some with more serenity than others), but the woman in the video is unperturbed by the massive complexity of the systems that surround her. Her bathroom is part of a maze of municipal waterpipes; she navigates the public transport grid with thousands of others; she works at a computer terminal dealing with massive amounts of data (which are rendered for her in dancing, and therefore somewhat useless, infographics; a clever wink to the audience); she eats food from a worldwide system of agricultural production that delivers it to her (as far as she can tell) in mere moments. This is the complexity that we see and we know and we ignore, just like her. This recursiveness and reference to the real is deftly handled. The video is designed to emphasize the larger picture, allowing us to make connections without being visually bogged down in the particulars and textures of reality. The girl's journey from morning to pint is utterly familiar, yet rendered at this larger scale and with the pointed clarity of an information graphic, the narrative is beautiful and touching.
open source dissertation 06.23.2006, 2:03 PM
Despite numerous books and accolades, Douglas Rushkoff is pursuing a PhD at Utrecht University, and has recently begun work on his dissertation, which will argue that the media forms of the network age are biased toward collaborative production. As proof of concept, Rushkoff is contemplating doing what he calls an "open source dissertation." This would entail either a wikified outline to be fleshed out by volunteers, or some kind of additive approach wherein Rushkoff's original content would become nested within layers of material contributed by collaborators. The latter tactic was employed in Rushkoff's 2002 novel, "Exit Strategy," which is posed as a manuscript from the dot.com days unearthed 200 years into the future. Before publishing, Rushkoff invited readers to participate in a public annotation process, in which they could play the role of literary excavator and submit their own marginalia for inclusion in the book. One hundred of these reader-contributed "future" annotations (mostly elucidations of late-90s slang) eventually appeared in the final print edition.
Writing a novel this way is one thing, but a doctoral thesis will likely not be granted as much license. While I suspect the Dutch are more amenable to new forms, only two born-digital dissertations have ever been accepted by American universities: the first, a hypertext work on the online fan culture of "Xena: Warrior Princess," which was submitted by Christine Boese to Rensselaer Polytechnic Institute in 1998; the second, approved just this past year at the University of Wisconsin, Milwaukee, was a thesis by Virginia Kuhn on multimedia literacy and pedagogy that involved substantial amounts of video and audio and was assembled in TK3. For well over a year, the Institute advocated for Virginia in the face of enormous institutional resistance. The eventual hard-won victory occasioned a big story (subscription required) in the Chronicle of Higher Education.
In these cases, the bone of contention was form (though legal concerns about the use of video and audio certainly contributed in Kuhn's case): it's still inordinately difficult to convince thesis review committees to accept anything that cannot be read, archived and pointed to on paper. A dissertation that requires a digital environment, whether to employ unconventional structures (e.g. hypertext) or to incorporate multiple media forms, in most cases will not even be considered unless you wish to turn your thesis defense into a full-blown crusade. Yet, as pitched as these battles have been, what Rushkoff is suggesting will undoubtedly be far more unsettling to even the most progressive of academic administrations. We're no longer simply talking about the leveraging of new rhetorical forms and a gradual disentanglement of printed pulp from institutional warrants, we're talking about a fundamental reorientation of authorship.
When Rushkoff tossed out the idea of a wikified dissertation on his blog last week, readers came back with some interesting comments. One asked, "So do all of the contributors get a PhD?", which raises the tricky question of how to evaluate and accredit collaborative work. "Not that professors at real grad schools don't have scores of uncredited students doing their work for them," Rushkoff replied. "they do. But that's accepted as the way the institution works. To practice this out in the open is an entirely different thing."
meanwhile, back in the world of old media . . . 06.22.2006, 4:50 PM
One of the most interesting things about new media is the light that it shines on how old media works and doesn't work, a phenomenon that Marshall McLuhan encapsulated precisely with his declaration that a fish doesn't realize that it lives in water until it finds itself stranded on land. The latest demonstration: an article on the front page of yesterday's New York Times. (The version in the International Herald Tribune might be more rot-resistant, though it lacks illustrations.) The Times details, with no small amount of snark, how the conservatives have taken it upon themselves to construct an Encyclopedia of American Conservatism.
We've spent a disproportionate amount of time discussing encyclopedias on this blog. What's interesting to me about this one is how resolutely old-fashioned it is: it's print-based through and through. The editors have decided who's in and who's out, as the Times points out in this useful chart:
Readers are not allowed to argue with the selections: American Conservatism is what the editors say it is. It's a closed text and not up for discussion. Readers can discuss it, of course – that's what I'm doing here – but such discussions have no direct impact on the text itself.
There's a political moral to be teased out here – conservative thinking is dogmatic rather than dialectical – but that's too easy. I'm more interested in how we think about this. Would we notice the authoritarian nature of this work if we didn't have things like the Wikipedia to compare it to? Someone who knows more about book history than I can confirm whether Diderot & d'Alembert had to deal with readers disgruntled by omissions from their Encyclopédie. It's only now, however, that we sense the loss of potential: compared to the Wikipedia this seems limiting.
rosenzweig on wikipedia 06.22.2006, 3:34 PM
Roy Rosenzweig, a history professor at George Mason University and a colleague of the institute, recently published a very good article on Wikipedia from the perspective of a historian. "Can History Be Open Source? Wikipedia and the Future of the Past" complements the existing discussion, which has come mostly through the important but different lenses of journalists and scientists. Accordingly, Rosenzweig focuses not just on factual accuracy but also on the quality of prose and the historical context of entry subjects. He begins with an in-depth overview of how Wikipedia was created by Jimmy Wales and Larry Sanger, describing their previous attempts to create a free online encyclopedia. Wales and Sanger's first attempt at a vetted resource, called Nupedia, shows that vetting and reliability of authorship were at the forefront of the creators' concerns from the very beginning of the project.
Rosenzweig adds to a growing body of research trying to determine the accuracy of Wikipedia: along lines similar to the Nature study, he compares Wikipedia entries with those of other online history references, namely Microsoft's Encarta and American National Biography Online, the latter produced by Oxford University Press and the American Council of Learned Societies. Where Encarta is aimed at a mass audience, American National Biography Online is a more specialized history resource. Rosenzweig takes a sample of 52 entries from the 18,000 found in ANBO and compares them with the corresponding entries in Encarta and Wikipedia. In coverage, Wikipedia contained more of the sample than Encarta did. Although Wikipedia's articles didn't reach the length of ANBO's, they were longer than Encarta's. In terms of accuracy, Wikipedia and Encarta seem basically on par with each other, which matches the (debated) conclusion the Nature study reached in its comparison of Wikipedia and the Encyclopedia Britannica.
The discussion gets more interesting when Rosenzweig examines the effects of collaborative writing in more qualitative ways. He rightly notes that collaborative writing often leads to less compelling prose: multiple styles of writing, competing interests and motivations, and varying levels of writing ability all factor into the quality of a text. Wikipedia entries may be for the most part factually correct, but they are often not well written, and their emphases are not always historically relevant. Due to piecemeal authorship, the articles often fail to cohere with the larger historical conversation. ANBO, by contrast, has well-crafted entries, but then its entries are often authored by well-known historians, with the likes of Alan Brinkley covering Franklin Roosevelt and T. H. Watkins penning the entry on Harold Ickes.
However, the quality of writing needs to be balanced against accessibility. ANBO is subscription-based, whereas Wikipedia is free, which shows how access to a resource plays a role in its purpose. Because Wikipedia is largely the product of amateur historians, Rosenzweig comments on the tension created when professional historians engage with it. He notes, for example, that it tends to be full of interesting trivia whose historic significance the seasoned historian will question. As well, the professional historian has great concern for citation and sourcing, which is not as rigorously enforced on Wikipedia.
Because of Wikipedia's widespread and growing use, it challenges the authority of the professional historian, and therefore cannot be ignored. The tension is interesting because it raises questions about the professional historian's obligation to Wikipedia. I am curious to know whether Rosenzweig or the authors of similar studies went back and corrected the errors they discovered. Even if they did not, once errors are publicized, articles tend to get corrected quickly. But in the course of research, when should the researcher step in and make the corrections he or she discovers? Rosenzweig documents the "burn out" that experts, including some of Wikipedia's early expert authors, feel when they attempt to moderate entries. In general, what is the ethical obligation of any expert to engage in maintaining Wikipedia? On this point, Rosenzweig notes an obligation and a need to provide the public with quality information, whether on Wikipedia or in some other venue.
Rosenzweig has written a comprehensive description of Wikipedia and how it relates to the scholarship of the professional historian. He concludes by looking forward, describing what the professional historian can learn from open collaborative production models. He also notes interesting possibilities, such as the collaborative open-source textbook, as well as challenges, such as how to properly cite collaborative efforts (citation being a currency of the academy). My hope is that this article will begin to bring more historians and others in the humanities into productive discussion of how open collaboration is changing traditional roles and methods of scholarship.
nature re-jiggers peer review 06.21.2006, 8:21 AM
Nature, one of the most esteemed arbiters of scientific research, has initiated a major experiment that could, if successful, fundamentally alter the way it handles peer review, and, in the long run, redefine what it means to be a scholarly journal. From the editors:
...like any process, peer review requires occasional scrutiny and assessment. Has the Internet brought new opportunities for journals to manage peer review more imaginatively or by different means? Are there any systematic flaws in the process? Should the process be transparent or confidential? Is the journal even necessary, or could scientists manage the peer review process themselves?
Nature's peer review process has been maintained, unchanged, for decades. We, the editors, believe that the process functions well, by and large. But, in the spirit of being open to considering alternative approaches, we are taking two initiatives: a web debate and a trial of a particular type of open peer review.
The trial will not displace Nature's traditional confidential peer review process, but will complement it. From 5 June 2006, authors may opt to have their submitted manuscripts posted publicly for comment.
In a way, Nature's peer review trial is nothing new. Since the early days of the Internet, the scientific community has been finding ways to share research outside of the official publishing channels -- the World Wide Web was created at a particle physics lab in Switzerland for the purpose of facilitating exchange among scientists. Of more direct concern to journal editors are initiatives like PLoS (Public Library of Science), a nonprofit, open-access publishing network founded expressly to undercut the hegemony of subscription-only journals in the medical sciences. More relevant to the issue of peer review is a project like arXiv.org, a "preprint" server hosted at Cornell, where for a decade scientists have circulated working papers in physics, mathematics, computer science and quantitative biology. Increasingly, scientists are posting to arXiv before submitting to journals, either to get some feedback, or, out of a competitive impulse, to quickly attach their names to a hot idea while waiting for the much slower and non-transparent review process at the journals to unfold. Even journalists covering the sciences are turning more and more to these preprint sites to scoop the latest breakthroughs.
Nature has taken the arXiv model and situated it within a more traditional editorial structure. Abstracts of papers submitted into Nature's open peer review are immediately posted in a blog, from which anyone can download a full copy. Comments may then be submitted by any scientist in a relevant field, provided that they submit their name and an institutional email address. Once approved by the editors, comments are posted on the site, with RSS feeds available for individual comment streams. This all takes place alongside Nature's established peer review process, which, when completed for a particular paper, will mean a freeze on that paper's comments in the open review. At the end of the three-month trial, Nature will evaluate the public comments and publish its conclusions about the experiment.
A watershed moment in the evolution of academic publishing or simply a token gesture in the face of unstoppable change? We'll have to wait and see. Obviously, Nature's editors have read the writing on the wall: grasped that the locus of scientific discourse is shifting from the pages of journals to a broader online conversation. In attempting this experiment, Nature is saying that it would like to host that conversation, and at the same time suggesting that there's still a crucial role to be played by the editor, even if that role increasingly (as we've found with GAM3R 7H30RY) is that of moderator. The experiment's success will ultimately hinge on how much the scientific community buys into this kind of moderated semi-openness, and on how much control Nature is really willing to cede to the community. As of this writing, there are only a few comments on the open papers.
Accompanying the peer review trial, Nature is hosting a "web debate" (actually, more of an essay series) that brings together prominent scientists and editors to publicly examine the various dimensions of peer review: what works, what doesn't, and what might be changed to better harness new communication technologies. It's sort of a peer review of peer review. Hopefully this will occasion some serious discussion, not just in the sciences, but across academia, of how the peer review process might be re-thought in the context of networks to better serve scholars and the public.
(This is particularly exciting news for the Institute, since we are currently working to effect similar change in the humanities. We'll talk more about that soon.)
future of flickr 06.20.2006, 12:31 PM
Wired News reported last week that some Flickr users are upset by the enforcement of a hitherto rarely mentioned Flickr policy: non-photographic images are made unavailable to the public if an account does not mostly contain photographs. Although Flickr is known primarily as a photo-sharing site, people often post all sorts of digitized images to it, including our collaborator, Alex Itin. At the moment, images from Second Life are receiving particular attention under Flickr's posting policies.
The article quotes Stewart Butterfield saying, "the rationale is that when people do a global search on Flickr, they want to find photos."
I can appreciate that Flickr wants to maintain a clear brand identity. They have created one of the most successful open photo-sharing websites to date, and they don't want to dilute their brand. But isn't this just a tagging issue? It is ironic that Flickr, one of the pioneering Web 2.0 apps, whose success relies heavily on the power of folksonomy, misses this point. Flickr was one of the primary ways the general public figured out how tagging works, and its users should be able to figure out how to select which kinds of images they want.
How much of a stretch would it be for Flickr to become an image sharing website, including tags for photographs, scanned analog images, and born digital images?
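A tag-driven filter of the kind suggested above is easy to sketch. The records and tag names below are hypothetical, not Flickr's actual data model or API; the point is only that folksonomy tags could let users choose which content types a search returns:

```python
# Hypothetical image records, each carrying user-applied folksonomy tags.
images = [
    {"title": "Space Needle at dusk", "tags": {"photo", "seattle"}},
    {"title": "ifbook page collage",  "tags": {"scan", "analog"}},
    {"title": "Second Life premiere", "tags": {"screenshot", "secondlife"}},
]

def search(images, content_types):
    """Return titles of images whose tags intersect the requested types."""
    return [img["title"] for img in images
            if img["tags"] & content_types]

# A global search could default to photographs...
print(search(images, {"photo"}))               # ['Space Needle at dusk']
# ...while still letting users opt in to scans and screenshots.
print(search(images, {"scan", "screenshot"}))
```

Under a scheme like this, the default search satisfies Butterfield's "people want to find photos" rationale without making anyone's non-photographic images second-class citizens.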
Finally, Second Life recently held an event tied in to a virtual X-Men movie premiere, and its images made their way into Flickr. Asked to comment, Butterfield goes on to say, "Flickr wasn't designed for Universal or Sony to promote their movie. Flickr is very explicitly for personal, noncommercial use" rather than "using a photo as a proxy for an ad."
Again, I appreciate the sentiment. But is there a feasible way to enforce this kind of policy? Is it OK for me to post a picture from my trip to Seattle, wearing an Izod shirt, holding a Starbucks cup, in front of the Space Needle? Isn't that a proxy for an ad? As we have noted before, architectural works such as Disneyland, the Chrysler Building, and the Space Needle are all copyrighted. Our clothes are plastered with icons and slogans. Food and drinks are covered with logos. We are a culture of brands, and increasingly everything in our lives is branded. It should come as no surprise that the media we, as a culture, produce reflects these brands, corporate identities, and commercial bodies.
Decreasing costs for digital production tools have vastly increased amateur media production. Flickr provides a great service in supporting the sharing of all the media people are creating. But Flickr has created something bigger than it originally intended: rather than limiting itself to photo sharing, there is far more potential in creating a space for sharing, and building community around, all digital images.
academic library explores tagging 06.19.2006, 6:09 PM
The ever-innovative University of Pennsylvania library is piloting a new social bookmarking system (like del.icio.us or CiteULike), in which the Penn community can tag resources and catalog items within its library system, as well as general sites from around the web. There's also the option of grouping links thematically into "projects," which reminds me of Amazon's "listmania," where readers compile public book lists on specific topics to guide other customers. It's very exciting to see a library experimenting with folksonomies: exploring how top-down classification systems can productively collide with grassroots organization.
wikipedia in the times 06.17.2006, 1:56 PM
Wikipedia is on the front page of the New York Times today, presumably for the first time. The article surveys recent changes to the site's governance structure, most significantly the decision to allow administrators (community leaders nominated by their peers) to freeze edits on controversial pages. These "protection" and "semi-protection" measures have been criticized by some as being against the spirit of Wikipedia, but have generally been embraced as a necessary step in the growth of a collective endeavor that has become increasingly vast and increasingly scrutinized.
Browsing through a few of the protected articles -- pages that have been temporarily frozen to allow time for hot disputes to cool down -- I was totally floored by the complexity of the negotiations that inform the construction of a page on, say, the Moscow Metro. I attempted to penetrate the dense "talk" page for this temporarily frozen article, and it appears that the dispute centered around the arcane question of whether numbers of train lines should be listed to the left of a color-coded route table. Tempers flared and things apparently reached an impasse, so the article was frozen on June 10th by its administrator -- a user by the name of Ezhiki (Russian for hedgehogs), who appears to be taking a break from her editing duties until the 20th (whether it is in connection to the recent metro war is unclear).
Look at Ezhiki's profile page and you'll see a column of her qualifications and ranks stacked neatly like merit badges. Little rotating star .gifs denote awards of distinction bestowed by the Wikipedia community. A row of tiny flag thumbnails at the bottom tells you where in the world Ezhiki has traveled. There's something touching about the page's cub scout aesthetic, and the obvious idealism with which it is infused. Many have criticized Wikipedia for a "hive mind" mentality, but here I see a smart individual with distinct talents (and a level head for conflict management), who has pitched herself into a collective effort for the greater good. And all this obsessive, financially uncompensated striving -- all the heated "edit wars" and "revert wars" -- for the production of something as prosaic as an encyclopedia, a mere doormat on the threshold of real knowledge.
But reworking the doormat is a project of massive proportions, and one that carries great political and social significance. Who should produce these basic knowledge resources and how should the kernel of knowledge be managed? These are the questions that Wikipedia has advanced to the front page of the newspaper of record. The mention of Wikipedia on the front of the Times signifies its crucial place in the cultural moment, and provides much-needed balance to the usual focus in the news on giant commercial players like Google and Microsoft. In a time of uncontrolled media oligopoly and efforts by powerful interests to mould the decentralized structure of the Internet into a more efficient architecture of profit, Wikipedia is using the new technologies to fuel a great humanistic enterprise. Wikipedia has taken the model of open source software and applied it to general knowledge. The addition of a few governance measures only serves to demonstrate the increasing maturity of the project.
mapping books 06.15.2006, 11:45 AM
Gutenkarte is an effort by MetaCarta to map books. The website takes the text of books from Project Gutenberg, searches it for the appearance of place names, and plots them on a map of the world using MetaCarta's own GeoParser API, creating an astonishing visualization of the world described in a text. Here, for example, is a map of Edward Gibbon's Decline and Fall of the Roman Empire:
(Click on the picture to view the live map.) It's not perfect yet: note that "china" is in the Ivory Coast, and "Asia" seems to be located just off the coast of Cameroon. But the map does give an immediate sense of the range of Gibbon's book: in this case, the extent of the Roman world. The project is still in its infancy: eventually, users will be able to correct mistakes.
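MetaCarta's GeoParser is a commercial service, but the basic idea behind this kind of mapping can be sketched naively: match capitalized words in a text against a gazetteer of place names and coordinates. The tiny gazetteer below is purely illustrative; a real geoparser uses an enormous gazetteer plus disambiguation logic, which is exactly where errors like "china" in the Ivory Coast creep in:

```python
import re

# A toy gazetteer: place name -> (latitude, longitude). A real geoparser
# must also disambiguate (which "Rome"? the person "China" or the country?).
GAZETTEER = {
    "Rome":           (41.9, 12.5),
    "Constantinople": (41.0, 28.9),
    "Alexandria":     (31.2, 29.9),
}

def geoparse(text):
    """Return places mentioned in text, with coordinates for plotting."""
    words = re.findall(r"[A-Z][a-z]+", text)
    return {w: GAZETTEER[w] for w in words if w in GAZETTEER}

sample = "The legions marched from Rome toward Constantinople."
print(geoparse(sample))
# {'Rome': (41.9, 12.5), 'Constantinople': (41.0, 28.9)}
```

Even a sketch this crude conveys the appeal: a whole book collapses into a scatter of points that shows, at a glance, the extent of its world.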
Gutenkarte suggests ways of looking at texts not dissimilar from those of Franco Moretti, who in last year's Graphs, Maps, Trees: Abstract Models for Literary History (discussed by The Valve here) argued that mapping the places represented in literature could afford a new way of discussing texts. Here, for example, is a map he constructed of Parisian love affairs in the novel, demonstrating that lovers were usually separated by the Seine:
(from the "Maps" chapter, online here if you have university access to the New Left Review.) Moretti constructed his maps by hand, with the help of grad student labor; it will be interesting to see if Gutenkarte will make this sort of visualization accessible to all.
the music world steps into the net neutrality debate 06.15.2006, 11:02 AM
I'm still getting my head wrapped around this tune by the BroadBand. Written and performed by Kay Hanley (former lead singer of Letters to Cleo), Jill Sobule ("I Kissed a Girl" with one-time MTV staple video starring Fabio), and Michelle Lewis, "God Save the Internet" is another step in making the issues surrounding net neutrality more public. Perhaps my favorite lyric is "Jesus wouldn't mess with our Internet." Cheeky lyrics aside, the download page does include links to resources for the inspired activist, including a provocative editorial from allhiphop.com on why the African American community should be concerned about net neutrality. The telecommunications lobby is financing a well-funded campaign to implement the policies it favors. It is still unclear how effective the net neutrality movement will be, but it is slowly expanding beyond legal scholars into the general cultural sphere. The increasing involvement of the pop culture world, be it alt-rock or hip hop, will extend the movement's reach to more people and encourage more discourse. All this will hopefully result in a balanced and fair approach to telecommunications policy and legislation.
an important guide on filmmaking and fair use 06.15.2006, 10:31 AM
"The Documentary Filmmakers' Statement of Best Practices in Fair Use," by the Center for Social Media at American University is another work in support of fair-use practices to go along with the graphic novel "Bound By Law" and the policy report "Will Fair Use Survive?".
"Will Fair Use Survive" (which Jesse previously discussed) takes a deeper policy analysis approach. "Bound By Law" (also reviewed by me) uses an accessible tact to raise awareness in this area. Whereas, "The Statement of Best Practice" is geared towards the actual use of copyrighted material under fair use by practicing documentary filmmakers. It is an important compliment to the other works because the current confusion over claiming fair use has resulted in a chilling effect which stops filmmakers from pursuing projects which require (legal) fair use claims. This document give them specific guidelines on when and how they can make fair use claims. Assisting filmmakers in their use of fair use will help shift the norms of documentary filmmaking and eventually make these claims easier to defend. This guide was funded by the Rockefeller Foundation, the MacArthur Foundation and Grantmakers in Film and Electronic Media.
reflections on the hyperlinked.society conference 06.14.2006, 7:48 AM
Last week, Dan and I attended the hyperlinked.society conference at the University of Pennsylvania's Annenberg School for Communication. An impressive collection of panelists and audience members gathered to discuss the issues emerging as we place more value on hyperlinks. Here are a few reflections on what was covered at the one-day conference.
David Weinberger made a good framing statement when he noted that links are the architecture of the web. Through technologies such as Google's PageRank, a link is not only a conduit to information but also a way of adding value to another site. People noted the tension between not wanting to link to a site one disagrees with (for example, an opposing political site), since the link would raise its standing in ranking algorithms, and the idea that linking to other ideas is a fundamental purpose of the web. Currently, links are binary, on or off; context comes only from the text around the link. (For example: I like to read this blog.) Many suggestions were offered for giving a link context -- through color, an icon, or tags within the code of the link to show agreement or disagreement with its target. Jesse discusses overlapping issues in his recent post on the semantic wiki.

Standards could be developed to achieve this, but we must take care to anticipate the gaming of any new way of linking. Otherwise, these new links will become another casualty of the web, as happened with meta tags. Meta tags were keywords included in a page's HTML to help search engines determine its contents; massive misuse rendered them useless, and Google was one of the first search engines, if not the first, to ignore them completely. Similar gaming is bound to occur with any added layer of meaning on links, and it must be considered carefully in the creation of new web conventions, lest these links join meta tags as a footnote in HTML reference books.
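To make the idea of link context concrete: one place such a signal could ride is an attribute in the link markup itself. The rel values below ("agree", "disagree") are a hypothetical convention, not an existing standard; the sketch simply shows that, if such a convention existed, software could read the author's stance directly from the link:

```python
from html.parser import HTMLParser

class StanceLinks(HTMLParser):
    """Collect links annotated with a hypothetical rel='agree'/'disagree'."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            if d.get("rel") in ("agree", "disagree"):
                self.links.append((d.get("href"), d["rel"]))

html = ('I side with <a href="http://a.example" rel="agree">this post</a> '
        'but not <a href="http://b.example" rel="disagree">that one</a>.')
p = StanceLinks()
p.feed(html)
print(p.links)
# [('http://a.example', 'agree'), ('http://b.example', 'disagree')]
```

Of course, the moment a ranking algorithm honored such annotations, people would start gaming them, which is exactly the meta-tag lesson.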
Another shift I observed was an increase in citations of real, quantifiable data, from both market and academic research, on people's web use. As Saul Hansell pointed out, the data that can be collected captures only a slice of reality, but these snapshots are still useful for understanding how people are using new media. The work of Lada Adamic (whose work we like to refer to on if:book) on mapping the communication between political blogs will be increasingly important in understanding online relationships. She also showed more recent work on representing how information flows and spreads through the blogosphere.
Some of the work presented by mapmakers and cartographers showed examples of using data to describe voting patterns as well as cyberspace. Meaningful maps of cyberspace are particularly difficult to create because, as Martin Dodge noted, we want to compress hundreds of thousands of dimensions into two or three. Maps are representations of data: at first they were purely geographic, but over time things such as weather patterns and economic trends have been overlaid onto geographic locations. In the context of hyperlinks, I look forward to using these digital maps as an interface to the data underlying the representations. Beyond voting patterns (and privacy issues aside), linking these maps to deeper information on related demographic and socio-economic data and trends seems like the logical next step.
I was also surprised at what was not mentioned, or barely mentioned. Net neutrality and copyright were each raised only once, each time by an audience member's question. Ethan Zuckerman offered an interesting anecdote: the Global Voices project became an advocate for the Creative Commons license because it proved a powerful tool in their effort to support bloggers in the developing world. Further, the moderators of the final panel noted that privacy, policy, and tracking received less attention than expected. On that note, I'll close with two questions that lingered in my mind as I left Philadelphia for home. I hope they will be addressed in the near future, as the importance of hyperlinking grows in our lives.
1. How will we deal with link rot and the ephemeral nature of links?
Broken links and the archiving of links will become increasingly important as the number of links, and our dependence on them, grow in parallel.
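A crude link-rot audit is not hard to sketch: crawl your links periodically, record each one's HTTP status, and flag the dead ones for archiving. The classifier below works on status codes alone (the crawl results are hypothetical, and actual fetching is left out so the sketch stays self-contained):

```python
def classify(status):
    """Map an HTTP status code to a link-health label."""
    if status is None:
        return "unreachable"       # DNS failure, timeout, etc.
    if 200 <= status < 300:
        return "alive"
    if status in (301, 302, 307, 308):
        return "moved"             # content may survive at a new URL
    if status in (404, 410):
        return "rotten"            # candidate for an archived copy
    return "suspect"

# A hypothetical crawl result: URL -> last observed status code.
crawl = {"http://example.org/": 200,
         "http://old.example/post": 301,
         "http://gone.example/": 404}

report = {url: classify(code) for url, code in crawl.items()}
print(report)
```

The "rotten" bucket is where archiving matters: if a copy was saved while the link was still alive, rot becomes an inconvenience rather than a permanent loss.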
2. Who owns our links?
As we put more and more of ourselves, our relationships, and our links on commercial websites, it is important to reflect on the implications of simultaneously handing ownership of those links over to Yahoo (via Flickr) and News Corp (via MySpace).
microsoft enlists big libraries but won't push copyright envelope 06.14.2006, 2:42 AM
In a significant challenge to Google, Microsoft has struck deals with the University of California (all ten campuses) and the University of Toronto to incorporate their vast library collections - nearly 50 million books in all - into Windows Live Book Search. However, a majority of these books won't be eligible for inclusion in MS's database. As a member of the decidedly cautious Open Content Alliance, Windows Live will restrict its scanning operations to books either clearly in the public domain or expressly submitted by publishers, leaving out the huge percentage of volumes in those libraries (if it's at all like the Google five, we're talking 75%) that are in copyright but out of print. Despite my deep reservations about Google's ascendancy, they deserve credit for taking a much bolder stand on fair use, working to repair a major market failure by rescuing works from copyright purgatory. Although uploading libraries into a commercial search enclosure is an ambiguous sort of rescue.
a bone-chilling message to academics who dare to become PUBLIC intellectuals 06.13.2006, 8:43 AM
Juan Cole is a distinguished professor of middle eastern studies at the University of Michigan. Juan Cole is also the author of the extremely influential blog, Informed Comment which tens of thousands of people rely on for up-to-the-minute news and analysis of what is happening in Iraq and in the middle east more generally. It was recently announced that Yale University rejected Cole's nomination for a professorship in middle eastern studies, even after he had been approved by both the history and sociology departments. As might be expected there is considerable outcry, particularly from the progressive press and blogosphere criticizing Yale for caving in to what seems to have been a well-orchestrated campaign against Cole by the hard-line pro-Israel forces in the U.S.
Most of the stuff I've read so far concentrates on taking Yale's administration to task for its spinelessness. While this criticism seems well-founded, I think there is a bigger issue that isn't being addressed. The conservatives didn't go after Cole simply because of his political ideas; there are most likely people already in Yale's middle eastern studies department with politics more radical than Cole's. They went after him because his blog, which reaches a broad general audience, is read by tens of thousands and ensures that his ideas have force in the world. Juan once told me that he's lucky if he sells 500 copies of his scholarly books. His blog, however, ranks in the Technorati 50, and through it he has also picked up influential gigs at Salon and NPR.
Yale's action will have a bone-chilling effect on academic bloggers. Before the Cole/Yale affair, it was only non-tenured professors who feared that speaking out publicly in blogs might hurt their careers. Now, with Yale's refusal to approve the recommendation of its own academic departments, even those with tenure must realize that if they dare to go outside the bounds of the academy and take up the responsibilities of public intellectuals, their path to career advancement may be severely threatened.
We should have defended Juan Cole more vigorously, right from the beginning of the right-wing smear against him. Let's remember that the next time a progressive academic blogger gets tarred by those who are afraid of her ideas.
smarter links for a better wikipedia 06.13.2006, 7:58 AM
As Wikipedia continues its evolution, smaller and smaller pieces of its infrastructure come up for improvement. The latest piece to step forward for enhancement: the link. "Computer scientists at the University of Karlsruhe in Germany have developed modifications to Wikipedia's underlying software that would let editors add extra meaning to the links between pages of the encyclopaedia." (full article) While this particular idea isn't totally new (at least one previous attempt has been made: platypuswiki), the Semantic Wiki project is piggybacking on a high-profile digital celebrity, which brings media attention and momentum.
What's happening here is that under the Wikipedia skin, the Semantic Wiki uses an extra bit of syntax in the link markup to inject machine-readable information. A normal link in Wikipedia is coded like this: [[link to a wiki page]] or [http://www.someothersite.com link to an outside page]. What more do you need? Well, if by "you" I mean humans, the answer is: not much. We can gather context from the surrounding text. But our computers are left out in the cold: they aren't smart enough to understand a link's context well enough to make semantic judgments of the form "this link is related to this page in this way." Even PageRank, the ruler of the search engine algorithms, counts every link as a vote that increases the linked page's value; it isn't bright enough to see that you might link to something in order to refute or denigrate it. When we write, we rely on human readers' judgment to make sense of a link's context and purpose. The researchers at Karlsruhe, on the other hand, are enabling machine comprehension by inserting that contextual meaning directly into the links.
Semantic Wiki links look just like Wikipedia links, only slightly longer. They include information like:
- categories: An article on Karlsruhe, a city in Germany, could be placed in the City Category by adding
[[Category: City]] to the page.
- More significantly, you can add typed relationships.
Karlsruhe [[:is located in::Germany]] would show up as Karlsruhe is located in Germany (the : before "is located in" saves typing). Other examples: in the Washington D.C. article, you can add [[is capital of::United States of America]]. The types of relationships ("is capital of") can proliferate endlessly.
- attributes, which specify simple properties related to the content of an article without creating a link to a new article. For example, a city's population could be attached to its article as a plain numeric value rather than as a link to a "population" page.
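To make the markup above concrete, here is a rough sketch (my own, not from the Karlsruhe researchers) of how a program might pull machine-readable triples out of typed links; the regex is inferred from the examples in this post, not from the SemanticWiki source.

```python
import re

# Extract (relation, target) pairs from SemanticWiki-style typed links.
# Syntax assumed from the examples above: [[relation::Target]], with an
# optional leading ":" that only affects how the link renders.
TYPED_LINK = re.compile(r"\[\[:?\s*([^:\]]+?)\s*::\s*([^\]]+?)\s*\]\]")

def extract_triples(subject, wikitext):
    """Return (subject, relation, target) triples found in an article."""
    return [(subject, rel, target)
            for rel, target in TYPED_LINK.findall(wikitext)]

triples = extract_triples("Karlsruhe",
                          "Karlsruhe [[:is located in::Germany]]")
```

Plain category links like [[Category: City]] use a single colon, so the double-colon pattern above leaves them alone, which is the point: the extra colon is what makes the relationship machine-readable.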
Adding semantic information to links is a good idea, and hewing closely to the current Wikipedia syntax is a smart tactic. But here's why I'm not more optimistic: this solution combines the messiness of tagging with the bother of writing machine-readable syntax. The combo reminds me of a great Simpsons quote, where Homer says, "Nuts and gum, together at last!" Tagging and semantic markup are not complementary functions - tagging was invented to put humans first, to relieve our fuzzy brains from the mechanical strictures of machine-readable categorization; writing relationships in a machine-readable format puts the machine squarely in front. It requires the proliferation of Wikipedia-type articles to explain each of the typed relationships and property names, which can quickly become unmaintainable by humans, exacerbating the very problem it's trying to solve.
But perhaps I am underestimating the power of the network. Maybe the dedication of the Wikipedia community can overcome those intractable systemic problems. Through the quiet work of the gardeners who sleeplessly tend their alphanumeric plots, the fact-checkers and passers-by, maybe the SemanticWiki will sprout links with both human and computer sensible meanings. It's feasible that the size of the network will self-generate consensus on the typology and terminology for links. And it's likely that if Wikipedia does it, it won't be long before semantic linking makes its way into the rest of the web in some fashion. If this is a success, I can foresee the semantic web becoming a reality, finally bursting forth from the SemanticWiki seed.
I left off the part about how humans benefit from SemanticWiki-type links. Obviously this better be good for something other than bringing our computers up to a second grade reading level. It should enable computers to do what they do best: sort through massive piles of information in milliseconds.
How can I search, using semantic annotations? - It is possible to search for the entered information in two different ways. On the one hand, one can enter inline queries in articles. The results of these queries are then inserted into the article instead of the query. On the other hand, one can use a basic search form, which also allows you to do some nice things, such as picture search and basic wildcard search.
For example, if I wanted to write an article on Acting in Boston, I might want a list of all the actors who were born in Boston. How would I do this now? I would count on the network to maintain a list of Bostonian thespians. But with SemanticWiki I can just add this:
<ask>[[Category:Actor]] [[born in::Boston]]</ask>, and SemanticWiki will replace the inline query with the desired list of actors.
To do a more straightforward search I would go to the basic search page. If I had any questions about Berlin, I would enter it into the Subject field. SemanticWiki would return a list of short sentences where Berlin is the subject.
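Continuing the sketch from before, the inline query above amounts to an intersection over the stored annotations. The data and function names here are my own invention for illustration, not the actual SemanticWiki query engine.

```python
# Toy in-memory store of semantic annotations, keyed by article name.
categories = {
    "Mark Wahlberg": {"Actor"},
    "Matt Damon": {"Actor"},
    "Paul Revere": {"Silversmith"},
}
relations = {  # (subject, relation) -> set of targets
    ("Mark Wahlberg", "born in"): {"Boston"},
    ("Matt Damon", "born in"): {"Cambridge"},
    ("Paul Revere", "born in"): {"Boston"},
}

def ask(category, relation, value):
    """Answer an <ask>[[Category:X]] [[rel::value]]</ask>-style query."""
    return sorted(
        page for page, cats in categories.items()
        if category in cats
        and value in relations.get((page, relation), set())
    )

actors_from_boston = ask("Actor", "born in", "Boston")
```

Paul Revere was born in Boston but is no actor, so the intersection drops him; that set-filtering step is all the "semantics" the machine needs.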
But this semantic functionality is limited to simple constructions and nouns—it is not well suited for concepts like 'politics' or 'vacation'. One other point: SemanticWiki relationships are bounded by the size of the wiki. Yes, digital encyclopedias will eventually cover a wide range of human knowledge, but never all of it. In the end, SemanticWiki promises a digital network populated by better links, but it will take the cooperation of the vast human network to build it up.
e-paper takes another step forward 06.12.2006, 11:59 AM
With each news item about flexible display technology, I get the feeling that we are getting closer to the widespread use of e-paper. The latest product to appear is Seiko Epson's QXGA e-paper, which was recently introduced at a symposium given by the Society for Information Display. Even in the small jpeg, the text looks sharp and easy to read. Although e-paper will not replace all paper, I'm looking forward to the day I can store all my computer manuals on e-paper. Computer manuals are voluminous and quickly become outdated with each new upgrade, and I typically use only a few pages of the entire manual, over and over. All of this makes them a great candidate for e-paper. Perhaps the best benefit is that I could use the newfound shelf space for print books where I value the vessel as much as the content.
what is a book? 06.12.2006, 7:09 AM
What is a book? This is a question we will want to answer if we want to enable books to reflect the electronic age and not the ink-on-paper era, just as Gutenberg and his heirs fully exploited that once-new technology back when, well, the ink was still fresh.
I don't think a precise definition is possible, certainly not one that will clearly and unambiguously delimit books, journals, magazines, newspapers, and any other print media, and also add electronicity without claiming blogs, RSS feeds, wikis, mail-lists, and website forums. Each of these is a distinct entity, yet might share every salient feature with most of the others at its margins.
I will instead go after What is our notion of a book? What is it I expect you to mean when you use that word instead of, say, "magazine" or "website"? 
So let us begin with this: "A book is something you read." And by that we will not mean something we watch or view. 
While in a sense we have passed the buck to another philosophical discussion — What constitutes reading? — this allows us to now regard children's books as entries into reading, and not annotated drawings. Moreover we have escaped making some arbitrary rules about the proportion of words to drawing or whether the artwork "illustrates" the text and such like.
Now saying a book is something you read means I regard a book of photographs as a book only in how I approach it psychologically based on its physical presentation. Remove the binding literally and figuratively and the book is no more — a slideshow of Ansel Adams photographs is no more a book than it is a newspaper. The essence of book has expired along with the physical book.
And this starts us down a different path to answering our question of "What is a book?" If I can't define a book the way I might define a hammer or an element in the periodic table or a songbird, I can at least identify characteristics or expectations that we all generally associate with a book. What results is less a definition in the dictionary sense than it is a diagnosis — any object meeting a majority of these symptoms will fall under our designation of "book," even though other objects share some traits and not every trait is met by every instance.
So. What do we know about a book? Let us look at the general knowledge about books, the type that we use daily to distinguish books from other text media, as well as separating it from other media generally and from other artforms.
- A book presumes a commitment of time and involvement from the reader. No one expects to pore over a magazine for a month, to give twelve or fifteen or twenty hours involvement to Newsweek or Architectural Digest, but a worthy book can claim that time or more. In the implied contract between reader and author, this is something we readers pay, and on that basis the author can set her sights much higher (or deeper) than with the alternatives.
- A book permits the reader to set his own pace. I don't mean "you read slowly and I'm a fast reader" but that when reader and author fully engage we readers can slow down and reflect on what's been said. We can savor the language, we can re-read the page, even copy the most expressive sentences in our commonplace books, all the while tussling with the words on the page, their meaning, their color, their elegance or abruptness or unexpected appearance, which operate in conjunction with but also separately from the meaning, from the ideas or events they convey.
"Reading maketh a full man ... and writing an exact man," Bacon said, and while the philosophers have mined the territory between what we intend when we put things into words and what we each understand those words to mean, the gap in communication is not complete. In reading and in writing we do find understanding in these glyphs on a page, and it comes entirely from our brains. And we might note how books cannot engage our several senses, except peripherally as we grasp a hefty book or screw up our nose at the cheap glue in a paperback's binding. The vast capability of our visual acuity is set aside and becomes a mere doorman to the intellect, which assumes the operative role in our reading, particularly what Bill Hill calls "ludic reading." 
And we cannot hurry or slow down our understanding, but only delight in its delights and accompany its anguished plodding through tortuous texts. And so when I say a book lets us set our own pace — as a movie, symphony, dance or play cannot — I mean the pace at which our intellect maunders or gambols through the material set before it.
And every other part of us is diminished, as an audience sits quietly in the dark before a brightly lit actor on stage. Now is not the moment we notice the hard bottom of our chair or the light fading at day's end or hunger or the voices of others conversing or calling to us; these are subordinate as our minds engage in work or recreation.
When people wonder whether a "book" might not in our future be so multidimensional, with sound, video, interactivity, and mutability to our desires, I say "yes, but." Yes, these can be and should be and will be incorporated. But if "book" no longer means the intellect is permitted to come to the foreground in this way, if text and how it requires this is diminished to insignificance, then we will have thrown the baby out with the bathwater and what we have then will perhaps be entertaining and educational and absorbing, but it will not be a book, whatever label attaches to it.
- A book has an author's voice, what Wayne Booth calls the implied author, with whom we converse or in whose academy we study or at whose feet we sit to hear the tales of the unfamiliar and entertaining. But we have an almost palpable relation with that author that is not so very different from the one we have with our friends.
This isn't so easy with a movie, say, or a play, a TV show. We are more likely to engage on that level with the actors portraying the characters. In the message, the mood, the impression we take away, can we say confidently where the author leaves off and the director begins? We have an interaction but it is at a remove, it is less personal.
Will the same author's voice be distinct in networked and collaborative books? Or will it be drowned out? Perhaps what we know of installation and performance art will guide us here — as art moved off the walls and away from the close and tangible, the artist did not disappear, did not transmogrify into an actor or impresario. The essence of art survived and with it the artist.
Like that famous dictum about obscenity from Potter Stewart, when he wrote that he might not be able to provide a test for it, "but I know it when I see it," we must be guided by our intuition. Some aspects can change radically so long as the essence of the book is still recognizable. When we ask, What is a book? we know any answer will be slippery, but our certainty is unwavering. Our test requires only that we remember that the greater part of any book resides not in the physical but in the invisible world. Then whether we have one author or a collaboration, unchanging text or mutable, physical pages or electronic, static images or dynamic, audio, video, connection to the web or not, whatever the manifestation the future brings us, there should be no confusion. Then as now, each of us will know a book when we see it.
In part my conclusion of indefinability is based on a similar effort undertaken years ago, when I was in graduate school. One professor set the students in his seminar to define what a poem was. As we attributed features to "a poem," it was not hard to find counterexamples — I remember a Thomas Wolfe sample brought in to counter the notion that rhythm distinguished poetry from prose. Language, purpose, length, rhythm, meter, rhyme, fixed patterns, brevity of expression — every feature could be countered with a prose sample that met the criteria and poems (which we all agreed were poems) that did not. Although we each had a notion of what constitutes a poem, we couldn't create a definition that encompassed the essence of those notions.
What we settled on was the most rudimentary of differentiation, and yet unassailable — a poem is a text in which the author has decided where one line ends and the next begins.
 For instance, FTrain, a site written by Paul Ford in multiple voices, using multiple personae/bylines, mixing pieces that are not always obviously differentiable as being fiction, biography or memoir, as well as essay and reporting, and not incidentally relying on original musical compositions for full comprehension of the site.
 The audio book by this taxonomy is the platypus of content. Yes, it is a book. And yet we say mammals do not have bills and birds do, despite the contrary example of the platypus. Of course, the matter of illustrations, footnotes, maps, charts and such that we often utilize in a book do not fit so well in the audio book, so it is indeed an odd duck.
 It may come as a surprise that the contrary question of "Is a slideshow of The Castle a book?" is not that readily answered. It may well be. Assuming we are not seeing it formatted in Powerpoint bullets, the distinction between the pages of one of today's e-books and a "slide" in that slideshow seems minuscule, one of projection onto a wall instead of display on a handheld device or computer. But the cohesiveness a binding provides those Ansel Adams photographs is more than matched in a novel by the linearity of the text, the consecutiveness of the sentences, the structure of a story being told. Without a binding, the photographs stand on their own, independent despite their sequence. Not so the text, where each page connects to its predecessor and successor. If we are to rule that a slideshow is not a book — not even a group-read book — it will have to be because it fails the criteria discussed later on.
 I repeat a famous observation noting how immediately in a crowded room we find someone's eyes resting on us, and how small the actual visual information is, a fraction of a fraction of one percent of all that is visible to our eyes. Yet we scarcely recognize that we are the most visual of creatures.
 In his classic book, The Rhetoric of Fiction, another book I encountered first in graduate school.
inanimate alice 06.10.2006, 10:30 AM
My friend Sue Thomas sent me a link to work by an artist going by the name of Babel. The first piece I looked at, Inanimate Alice, is a wonderful throwback to early interactive media work, which mixed audio, video, text and images in simple ways but to powerful effect. Josh Feldman's Consciousness, Amanda Goodenough's charming Inigo and Faithful Camel stories, Rodney Greenblatt's Wonder Window, and Eric Swenson's notorious BLAM! come immediately to mind. (I looked for links to online versions of these works but didn't find any -- not surprising since they are 14-19 years old. I think I'll write the authors and try to assemble an online exhibit of some of this early work.) If you like Inanimate Alice and know of similar work (past or present) please send us a reference.
from the real to the virtual and back again 06.09.2006, 7:20 AM
In 2004, as the Matrix Ping Pong video link bounced its way from inbox to inbox, people were amused by the re-creation of a ping pong match with Matrix-style special effects, using people instead of computer technology. Viewers were amazed at the elaborate costumes, topped only by even more amazing choreography. Perspective changes and camera angles are reproduced. The influence of the Matrix's 360-degree camera spins and of earlier Cantonese martial arts films is pervasive. Part of its success was the evident work and planning required to design and execute the scene. The idea of simulating the simulated was both ingenious and topical. But media criticism aside, it's just a pleasure to watch.
The clip comes from a popular Japanese television show, Kasou Taishou, where contestants perform skits before a panel of judges. These skits often involve re-creating the camera work and special effects of film. That same year, Neil Tennant and Chris Lowe of the UK pop band Pet Shop Boys released the video for the song "Flamboyant." In the video, a (stereo)typical Japanese corporate employee is seen struggling to design a skit for the show. Interspersed in the video are mock Japanese ads starring Tennant and Lowe. Two years later, they have taken the idea one step further with their new video for "I'm with Stupid," in which Matt Lucas and David Walliams, the stars of the British comic skit series "Little Britain," replicate the Pet Shop Boys videos "Go West" and "Can You Forgive Her?" The result is a bizarre re-interpretation of the CGI-intensive originals.
When I first started on this post, I was going to argue that these examples are a "reaction" to the increasingly virtual parts of our lives. However, my thinking has shifted towards reading this phenomenon as a process of "reflection" that has a long tradition in cultural production. As our lives become increasingly virtual, synthetic, and digital, our analogue lives reflect back the new digital nature of what we experience. Like a house of mirrors, people are reflecting back what they see. These mirrors, as found in amusement parks, distort the original image, bending and stretching people's reflections, but not beyond recognition. The participants on Kasou Taishou started copying images from the Matrix, which is itself a reflection or new interpretation of the fight choreography of Cantonese martial arts films. The Pet Shop Boys at first merely replayed their reflection (with splices of fake Japanese commercials starring themselves). Things got much more interesting when Tennant and Lowe realized that the truly interesting part of the "Flamboyant" video was re-creating the digital with the analogue, while adding their own personal distortion through a distinctly British comedic lens.
Advances in telecommunication and media production technology have blown open the opportunity to create and share the kind of cultural call-and-response we are witnessing. The history of parody is a prime example of this traditional cultural dialogue through media artifacts. I'm not at all surprised, in this case, that Japan is playing a role here, in that I have always been both fascinated and amazed by the way Japanese culture seems to balance respect for tradition with the advancement of modernity, especially with technology. (Although I realize that distance and language barriers may mask the tensions between these cultural forces.) Part of the balance is achieved by taking the old and infusing it into the new rather than completely rejecting the old. Further, in the case of the real simulating the virtual, the diversity of modes of creation and distribution is extremely telling. Traditional roles are blurred. The one-to-many versus many-to-many broadcast models, East vs. West cultural dominance, corporate vs. independent media, and pro/am production distinctions are being rendered meaningless. The end result is a far richer landscape of cultural production.
congress passes telecom bill, breaks internet 06.09.2006, 2:29 AM
The benighted and corrupt U.S. House of Representatives, well greased by millions of lobbying dollars, has passed (321-101) the new telecommunications bill, the biggest and most far-reaching since 1996, "largely ratifying the policy agenda of the nation's largest telephone companies" (NYT). A net neutrality amendment put forth by a small band of democrats was readily defeated, bringing Verizon, Bell South, AT&T and the rest of them one step closer to remaking America's internet in their own stupid image.
more evidence of academic publishing being broken 06.08.2006, 10:17 AM
Stay Free! Daily reprints an article from the Wall Street Journal on how the editors of scientific journals published by Thomson Scientific are coercing authors to include more citations to articles published by Thomson Scientific:
Dr. West, the Distinguished Professor of Medicine and Physiology at the University of California, San Diego, School of Medicine, is one of the world's leading authorities on respiratory physiology and was a member of Sir Edmund Hillary's 1960 expedition to the Himalayas. After he submitted a paper on the design of the human lung to the American Journal of Respiratory and Critical Care Medicine, an editor emailed him that the paper was basically fine. There was just one thing: Dr. West should cite more studies that had appeared in the respiratory journal.
If that seems like a surprising request, in the world of scientific publishing it no longer is. Scientists and editors say scientific journals increasingly are manipulating rankings -- called "impact factors" -- that are based on how often papers they publish are cited by other researchers.
"I was appalled," says Dr. West of the request. "This was a clear abuse of the system because they were trying to rig their impact factor."
Read the full article here.
shirky (and others) respond to lanier's "digital maoism" 06.08.2006, 1:37 AM
Clay Shirky has written an excellent rebuttal of Jaron Lanier's wrong-headed critique of collaborative peer production on the Internet: "Digital Maoism: The Hazards of the New Online Collectivism." Shirky's response is one of about a dozen just posted on Edge.org, which also published Lanier's essay.
Shirky begins by taking down Lanier's straw man, the cliché of the "hive mind," or mob, that propels collective enterprises like Wikipedia: "...the target of the piece, the hive mind, is just a catchphrase, used by people who don't understand how things like Wikipedia really work."
He then explains how they work:
Wikipedia is best viewed as an engaged community that uses a large and growing number of regulatory mechanisms to manage a huge set of proposed edits. "Digital Maoism" specifically rejects that point of view, setting up a false contrast with open source projects like Linux, when in fact the motivations of contributors are much the same. With both systems, there are a huge number of casual contributors and a small number of dedicated maintainers, and in both systems part of the motivation comes from appreciation of knowledgeable peers rather than the general public. Contra Lanier, individual motivations in Wikipedia are not only alive and well, it would collapse without them.
I haven't finished reading through all the Edge responses, but was particularly delighted by this one from Fernanda Viegas and Martin Wattenberg, creators of History Flow, a tool that visualizes the revision histories of Wikipedia articles. Building History Flow taught them how to read Wikipedia in a more sophisticated way, making sense of its various "arenas of context" -- the "talk" pages and massive edit trails underlying every article. In their Edge note, Viegas and Wattenberg show off their superior reading skills by deconstructing the facile opening of Lanier's essay, the story of his repeated, and ultimately futile, attempts to fix an inaccuracy in his Wikipediated biography.
Here's a magic trick for you: Go to a long or controversial Wikipedia page (say, "Jaron Lanier"). Click on the tab marked "discussion" at the top. Abracadabra: context!
These efforts can also be seen through another arena of context: Wikipedia's visible, trackable edit history. The reverts that erased Lanier's own edits show this process in action. Clicking on the "history" tab of the article shows that a reader -- identified only by an anonymous IP address -- inserted a series of increasingly frustrated complaints into the body of the article. Although the remarks did include statements like "This is Jaron -- really," another reader evidently decided the anonymous editor was more likely to be a vandal than the real Jaron. While Wikipedia failed this Jaron Lanier Turing test, it was seemingly set up for failure: would he expect the editors of Britannica to take corrections from a random hotmail.com email address? What he didn't provide, ironically, was the context and identity that Wikipedia thrives on. A meaningful user name, or simply comments on the talk page, might have saved his edits from the axe.
Another respondent, Dan Gillmor, makes a nice meta-comment on the discussion:
The collected thoughts from people responding to Jaron Lanier's essay are not a hive mind, but they've done a better job of dissecting his provocative essay than any one of us could have done. Which is precisely the point.
julian dibbell on GAM3R 7H30RY 06.07.2006, 3:37 PM
In an age of the hyperlink and the blogosphere, there has been some question whether there's a future of the book at all, but the warm, productive dialogue that's shaping G4M3R 7H30RY may well be it.
Then again, if G4M3R 7H30RY's argument is right, books may well have to cede their role as the preeminent means of understanding culture to another medium altogether: the video game. Wark sets out here on a quest for nothing less than a critical theory of games....and the mantric question he carries with him is "Can we explore games as allegories for the world we live in?" Turns out we can, but the complexity of contemporary games is such that no one mind is up to mapping it all, and Wark's experiment in collaborative revision may be the best way to do the exploring.
updike's tattoo 06.07.2006, 12:41 AM
I was startled but not surprised to read about John Updike's denigration of the future of ebooks at BookExpo. Had he tattooed it on his forehead he couldn't have made clearer his idealization of 19th-century structures and modes of thinking. His talk represented the final glorification of the author/artist/creator as a higher being ingrained with heroic capabilities unapproachable by mere mortals. For Updike and all those unable to cross into the new Canaan of electronicity, the apotheosis of the artist fits into the tradition of history as a history of heroes. There are but a few gods of literature as is only natural, I expected him to say, and if you have art made by whole masses of people, many of them unidentifiable, we'll have regressed to the period of Notre Dame cathedral or the Pyramids, in which no individuals were glorified for their contributions to art or to the era when writing went unsigned or when the writer assumed the mantle of some greater person, to glorify them and spread their thinking.*
This hero worship that Updike has wallowed in for the last 40 years has addled his brain. Reading some of his remarks reminded me of a screed published in the Saturday Review of Literature back in the 1970s, if memory serves, by Louis Untermeyer, decrying the abominably inadequate generation of poets who couldn't use rhyme or rhythm to make their way out of a paper bag. The rant was entertaining and almost credible in its denunciations — except for Untermeyer's having chosen one of the great poems of the 20th century — Frank O'Hara's "The Day Lady Died" — as his example of the witless drivel this shiftless new generation was producing. Untermeyer and Updike belong to the same class of critic as the French academicians who dismissed the Impressionists or the Fauves ("wild beasts"), blind to the future and in love with their own tinny emulation of the greater artists who preceded them. (Who will put Updike in the same list as Tolstoy or Faulkner or Fielding or Isak Dinesen? They made new forms, indelibly, while the best that can be said of Updike is that he stood alone as a prolific writer of magazine pieces.)
It's been said** that new scientific theories don't win over their opponents so much as they are accepted by the new generation and the old generation dies off. The same holds true in art, of course. The precocious writers of the coming generation will cut their teeth on blogs and networked books and media that will require visual acuity and improvisational methods that make Updike's juvenilia*** feel as antiquated as William Dean Howells or James Fenimore Cooper. A living fossil. What a fall from the pantheon he occupies in his imagination.
* I'm thinking specifically of the authors of Revelations and several of the Gnostic gospels.
** Apparently most authoritatively in Thomas Kuhn's Structure of Scientific Revolutions. Updike's remarks provide striking evidence of Kuhn's theory of incommensurability of paradigms — if you are fully caught up in the old paradigm you have no way of assessing the new, lacking common values, language and experience with its proponents.
*** Updike has published, what, 36 books of fiction? We'll be generous and include the first quarter in this categorization.
physical books and networks 06.06.2006, 2:35 AM
The Times yesterday ran a pretty decent article, "Digital Publishing Is Scrambling the Industry's Rules", discussing some recent experiments in book publishing online. One we've discussed here previously, Yochai Benkler's The Wealth of Networks, which is available as both a hefty 500-page brick from Yale University Press and in free PDF chapter downloads. There's also a corresponding readers' wiki for collective annotation and discussion of the text online. It was an adventurous move for an academic press, though they could have done a better job of integrating the text with the discussion (it would have been fantastic to do something like GAM3R 7H30RY with Benkler's book).
Also discussed is the new Mark Danielewski novel. His first book, House of Leaves, was published by Pantheon in 2000 after circulating informally on the web among a growing cult readership. His sophomore effort, due out in September, has also racked up some pre-publication mileage, but in a more controlled experiment. According to the Times, the book will include hundreds of margin notes "listing moments in history suggested online by fans of his work," some of which are to be published in the physical book's margins. Annotations were submitted through an online forum on Danielewski's web site, a forum that does not include a version of the text (though apparently 60 "digital galleys" were distributed to an inner circle of devoted readers).
The Times piece ends with an interesting quote from Danielewski, who, despite his roots in networked samizdat, is still ultimately focused on the book as a carefully crafted physical reading experience:
Mr. Danielewski said that the physical book would persist as long as authors figure out ways to stretch the format in new ways. "Only Revolutions," he pointed out, tracks the experiences of two intersecting characters, whose narratives begin at different ends of the book, requiring readers to turn it upside down every eight pages to get both of their stories. "As excited as I am by technology, I'm ultimately creating a book that can't exist online," he said. "The experience of starting at either end of the book and feeling the space close between the characters until you're exactly at the halfway point is not something you could experience online. I think that's the bar that the Internet is driving towards: how to further emphasize what is different and exceptional about books."
Fragmented as our reading habits (and lives) have become, there's a persistent impulse, especially in fiction, toward the linear. Danielewski is probably right that the new networked modes of reading and writing might serve to buttress rather than unravel the old ways. Playing with the straight line (twisting it, braiding it, chopping it) is the writer's art, and a front-to-end vessel like the book is a compelling constraint in which to work. This made me think of Anna Karenina, which is practically two novels braided together, the central characters, Anna and Levin, meeting just once, and then only glancingly.
I prefer to think of the networked book not as a replacement for print but as a parallel. What's particularly interesting is how the two can inform one another, how a physical book can end up being changed and charged by its journey through a networked process. This certainly will be the case for the two books in progress the Institute is currently hosting, Mitch Stephens' history of atheism and Ken Wark's critical theory of video games. Though the books will eventually be "cooked" by a print publisher -- Carroll & Graf, in Mitch's case, and a university press (possibly Harvard or MIT), in Ken's -- they will almost certainly end up different for their having been networkshopped. Situating the book's formative phase in the network can further boost the voltage between the covers.
An analogy. The more we learn about the evolution of biological life, the more we understand that the origin of species seldom follows a linear path. There's a good deal of hybridization, random mutation, and general mixing. A paper recently published in Nature hypothesizes that the genetic link between humans and chimpanzees is at least a million years more recent than had previously been thought based on fossil evidence. The implication is that, for millennia, proto-chimps and proto-humans were interbreeding in a torrid cross-species affair.
Eventually, species become distinct (or extinct), but for long stretches it's a story of hybridity. And so with media. Things are not necessarily replaced, but rather changed. Photography unleashed Impressionism from the paint brush; television, as Kathleen Fitzpatrick's new book argues, acted as a foil for the postmodern American novel. The blog and the news aggregator may not kill the newspaper, but they will undoubtedly change it. And so the book. You see that glint in the chimp's eye? A period of interbreeding has commenced.
in publishers weekly... 06.05.2006, 9:54 AM
We've got a column in the latest Publishers Weekly. An appeal to publishers to start thinking about books in a network context.
what the book has to say 06.02.2006, 7:26 PM
About a week ago, Jeff Jarvis of Buzz Machine declared the book long past its expiration date as a useful media form. In doing so, he summed up many of the intriguing possibilities of networked books:
The problems with books are many: They are frozen in time without the means of being updated and corrected. They have no link to related knowledge, debates, and sources. They create, at best, a one-way relationship with a reader. They try to teach readers but don't teach authors. They tend to be too damned long because they have to be long enough to be books.
I'm going to tell him to have a look at GAM3R 7H30RY.
Since the site launched, discussion here at the Institute keeps gravitating back to the shifting role of the author. Integrating the text with the discussion as we've done, we've orchestrated a new relationship between author and reader, merging their activities within a single organ (like the systole-diastole action of a heart). Both activities are altered. The text, previously undisturbed except by the author's hand, is suddenly clamorous with other voices. McKenzie finds himself thrust into the role of moderator, collaborating with the reader on the development of the book. The reader, in turn, is no longer a solitary explorer but a potential partner in a dialogue, with the author or with fellow readers.
Roger Sperberg elaborated upon this in a wonderful post about GAM3R 7H30RY on Teleread:
A serious text, published in a format designed to elicit comments by readers -- this is new territory, since every subsequent reader has access to the initial text and to comments, improvements, criticisms, tangents and so on contributed by the body of readers-who-came-before, all incorporated into the, um, corpus.
This is definitely not the same as "I wrote it, they published it, individuals read and reviewed it, readers purchased it and shared their comments (some of them) with others in readers' circles." Even a few days after publication, there are plenty of contributions and perhaps those of Ray Cha, Dave Parry and Ben Vershbow are inseparable now from the initial comments of author McKenzie Wark, since I read them not after the fact but co-terminously (word? not "simultaneously" but "at the same time"). My own perception of the author's ideas is shaped by the collaborating readers' ideas even before it has solidified. What the author has to say has broadened almost immediately into what the book has to say.
Right around the same time, Sol Gaitan arrived independently at basically the same conclusion:
This brings me to pay attention to both, contents and process, which I find fascinating. If I choose to take part, my reading ceases to be a solitary act. This reminds me of the old custom of reading aloud in groups, when books were still a luxury. That kind of reading allowed for pauses, reflection and exchange. The difference now is that the exchange affects the book, but it's not the author who chooses with whom he shares his manuscript, the manuscript does.
McKenzie (the author) then replied:
Not only is reading not here a solitary act, but nor is it conducted in isolation from the writer. It's still an asymmetrical process. Someone asked me in email why it wasn't a wiki. The answer to which is that this author isn't that ready to play that dead.
Eventually, if selections from the comments are integrated in a subsequent version -- either directly in the text or in some sort of appending critical section -- Ken could find himself performing the role of editor, or curator. A curator of discussion...
Or perhaps that will be our job, the Institute. The shifting role of the editor/publisher.