the meaning of life? can you find an answer on the web? 08.17.2005, 3:13 PM
On October 10, 2004, I was sitting with my laptop at a cafe in New York City trying to avoid writing a paper for my first-year humanities class. In a moment of despair, I typed "what is the meaning of life?" into an online forum. Fifty thousand hits and two thousand answers later…
That's the cover copy for David Seaman's first book, "The Real Meaning of Life," due out this September. The book is a print version of the impromptu networked book generated online in response to his question. Aphorisms like "be grease not glue" and "there is no point to life, and that is exactly what makes it so special" came from Buddhists, born-again Christians, atheists, waitresses, students, and recovering heart attack patients.
The public platform that the web offers ordinary people introduces a new way to contemplate this perennial question. Typing "what is the meaning of life?" into Wikipedia yields an extensive post with over 500 edits and a lively discussion page. Here is an excerpt:
The person who asks "What is the meaning of life?" is pondering life's purpose, in the context "Why are we here?", or is searching for a justification or goal as in "What should I do with my life?" Thus, we've separated the main query into two different questions: one about the objective purpose of life ("Why are we here?"), and the other about subjective purpose in life ("What should I do with my life?"). Many claim that life has an objective purpose, though they differ as to what this purpose is, or where it comes from. Others deny that an objective purpose of anything is possible. Purposes, they argue, are by their very nature purely subjective. Subjective purpose of course varies from person to person. In some ways the quandary is a circular argument, the enquirer is in the midst of life seeking to validate life, or be it the meaning of it.
Books have traditionally been vehicles for the contemplation of this circular question. Scripture, scholarly texts, poetry, novels, self-help books, and how-to books all grapple with the issue—"why are we here? and what should I do with my life?"—in various ways. It is interesting to see how the question plays out in the interactive space of the web.
Type "what is the meaning of life?" into the Google search engine and it yields 62,300 responses, including an "Ask Yahoo" page from 1998 in which Juan asks the Yahoo search team to find the meaning of life for him. The letter he gets back recommends a visit to the Yahoo meaning of life page. It also offers this advice:
Now, if you're looking for the meaning of your life in particular, then we're afraid we have to fall back on the somewhat predictable response: "It's up to you." Many people try to give lasting meaning to their lives by making the world a better place than when they entered it, either through scientific, philosophical, or artistic contributions. Others try by raising children that can themselves make contributions and preserve important societal and religious values for future generations.
There are also quite a few personal web pages that address the question. One particularly poignant example is JaredStory.com, a site by and about Jared High, a young boy who took his own life shortly after a violent beating by a school bully. This heartbreaking site is filled with biblical quotations, audio and video of Jared, information about suicide and bullying, and a transcription of the lawsuit filed by his grieving parents.
Taken together these online "answers" create a wonderful mosaic of humanity striving to know itself and to connect with the universe. The web gives us an opportunity to read this interlinked accumulation of wisdom on a scale never before possible.
the networked graphic novel 08.16.2005, 12:29 PM
Media artist Andy Deck's "Panel Junction" is a digital graphic novel that uses collaborative software to transform the authoring process. "Panel Junction" is one of the first open source networked books to venture into the graphic novel genre, but judging from the response Deck is getting, it is probably not the last.
fictional pop star gets buried on wikipedia 08.15.2005, 2:00 PM
Jamie Kane, fictional heartthrob and pop star, is the subject of a new BBC online game where players must solve the mystery of the singer's death (story). Last Friday, a Jamie Kane article appeared on Wikipedia (original, current) that made no acknowledgement of the boy band singer's fictional status. Encyclopedians soon sniffed out the viral marketing ploy and edited the page, making it a likely candidate for deletion. Today, the BBC admitted that the page had been set up by an employee, but not as part of an official marketing campaign.
Should we be worried about the veracity of information on the web? Of course. On Wikipedia? Definitely. But not because of episodes like this. If anything, this demonstrates how Wikipedia can work quite well for pop culture, and how the community can respond swiftly to so-called vandalism. I say, let Jamie Kane have his page, just not under false pretenses. Let the page incorporate the history of this tiny scandal. That's one of the things I find most fascinating about Wikipedia - that it can handle that kind of self-awareness.
collected letters: a mountain of email 08.05.2005, 12:21 PM
Fernanda Viégas, a Ph.D. candidate at the Sociable Media Group at the M.I.T. Media Lab, has developed an attractive visualization tool for email archives. Appropriately, she chooses the metaphor of a mountain. Colors, like rock layers, represent the various people you have kept up with. The more recent contacts are toward the peak.
It's terrifying to think of all the email I have generated and received since I first started corresponding electronically in the mid-90s. If they were letters stuffed in shoe boxes, they would probably fill a house. Some of those shoe boxes have been vaporized (I lost a lot of my college letters, probably for the best, when my student accounts were closed). And now, probably foolishly, I use webmail, which could vanish just as easily.
The vast majority of letters are transient things, hardly worth a second look, let alone saving. But some can turn out to be valuable keys to the past - a way to unlock a mind or a relationship from an earlier time. Collected letters have always been an invaluable tool for literary and historical studies. But since letter writing is an all but dead practice, we must turn to emails for epistolary evidence. And not just email, since today's communication practices are so diverse. Text messages, instant messenger chats, phone calls, video conferencing - these are all avenues for our social selves. 99% may be vapor, but it would be sad to lose that salient 1% that gives flight to memory.
I came across a new program (not free) that records Skype conversations as .mp3 files. Skype is a free VoIP program (voice over internet protocol) that probably spells ultimate doom for traditional phone services. Years later, assuming your hard drive hasn't been wiped and the file format is still readable, plug in your headphones and listen to your collected letters.
wikipedia stats 07.26.2005, 1:42 PM
- Over 4,000 new articles a day.
- An average of 14 edits per entry.
- About 57,000 people who have edited at least 10 times.
- About 2.7 million edits in May.
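Taken at face value, the figures above can be cross-checked with a bit of arithmetic (a sketch; the only inputs are the numbers quoted in the list, and 31 days is assumed for May):

```python
# Back-of-envelope arithmetic on the Wikipedia figures quoted above.
new_articles_per_day = 4_000
avg_edits_per_entry = 14
edits_in_may = 2_700_000
days_in_may = 31

# The monthly total implies roughly 87,000 edits a day.
edits_per_day = edits_in_may / days_in_may

# If each day's crop of new articles eventually accumulates the
# average of 14 edits, that alone would account for 56,000 edits.
edits_from_new_articles = new_articles_per_day * avg_edits_per_entry

print(round(edits_per_day), edits_from_new_articles)  # 87097 56000
```

By this rough reckoning, the daily churn of new articles could eventually generate well over half of a day's edit volume on its own.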
It's hard to deny that something is going on here. I wonder, though... Insofar as it assumes the mantle of the print encyclopedia, Wikipedia represents a migration of print-based knowledge to the web. Many of its most devoted contributors acquired their learning from a print world of libraries, books and periodicals. But what happens later, after this formative period, when your average user is someone who derives the bulk of their learning from the web (from resources like Wikipedia)? How do you maintain the Wikipedia article on the Spanish Golden Age when that same article is your primary source?
Perhaps Wikipedia's encyclopedia-ness is just an early skin that it will eventually shed. It may well devolve into a sprawling compendium of trivia and eccentric niche outposts (in some ways, that's what it already is). Or it may evolve further into that strange hybrid animal: a reference work on current events (as it has proven itself to be with the new pope and the London bombings). On the other hand, if serious scholars decide that Wikipedia is a legitimate resource worthy of investment, we might see it sustain itself as a reliable reference work (without, of course, losing all the trivia and current events). To do this, it might, paradoxically, need to stay grounded in the offline world.
convergence sightings 07.25.2005, 6:15 PM
1: BLOGS AND RADIO. The NY Times profiles a new show on Public Radio International that draws its inspiration from the web. "Open Source from PRI" uses a public blog to cull topics and story ideas from registered commenters:
"Open Source will not be a show about blogs. It will use blogs to be a show about the world."
Open Source offers podcasts of its programs, available for free subscription on Odeo and iTunes.
2. WIRES CROSSED. The Associated Press is preparing to launch an online video streaming network from which members can syndicate clips for presentation on their news sites. Clips will be streamed by member sites over branded video players that AP will provide. In exchange, AP will take a portion of revenues generated by interspersed video ads. Most news networks are racing to upgrade their sites to offer more video content alongside text (see "television merging with the web" on this blog).
wiki wiki: snapshot etymology 06.23.2005, 3:49 PM
Found on Flickr: the famous "wiki wiki" shuttle bus at the Honolulu airport. In Hawaiian, "wiki wiki" means "quick," and it was this shuttle that inspired Ward Cunningham in 1995 to name his new openly editable web document engine the "wiki," or WikiWikiWeb.
(photo by cogdogblog)
The 2005 Computers and Writing Conference 06.22.2005, 12:36 PM
Stanford University hosted the 2005 Computers and Writing conference this past weekend. Each session was rife with "future of the book" food for thought. This is an informal summary, with apologies to all the fabulous presentations that I don't mention (sorry, being only one person, I could not attend them all). Some of the major themes (which dovetail nicely with issues we are exploring at the institute) included: Open Source, new interpretations of literacy and "writing," the changing role of the teacher/student, performance, multimodality, and networked community. It is important to note that these themes often blur together in a complicated interdependence. This thematic interplay was evident in the pre-conference workshops which included instruction in open source tools and applications like Drupal that allow for multimodality and the creation of communal authoring environments. Workshops in "Reading Images" and "Using Video to Teach Writing" addressed multiple modalities and new concepts of writing.
I was excited to see that the Computers and Writing community understands the potential of, and imperative for, Open Source. Its practical advantages (free and customizable) and its philosophical advantages (community-based and built for sharing rather than for selling) make it ideally suited to the goals of the educational community. Open Source came up over and over during the presentations and was featured in the first town hall session, "Open Source Opens Thinking." The session challenged the Computers and Writing community "to consider a position statement of collective principles and goals in relation to Open Source." Such a statement would be useful and productive; I'm hoping it will materialize.
The changing role of the teacher and student was evident in several presentations: most notably, the pilot program at Penn State (see my earlier post) in which students publish their "papers" on a wiki. The wiki format allows for intensive peer-review and encourages a culture of responsibility.
There was a lot of speculation about how writing will evolve and how other modalities might be incorporated into our notion of literacy. Andrea Lunsford's keynote speech addressed this issue, calling for a return to oral and embodied "performative literacies." She referred to Tara Shankar's MIT dissertation "Speaking on the Record," which confronts the way we privilege writing above other modalities for knowledge and education. She says: "Reading and writing have become the predominant way of acquiring and expressing intellect in Western culture. Somewhere along the way, the ability to write has become completely identified with intellectual power, creating a graphocentric myopia concerning the very nature and transfer of knowledge. One of the effects of graphocentrism is a conflation of concepts proper to knowledge in general with concepts specific to written expression."
Shankar calls for new practices that embrace oral communication. She introduces a new word "to provide a counterpart to writing in a spoken modality: speak + write = sprite. Spriting in its general form is the activity of speaking 'on the record' that yields a technologically supported representation of oral speech with essential properties of writing such as permanence of record, possibilities of editing, indexing, and scanning, but without the difficult transition to a deeply different form of representation such as writing itself."
The need for a multimodal approach to writing was addressed in the second Town Hall meeting, "Composition Beyond Words." Virginia Kuhn opened by calling for a reconsideration of "writing" and the goals of visual literacy. Bradley Dilger reminded us that literacy goes beyond "the letter"; we need multiple interfaces for the same data because not everyone looks at data the same way. Madeleine Sorapure pointed out that writing with computers is determined by underlying code structures which are, themselves, a form of writing. She quoted Loss Pequeno Glazier: "Code is the writing within the writing that makes the work happen." Gail Hawisher talked about the ten-year process of incorporating multiple modalities into the first-year composition courses at the University of Illinois. Cynthia Selfe addressed this struggle, saying: "colleges are not comfortable with multiple modalities." She advises the C&W community to "think about how to give professional development/support to resistant colleges in ways that are sustainable over time." Stuart Moulthrop also offered some cautionary words of advice. In addition to faculty and administration, Moulthrop says, students are resistant to multimodality. Code, for example, is fatally hard to teach non-programmers or visually oriented people. "There is a political problem," Moulthrop says. "We are living through a backlash moment. People are very angry about how fast the future has come down on them."
Some participants delivered "papers" that attempted to demonstrate these new multimodal imperatives. Most notably, Todd Taylor's presentation, "The End of Composition," asked, "Can a paper be a film?" Todd argues "yes" with a cinematic montage of sampled and remixed clips along with original footage, which was enthusiastically received by the audience (alt. review in the Machina Memorialis blog). Morgan Gresham's Town Hall presentation was a student-produced video and a question to the audience: is this just a remake of a bad commercial, or is it a "paper"? Christine Alfano's presentation experimented with a hypertext, "Choose Your Own Adventure" style that allowed the audience to determine the trajectory of the talk. Once the selection was made, she dropped the other two papers/options to the floor. The choice, unfortunately for me, eliminated the material that I most wanted to hear about (Shelly Jackson's Patchwork Girl). Additionally, "virtual" presentations were delivered during an online companion conference, Computers and Writing Online 2005: "When Content Is No Longer King: Social Networking, Community, and Collaboration." This interactive online conference served "as an acknowledgment of the value of social networks in creating discourse of and about scholarly work." CWOnline 2005 made both the submission and presentation process open to public review via the Kairosnews weblog. Despite some flaws, I thought these experimental presentations pushed at the boundaries of academic discourse in a useful way. They reminded us how far we have to go and how difficult the project of putting ideas into practice really is.
Finally, the conference highlighted ways in which computers are being used to cultivate community across cultures and institutions, and between students, teachers, and scholars. Sharing Cultures, a joint project of Columbia College Chicago and Nelson Mandela Metropolitan University in South Africa, "creates two interconnected, on-line writing and learning communities…the project purposely includes students who traditionally have not had access to, or have been actively marginalized from, both digital and international experiences." Virginia Kuhn approached computers and community at the local level with a service learning class called "Multicultural America," which asked students to write an ebook documenting local history. The finished work is part of an ongoing display at a Milwaukee community center. This project inspired an interesting reversal: community members who worked with students on the project are now (thanks to a generous grant) coming to the University of Milwaukee for supplemental study. Within the academy there are also exciting opportunities for computer-based community-building. In her Town Hall presentation, Gail Hawisher said that literacy on campus is "usually taken care of by first year composition." If we are to incorporate visual literacy into our definition of literacy then, "Perhaps we should be looking to art and design for literacy instead of just the English dept." This is an incredibly smart idea because, short of requiring composition teachers to have degrees in art, film, AND writing, collaborative efforts with other departments seem to be the best way to ensure a deep and rigorous understanding of the material. I had an interesting conversation with Stuart Moulthrop about this. We imagined a massively multiplayer game environment that would allow scholars from around the world to collaborate on curriculum across institutional and disciplinary boundaries.
Wouldn't it be great, we thought, if someone who wanted to teach an odd combination like film/biology/physics could put a course scenario into the game, where it would be played out by biologists, film scholars, and physicists? In other words, a kind of lifetime learning environment for the experts, a laboratory for the exchange of knowledge across disciplinary boundaries, and a place to weave together different strands of human insight in order to create a more complete "picture" of the universe.
on second thought... (wikis are hard) 06.21.2005, 10:24 AM
The LA Times has temporarily shelved its plans for running "wikitorials" - editorials that any reader can edit - due to a flood of "inappropriate material." The whole experiment with wikis was a risky move for a well-established newspaper to take, and it's not surprising that they immediately panicked once the riff raff showed up. It's hard to establish an open, collaborative environment from the top down. Whereas if you start from a point of low stakes, with little prestige on the line (as Wikipedia did), then the enterprise can evolve slowly, embarrassing missteps, spam, and all.
Someone should start an experiment: dump the LA Times content in a non-affiliated wiki and try the wikitorials there. Give it time, let the community build, work out the hiccups, and then give the LA Times a call.
the 2005 computers and writing online conference 06.06.2005, 3:30 PM
The Institute for the Future of the Book is presenting a paper at the 2005 Computers and Writing Online Conference. Our presentation, entitled "Sorting the Pile: Making Sense of A Networked Archive," discusses our experience building a networked archive for our Gates Memory Project and the insights it provided regarding the evolution of books in the networked environment.
The conference began on Tuesday, May 31, and runs through Monday, June 13. It is an online conference that is open-access, Creative Commons-licensed, and hosted on a weblog. Drawing upon the conference's theme of exploring the increasing value of the network and collaborative practices within it, presenters examine the role(s) played by social networking applications and other technologies that are intended to foster social interaction, community, and collaboration. Alongside studying the technologies themselves, presenters will observe and describe the ways that writers and users are engaging the technologies and how such engagement is changing our ideas about writing and teaching writing, and, more broadly, the concepts of rhetoric and composition themselves. We very much hope you'll get involved by leaving your comments, or, if you prefer, respond on your own weblog and leave a trackback! Or write a response on your wiki! Or tag presentations on your del.icio.us or de.lirio.us list! You get the idea. This conference is meant to be networked.
The presentations are accessible to anyone with an internet connection, and anyone with an account at Kairosnews (registration is free) can leave comments. For more information, visit the CW Online 2005 weblog.
the city writes its book 05.25.2005, 4:06 PM
chicagocrime.org, the best use of Google Maps I've seen to date, has been making the web rounds over the past week. It generates maps using information scraped from Citizen ICAM, a public portal to the Chicago Police Department's database of reported crime. You can view by type of crime, street, date, police district, location type (i.e. alley, ATM, residence etc.), or a map of the whole city.
This is the latest in a series of living documents that have sprung up recently - web spaces tied by a thousand strings to real, physical places. I can imagine chicagocrime being integrated into a larger Chicago-area web hub, or aggregator. Ideally, these hubs (see here and here) will combine the conviviality of the blog, the utility of craigslist, the diversity of Flickr or ourmedia, and the collective vigilance of citizen journalism. Other recently launched initiatives of note are Bayosphere ("...of, by and for the Bay Area") and mnspeak.com ("twin cities: all day, all night"). The more people participate, the truer the picture of that place at that time. Are we moving past the primacy of the editor? Or will editors prove more important than ever before?
"ubiquitous social encyclopedia" 05.20.2005, 11:36 AM
Cellphedia, a thesis project at the Interactive Telecommunications Program at NYU, is a user-generated encyclopedia composed of text message Q&A from cell phones - a kind of mobile, hyper-abbreviated Wikipedia. But unlike Wikipedia, Cellphedia entries are not open to editing by the community, at least not yet. Inspired by Dodgeball, a popular friend-tracking service, Cellphedia suggests something more along the lines of a massive, multi-user trivia game than a serious knowledge resource. It's the kind of street research that is becoming more common. Answers on impulse. The web overlaid on the physical world.
BBC rhapsody 05.14.2005, 3:40 PM
Etymology: Latin rhapsodia, from Greek rhapsoidia recitation of selections from epic poetry, rhapsody, from rhapsoidos rhapsodist, from rhaptein to sew, stitch together - Merriam-Webster OnLine
lost recording of Douglas Adams, and, Flash in the pan 05.09.2005, 7:50 AM
I recently saw the new Hitchhiker's Guide to the Galaxy movie, which features as one of its central characters a very powerful electronic book - a guide to "life, the universe and everything." Coming away, I felt a bit uneasy. Could this be the future of the book in the age of Adobe-Macromedia? As portrayed in the film, the Guide is essentially a compendium of Flash animations, with a little bit of text and a wry British voiceover. Granted, it's just a narrative device in a film, designed more for style than for content. But is this any less true in real life, with all these websites built in Flash, and all the Flash-enhanced garbage on television - especially in ads and sports coverage (notice how TV has become a lot more like a video game)? The same goes for the film. Though chock-a-block with spiffy visual effects, and flavored with Douglas Adams' unmistakable wit, it's basically all style, all pose - visual fireworks for a passive viewer. We have only just started to explore the frontier of media-rich, networked books. But if "FlashAcrobat" becomes the writing tool of choice, that just might end up preempting any serious consideration of an active, critical role for the reader. Books become the halftime show at the Super Bowl. Flash frenzy...
Paul Boutin, writing last week in Slate, draws a more encouraging parallel to the fictional Guide: Wikipedia, "...a real-life Hitchhiker's Guide: huge, nerdy, and imprecise." I had not been aware that Adams, before his untimely death in 2001, had experimented with his own web version of the Guide, a sort of proto-Wikipedia called h2g2, hosted by the BBC. Flipping through just a few of the articles, it's interesting to see a collaborative work sustaining a unified authorial voice. The tone, not to mention the choice of subjects, comes across as unmistakably Adams - the ur-author - even though the guide was built by diverse contributors, in more or less the same fashion as Wikipedia. Here's the intro paragraph from the article "The Problem with Driving Directions":
"In the absence of in-car electronic route maps, driving directions are sets of instructions given to drivers in order for them to reach their desired destination. These basically come in two different forms: oral and written. Whether oral or written, they are widely used due to the fact that people often have no idea how to get to where they are going, and naturally assume that they are the only ones that do not know, and so ask someone else. Unfortunately, this other person tends not to know either."
Building on Boutin's comparison, you could argue that Wikipedia is simply imitating the tone and format of a paper encyclopedia, much as Adams' followers in h2g2 are emulating the style of his novels. As a reference tool, Wikipedia may have far outstripped Adams' project, but questions of accuracy and reliability persist. h2g2, on the other hand, sits much more comfortably in its skin, cheerfully acknowledging that it contains "many omissions," and "much that is apocryphal, or at least wildly inaccurate." A much more serious and important endeavor, Wikipedia is still wrestling with the anxiety of influence exerted by its forebear, the encyclopedia. Over time, will its voice change?
is the information any good?...don't ask Google 04.24.2005, 9:01 AM
Lately, I've been thinking about quantitative data vs. qualitative data and noticing that the web is really good at analyzing, packaging, and delivering the former, but woefully barren when it comes to the latter. The really elegant digital visualizations that I've seen work with quantitative data. They can show you, for example, the top news stories of the hour, day, or week; the spatial position and relative frequency of words in a novel; the most popular tags, etc... Search engines also privilege quantitative information; the first site that shows up on the Google list is usually the most popular. But determining the quality of that data is left, almost entirely, up to the user. Returning to a point I tried to make in an earlier post, the web is like high school: popularity is not always a sign of quality, reliability, or substance.
Let's take the news, for example: the results of a national survey on media consumption, conducted by The Pew Research Center and released last year by the Brookings Institution, suggest "that news audiences are increasingly polarized, fragmented, and skeptical, opting for news outlets that most closely resemble their own ideologies." This shared skepticism not only applies to "opposition" news sources, but to the media in general - more than half of those surveyed said they don't trust the news media. "People want to know, 'Why should I believe that?'" says Tom Rosenstiel, director of the Project for Excellence in Journalism.
Why can’t we use technology to answer this need? What if instead of serving up the most popular stories, we created search engines and visualizations that identified the best stories, ranking information according to quality? Programs that answer the following concerns:
• how well-informed is the writer/news agency?
• are they honest?
• how good is the writing?
• how good is the art/photography/video?
• what are their political motivations?
• who are they paid by/owned by?
Many of these questions require investigation and/or subjective answers. Since subjectivity is still a uniquely human form of processing and evaluating, what I am really calling for is a program that helps us organize the veritable sea of human opinion surging about on the web. The news is not the only area where humans need humans to figure out what they should pay attention to. The massive amount of content that is being generated through the web creates an urgent need for filters in almost every imaginable category. Someone needs to design a critical apparatus for our networked world.
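To make the idea concrete, here is the simplest imaginable version of such a filter: let human readers rate a story on the axes listed above, then fold the ratings into a single weighted quality score. The axes, weights, and 0-5 scale are entirely hypothetical - a sketch of the concept, not any existing system:

```python
# Hypothetical quality axes and weights, loosely following the
# bullet list above. None of this reflects a real ranking system.
WEIGHTS = {
    "well_informed": 0.3,  # how well-informed is the writer/agency?
    "honesty": 0.3,        # are they honest?
    "writing": 0.2,        # how good is the writing?
    "art": 0.1,            # how good is the art/photography/video?
    "transparency": 0.1,   # motivations and ownership disclosed?
}

def quality_score(ratings):
    """Weighted average of 0-5 human ratings; unrated axes are ignored."""
    rated = [axis for axis in ratings if axis in WEIGHTS]
    total = sum(WEIGHTS[axis] * ratings[axis] for axis in rated)
    weight = sum(WEIGHTS[axis] for axis in rated)
    return total / weight if weight else 0.0

# A story rated by readers on three of the five axes:
story = {"well_informed": 4, "honesty": 5, "writing": 3}
print(quality_score(story))
```

The hard part, of course, is not the arithmetic but gathering trustworthy human judgments in the first place; the sketch only shows how subjective ratings, once collected, could be organized into a ranking.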
web 3.0 - all consuming 04.22.2005, 4:41 PM
Dan Gillmor has written a nice, accessible overview of the evolution of the web in his periodic column for the Financial Times. As he sketches it, version 1.0 was a "fairly static," "read-only" affair - sites were relatively basic and we checked them for new content or downloads. Online retail and search engines sprang up, essentially to help us find things to read, while things like GeoCities made it possible for anyone to have their own site. With 2.0 it became a two-way street - a "read-write" web, with the blog as its poster child. Now, we are learning how to weave all the pieces together and to recombine them in innovative ways - this is version 3.0.
The emerging web is one in which the machines talk as much to each other as humans talk to machines or other humans. As the net is the rough equivalent of a computer operating system, we’re learning how to program the web itself.
A big part of 3.0 are the "web services" that can be built with a site's "applications programming interface," or API. An API is essentially a window into a site's code that programmers can use to build derivative applications. Google, Yahoo, Amazon, and Flickr all have APIs. Gillmor points to a wonderful site - ALL consuming - that uses the Amazon API to build communities around the media - books, music, film - that people are consuming. You simply post the latest entrée in your media diet - anything that can be found on Amazon - and then add tags and comments. People inevitably find each other through what they are reading, and discussions can ensue. This is an interesting step toward the real-time reading communities that will be possible when we have dynamic electronic books that can plug into the network.
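A toy sketch of the "derivative application" idea: a program consumes structured data of the kind an API might return and builds a community view from it. The JSON shape below is invented for illustration; a real API (Amazon's, Flickr's, etc.) defines its own response format.

```python
import json
from collections import defaultdict

# Stand-in for what a site's API might hand back: who posted what.
# This structure is hypothetical, not any real service's format.
api_response = json.dumps([
    {"user": "kim", "item": "The Real Meaning of Life"},
    {"user": "ben", "item": "Hitchhiker's Guide"},
    {"user": "dan", "item": "The Real Meaning of Life"},
])

def readers_by_item(raw):
    """Group users by the media item they posted - the seed of a
    'people reading the same book' community page."""
    groups = defaultdict(list)
    for entry in json.loads(raw):
        groups[entry["item"]].append(entry["user"])
    return dict(groups)

print(readers_by_item(api_response)["The Real Meaning of Life"])  # ['kim', 'dan']
```

The derivative application never touches the site's own pages; it just rearranges the data the API exposes - which is exactly what makes the 3.0 recombination Gillmor describes possible.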
wikipedia keeps apace 04.20.2005, 1:57 PM
Barely 24 hours after being selected as the 265th Pope, Cardinal Joseph Ratzinger, now Benedict XVI, has his own Wikipedia article. Actually, Ratzinger did previously have his own page, but it was moved yesterday to the new Benedict XVI address and has since undergone a massive overhaul. The revision history, already quite long, captures in miniature the stormy debate that has raged across the world since the news broke. Early on in the history, you see the tireless Wikipedians wrestling over passages dealing with the pontiff's early years in Germany, where he was a member of the Hitler Youth (membership was compulsory). One finds evidence of a virtual tug-of-war waged over a photograph of Ratzinger as a boy, wearing what appears to be the crisp uniform and official pin of the Hitlerjugend. The photo was eventually scrapped amid doubts about its veracity and copyright status.
Scanning across the revision history, it's hard not to be impressed by the vigilance, passion and sheer fussiness that go into the building of a Wikipedia article. Like referees, the writers are constantly throwing down flags for excessive "editorializing" or "POV," challenging each other on accuracy, grammar, and structure. There are also frequent acts of vandalism to deal with (all the more so, I imagine, with an article like this). Earlier today, for instance, some teenager replaced the Pope's headshot with a picture of himself. But within a minute, it was changed back. The strength of the Wikipedia is the size of its community - illustrating the "group-forming networks law" that Kim discusses in the previous post, "the web is like high school."
Not long ago, I posted about a new visualization tool that depicts Wikipedia revision histories over time, showing the shape of an article as it grows and the various users that impact it. For articles on controversial subjects - like popes - it would be fascinating to see these histories depicted as conversations, for that is, in essence, what they are. Any conversation that involves more than two parties cannot be accurately portrayed by a linear stream. There are multiple forks, circles, revolutions, and returns that cannot be captured by a straight line. Often, we are responding to something further up (or down) in the stream, but everything appears sequentially according to the time it was posted. We are still struggling on the web to find a better way to visualize conversations.
It's also strange to think of an encyclopedia article as news. But that's definitely what's happening here, and that's why Dan Gillmor calls attention to the article on his blog ("How the Community Can Work, Fast"). If newspapers are the "rough draft of history" and encyclopedias are the stable, authoritative version, it seems Wikipedia is somewhere in the middle.
This image sums it up well. It appears at the top of the Benedict XVI page, or above any other article that is similarly au courant.
the web is like high school 04.20.2005, 11:20 AM
Social networking software is breeding a new paradigm in web publishing. The exponential growth potential of group-forming networks is shifting the way we assign value to websites. In a paper entitled "That Sneaky Exponential--Beyond Metcalfe's Law to the Power of Community Building," Dr. David P. Reed, a computer scientist and discoverer of “Reed’s Law,” a scaling law for group-forming architectures, says: “What's important in a network changes as the network scale shifts. In a network dominated by linear connectivity value growth, "content is king." That is, in such networks, there is a small number of sources (publishers or makers) of content that every user selects from. The sources compete for users based on the value of their content (published stories, published images, standardized consumer goods). Where Metcalfe's Law dominates, transactions become central. The stuff that is traded in transactions (be it email or voice mail, money, securities, contracted services, or whatnot) are king. And where the GFN law dominates, the central role is filled by jointly constructed value (such as specialized newsgroups, joint responses to RFPs, gossip, etc.).”
Reed makes a distinction between linear connectivity value growth (where content is king) and GFNs (group-forming networks, like the internet), where value (and presumably content) is jointly constructed and grows as the network grows. Wikipedia is a good example: the larger the network of users and contributors, the better the content will be (because you draw on a wider knowledge base), and the more valuable the network itself will be (since it has created a large number of potential connections). He also says that the value/cost of services or content grows more slowly than the value of the network. Therefore, content is no longer king in terms of return on investment.
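Reed's distinction can be made concrete with the standard scaling formulas: linear (Sarnoff) value grows with the number of users n, Metcalfe's Law counts the n(n-1)/2 possible pairwise connections, and Reed's Law counts the 2^n - n - 1 possible nontrivial subgroups. A quick sketch:

```python
def linear_value(n):
    """Sarnoff: value proportional to the number of users."""
    return n

def metcalfe_value(n):
    """Metcalfe: value proportional to pairwise connections, n(n-1)/2."""
    return n * (n - 1) // 2

def reed_value(n):
    """Reed: value proportional to possible groups of 2 or more,
    i.e. 2^n minus the n singletons and the empty set."""
    return 2 ** n - n - 1

# Even at modest scale, the group-forming term dwarfs the others.
for n in (10, 20, 30):
    print(n, linear_value(n), metcalfe_value(n), reed_value(n))
```

This is the "sneaky exponential" of Reed's title: past a fairly small network size, the group-forming term dominates everything else.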
Does this mean that the web is becoming more like high school, a place where relative value is assigned based on how many people like you? And where popularity is not always a sign of spectacular “content.” You don’t need to be smart, hard-working, honest, nice, or interesting to be the high-school "it" girl (or boy). In some cases you don’t even have to be attractive or rich, you just have to be sought-after. In other words, to be popular you have to be popular. That’s it.
SO...if vigorously networked sites are becoming more valuable, are we going to see a substantial shift in web building strategies and goals—from making robust content to making robust cliques? Dr. Reed would probably answer in the affirmative. His recipe for internet success: “whoever forms the biggest, most robust communities will win.”
as u like it - a networked bibliography 04.19.2005, 12:05 PM
This past weekend I attended some of the keynote lectures at the Interactive Multimedia Culture Expo at the Chelsea Art Museum in New York. Among the speakers was Clay Shirky, who gave a quick, energetic talk on "folksonomies" - user-generated taxonomies (i.e. tags) - and how they are changing, from the bottom up, the way we organize information. Folksonomies are still in an infant stage of development, and it remains to be seen how they will develop and refine themselves. Already, it is getting to be a bit confusing and overwhelming. We are in the process of building, collectively, one tag at a time, a massive library. Clearly, we need tools that will help us navigate it.
Something to watch is how folksonomies are converging with social software platforms like Flickr. What's interesting is how communities form around specific interests - photos, for instance - and develop shared vocabularies. You also have the bookmarking model pioneered by del.icio.us, which essentially empowers each individual web user as a curator of links. People can link to your page, or subscribe with a feed reader. Eventually, word might spread of particular "editors" with particularly valuable content, organized particularly well. New forms of authority are thereby engendered.
Shirky mentioned an interesting site that is sort of a cross between these two models. CiteULike takes the tag-based bookmark classification system of del.icio.us and applies it exclusively to papers in academic journals, thereby carving out a defined community of interest, like Flickr.
"CiteULike is a free service to help academics to share, store, and organise the academic papers they are reading. When you see a paper on the web that interests you, you can click one button and have it added to your personal library. CiteULike automatically extracts the citation details, so there's no need to type them in yourself. It all works from within your web browser. There's no need to install any special software."
Essentially, CiteULike is an enormous networked bibliography. On the first page, recently posted papers are listed under the header, "everyone's library." To the right is an array of the most popular tags, varying in size according to popularity (like in Flickr). Each tag page has an RSS feed that you can syndicate. You can also form or join groups around a specific subject area. As of this writing, there are articles bookmarked from 6,498 journals, primarily in biology and medicine, "but there is no reason why, say, history or philosophy bibliographies should not be equally prevalent." So says Richard Cameron, who wrote the site this past November and is its sole operator. Citations are automatically extracted for bookmarked articles, but only if they come from a source that CiteULike supports (list here, scroll down). You can enter metadata manually if you are not submitting from a vetted source, but your link will appear only on your personal bookmarks page, not on the homepage or in tag searches. This is to maintain a peer review standard for all submitted links, and to guard against "lunatics." CiteULike says it is looking to steadily expand its pool of supported sources.
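A toy sketch of the automatic citation-extraction step, to make the idea concrete - the citation format and field names here are invented for illustration, not CiteULike's actual parser:

```python
import re

# Hypothetical citation line format: "Authors (Year). Title. Journal."
CITATION_RE = re.compile(
    r"^(?P<authors>[^(]+)\((?P<year>\d{4})\)\.\s*"
    r"(?P<title>[^.]+)\.\s*(?P<journal>[^.]+)\.$"
)

def extract_citation(line):
    """Pull structured metadata from a citation line from a 'supported'
    source; return None for anything that doesn't match, which would be
    filed under the user's personal bookmarks only."""
    m = CITATION_RE.match(line.strip())
    if not m:
        return None
    return {k: v.strip() for k, v in m.groupdict().items()}
```

A real service would parse each supported journal site's markup rather than a single text format, but the division of labor is the same: recognized sources get structured, shareable metadata; everything else stays private.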
CiteULike might eventually fizzle out. Or it might mushroom into something massively popular (it's already running in five additional languages). Perhaps it will merge with other social software platforms into a more comprehensive folksonomic universe. Perhaps Google will buy it up. It's impossible to predict. But CiteULike is a valuable experiment in harnessing the power of focused communities, and in creating the tools for navigating our nascent library. It might also solve some of the problems put forth in Kim's post, "weaving textbooks into the web." Worth keeping an eye on.
one tree in a forest of information 04.14.2005, 12:36 AM
"tree accesses the source code of a web domain through it's url and transforms the syntactic structure of the web site into a tree structure represented by an image. this image illustrates a tree with trunk, branches and ramifications. first each tree is initialized, than all html links are detected, chronologically saved and finally displayed."
It also builds separate trees for external links, creating entire forests of information. For some reason, our one external tree - for worldbook.com/info, a site I'm pretty sure we do not link to - was a stubby, brown little runt, so I've left it off. I'm not sure why it didn't pick up on our real external links. I'm also not sure exactly how to read the tree, but I guess I can see the basic nature of if:book represented - i.e. a blog is pretty simple structurally but with a lot of content, hence our shaggy, dense foliage on a slender trunk. I also made trees for Google and the New York Times, and they were much less woolly and not green like ours.
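The link-detection step the texone quote describes - find every link in document order, splitting internal from external - might be sketched like this (the page content and domain below are made up):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Detect all <a href> links in document order, splitting them into
    internal links (branches of the main tree) and external links
    (seeds for separate trees), roughly as texone's tree does."""

    def __init__(self, domain):
        super().__init__()
        self.domain = domain
        self.internal, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        if href.startswith("http") and self.domain not in href:
            self.external.append(href)
        else:
            self.internal.append(href)

page = '<a href="/blog">blog</a><a href="http://worldbook.com/info">wb</a>'
collector = LinkCollector("futureofthebook.org")
collector.feed(page)
```

From these two ordered lists, a renderer would then grow trunk segments, branches and leaves chronologically.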
The texone guys have also recorded the sound of data forests growing. Apparently, every node on a tree - each trunk segment, branch, and leaf - emits a piece of MIDI data - a digital note, varying in pitch according to placement on the tree (high notes are toward the top). They recorded the output for different trees and filtered it through various sound palettes (different types or arrays of instruments). A couple of examples are posted on their site. I've put one of them below.
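The pitch mapping they describe - higher notes toward the top of the tree - could be sketched as follows; the MIDI note range is an assumption, since the actual mapping isn't specified:

```python
def node_pitch(depth, max_depth, low=36, high=96):
    """Map a node's height on the tree to a MIDI note number
    (assumed range: 36 = low C, 96 = high C). Nodes nearer the
    top of the tree (greater depth) get higher pitches."""
    if max_depth == 0:
        return low
    return low + round((high - low) * depth / max_depth)
```

Each trunk segment, branch, and leaf would emit its note as the tree grows, and the stream of notes could then be voiced through whatever instrument palette you like.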
contagious media: symptom of what's to come? 04.13.2005, 11:32 AM
Here's a rare peek into the inner workings of the institute: our discussion about viral media that came out of a debate over what to do with the Gates Memory Project. I've excerpted from last night's email conversation....
Ben starts by saying:
Genesis and entropy are both accelerated on the web. Within moments, you can get something out there and have everybody talking about it. But the life can drain out just as quickly. I think it's fair to say that energy [for the Gates Collective Memory project] is waning, but by refocusing on a single goal, we can perhaps keep this thing afloat...
absolutely do not want to stop yet; haven't done enough to have any lasting impact;
Not to derail the conversation by dragging into the realms of the meta, but might the arc that Ben's describing (an initial flair-up of interest, followed by declining returns) be interesting in & of itself? It seems like the internet is very good at blowing up interesting things at the moment (viz: the contagious media thing Kim forwarded), but it's (generally) not very good at sustaining interest (or scrutiny). (A major & significant exception: when a community springs up around something.) Occasionally you get a "where are they now?" thread on Boing Boing or Slashdot or something, but that's very much the exception & not the rule.
This is maybe something that's important if we're considering the future of books. The information arc of the printed book seems to be very different: if there's not a media circus around the launch of the book, there's a very slow pickup, lasting, conceivably, a very long time. Electronic media seem to be much more time-sensitive.
But not a particularly novel one. Certainly someone's done some thinking about this? I'm not sure where to start looking . . .
Some ideas of where to start looking:
Eyebeam's Contagious Media Experiments
Exhibition at the New Museum of Contemporary Art / Chelsea
April 28 - June 4, 2005
Review/preview of show
This is kinda the opposite of what I'm interested in here. I think it's great that the Internet spreads things virally, but these things burn out very quickly: Peretti's projects seemed "yesterday" a couple of years ago. Nobody checks into blackpeopleloveus.com regularly - people visited once & got the joke (or didn't). Do we really need a loving history of "all your base are belong to us"? It was funny - and certainly signifies a moment of our collective interaction with the Internet very precisely - but a museum exhibition seems almost beside the point. You don't put a pop song into a museum - and I say that with a full appreciation of pop songs.
To carry the pop song analogy further: in a pop-song world, can you have Bach? if you wanted to have Bach?
(I don't think The Gates really fit into this sort of framework, because there were personal interactions with them. Ben - for example - can tell stories about the gates in a way that we can't really tell (interesting) stories about the dancing baby.)
Social critiques like www.whatisvictoriassecret.com, which posed women in sexy underwear barfing over the toilet, really did say something about body image and the way the advertising industry manipulates women. And the Nike sweatshop emails forced Nike to address labor issues. These websites are not built to last in the same way oil paintings and poems are, but I do think they are a significant cultural commentary and a new form of activism. In this sense, I suppose, the Gates do not fit, because they have no political goals.
We should also consider contagious media that parodies an over-hyped current event; a good example is this blog written by Britney Spears' fetus. Don't forget, the most popular website about the Gates was a parody (the Somerville Gates). It followed this formula, went viral, and got tens of thousands of hits.
I think the contagious media element is important for our project. The Gates themselves were temporary and the material we are gathering is, ostensibly, finite (i.e. Nobody is going to go out and take a picture of the Gates tomorrow). Therefore, we need to draw attention to the project now. I don't think personal interactions or the potential for stories/complexity prevents us from making at least some part of this project contagious.
I don't get the pop-song analogy. We do have museums for pop-music. Jazz, Motown, Elvis, the Beatles, they are not trivial and we still have Bach.
I agree it would be interesting to look at the project in terms of its arc - a web arc versus a print arc. It might be interesting also to consider this in terms of closed and exposed. Writing a book is a relatively solitary and contained act (unless it's built on interviews and field research). But still, a work in progress is usually kept very private and tucked away. Only upon being published does it open up to the world. Our project, however, started with a large number of people and a fair bit of attention, but then gradually contracted to an inner core. Now we try to make sense of that dizzying encounter with the larger world. You could say that print books embody thinking before speaking, whereas the web fosters speaking first and thinking later, or not at all.
As for Bach, I think he's pretty much impossible in a pop world, except as reduced to a pop song - the played-to-death cello suite accompanying a Lexus gliding across your TV. Someone today with Bach's genius probably couldn't impact the development of music in nearly as big a way. Maybe he would just become a scientist. And it's true, we don't really put pop songs in museums. Only one of the things Kim mentions is a place, and that's a museum to a legendary person, not a song. I suppose there's the rock and roll hall of fame, but that strikes me as going to the taxidermist's and calling it a zoo. I guess what I'm trying to say is that there's a similar entombed quality to this Contagious Media Showdown Eyebeam is hosting. It's proof that the "all your base," "blackpeopleloveus" variety of web contagion is passé. Drag racing diseases isn't subversive, it's just referential. But I agree with Kim that there continue to be interesting and sometimes powerful instances of contagious media. But a big part of their power is that they come out of nowhere. The minute you announce that something is contagious, you kind of kill its coolness. I wonder if anything worthwhile will come out of that contest.
It's interesting to analyze all this in terms of trying to make something coherent and lasting on the web. But I'm not sure we need to lob a contagious grenade of our own. What sort of thing are you imagining?
--end of email exchange, conversation continues in the comment field--
hub media 04.13.2005, 8:34 AM
Another grassroots media experiment has sprung up in the hinterlands: YourHub.com, a cluster of community portals in the greater Denver metropolitan area that, like Bluffton Today, invites users to forge their own local news from submitted stories, images, ads and events listings. And like its South Carolina counterpart, YourHub is being launched by a larger media company, The Rocky Mountain News.
(via Dan Gillmor)
britannica storms wikipedia - networked accumulatio 04.04.2005, 2:04 PM
Beginning as an April Fools' prank, Britannica's hostile takeover of Wikipedia has snowballed over the past few days into a sprawling collaborative goof-off on a nerdy conspiracy theory. The article is currently being considered for deletion, or consignment to Wikipedia's Bad Jokes and Other Deleted Nonsense archive. A funny specimen of web accumulatio (thanks, Infocult).
visions of revisions 03.30.2005, 12:52 PM
What does the evolution of a complex, multi-authored document look like over time? Below are revision histories of two Wikipedia articles, "Brazil" and "love," as rendered by History Flow Visualization, a new application from alphaWorks, the emerging technologies division at IBM.
Changes are depicted as parallelograms along two axes, the vertical axis representing the document's length, and the horizontal axis representing time. The tool offers "community" or single-author views, and uses color to emphasize or isolate specific information - i.e. to distinguish authors, or to measure age of a contribution. (view screenshots)
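In History Flow's terms, each revision contributes a point along those two axes: its timestamp and the document's length at that moment, with authorship supplying the color. A minimal sketch, with invented revision data:

```python
# Each revision: (timestamp, author, document_text).
# Illustrative data only - not actual Wikipedia revisions.
revisions = [
    (1, "ana", "Brazil is a country."),
    (2, "ben", "Brazil is a country. It is in South America."),
    (3, "ana", "Brazil is the largest country in South America."),
]

def history_flow(revs):
    """For each revision, the two coordinates History Flow plots:
    time on the horizontal axis, document length on the vertical."""
    return [(t, len(text)) for t, _, text in revs]

def authors(revs):
    """The set of contributors, each of which would get its own color."""
    return {author for _, author, _ in revs}
```

The visualization then stretches parallelograms between consecutive revisions, so a sudden jump in length (like the Brazil article's 2003 growth spurt) shows up as an abruptly widening band.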
If you open Wikipedia's revision history of the Brazil article, you find a daunting list of hundreds of recorded changes. It's hard to get any sense of how this history compares in overall shape, complexity, and pattern of growth to that of love. But with the alphaWorks tool, it's clear at a glance that the Brazil article almost tripled in size in 2003, and seems to have been suddenly saturated in yellow (perhaps representing the preponderant influence of a single author?). We're looking not at a list, but a situation: in 2003, a self-designated authority on Brazil swaggered in and assumed leadership of the country's wiki-destiny, whereas love seems to have grown at a fairly constant rate with a pretty consistent mix of contributors - no swaggering, yellow Brazilians.
I'd say that the alphaWorks tool suggests something powerful, but is probably of limited use. It's good at providing the quick glance, but seems a little too mashed and muddled for line-by-line analysis. Good visualization tools are those that give a sense of the whole but also allow for minute investigation. At their best, they convey information meaningfully, even movingly. Nurturing a complex, multi-authored work is in some ways like raising a child. You mark its height against the wall, take photographs, file away old homework assignments, gather artifacts - in short, you construct a history, of "a hundred indecisions... and... a hundred visions and revisions."
the book as community: wikicities 03.29.2005, 10:09 AM
Jimmy Wales, creator of the not-for-profit Wikipedia, has launched a for-profit, ad-supported site called Wikicities, which offers users "free MediaWiki hosting for a community to build a free content wiki-based website."
In yesterday's Wall Street Journal, Vauhini Vara noted that gaming communities have been particularly enthusiastic. “Laurence Parry, a 22-year-old computer programmer, co-founded a wiki dedicated to a computer-game series called Creatures and now spends up to several hours a day updating the site. Before the Creatures wiki existed, fans of the game swapped tips in scattered online forums.”
The brilliance of this idea is that it just might succeed in centralizing the online gathering place. Internet-based tribal communities that meet to discuss common interests can create their own “city” within the greater universe of Wikicities.com. The communal spaces can be defined in the recommended “mission statement.” Wikicities also gives users advice on “developing your community” and “setting boundaries.”
In a recent Wired Magazine article Daniel H. Pink wondered if this is, in fact, a new idea.
It may feel like we've been down this road before - remember GeoCities and theglobe.com? But Wales says this is different because those earlier sites lacked any mechanism for true community. "It was just free homepages," he says. WikiCities, he believes, will let people who share a passion also share a project. They'll be able to design and build projects together.
bible fragments reunite in digital space 03.23.2005, 4:28 PM
Plans were recently announced for the digitization of the Codex Sinaiticus, the world's oldest existing Bible, which currently resides in four separate chunks in Egypt, Russia, Germany and Britain. Dating back to the mid-4th century, the Codex contains large portions in Greek of the Old Testament, and the complete New Testament, including several non-canonical epistles.
"The project encompasses four strands: conservation, digitisation, transcription and scholarly commentary to make the Codex available for a worldwide audience of all ages and levels of interest. There are plans for a range of projects including a free to view website, a high quality digital facsimile and CD Rom. It is intended that this project will be a model for future collaborations on other manuscripts."
a book by Lawrence Lessig and you 03.16.2005, 3:00 PM
Lawrence Lessig is inviting everyone to help revise and update his landmark 1999 book Code and Other Laws of Cyberspace on a public wiki, as a way of drawing "upon the creativity and knowledge of the community." (story in Mercury News)
From the site: "This is an online, collaborative book update; a first of its kind. Once the project nears completion, Professor Lessig will take the contents of this wiki and ready it for publication. The resulting book, Code v.2, will be published in late 2005 by Basic Books. All royalties, including the book advance, will be donated to Creative Commons."
As an experiment with networked books, this has a couple of big things going for it. For one, it is a pre-existing work with a large reader community. Like a stone tossed in the water, it creates ripples. Version 2 might benefit by incorporating these ripples. Secondly, Lessig will retain ultimate editorial authority, so we can be pretty sure that the final revision will be focused and well-shaped. And lastly, Lessig's subject is so vast, so multi-dimensional, that the book will almost certainly benefit from broad reader/writer input. And for someone like Lessig, who is as much an activist as a scholar, constantly running around the world spreading his ideas, it is a nice way of asking for assistance in the time-consuming process of updating a book that the world needs sooner rather than later.
Incidentally, Lessig will be appearing on April 7 at the New York Public Library with Wilco frontman Jeff Tweedy to discuss the question, "Who Owns Culture?" moderated by Steven Johnson. (thanks, NEWSgrist)
Tweedy says: "A piece of art is not a loaf of bread. When someone steals a loaf of bread from the store, that's it. The loaf of bread is gone. When someone downloads a piece of music, it's just data until the listener puts that music back together with their own ears, their mind, their subjective
the film is over and the credits have begun to roll 03.13.2005, 9:42 PM
went to the armory show over the weekend. most conceptually interesting piece i saw was a large "poster" which described the credits to a film as an entry point to an ever-expanding chain of related information . . . basically suggesting that all knowledge can be linked in a semantic web to all other knowledge . . . theoretically the internet will develop to the point that this will be actually true instead of just conceptually true. (embarrassed to admit that in my excitement i didn't get the name of the artist; if anyone recognizes the work, please tell us.)
describing humanity in data sets 03.04.2005, 11:14 AM
Yahoo's recently released commemorative microsite, "Yahoo Netrospective: 10 years, 100 moments," is a selection of one hundred significant moments in the history of the web (1995–2005). The format for the site was inspired by the work of information architect Jonathan Harris. Harris created 10 x 10, a piece visually identical to, but considerably more interesting than, the Yahoo birthday card (whose content leans quite heavily toward self-promotion - i.e. there are 20 mentions of Yahoo products and no mention at all of Google). By contrast, Harris’ 10 x 10 builds its fascinating content from RSS feeds. The piece selects the most frequently used words from the major news networks to assemble an hourly “portrait” of our world. "What interests me is trying to find descriptions of humanity in very large data sets, creating programs that tell us something about ourselves," Harris told Wired News. "We set them free and they come back and tell us what we are like."
What makes Harris’ work interesting is the self-discipline he exercises in designing these objective systems. By resisting the urge to edit (except, perhaps, when Yahoo is involved), he allows an authentic “picture” of current events, of human behavior online, of the fluid exchange of words and images. His linguistic self-portrait, WordCount™, harvests data from the British National Corpus®. WordCount displays the 86,800 most commonly used words in the English language in order of their commonness. Harris claims that "observing closely ranked words tells us a great deal about our culture. For instance, “God” is one word from “began”, two words from “start”, and six words from “war”." I tried WordCount and was instantly addicted. To read WordCount or 10 x 10, you have to interact with it and bring meaning to it. Or put another way, you have to be willing to bring meaning to it. This is quite different from the way we experience traditional narratives, whose structure and meaning are crafted by the writer and handed down to the reader. I am eagerly anticipating his next project which, he told Wired, "involves looking at human feelings on a large scale from the web."
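The word-harvesting behind a 10 x 10-style "portrait" amounts to a frequency count over incoming headlines. A bare-bones sketch - the stopword list and sample inputs are invented, and a real system would pull from live RSS feeds:

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "an", "of", "in", "to", "and", "is"})

def hourly_portrait(headlines, top=5):
    """10x10-style word portrait: the most frequent non-stopwords
    across a batch of headlines (stand-ins for RSS feed items),
    ranked by how often they appear this hour."""
    words = re.findall(r"[a-z']+", " ".join(headlines).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top)]
```

The editorial restraint Harris practices lives entirely in code like this: once the counting rule is fixed, whatever the feeds say is what the portrait shows.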
harnessing the collective mind: the ultimate networked book 02.23.2005, 10:10 PM
Dr. Douglas C. Engelbart, who invented the computer mouse and is also credited with pioneering online computing and e-mail, advocates networked books as tools for building what he calls a "dynamic knowledge repository." "This would be a place," Engelbart said in a recent interview with K. Oanh Ha at Mercury News, "where you can put all different thoughts together that represents the best human understanding of a situation. It would be a well-formed argument. You can see the structure of the argument, people's assertions on both sides and their proof. This would all be knit together. You could use it for any number of problems. Wikipedia is something similar to it."
How to conceptualize, organize, build, and use a “book” of that scale is the project of Engelbart’s Bootstrap Institute. In the "Reasons for Action" section of their website, Engelbart gives his perception of why we need such a book. It reads as follows:
• Our world is a complex place with urgent problems of a global scale.
• The rate, scale, and complex nature of change is unprecedented and beyond the capability of any one person, organization, or even nation to comprehend and respond to.
• Challenges of an exponential scale require an evolutionary coping strategy of a commensurate scale at a cooperative cross-disciplinary, international, cross-cultural level.
• We need a new, co-evolutionary environment capable of handling simultaneous complex social, technical, and economic changes at an appropriate rate and scale.
• The grand challenge is to boost the collective IQ* of organizations and of society. A successful effort brings about an improved capacity for addressing any other grand challenge.
• The improvements gained and applied in their own pursuit will accelerate the improvement of collective IQ. This is a bootstrapping strategy.
• Those organizations, communities, institutions, and nations that successfully bootstrap their collective IQ will achieve the highest levels of performance and success.
"Towards High-Performance Organizations: A Strategic Role for Groupware," a paper written by Dr. Engelbart in 1992, outlines practical ideas for the architecture of this vast and comprehensive networked book.
All of this is meaty food for thought with regard to our ongoing thread "the networked book." I am wondering what blog readers think about this. Assuming it becomes possible to collect, map, and analyze the thoughts and opinions of a large community, will it really be to our advantage? Will it necessarily lead to solving the complex problems that Dr. Engelbart speaks of, or will the grand group-think lead to certain dystopian outcomes which may, perhaps, cancel out its IQ-raising value?
building the cathedral: collaborative authorship and the internet 02.16.2005, 5:25 PM
The World Wide Web is, quite possibly, the most collaborative multi-cultural project in the history of mankind. Millions of people have contributed personal homepages, blogs, and other sites to the growing body of human expression available online. It is, one could say, the secular equivalent of the medieval cathedral, designed by a professional, but constructed by non-professionals, regular folk who are eager to participate in the construction of a legacy. Such is the context for projects like Wikimedia and the Semantic Web, designed by elite programmers, built by the masses.
One of the most pressing questions with regard to collaborative authorship is, can the content be trusted? Does the anonymous group author have the same authority as the credentialed single author? Is our belief in the quality of information inextricably connected to our belief in the authority of the writer? Wikimedia (the non-profit organization that initiated Wikipedia, Wikibooks, Wiktionary, Wikinews, Wikisource, and Wikiquote) addresses these concerns by offering a new model for collaborative authorship and peer review. Wikipedia's anonymously published articles undergo peer review via direct peer revision. All revisions are saved and linked; user actions are logged and reversible. “This type of constant editing,” Wikimedia co-director Angela Beesley says, “allows you to trust the content over time.” The ambition of Wikimedia is to create a neutral territory where, through open debate, consensus can be reached on even the most contentious topics. The Wikimedia authoring system sets up a democratic forum where contributors construct their own rulespace and policies emerge from consensus-based, rather than top-down, processes. So the authority of the Wikimedia collaborative book depends, in part, on a collective self-discipline that is defined by and enforced by the group.
The collaborative authoring environment engendered by the web will make even more ambitious and far-reaching projects possible. Projects like the Semantic Web, which aims to make all content searchable by allowing users to assign semantic meaning to their work, will organize the prodigious output of collaborative networks, and could, potentially, cast the entire web as a collaboratively authored “book.”
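The Semantic Web's basic unit is the subject-predicate-object "triple," the structure by which users assign machine-readable meaning to content. A toy pattern-matching store shows the idea; the example data is invented, and a real system (RDF) layers URIs, schemas, and inference on top of this:

```python
# A tiny triple store in the spirit of RDF's subject-predicate-object
# model. The statements below are illustrative only.
triples = {
    ("Code", "authoredBy", "Lessig"),
    ("Code", "topic", "cyberlaw"),
    ("Wikipedia", "instanceOf", "wiki"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern; None acts as a
    wildcard, so query(predicate="topic") finds every topic statement."""
    return {
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    }
```

Once content carries statements like these, "searching the web" becomes querying one enormous, collaboratively authored database rather than matching strings.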
"finally, I have a Memex!" 02.01.2005, 3:40 PM
There's an essay by Steven Johnson worth reading in this past Sunday's ny times book review, about a powerful semantic desktop management and search tool recently released for Macs. The software (called DEVONthink) not only helps organize and briskly sift through readings, clippings, quotes, and one's own past writings, but assists in the mysterious mental processes that are at the heart of writing - associative trains, useful non sequiturs, serendipitous stumbles. In effect, we now have a tool resembling the Memex device described in the seminal 1945 essay "As We May Think" by visionary engineer Vannevar Bush. Working with the cutting-edge technologies of his day - microfilm, thermionic tubes, and punch, or "Hollerith," cards - Bush pondered how technology might help humanity manage and make use of its vast systems of information. His recognition of the basic problem is no less relevant today: "Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing." Fast forward to 2005. Now the holy grail of search is the Semantic Web - moving beyond the artificiality of crude content-based queries and bringing meaning, relevance, and associations into the mix.
"Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, 'memex' will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory." - Vannevar Bush
It's quite suggestive that DEVONthink's semantic search function can, to an extent, be trained, taking the obnoxious little puppy in Windows search toward its full potential - a sleek, truffle-tuned hound. When Johnson loads his body of work onto the computer, the hound picks up the distinctive scent of his writing, which in turn suggests affinities, similarities, and connections to other materials - truffles - that will find their way into later works.
Says Johnson on his latest blog post, which goes into much greater detail than the Times piece:
"I have pre-filtered the results by selecting quotes that interest me, and by archiving my own prose. The signal-to-noise ratio is so high because I've eliminated 99% of the noise on my own."
But it is significant that DEVONthink is not useful for searching entire books (the author's own manuscripts notwithstanding). Currently, the tool is ideal for locating chunks of text that fall within the "sweet spot" of 50-500 words. If your archives include entire book-length texts, then the honing power is diminished. DEVONthink is optimal as a clip searcher. File searching remains a frustrating enterprise.
Johnson makes note of this:
"So the proper unit for this kind of exploratory, semantic search is not the file, but rather something else, something I don't quite have a word for: a chunk or cluster of text, something close to those little quotes that I've assembled in DevonThink. If I have an eBook of Manual DeLanda's on my hard drive, and I search for "urban ecosystem" I don't want the software to tell me that an entire book is related to my query. I want the software to tell me that these five separate paragraphs from this book are relevant. Until the tools can break out those smaller units on their own, I'll still be assembling my research library by hand in DevonThink."
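Johnson's point — that the right unit of exploratory search is the paragraph-sized chunk, not the file — can be sketched in a few lines. This is a naive bag-of-words illustration under stated assumptions, not DEVONthink's actual (proprietary) algorithm; all the function names here are invented for the example:

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase word tokens; a crude stand-in for real text analysis."""
    return re.findall(r"[a-z']+", text.lower())


def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def rank_chunks(document, query, top_n=3):
    """Split a document into paragraph-sized 'chunks' and rank them
    against the query, so results point at passages, not whole files."""
    chunks = [p.strip() for p in document.split("\n\n") if p.strip()]
    q = Counter(tokenize(query))
    scored = [(cosine(Counter(tokenize(c)), q), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:top_n] if score > 0]
```

Given an ebook's text, `rank_chunks(book_text, "urban ecosystem")` would return the handful of paragraphs most related to the query rather than telling you "an entire book is related" — which is exactly the granularity Johnson is asking the tools to provide on their own.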
Another point (from the Times piece) worth highlighting here, which relates to our discussion of the networked book:
"If these tools do get adopted, will they affect the kinds of books and essays people write? I suspect they might, because they are not as helpful to narratives or linear arguments; they're associative tools ultimately. They don't do cause-and-effect as well as they do 'x reminds me of y.' So they're ideally suited for books organized around ideas rather than single narrative threads: more 'Lives of a Cell' and 'The Tipping Point' than 'Seabiscuit.'"
And what about other forms of information - images, video, sound etc.? These media will come to play a larger role in the writing process, given the ease of processing them in a PC/web context. Images and music trump language in their associative power (a controversial assertion, please debate it!), and present us with layers of meaning that are harder to dissect, certainly by machine. It is an inchoate hound to be sure.
from the nouveau roman to the nouveau romance 01.28.2005, 3:52 PM
from the nouveau roman . . .
I've been working out of the Brooklyn Public Library lately, which has free wireless internet and an interesting collection of books. The organizing principle seems to be, as far as I can tell, that everything remotely interesting gets stolen. This means, in practice, that they have an exceptional collection of criticism of the French nouveau roman, which seems to have gathered dust on the shelves there since the early 1960s. The nouveaux romanciers were a loosely knit group of novelists from the 1960s determined to shake the French novel out of its existential doldrums through the use of new styles of narrative. Nathalie Sarraute and Alain Robbe-Grillet's microscopic examinations of everyday life might be seen as exemplary of the movement, though the novels of Marguerite Duras are probably the most widely read today.
To me, the most interesting of them is Michel Butor, who wrote four increasingly experimental novels in the early 1960s, and then tired of writing novels altogether. Mobile, his next major production, confused the critics immensely, some of whom declared that not only was it not a novel, it wasn't a book at all. Mobile is fantastic: it's a travel guide to the United States presented as a collage, abandoning the author's voice for bits of history, advertising, and found text. Following the example of Stéphane Mallarmé, the texts are spread over the pages, an analogue to the spatial journey the book describes, presenting a range of sensory (and historical) impressions of America. The French version has the text rotated 90 degrees so you have to hold the book sideways, a feature sadly not carried over into Richard Howard's otherwise wonderful English translation (recently republished by the Dalkey Archive). While the author's voice seems to be absent in favor of his found materials, there's clearly a subtext: the history of racism underlying the country, from its deep past to the present Butor found in 1964. The effect on the reader is less that of a book than of the film-essays of Chris Marker (I'm thinking particularly of A Grin without a Cat) and Agnès Varda.
Butor continued to experiment with forms: he made radio plays for simultaneous voices, and has worked in collaboration with just about every sort of artist imaginable. Though he's produced an enormous amount of work since the 1960s, only a tiny fraction of it has been independent work. One of the first of his collaborations was with the composer Henri Pousseur; in the late 1960s, the two of them wrote an opera called Votre Faust, "your Faust". It was a modern retelling of the Faust story, but with a twist: at certain points during the production, the audience was asked to vote on what should happen next. Depending on how the audience voted (or failed to vote, which was also taken into account), the opera might have any of 25 different endings. After a long public gestation, it was finally produced in 1969 in Milan. It went over like a lead balloon, and subsequently largely vanished from sight, though the critics' pre-performance excitement remains frozen in time at the Brooklyn Public Library. LPs were evidently put out at the time. I'm curious what exactly was on them - was it a full recording of all the possible music, letting home listeners construct their own personal opera, or did it contain only one version?
Butor is still happily alive and still churning out poetry and other works; at some point in the nineties, he had his own website, though he doesn't look to have updated it in a while. His art, though, seems to have been perpetually ahead of its time: while Votre Faust didn't work in a live setting, it might have made a fine CD-ROM or DVD. I don't know if he's ever written specifically for electronic media, as Chris Marker has; I'd love to see what he would do with it.
. . . to the nouveau romance
"Harlequin" has achieved brand ubiquity: a "harlequin" is a trashy, disposable romance novel, just like a "kleenex" is a tissue and a "xerox" is a copy. We don't even bother thinking about the word any more than we usually think about romance novels. Do the romance novel and the Future of the Book have anything in common? Of course not! any right-thinking future-bookist would angrily declare. The future, as everybody knows, is the domain of science fiction, not the romance. A look at eharlequin.com, Harlequin's website, suggests that this might not be the case. The first surprise: how much content they have online. The second surprise: how much is interactive, and how much is devoted to the process of writing. Look at how much there is in the writing bulletin board, dedicated to helping users write their own romance: templates for the various kinds of romances that Harlequin publishes, advice on the business side, suggestions for those with writer's block.
There's also participatory authoring: in the Writing Round Robin, participants take turns writing chapters of a novel, and critiquing others' chapters. Unlike some of the open source and wiki novels elsewhere on the web, this is highly moderated writing: note the rules here. This might be expected: Harlequin, after all, is a publishing house, and experimentation isn't being done for experimentation's sake, but because it fits into a business model.
But to bring this back around to Butor's opera: consider eharlequin's Interactive Novel, where chapters are added one at a time, and the readers vote on how the work should progress: a chapter's written (or put online) accordingly. Right now the meddling readers are worrying themselves over whether or not Tess is pregnant with Derek's baby.
It's become a truism that porn drives technology: see here for one of the many observations of this. (Who first made this connection? Does it date back to before the VCR?) It might not be so surprising, then, that romance seems to be doing the same thing in the popular arena of the novel. More surprising might be that it's romance, of all genres, where this is happening. Sarah Glazer, writing in the New York Times Book Review, was surprised to find that the biggest current growth market for ebooks is in romances. Is the future of the book to be found in the romance? It seems counterintuitive, but there seems to be more of a participatory literary culture at Harlequin's website than a quick scrutiny of some scifi publishers' websites would reveal. (I'd love to be proved wrong about this - can anyone provide examples?)
There's almost certainly no direct line that goes from Butor and Pousseur's Votre Faust to eharlequin.com's Interactive Read, except, I suppose, in the head of this particular reader. There's a whole history of interactive fiction that I've omitted - Julio Cortázar's Hopscotch, Milorad Pavić's Dictionary of the Khazars, a whole slew of Choose-Your-Own-Adventure books. But it's interesting that Butor & Pousseur's unsuccessful attempt ("It was very difficult to play. . . But we both have made many efforts to make it easy to realize. Without success." notes Butor in an online interview) should be taken up in such an unlikely form.
The romance novel, everyone concurs, is not art. There's not a great deal of critical theory thrown around about romances. The New Novelists were all about creating critical context for their fiction: Robbe-Grillet kicked things off with Pour un nouveau roman, a collection of essays on the novel's past and present, and Butor wrote a piece titled "The Future of the Book", among many others. This might be why the nouveau roman is generally considered a failure: it didn't end up remaking the mainstream of fiction. The contrast with eharlequin might be instructive: outside of the critical eye (and with the support of publishers) romance readers are becoming authors, seemingly constructing their own possible future of the book.
tools for collaborative writing 01.24.2005, 5:58 PM
SubEthaEdit is an elegant collaborative writing and editing tool, originally designed for coders, but increasingly popular among educators, especially writing teachers. And if you're using it for non-commercial purposes, it's free! Here's a fun piece written during the blizzard by a 3-person group using the software, courtesy of Slashdot. It's a piece of collaborative writing about collaborative writing. Very meta. Reading it through once, I couldn't really pick out individual voices.
networked book/book as network 01.14.2005, 8:49 PM
Book as Network/Networked Book? That's the koan I've been puzzling over for the last few weeks. Can something made from, let's say, hundreds of semi-anonymous contributors or commenters be considered a book? Is this what the texting generation is going to want—something a little less single-author, a little more…bloggy? The possibility makes me slightly sick and dizzy (I'm still paying student loans for a single-author-oriented MFA in creative writing). At the same time it’s kind of exciting. Could, for example, my newest favorite blog, Overheard in the Office, a spin-off of Overheard in New York, be considered a dynamic anthology?
What about multi-player game-based narrative formats like The Sims; are they the digital equivalent of networked novels? Bob recently sent me a link to an article entitled "Sims 2 hacks spread like viruses." Apparently, hackers have infected the Sims 2 universe, messing with individual games/narratives. About this, Bob says: this seems so interesting to me if we consider it as one of the strands of future narrative where the author evolves into a god who creates a universe that people populate and mess with as people do; i.e. that the author creates a starting place for an unfolding story. Of course this has been a visible strand since the advent of computer games, especially the large multi-player ones -- but for me the added bit here, that the mortals are messing with the game's code and thus vastly increasing the scope of the game, brings the whole subject up with renewed interest.
The future book will be a networked book or a "processed book" as Joseph Esposito calls it. To process a book, he says, is more than simply building links to it; it also includes a modification of the act of creation, which tends to encourage the absorption of the book into a network of applications, including but not restricted to commentary.
A modification of the act of creation...what, exactly, does this mean for the craft of storytelling? Is it changing utterly? And is somebody going to tell the MFA programs?
btw. if you know any great examples of networked books let me know. I'm building a collection.
city chromosomes - an sms chronicle 12.20.2004, 1:29 PM
Found this on textually.org. The City Chromosomes project is a sort of scrapbook of the city of Antwerp made entirely from text messages beamed in from mobile phones. Further evidence of the new genres emerging from this technology. An english version has just been published under a Creative Commons license.
Also take a look at this sister project, CityPoems, from Leeds. Posts from the Leeds project are interspersed through the english version of "Chromosomes."
From the introduction to "City Chromosomes":
"The city of Antwerp is full of writers. And many of these writers describe their city, often in splendid stories, novels and poems that gain a wide readership. In this way, they determine a large part of our image of the city. But what about the people who are only readers, or even those who do not care for reading: what do they think of the city? And would it not be possible to persuade them to write this down?
"This was the point of departure for the City Chromosomes project. We got the idea of gathering sms messages. Nearly everybody has a mobile phone. Everybody has a moment to spare to type in a message. This was the ideal way to make the project accessible to everybody. The people of Antwerp, and anyone else with something to say about the city, could submit their impressions anonymously. We established 25 text sites across the city, and the contributors could indicate with a simple code to which part of the city their message applied. By means of posters, flyers and ads, we asked people for their impressions. The only restriction: the messages should not be longer than 160 characters."
Posted by ben vershbow at 01:29 PM
tags: Microlit , SMS , Social Software , antwerp , cellphone , experiment , locative , mobile , text_messaging , the_networked_book , txt , ubiquitous_computing
Parsing the Behemoth: Thought Experiments 12.06.2004, 10:33 AM
Bob talks about the book as metaphor. It is the thing that does the heavy lifting, a technology that allows us to convey our thoughts through a concrete vehicle. This site looks at how that vehicle is changing as a new electronic means of conveying written information begins to come of age.
When asked to imagine a metaphor for “the book,” we come up with something more organic, a lumbering behemoth with a hundred arms, waving anemone-like through the air to catch at particles of human discourse. The creature has some kind of hair or fur entangled with innumerable flotsam and jetsam. It is buzzing with attendant parasitical organisms, and encrusted with barnacles. To ask if the behemoth has a future is not the right question, because the book, as we are picturing it in this analogy, is an immortal. The electronic incarnation of the book does not kill the old behemoth, but rather becomes part of it.
In his afterword to “The Future of the Book,” Umberto Eco noted that:
“In the history of culture it has never happened that something has simply killed something else. Something has profoundly changed something else.” We are interested in the nature of this change as it relates to the book and its evolution.
To examine this heavy lifting device, to define and to understand this aggregate behemoth is the project of our “future of the book” blog. To begin, we have initiated a few thought experiments and put forth several questions that we hope will engender productive discourse. We welcome ideas and suggestions for future experiments.
Posted by Kim White at 10:33 AM
tags: Blogosphere , General , Thought Experiments , blogs , book , books , ebook , ebooks , history_of_the_book , the_form_of_the_book , the_networked_book