if:book's first year 12.31.2005, 3:16 PM
I spent several hours last night and this morning looking over all the posts since we started if:book last december. It's been a remarkably interesting experience working with my colleagues, exploring and defining the boundaries of our interests and effort. Here are a few posts i picked out for one reason or another. On monday we'll post a new revised mission statement for the institute.
1. Three Books That Influenced Your Worldview: The List
we launched the site with the results of our first thought experiment, in which we asked people to name the three books that most influenced their world view. the results were very interesting. check out the exchange with Alan Kay too.
2. networked book/book as network
kim wrote this first if:book post which mentioned the concept of a "networked book" -- a subject that we keep coming back to and find increasingly exciting.
3. genre-busting books
sol gaitan was our most frequent guest blogger. the breadth of her cultural knowledge and her constant reminder that the boundaries of our world extend beyond the hyper-connected coasts of the U.S. are a crucial and welcome contribution.
4. from the nouveau roman to the nouveau romance
one of a dozen or so long posts from Dan, who took a seemingly obscure subject and wove it into a deliciously interesting discussion completely relevant to our effort to understand the shifting landscape of intellectual discourse. a more recent one is also worth a look.
5. contagious media: symptom of what's to come?
first time we experimented with making our work open and transparent. this idea grew over time and is now in the draft of our new mission statement, which says: "Academic institutes arose in the age of print, which informed the structure and rhythm of their work. The Institute for the Future of the Book was born in the digital era, and we seek to conduct our work in ways appropriate to the emerging modes of communication and rhythms of the networked world. Freed from the traditional print publishing cycles and hierarchies of authority, the Institute seeks to conduct its activities as much as possible in the open and in real time."
6. ted nelson & the ideologies of documents
a brilliant post by Dan about the importance of (much-maligned visionary) Ted Nelson's views on the way we choose to structure and represent knowledge.
7. it seems to be happening before our eyes, Pt 1 and Pt 2
2005 is likely to be remembered as the year that we started to pay more attention to individual voices in the blogosphere than the mainstream media. The NY Times and Washington Post may never recover from the exposures that showed they were in cahoots with the Bush administration over Plamegate and the admission of wholesale unauthorized wire-tapping.
8. blog reading: what's left behind
dan wrote this post about the deficiencies of the structure of blogs. it's a recurring theme at the institute and you'll see a lot more about it in the coming year.
9. transliteracies: research in the technological, social, & cultural practices of online reading
ben re-posted this interesting discussion by Alan Liu on the changing nature of reading and browsing in an online context.
10. flushing the net down the tubes
ben's first post on the crucial subject of the coming battle in which the telcos and cable companies will try to turn the web into a broadcast medium favoring the big media companies over individual voices.
11. sober thoughts on google: privatization and privacy
thanks to ben's thoughtful posts, the institute has gained a reputation for developing an even-handed view of Google's book-scanning and searching project.
12. the "talk to me" crew talks with the institute
now that we've got our cool new offices in williamsburg (brooklyn), we've been inviting an interesting group of folks to lunch. Liz and Bill were two of our favorite visitors, written up in a nice post by Ray. Other interesting visitors were Ken Wark, Tom De Zengotita and Mitchell Stephens.
13. the future of the book: korea, 13th century
couldn't resist including ben's write-up of his visit to a buddhist monastery in korea -- both because it has the most beautiful photo that appeared in the blog and for one of my favorite images . . . the whole monastery a kind of computer, the monks running routines to and from the database.
Wikipedia to consider advertising 12.30.2005, 4:29 PM
The London Times just published an interview with Wikipedia founder Jimmy Wales in which he entertains the idea of carrying ads. This mention is likely to generate an avalanche of discussion about the commercialization of open-source resources. While i would love to see Wikipedia stay out of the commercial realm, it's just not likely. Yahoo, Google and other big companies are going to commercialize Wikipedia anyway, so taking ads is likely to end up a no-brainer. As i mentioned in my comment on Lisa's earlier post, this is going to happen as long as the overall context is defined by capitalist relations. Presuming that the web can be developed in a cooperative, non-capitalist way without fierce competition and push-back from the corporations who control the web's infrastructure seems naive to me.
Posted by bob stein at 4:29 PM
defending the creative commons license 12.30.2005, 3:47 PM
interesting question came up today in the office. there's a site, surferdiary.com, that reposts every entry on if:book. they do the same for several other sites, presumably as a way to generate traffic and, ultimately, clicks on their google-supplied ads. if:book entries are posted with a creative commons license which allows reuse with proper attribution but forbids commercial use. surferdiary's use seems thoroughly commercial. some of my colleagues think we should go after them as a way of defending the creative commons concept. would love to know what people think.
remix: the movie 12.29.2005, 5:51 PM
Wired magazine reports that Michela Ledwidge is advancing the future of filmmaking by taking cues from game modding. In early 2006, she will post all the raw material for her 10-minute sci-fi short film, Sanctuary, to the website www.modfilms.com. People will be able to edit their own versions of the film. Sanctuary continues the open source trend also fostered by recording artists such as Jay-Z, who released vocal files of his Black Album to encourage DJs to create their own mixes. Some die-hard Jay-Z fans have even posted, in a fashion similar to Ledwidge's, everything you need to create such a mix.
I hope that many people submit films. I am very interested to see what the results will be on the aggregate level. While many people will undoubtedly create mashup versions with external content, I am especially curious to see the results from people who do not add much additional material. How many interesting stories can be told using these basic parts? Is there a "correct" shot selection or a traditional (i.e. "I learned to edit in film school") edit? How wedded are we to traditional film narrative conventions which dictate what is "good" and "bad"? Will only a few compelling narratives arise or will many?
why google and yahoo love wikipedia 12.29.2005, 3:16 PM
From Dan Cohen's excellent Digital Humanities Blog comes a discussion of the Wikipedia story that Cohen claims no one seems to be writing about -- namely, the question of why Google and Yahoo give so much free server space and bandwidth to Wikipedia. Cohen points out that there's more going on here than just the open source ethos of these tech companies: in fact, the two companies are becoming increasingly dependent on Wikipedia as a resource, both as something to repackage for commercial use (in sites such as Answers.com), and as a major component in the programming of search algorithms. Cohen writes:
Let me provide a brief example that I hope will show the value of having such a free resource when you are trying to scan, sort, and mine enormous corpora of text. Let's say you have a billion unstructured, untagged, unsorted documents related to the American presidency in the last twenty years. How would you differentiate between documents that were about George H. W. Bush (Sr.) and George W. Bush (Jr.)? This is a tough information retrieval problem because both presidents are often referred to as just "George Bush" or "Bush." Using data-mining algorithms such as Yahoo's remarkable Term Extraction service, you could pull out of the Wikipedia entries for the two Bushes the most common words and phrases that were likely to show up in documents about each (e.g., "Berlin Wall" and "Barbara" vs. "September 11" and "Laura"). You would still run into some disambiguation problems ("Saddam Hussein," "Iraq," "Dick Cheney" would show up a lot for both), but this method is actually quite a powerful start to document categorization.
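Cohen's method can be sketched in a few lines of code. This is a hypothetical illustration, not Cohen's actual implementation: the term sets below stand in for what a real term-extraction service (such as the Yahoo Term Extraction API he mentions) would mine from the two Wikipedia entries, and the sample documents are invented.

```python
# Sketch of Cohen's disambiguation idea: mine a reference text (e.g. a
# Wikipedia entry) for distinctive terms, then assign each unlabeled
# document to whichever subject's terms it contains more of.
# The term lists here are illustrative, not real extraction output.

DISTINCTIVE_TERMS = {
    "George H. W. Bush": {"berlin wall", "barbara", "gulf war", "quayle"},
    "George W. Bush": {"september 11", "laura", "cheney", "no child left behind"},
}

def categorize(document: str) -> str:
    """Score a document against each term set; ties are ambiguous."""
    text = document.lower()
    scores = {
        president: sum(term in text for term in terms)
        for president, terms in DISTINCTIVE_TERMS.items()
    }
    best = max(scores, key=scores.get)
    # A tie (including zero matches everywhere) can't be categorized --
    # this is the disambiguation problem Cohen notes with shared terms.
    if list(scores.values()).count(scores[best]) > 1:
        return "ambiguous"
    return best

docs = [
    "Barbara and the President toured the Berlin Wall site.",
    "After September 11, Bush and Laura addressed the nation.",
    "George Bush met with advisers about Iraq.",
]
for d in docs:
    print(categorize(d))
```

Naive as it is, this substring-counting version shows why a free, structured corpus like Wikipedia is so valuable to the search companies: the reference entries do the work of telling the two Bushes apart.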
Cohen's observation is a valuable reminder that all of the discussion of Wikipedia's accuracy and usefulness as an academic tool is really only skimming the surface of how and why the open-source encyclopedia is reshaping the way knowledge is made and accessed. Ultimately, the question of whether or not Wikipedia should be used in the classroom might be less important than whether -- or how -- it is used in the boardroom, by companies whose function is to repackage, reorganize and return "the people's knowledge" back to the people at a tidy profit.
Posted by lisa lynch at 3:16 PM
another view on the stacey/gamma flap 12.28.2005, 12:42 PM
For an alternative view of Lisa's earlier post ... i wonder if Gamma's submission of Adam Stacey's image with the "Adam Stacey/Gamma" attribution doesn't show the strength of the Creative Commons concept. As i see it, Stacey published his image without any restrictions beyond attribution. Gamma, a well-respected photo agency, started distributing the image attributed to Stacey. Isn't this exactly what the CC license was supposed to enable -- the free flow of information on the net? perhaps Stacey chose the wrong license and didn't mean for his work to be distributed by a for-profit company. If so, that is a reminder to all of us to be careful about which Creative Commons license we choose. One thing i'm not clear on is whether Gamma referenced the CC license. They are supposed to do so, and if they didn't, they should have.
phone photo of london underground nominated for time best photo;
photo agency claims credit for creative commons work 12.28.2005, 10:26 AM
Moblog co-founder Alfie Dennen is furious that the photo agency Gamma has claimed credit for a well-known photo of last summer's London subway bombing -- first circulated on Moblog under a Creative Commons license -- that was chosen for Time's annual Best Photo contest. Dennen and others in the blogosphere are hoping that photographer Adam Stacey might take legal action against Gamma for what seems to be a breach of copyright.
We at the Institute are still trying to figure out what to make of this. Like everyone else who has been observing the increasing popularity of the Creative Commons license, we've been wondering when and how the license will be tested in court. However, this might not be the best possible test case. On one hand, it seems to be a somewhat imperious "claiming" of a photo widely celebrated for being produced by a citizen journalist who was committed to its free circulation. On the other hand, it seems unclear whether Dennen and/or Stacey are correct in their assertion that the CC license that was used really prohibits Gamma from attaching their name to the photo.
The photo in question, a shot of gasping passengers evacuating the London Underground in the moments after last summer's bombing (in the image above, it's the second photo clockwise), was snapped by Stacey using the camera on his cellphone. Time's nomination of the photo most likely reflects the fact that the photo itself -- and Stacey -- became something of a media phenomenon in the weeks following the bombing. The image was posted on Moblog about 15 minutes after the bombing, and then widely circulated in both print and online media venues. Stacey subsequently appeared on NPR's All Things Considered, and the photo was heralded as a signpost that citizen journalism had come into its own.
While writing about the photo's appearance in Time, Dennen noticed that Time had credited the photo to Adam Stacey/Gamma instead of Adam Stacey/Creative Commons. According to Dennen, Stacey had been contacted by Gamma and had turned down their offer to distribute the photo, so the attribution came as an unpleasant shock. He claims that the license chosen by Stacey clearly indicates that the photo be given Creative Commons attribution. But is this really clear? The photo is attributed to Stacey, but not to Creative Commons: does this create a grey area? The license does allow commercial use of Stacey's photo, so if Gamma was making a profit off the image, that would be legal as well.
Dennen writes on his weblog that he contacted Gamma for an explanation, arguing that after Stacey told the agency that he wanted to distribute the photo through Creative Commons, they should have understood that they could use it, but not claim it as their own. Gamma responded in an email that, "[we] had access to this pix on the web as well as anyone, therefore we downloaded it and released it under Gamma credit as all agencies did or could have done since there was no special requirement regarding the credit." They also claimed that in their conversation with Stacey, Creative Commons never came up, and that a "more complete answer" to the reason for the attribution would be available after January 3rd, when the agent who spoke with Stacey returned from Christmas vacation.
Until then, it's difficult to say whether Gamma's claim of credit for the photo was accidental or a deliberate disregard of the license. Dennen also says that he's contacting Time to urge them to issue a correction, but he hasn't gotten a response yet. I'll follow this story as it develops.
mothlight 12.28.2005, 2:40 AM
i. the text of light
The filmmaker Stan Brakhage is one of those people whose work hangs in the back of my mind with a frequency well out of proportion to my actually engaging with his work. I first (and really, only, until recently) saw his films about seven years ago, when he introduced a marathon screening of what must have been almost his complete works. Hours later, I stumbled out of the theatre, knowing that this was someone whose work I should have seen years before, having seen on the screen something new to me, a new way of looking at the possibility of film. I've felt an analogous sensation with only a handful of artists and writers; I've found it again in the luminously fractured English of Amos Tutuola, Ray Johnson's conceptual games of "correspondance", and Michel Butor's reimagining of the page and narrative.
But Brakhage. His films tend to be short and silent. His editing, if it can be called such, is quick - often an image only shows for a frame, then it's gone. In most of his films, he cuts quickly between shots; in some of his work he abandoned the camera entirely to work directly with the film stock itself, painting on it or gluing things to it. Jean-Luc Godard said that the cinema was the truth twenty-four times a second; rarely has this been so literally explicated as in Brakhage's films. Mitchell Stephens, in The Rise of the Image, the Fall of the Word, saw in the lightning-fast editing style that Brakhage introduced a possible future for communication. But Brakhage was following his own very particular muse. His aim, he declared in one of his later films, was to show on film what the eye sees when it is closed, the phenomenon of dancing spots of color that's been termed hypnogogic vision.
Mothlight, stills from which can be seen floating about this post, was one of the films that stuck most clearly in my mind. In it, Brakhage set out to show "what a moth might see from birth to death if black were white and white were black." The ratio of the size of the moth to that of ourselves is roughly that of the size of a film negative to the blown up film; with this in mind, and a collection of dead moths that had flown into a light and perished, Brakhage composed a three-minute film, every frame of which is composed of things from the moth's world: wings, plant leaves, and flowers. Brakhage pasted these objects directly onto the film stock. When the film is projected there's a rush of images made unfamiliar by size: a moth wing or a flower twenty feet wide is something we've never seen before. Though constructed of objects that we imagine we know, the moth's world as Brakhage depicts it is utterly foreign to us. Speed has a lot to do with it: the eye can't possibly process twenty-four different images per second. The moth's world is much faster than our own.
The memory of that flicker of images has stayed with me, as much for its ephemerality as anything else: I'd seen something briefly, not long enough to remember the images themselves, but long enough to remember the film more clearly than nearly anything else I saw that year.
ii. the act of seeing with one's own eyes
The Criterion Collection released a two-DVD set of some of Brakhage's films a few years ago, a year or so before he passed away and was briefly in the news again. I'm not sure why I held off buying a copy of the DVD: maybe a devotion to the ephemerality of memory? I did convince an old roommate, a painter, to buy his own copy soon after it was released: independently, he had been trying to capture with oils hypnogogic visions of his own. But I didn't get a copy until a few weeks ago when, after finding him again in Mitchell Stephens's book, I broke down during an online Christmas DVD sale. It arrived, and I inserted one of the discs into my Powerbook, curious to see how his films compared to my memory of them.
Watching him the first time, I'd been supine before the screen. Watching him on my laptop was something different, something surprising. My first impulse after starting one of the films was the obvious one when something's going too fast, though not an option one has as part of an audience: to hit the pause button. The moth wings, spider webs, flowers, and blades of grass instantly snap into startling focus: around every object, you can see the halo of glue that Brakhage used to hold it to the celluloid. Then another pleasant surprise: on the Apple DVD player, if you then press the right arrow button, you can advance one frame. Not another frame in the same shot, as one would expect in an ordinary film, but another image entirely, though sometimes, you realize, a connected object: in some frames one sees the top of a plant leaf, in the following, the bottom, exactly as Brakhage constructed the film. (Images of the film stock itself - not just screencaptures from the DVD - can be seen at critic Fred Camper's website, which offers a dizzying amount of information on Stan Brakhage.)
Technically, this is not very exciting at all: pausing to see a crisp frame is just one of the niceties of the DVD that we're all used to. But what this does to the viewer's experience of the film is immeasurable. Instead of the imposed stricture of watching the images projected at 24 frames per second, you're free to proceed through Brakhage's frames at any rate you like - his film becomes something like a slide show. As great a change, though, is imparted to the viewer, who goes from being a passive recipient of speeding images to an active participant with control over what's being shown.
This isn't, it's worth pausing to consider, something that would have been possible with a VHS tape. Video didn't respect the frames of film, and a paused VHS tape generally gives you a blurred and indistinct image. Presumably with a projector and a copy of the original films, you could do the same thing. (This would also resolve the incongruity of looking at images that are meant to have light projected through them rather than being composed of red, green, and blue blips of it, as on my computer screen.) Alas, not many of us have our own movie theater to try this out in. For the rest of us, this amount of control is something that arrives with new digital media, and deserves to be considered as a function of it.
From time to time, Bob talks to the programmers busy making Sophie about imagining film as a book that flips through 86,400 pages per hour. This sort of talk tends to throw them into conniptions (to paraphrase: nobody in their right mind thinks that way, and current processors don't have nearly enough computational power and probably won't for the next fifty years). But despite their objections, that's almost exactly what we have here, if not by design. It's worth noting, of course, that the tools for reading film in this way aren't yet perfect: while I can press the right arrow to advance a frame in my DVD player, I can't, for some reason, press the left arrow to go back a frame: I have to rewind. I can't look at several consecutive frames together without a fair amount of work. There's more work for the programmers.
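The 86,400 figure is plain frame-rate arithmetic, worth making explicit since it's the whole basis of the film-as-book analogy (the frame number below is arbitrary, chosen only for illustration):

```python
# Sound film runs at 24 frames per second, so an hour of film is
# 24 * 60 * 60 = 86,400 frames -- the "pages per hour" Bob cites.
FPS = 24
frames_per_hour = FPS * 60 * 60
print(frames_per_hour)  # 86400

# Reading film as a paged book also means any "page number" maps back
# to a timestamp: frame 4320, for instance, falls 180 seconds in.
frame = 4320
print(frame / FPS)  # 180.0
```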
Thinking about technology for the past month or so, I've often found myself in a mild funk, which might be the sort of thing one expects to set in around the end of the year, when I, at least, find myself wanting to neatly box the disjointed events of the past year to take up to the attic for storage. The crux of my worrying: while there's clearly no shortage of ideas for new ways to say things - as even a cursory reading of this blog will readily attest - there seems to be a comparative paucity of new ways to understand things. Maybe this makes sense: people like novelty. It's more exciting to announce something brand new and different than to find a new way to look at something familiar. Who can be bothered to care about a fourteenth way of looking at a blackbird when you can make your own genetically-modified fuchsia- and chartreuse-birds?
My funk wasn't straight misoneism: I'm all for new forms; otherwise I wouldn't be working here. But if we're to create new forms that resonate as strongly as the physical book has been able to historically - a project that I suspect Brakhage considered himself engaged in - it's just as important to find new ways to understand how what we've created works. And this, I think, is why the simple gesture of hitting the "pause" button in the middle of a film feels like something of a revelation to me, puncturing my December miasma. It's not a blinding Damascene conversion, and that's perhaps the point: it's a realization that there are plenty of possibilities for new ways to look at things. We just need to notice them.
The late Guy Davenport, one of Brakhage's friends and a kindred spirit, defended the unity of some of his own work - short stories that included his own drawings as an integral part of the story - by arguing that text, picture, and film weren't in opposition, but were all images alike:
"A page, which I think of as a picture, is essentially a texture of images. . . . The text of a story is therefore a continuous graph, kin to the imagist poem, to a collage (Ernst, Willi Baumeister, El Lissitzky), a page of Pound, a Brakhage film."
(from "Ernst Machs Max Ernst", pp. 374-5 in The Geography of the Imagination)
Davenport's declaration can be turned inside-out: we can now take a Brakhage film and read it as a series of pages. The word and the image still aren't quite the same thing, but digital media allows us to think about them in some of the same ways. Watching with the pause button ready, we can scrutinize the composition of a single frame of film just as we might scrutinize an individual line or word in a poem, a page of a book.
Or again: historically, the coming of the book might be seen as freeing the reader from the dominion of time. The pre-literate can only listen to a text being read, while the literate is free to read at leisure. It's a pause button, of a sort. Brakhage's moth seems an apt tool for thinking here. I haven't done the math, but I'd imagine the ratio of the length of a moth's life to our own is about the same as the ratio of the moth's size to our own. When we look at a moth, we see a being utterly bound by time. But it doesn't have to be that way.
bookcrossing.com and the future of the book 12.27.2005, 12:39 PM
I came across an interesting overview piece on the future of the book in Global Politician, an online magazine that largely focuses on reporting underreported global issue stories. The author of the piece, economist and political consultant Sam Vaknin, covers much of the terrain we usually cover here at the Institute, but he also makes a noteworthy point about how the online book-swapping collective BookCrossing has been turning paper books into "networked books" over the past four years. Vaknin writes:
Members of the BookCrossing.com community register their books in a central database, obtain a BCID (BookCrossing ID Number) and then give the book to someone, or simply leave it lying around to be found. The volume's successive owners provide BookCrossing with their coordinates. This innocuous model subverts the legal concept of ownership and transforms the book from a passive, inert object into a catalyst of human interactions. In other words, it returns the book to its origins: a dialog-provoking time capsule.
I appreciate the fact that Vaknin draws attention to the ways in which books can be conceptually transformed by ventures such as BookCrossing even while they remain physically unchanged. Currently, there are only about half a million BookCrossing members, making the phenomenon somewhat less popular than podcasting, but given that most BookCrossing members are serious readers -- and highly international -- the movement is still noteworthy.
the future of the book: korea, 13th century 12.27.2005, 11:35 AM
Nestled in the Gaya mountain range in southern Korea, the Haeinsa monastery houses the Tripitaka Koreana, the largest, most complete set of Buddhist scriptures in existence -- over 80,000 wooden tablets (enough to print all of Buddhism's sacred texts) kept in open-air storage for the past six centuries. The tablets were carved between 1237 and 1251 in anticipation of the impending Mongol invasion, both as a spiritual effort to ward off the attack, and as an insurance policy. They replaced an earlier set of blocks that had been destroyed in the last Mongol incursion in 1231.
From Korea's national heritage site description of the tablets:
The printing blocks are some 70cm wide, 24cm long, and 2.8cm thick on average. Each block has 23 lines of text, each with 14 characters, on each side. Each block thus has a total of 644 characters on both sides. Some 30 men carved the total 52,382,960 characters in the clean and simple style of Song Chinese master calligrapher Ou-yang Hsun, which was widely favored by the aristocratic elites of Goryeo. The carvers worked with incredible dedication and precision without making a single error. They are said to have knelt down and bowed after carving each character. The script is so uniform from beginning to end that the woodblocks look like the work of one person.
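As an aside, those figures are internally consistent, which a line of arithmetic confirms:

```python
# Checking the heritage-site figures: 23 lines x 14 characters per
# side, two sides per block, gives 644 characters per block. The
# quoted total of 52,382,960 characters divides out with no remainder
# to exactly 81,340 blocks -- consistent with "over 80,000 tablets."
chars_per_side = 23 * 14              # 322
chars_per_block = chars_per_side * 2  # 644
total_chars = 52_382_960
assert total_chars % chars_per_block == 0
print(total_chars // chars_per_block)  # 81340
```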
I stayed at the Haeinsa temple last Friday night on a sleeping mat in a bare room with a heated floor, alongside a number of noisy Koreans (including the rather sardonic temple webmaster -- Haeinsa is a Unesco World Heritage site and so keeps a high profile). At three in the morning, at the call to the day's first service, I tramped around the snowy courtyards under crisp, chill stars and watched as the monks pounded a massive barrel-shaped drum hanging inside a pagoda. This was for the benefit of those praying inside the temple (where it sounds like distant thunder). Shivering to the side, I continued to watch as they rang a bell the size of a Volkswagen with a polished log swung on ropes like a wrecking ball. Next to it, another monk ripped out a loud, clattering drum roll inside the wooden ribs of a dragon-like fish, also suspended from the pagoda's roof. It was freezing cold with a biting wind -- not pleasant to be outside, and at such an hour. But the stars were absolutely vivid. I'm no good at picking out constellations, but Orion was poised unmistakably above the mountains as though stalking an elk on the other side of the ridge.
It's a magical, somewhat harsh place, Haeinsa. The Changgyeonggak, the two storage halls that house the Tripitaka, were built ingeniously to preserve the tablets by blocking wind, facilitating ventilation and distributing moisture. You see the monks busying themselves with devotions and chores, practicing an ancient way of life founded upon those tablets. The whole monastery a kind of computer, the monks running routines to and from the database. The mountains, Orion, the drum all part of the program. It seemed almost more hi-tech than cutting edge Seoul.
More on that later.
holiday round up 12.26.2005, 2:07 PM
The institute is pleased to announce the release of the blog Without Gods. Mitchell Stephens is using this blog as a public workshop and forum for his work on his latest book which focuses on the history of atheism.
The wikipedia debate continues as Chris Anderson of Wired Magazine weighs in, arguing that people are uncomfortable with wikipedia because they cannot comprehend that emergent systems can produce "correct" answers on the macroscale even if no one is really watching the microscale. Lisa posits that Anderson goes too far in his defense of wikipedia and that blind faith in the system is equally disconcerting, if not more so.
The ITP Winter 2005 show included several institute-related projects. Among them were explorations in digital graphic reinterpretation of poetry, student social networks, navigating New York through augmented reality, and manipulating video to document the city by freezing time and space.
Lisa discovered an interesting historical parallel in an article from the Dial dating back to 1899. The specialized bookseller's demise was lamented after department stores began selling books as loss leaders, not unlike today's criticisms of Amazon. As libraries increase their relationships with the private sector, Lisa notes that some bookstores are playing a role in fostering intellectual and cultural communities, a role which libraries traditionally held.
Lisa looked at Reed Johnson's assertion in the Los Angeles Times that 2005 was the year that mass media gave way to the consumer-driven techno-cultural revolution.
The discourse on game space continues to evolve with Edward Castronova's new book Synthetic Worlds. As millions of people spend more time in these immersive environments, Castronova looks at how the real and the virtual are blending, through the lens of an economist.
In another advance for content creation volunteerism, LibriVox is creating and distributing audio files of public domain literature. With recordings ranging from The Wizard of Oz to the US Constitution, Lisa was impressed by the quality of the files, which are voiced and recorded by volunteers who feel passionate about a particular work.
without gods: an experiment 12.22.2005, 7:27 AM
The institute is pleased to announce the launch of Without Gods, a new blog by New York University journalism professor and media historian Mitchell Stephens that will serve as a public workshop and forum for the writing of his latest book. Mitch, whose previous works include A History of News and The Rise of the Image, the Fall of the Word, is in the early stages of writing a narrative history of atheism, to be published in 2007 by Carroll and Graf. The book will tell the story of the human struggle to live without gods, focusing on those individuals, "from Greek philosophers to Romantic poets to formerly Islamic novelists," who have undertaken the cause of atheism - "a cause that promises no heavenly reward."
Without Gods will be a place for Mitch to think out loud and begin a substantive exchange with readers. Our hope is that the conversation will be joined, that ideas will be challenged, facts corrected, queries and probes answered; that lively and intelligent discussion will ensue. As Mitch says: "We expect that the book's acknowledgements will eventually include a number of individuals best known to me by email address."
Without Gods is the first in a series of blogs the institute is hosting to challenge the traditional relationship between authors and readers, to learn how the network might more directly inform the usually solitary business of authorship. We are interested to see how a partial exposure of the writing process might affect the eventual finished book, and at the same time to gently undermine the notion that a book can ever be entirely finished. We invite you to read Without Gods, to spread the word, and to take part in this experiment.
wikipedia and 'alien logic': the debate gets spiritual 12.21.2005, 1:23 PM
If you like Mitchell Stephens's book-blog about the history of atheism, you might want to compare his approach to that of "The Long Tail," a book-blog written by Chris Anderson of Wired Magazine. Like Stephens, Anderson is trying to work out his ideas for a future book online: his book looks at the technology-driven atomization of our economy and culture, a phenomenon Anderson (and Wired) doesn't seem particularly troubled by.
On December 18, Anderson wrote a post about what he saw as the real reason people are uncomfortable with Wikipedia: according to Anderson, we're unable to reconcile with the "alien logic" of probabilistic and emergent systems, which produce "correct" answers on the macro-scale because "they are statistically optimized to excel over time and large numbers" -- even though no one is really minding the store.
On one hand, Anderson's been saying what I (and lots of other people) have been saying repeatedly over the past few weeks: acknowledge that sometimes Wikipedia gets things wrong, but also pay attention to the overwhelming number of times the open-source encyclopedia gets things right. At the same time, I'm not comfortable with Anderson's suggestion that we simply can't "wrap our heads around" the essential rightness of probabilistic engines, and must take their correctness on faith. This call for greater faith in the algorithm also troubles Nicholas Carr, who responds agnostically:
Maybe it's just the Christmas season, but all this talk of omniscience and inscrutability and the insufficiency of our mammalian brains brings to mind the classic explanation for why God's ways remain mysterious to mere mortals: "Man's finite mind is incapable of comprehending the infinite mind of God." Chris presents the web's alien intelligence as something of a secular godhead, a higher power beyond human understanding... I confess: I'm an unbeliever. My mammalian mind remains mired in the earthly muck of doubt. It's not that I think Chris is wrong about the workings of "probabilistic systems." I'm sure he's right. Where I have a problem is in his implicit trust that the optimization of the system, the achievement of the mathematical perfection of the macroscale, is something to be desired... Might not this statistical optimization of "value" at the macroscale be a recipe for mediocrity at the microscale - the scale, it's worth remembering, that defines our own individual lives and the culture that surrounds us?
Carr's point is well-taken: what is valuable about Wikipedia to many of us is not that it is an engine for self-regulation, but that it allows individual human beings to come together to create a shared knowledge resource. Anderson's call for faith in the system swings the pendulum too far in the other direction: while other defenders of Wikipedia have pointed out ways to tinker with the encyclopedia's human interface, Anderson implies that the human interface -- at the individual level -- doesn't quite matter. I don't find this particularly comforting: in fact, this idea seems much scarier than Seigenthaler's warning that Wikipedia is a playground for "volunteer vandals."
itp winter 2005 show 12.21.2005, 11:48 AM
New York University's Interactive Telecommunications Program recently had its Winter 2005 show. As always, the show was packed with numerous projects and visitors. Some of the work touched upon ideas we think about at the institute.
A few projects explored new ways to mediate New York. Leif Mangelsen and Jung Oh, in Time Scanned, created static panoramic images by stitching together slivers of digital video to document New York over time and space. Moving beyond the traditional guidebook and map, the augmented reality project DataCity looked at how we navigate New York. In this case, Shagun Singh, Jon Kirchherr and Saranont Limpananont proposed to layer contextual information on an interactive display system to enhance the experience of traveling through the city.
Saiyanthan Sriskandarajah created The Wasteland, a digital representation of T.S. Eliot's poem. Each letter is encoded in binary and then printed with a large-format printer. The end result is an abstracted digital representation of a literary work.
Joshua Knowles, Adam Asarnow, Charles Pratt, and Rocio Barcia created Itp.licio.us, a new twist on the facebook that explored folksonomy, privacy, and social networks by asking fellow first-year students to tag each other. The successful end result (students received an average of 29.4 tags) also raised issues of internet-mediated social interaction and of making public what classmates think of one another.
Although the twice-yearly ITP shows can be a bit of an overwhelming experience, they offer a glimpse (albeit scaled down) of emerging applications of technology which are often just around the corner for mainstream use.
the future of the book(store), circa 1899 and 2005 12.20.2005, 2:47 PM
Leafing through an 1899 issue of the literary magazine The Dial, I came across an article called "The Distribution of Books" which resonated with the present moment at several uncanny junctures, and got me thinking about the evolving relationship between publishers, libraries, bookstores, and Google Book Search -- thoughts which themselves evolved after a conversation with a writer from Pages magazine about the future of bookstores.
"The Distribution of Books" focused mainly on changes in the way books were marketed and distributed, warning that bookstores might go out of business if they failed to change their own business practices in response. "Once more the plaint of the bookseller is heard in the land," lamented the author, "and one would be indeed stony-hearted who could view his condition without concern."
According to "The Distribution of Books," what should have been the privileged domain of the bookseller was being eroded at the century's end by the book sales of "the great dealers in miscellaneous merchandise." The article was referring to the department stores that sold books at a loss in order to lure in customers: a bit less than a century later, critics would make the same claims about Amazon, that great dealer in miscellaneous merchandise now celebrating its tenth anniversary. "The Distribution of Books" also complains about publishers who attempted to market to readers directly. Similar complaints were made this past year after Random House joined Scholastic and Simon and Schuster in establishing a direct-sale online presence.
Of course, 2005 is not 1899, and this is what makes the Dial piece so startling in its familiarity: in 1899, after all, the distinction between publisher and bookseller was much fresher than now. Hybrid merchant/tradesmen who printed, marketed and distributed books at the same time had been the norm for a much longer interval than the shop owner who ordered books from a variety of different publishing houses. In this sense, the publisher's "new" practice of selling books directly was in fact a modification of bookselling practices that predated the specialized bookshop. Ultimately, the Dial piece is less about the demise of the bookseller than about the imagined demise of a relatively recent phenomenon -- the specialized bookseller with an investment in promoting the culture of books generally rather than the work of a specific author or publisher.
This tension between specialization and generalization also revealed itself in the article's most indignant passage, in which the author expressed outrage over the idea that libraries might themselves get involved in bookselling. According to the Dial, bookstore owners had been subjected to:
an onslaught so unexpected and so startling it left [them] gasping for breath -- [a suggestion] made a few months ago by librarian Dewey, who calmly proposed that the public libraries throughout the country should be book-selling as well as book-circulating agencies... Booksellers have always looked askance at public libraries, not understanding how they create an appetite for reading that is sure in the end to redound to the bookseller's advantage, but their suspicious fears never anticipated the explosion in their camp of such a bombshell as this.
After delivering the "bombshell," the author goes on to reassure the reader that Dewey's suggestion (yes, that would be Melvil Dewey, inventor of the Dewey Decimal System) could never be taken seriously in America: such a venture on the part of the nation's libraries would represent a socialistic entangling of the spheres of government and industry. Books sold by libraries would be sold without an eye to profit, conjectured the author, and publishing -- and perhaps the notion of the private sector itself -- would collapse. "If the state or the municipality were to go into the business of selling books at cost, what should prevent it from doing the like with groceries?"
While the Dial piece made me think about the ways in which the perceived "new" threats to today's bookstores might not be so new, it also made me consider how Dewey's proposal might emerge in modified form in the digital era. While present-day libraries haven't been proposing the sale of books, they certainly are planning to get into the business of marketing and distribution, as the World Digital Library attests. They are also proposing, as Librarian of Congress James Billington has said, a shift toward significant partnerships with for-profit businesses which have (for various reasons) serious economic stakes in sifting through digital materials. And, as Ben noted a few weeks ago, libraries themselves have been borrowing various strategies from online retailers to catalog and present information.
Just as libraries are starting to embrace the private sector, many bookstores are heading in the other direction: driven to the verge of extinction by poor profits, they are reinventing themselves as nonprofits that serve a valuable social and cultural function. Sure, books are still for sale, but the real "value" of a bookstore now lies not in its merchandise, but in the intellectual or cultural community it fosters: in that respect, some bookstores are akin to the subscription libraries of the past.
Is it so impossible to imagine a future in which one walks into a digital distribution center, orders a latte, and uses an Amazon-type search engine to pull up the ebook that can be read at one's reading station after the requisite number of ads have flashed on the screen? Is this a library? Is this a bookstore? Does it matter? Should it?
mass culture vs technoculture? 12.20.2005, 8:54 AM
It's the end of the year, and thus time for the jeremiads. In a December 18 Los Angeles Times article, Reed Johnson warns that 2005 was the year when "mass culture" -- by which Johnson seemed to mean mass media generally -- gave way to a consumer-driven techno-cultural revolution. According to Johnson:
This was the year in which Hollywood, despite surging DVD and overseas sales, spent the summer brooding over its blockbuster shortage, and panic swept the newspaper biz as circulation at some large dailies went into free fall. Consumers, on the other hand, couldn't have been more blissed out as they sampled an explosion of information outlets and entertainment options: cutting-edge music they could download off websites into their iPods and take with them to the beach or the mall; customized newscasts delivered straight to their Palm Pilots; TiVo-edited, commercial-free programs plucked from a zillion cable channels. The old mass culture suddenly looked pokey and quaint. By contrast, the emerging 21st century mass technoculture of podcasting, video blogging, the Google Zeitgeist list and "social networking software" that links people on the basis of shared interest in, say, Puerto Rican reggaeton bands seems democratic, consumer-driven, user-friendly, enlightened, opinionated, streamlined and sexy.
Or so it seems, Johnson continues: before we celebrate too much, we need to remember the difference between consumers and citizens. We are technoconsumers, not technocitizens, and as we celebrate our possibilities, we forget that "much of the supposedly independent and free-spirited techno-culture is being engineered (or rapidly acquired) by a handful of media and technology leviathans: News Corp., Apple, Microsoft, Yahoo, and Google, the budding General Motors of the Information Age."
I hadn't thought of Google as the GM of the Information Age. I'm not at all sure, actually, that the analogy works, given the different ways in which GM and Google leverage the US economy -- fifty years hence, Google plant closures won't be decimating middle America. But I'm very much behind Johnson's call for more attention to media consolidation in the age of convergence. Soon, it's going to be time for the Columbia Journalism Review to add the leviathans listed above to its Who Owns What page, which enables users to track the ownership of most old media products, but currently comes up short in tracking new media. Actually, they should consider updating it as of tomorrow, when the final details of Google's billion-dollar deal for five percent of AOL are made public.
line between the real and game space... a peek into the future? 12.19.2005, 12:13 PM
As Lisa noted in her comment to a previous post on class and gaming, the Economist reviewed the new book by Edward Castronova entitled Synthetic Worlds: The Business and Culture of Online Games.
Castronova also wrote an essay included in the Game Design Reader, the anthology behind the "Making Games Matter" panel we attended. This essay marks the first analysis of the economics of people and their interactions in a virtual reality. Interestingly, it has yet to be formally published in an academic economics journal.
In these studies, Castronova calculates the economics of the virtual by looking at what people are willing to pay in real currency for online gaming characters and their associated costs. As previously posted, people are making their livings in these virtual spaces by creating and selling their avatars. We are entering an era where the boundaries between the real and virtual are blurring.
Although some affluent gamers are buying their way into the higher echelons of game spaces such as EverQuest, anyone with enough time and skill can still create advanced characters. Whereas the real world offers only a limited number of spots in the NBA and CEO positions at Fortune 500 companies, there is enough "room" in the game space for many top-tier characters, because the vast majority of the "normal" characters are bots run by the gaming engine.
Is the online game space the utopian society where everyone can be equal and rich and powerful? Is this a peek at the future of the real world when robots take over all the jobs that people don't want to perform?
librivox -- free public domain books read aloud by volunteers 12.19.2005, 9:26 AM
Just read a Dec. 16 Wired article about Canadian Hugh McGuire's brilliant new venture LibriVox. LibriVox is creating and distributing free audiobooks by asking volunteers to record audio files of works of literature in the public domain. The files are hosted on the Internet Archive and are available in MP3 and OGG formats.
Thus far, LibriVox -- which has only been up for a few months -- has recorded about 30 titles, relying on dozens of volunteers. The website promotes the project as the "acoustical liberation of the public domain" and claims that the ultimate goal is to liberate all public domain works of literature. For now, titles cataloged on the website include L. Frank Baum's The Wizard of Oz, Joseph Conrad's The Secret Agent and the U.S. Constitution.
Using Librivox couldn't be easier: clicking on an entry will bring you to a screen which allows you to select a Wikipedia entry on the book in question, the e-Gutenberg file of the book, an alternate Zip file of the book, and the Librivox audio version, available chapter by chapter with the names of each volunteer reader noted prominently next to the chapter information.
I listened to parts of about a half-dozen book chapters to get a sense of the quality of the recordings, and I was impressed. The volunteers have obviously chosen books they are passionate about, and the recordings are lively, quite clear and easy to listen to. As a regular audiobook listener, I was struck by the fact that while most literary audiobooks are read by authors who tend to work hard at conveying a sense of character, the LibriVox selections seemed to convey, more than anything, the reader's passion for the text itself; i.e., for the written word. Here at the Institute we've been spending a fair amount of time trying to figure out when a book loses its book-ness, and I'd argue that while some audiobooks blur the boundary between book and performance, the LibriVox books remind us that a book reduced to a stream of digitally produced sound can still be very much a book.
The site's definitely worth a visit, and, if you've got a decent voice and a few spare hours, there's information about how to become a volunteer reader yourself. And finally, don't miss the list of other audiolit projects on the lower right-hand corner of the homepage: there are many voices out there, reading many books -- including Japanese Classical Literature For Bedtime, if you're so inclined.
last week: wikipedia, r kelly, gaming and google panels, and more... 12.18.2005, 4:27 PM
Here's an overview of what we've been posting over the last week. As well, a few of us having been talking about ways to graphically represent text, so I thought I would include a mind map of this overview.
As a follow-up on the increasingly controversial Wikipedia front, Daniel Brandt uncovered that it was Brian Chase who posted the false information about John Seigenthaler reported here last week. To add fuel to the fire, Nature weighed in with a study suggesting that Encyclopedia Britannica may not be much more reliable than Wikipedia.
Business Week noted a possible future of pricing for data transfer. Currently, carriers such as phone and cable companies are developing technology to identify and control what types of media (voice, images, text or video) are being uploaded. This ability opens the door to charging for different uses of data transfer, which would have a huge impact on uploading content for personal creative use of the internet.
Liz Barry and Bill Wetzel shared some of their experiences from their "Talk to Me" project. With their "talk to me" sign in tow, they travel around New York and the rest of the US looking for conversation. We were impressed at how they do not have a specific agenda besides talking to people. In the mediated age, they are not motivated by external political/religious/documentary intentions. What they do document is available on their website, and we look forward to seeing what they come up with next.
The Google Book Search debate continues as well, via a panel discussion hosted by the American Bar Association. Interestingly, publishers spoke as if the wide-scale use of ebooks were imminent. More importantly, even if this particular case settles out of court, the courts will face a pressing need to define copyright and fair use guidelines for these emerging uses.
With the protest of the WTO meetings in Hong Kong this past week, new journalism forms took one step forward. The website Curbside @ WTO covered the meetings with submissions from journalism students, bloggers and professional journalists.
McDonald's filed a patent which suggests that it intends to offer clips of movies instead of the traditional toys in its kid-oriented Happy Meals. Lisa pondered whether a video clip can successfully replace a toy and, if it does, what the effects on children's imaginations might be.
Dan examined R. Kelly's experiments with form and the "serial song" in his Trapped in the Closet recordings. While R. Kelly has varying success in this endeavor, Dan compared the experience not only to the serial novel, but also to Julie Powell's foray into transferring her blog into book form, and what she might have learned from R. Kelly (it's hard to make serialized pieces maintain an overall coherency).
The world of academic publishing was challenged with a proposal calling for the creation of an electronic academic press. This segment seems especially ripe for the shift to digital publishing, as many journals with small circulations face rising printing and production costs.
Sol and others from the institute attended "Making Games Matter," a panel with contributors from The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman. The discussion covered, among other things, involving the academy in creating a discourse for gaming and game design, obstacles in studying and creating games, and the game "industry" itself. The book and panel called for games and gaming to undergo a formal study akin to that of the novel and the experience of reading. Also, in the gaming world, the class economics of the real and the virtual began to emerge as a Chinese firm pays employees to build up characters in MMOGs to sell to affluent gamers.
off to seoul 12.17.2005, 7:02 AM
Over the next couple of weeks I will be traveling in South Korea, the land that invented movable metal type (1234), and which to this day is cooking up the future of the book on a high flame: from massively multiplayer online games, to Samsung's Ubiquitous Dream Hall, to the massively multiplayer citizen journalism site OhmyNews. It will take me about 20 hours to get there, but I feel I'll be stepping a few years into the future. I expect... well, I have no idea what to expect. And all this futurama is only the tip of the iceberg. I have a camera and it shouldn't be too hard to find an internet connection, so expect a few postcards.
watching wikipedia watch 12.16.2005, 10:37 AM
In an interview in CNET today, Daniel Brandt of Wikipedia Watch -- the man who tracked down Brian Chase, the author of the false biography of John Seigenthaler on Wikipedia -- details the process he used to track Chase down. I found it an interesting reality check on the idea of online anonymity. I was also a bit nonplussed by the fact that Brandt created a phony identity for himself in order to discover who had created a fake version of the real Seigenthaler. According to Brandt:
All I had was the IP address and the date and timestamp, and the various databases said it was a BellSouth DSL account in Nashville. I started playing with the search engines and using different tools to try to see if I could find out more about that IP address. They wouldn't respond to traceroute pings, which means that they were blocked at a firewall, probably at BellSouth...But very strangely, there was a server on the IP address. You almost never see that, since at most companies, your browsers and your servers are on different IP addresses. Only a very small company that didn't know what it was doing would have that kind of arrangement. I put in the IP address directly, and then it comes back and said, "Welcome to Rush Delivery." It didn't occur to me for about 30 minutes that maybe that was the name of a business in Nashville. Sure enough they had a one-page Web site. So the next day I sent them a fax. [they didn't respond, and] The next night, I got the idea of sending a phony e-mail, I mean an e-mail under a phony name, phony account. When they responded, sure enough, the originating IP address matched the one that was in Seigenthaler's column.
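The final step Brandt describes -- matching the originating IP address in the reply email's headers against the IP logged with the anonymous Wikipedia edit -- can be sketched in a few lines of Python. This is a hypothetical reconstruction, not Brandt's actual tooling, and the addresses and hostnames below are made up:

```python
import re

# IP address recorded in Wikipedia's edit history (made-up placeholder).
wikipedia_edit_ip = "65.81.97.208"

# Headers from the reply to the phony email (also made up).
email_headers = """\
Received: from adsl-65-81-97-208.bna.example.net ([65.81.97.208])
        by mx.example.org with ESMTP; Fri, 9 Dec 2005 21:14:02 -0600
From: info@example-courier.com
"""

def extract_ips(headers: str) -> list[str]:
    """Pull every dotted-quad IPv4 address out of a block of mail headers."""
    return re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", headers)

# The match that closed the case: the reply's originating IP equals the
# IP recorded alongside the anonymous edit.
print(wikipedia_edit_ip in extract_ips(email_headers))  # True
```

The real investigation also involved reverse DNS lookups and visiting the web server running on the address, but the core of the "bust" was this simple string comparison.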
Overall, I'm still having mixed feelings about Brandt's "bust" of Brian Chase -- mostly because of the way the event has skewed discussion of Wikipedia, but partly because Chase's outing seems to have damaged the hapless-seeming Chase much more than Seigenthaler had been damaged by the initial fake post. The CNET interview suggests that Brandt might also have some regrets about the fallout over Chase, though Brandt frames his concern as yet another critique of Wikipedia. Brandt claims he is uncomfortable about the fact that Chase has a Wikipedia biography, since "when this poor guy is trying to send out his resume," employers will google him, find the Wikipedia entry, and refuse to hire him: since Wikipedia entries are not as ephemeral as news articles, he adds, the entry is actually "an invasion of privacy even more than getting your name in the newspaper." This seems to be an odd bit of reasoning, since Brandt, after all, was the one who made Chase notorious.
When asked by the CNET interviewer how he would "fix" Wikipedia, Brandt maintained an emphasis on the idea that biographical entries are Wikipedia's Achilles heel, a belief which is tied, perhaps, to his own reasons for taking Wikipedia to task -- a prominent draft resister in the 1960s, Brandt discovered that his own Wikipedia entry had links he considered unflattering. He explained to CNET that his first priority would be to "freeze" biographies on the site which had been checked for accuracy:
I would go and take all the biographical articles on living persons and take them out of the publicly editable Wikipedia and put them in a sandbox that's only open to registered users. That keeps out all spiders and scrapers. And then you work on all these biographies and get them up to snuff and then put them back in the main Wikipedia for public access but lock them so they cannot be edited. If you need to add more information, you go through the process again. I know that's a drastic change in ideology because Wikipedia's ideology says that the more tweaks you get from the masses, the better and better the article gets and that quantity leads to improved quality irrevocably. Their position is that the Seigenthaler thing just slipped through the crack. Well, I don't buy that because they don't know how many other Seigenthaler situations are lurking out there.
"Seigenthaler situations." The term could come into use either to refer to the dubious accuracy of an online post or, alternately, to a phobic response to open-source knowledge construction. Time will tell.
Meanwhile, in the pro-Wikipedia world, an article in the Chronicle of Higher Education today notes that a group of Wikipedia fans have decided to try to create a Wikiversity, a learning center based on Wiki open-source principles. According to the Chronicle, "It's not clear exactly how extensive Wikiversity would be. Some think it should serve only as a repository for educational materials; others think it should also play host to online courses; and still others want it to offer degrees." I'm curious to see if anything like a Wikiversity could get off the ground, and how it will address the tension around open-source knowledge that has been foregrounded by the Wikipedia-bashing that has taken place over the past few weeks.
Finally, there's a great defense of Wikipedia in Danah Boyd's Apophenia. Among other things, Boyd writes:
We should be teaching our students how to interpret the materials they get on the web, not banning them from it. We should be correcting inaccuracies that we find rather than protesting the system. We have the knowledge to be able to do this, but all too often, we're acting like elitist children. In this way, i believe academics are more likely to lose credibility than Wikipedia.
the net as we know it 12.16.2005, 7:27 AM
There's a good article in Business Week describing the threat posed by unregulated phone and cable companies to the freedom and neutrality of the internet. The net we know now favors top-down and bottom-up publishing equally. Yahoo! or The New York Times may have more technical resources at their disposal than your average blogger, but in the pipes that run in and out of your home connecting you to the net, they are equals.
That could change, however. Unless government gets pro-active on the behalf of ordinary users, broadband providers will be free to privilege certain kinds of use and certain kinds of users, creating the conditions for a broadcast-oriented web and charging higher premiums for more independently creative uses of bandwidth.
Here's how it might work:
So the network operators figure they can charge at the source of the traffic -- and they're turning to technology for help. Sandvine and other companies, including Cisco Systems, are making tools that can identify whether users are sending video, e-mail, or phone calls. This gear could give network operators the ability to speed up or slow down certain uses.
That capability could be used to help Internet surfers. BellSouth, for one, wants to guarantee that an Internet-TV viewer doesn't experience annoying millisecond delays during the Super Bowl because his teenage daughter is downloading music files in another room.
But express lanes for certain bits could give network providers a chance to shunt other services into the slow lane, unless they pay up. A phone company could tell Google or another independent Web service that it must pay extra to ensure speedy, reliable service.
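The two-sided scheme the article describes -- classify traffic by application type, then speed it up or slow it down depending on who has paid -- can be illustrated with a toy sketch. This is not any carrier's or vendor's actual system; the types and queue numbers are invented for illustration:

```python
# Lower queue number = faster delivery.
PRIORITY = {"voice": 0, "video": 1, "email": 2, "bulk": 3}

def assign_queue(traffic_type: str, sender_pays: bool) -> int:
    """Return the queue a packet of this type would land in under a
    pay-for-priority regime."""
    base = PRIORITY.get(traffic_type, 3)
    # Traffic from non-paying sources gets demoted toward the slowest
    # queue -- the "slow lane" the article warns about.
    return base if sender_pays else min(base + 2, 3)

print(assign_queue("video", sender_pays=True))   # 1: the express lane
print(assign_queue("video", sender_pays=False))  # 3: shunted to the slow lane
```

The point of the sketch is that once the classification gear exists, the same mechanism that protects the Super Bowl stream can just as easily throttle a service whose provider hasn't paid up.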
One commenter suggests a rather unsavory scheme:
The best solution is to have ISPs change monthly billing to mirror cell phone bills: X amount of monthly bandwidth, with any overage charged accordingly. File sharing could become legit, as monies from our monthly bills could be funneled to the appropriate copyright holder (from big media to the regular Joe making music in his room) and the network operators will be making more dough on their investment. With the Skypes of the world I can't see this not happening!
It seems appropriate that when I initially tried to read this article, a glitchy web ad was blocking part of the text -- an ad for broadband access no less. Bastards.
the "talk to me" crew talks with the institute 12.15.2005, 5:29 PM
Liz Barry and Bill Wetzel, the people behind Talk to Me, stopped by the institute offices for lunch today. It is easy to describe what they do: they carry a sign that says "talk to me" and travel the country talking to strangers. However, it is a bit harder to categorize what they do. While not quite a social experiment, they playfully recounted how various places contextualize it: on the Upper West Side of New York they are quasi-therapists, while further south in the East Village they are performance artists. Recently, they biked across the country and back, all the while talking to strangers.
The thing that struck me is how they spend their time talking to people just to do it, without some agenda. They are not fundraisers for a non-profit or religious organization, nor do they take money from people after talking to them (although they accept PayPal and mailed donations). There is no big book deal, reality TV show, or documentary film project looming in the background. They just wanted to start talking to different people, and over three years later, the conversation is still ongoing. By my second year of graduate school, I started feeling that I only did things so that I could document them for future projects. I get no such impression from Bill and Liz.
With blogs, photo sharing services, social networking sites, and affordable digital photography and video cameras, anyone can become a content creator and publisher. Documentation begins to drive all activity. Often, I have seen people walking in Times Square with a digital video camera in hand. Oblivious to their surroundings, they were completely preoccupied with documenting everything. Will they ever watch the endless hours of footage they are recording? Obviously, the camera filters their experience. When Liz and Bill set up shop in Times Square, they mainly want to engage in conversation. Their experiences would be very different if they held cameras, because the interaction shifts from a conversation to an interview.
I am glad that they collected some photos along their journey and recorded their thoughts in journals. I am also glad that they did not let that documentation process interfere with their project, whatever "it" is.
google book search debated at american bar association 12.15.2005, 3:50 PM
Last night I attended a fascinating panel discussion at the American Bar Association on the legality of Google Book Search. In many ways, this was the debate made flesh. Making the case against Google were high-level representatives from the two entities that have brought suit, the Authors' Guild (Executive Director Paul Aiken) and the Association of American Publishers (VP for legal counsel Allan Adler). It would have been exciting if Google, in turn, had sent representatives to make their case, but instead we had two independent commentators, law professor and blogger Susan Crawford and Cameron Stracher, also a law professor and writer. The discussion was vigorous, at times heated -- in many ways a preview of arguments that could eventually be aired (albeit under a much stricter clock) in front of federal judges.
The lawsuits in question center around whether Google's scanning of books and presenting tiny snippet quotations online for keyword searches is, as they claim, fair use. As I understand it, the use in question is the initial scanning of full texts of copyrighted books held in the collections of partner libraries. The fair use defense hinges on this initial full scan being the necessary first step before the "transformative" use of the texts, namely unbundling the book into snippets generated on the fly in response to user search queries.
...in case you were wondering what snippets look like
At first, the conversation remained focused on this question, and during that time it seemed that Google was winning the debate. The plaintiffs' arguments seemed weak and a little desperate. Aiken used carefully scripted language about not being against online book search, just wanting it to be licensed, quipping "we're just throwing a little gravel in the gearbox of progress." Adler was a little more strident, calling Google "the master of misdirection," using the promise of technological dazzlement to turn public opinion against the legitimate grievances of publishers (of course, this will be settled by judges, not by public opinion). He did score one good point, though, saying Google has betrayed the weakness of its fair use claim in the way it has continually revised its description of the program.
Almost exactly one year ago, Google unveiled its "library initiative" only to re-brand it several months later as a "publisher program" following a wave of negative press. This, however, did little to ease tensions and eventually Google decided to halt all book scanning (until this past November) while they tried to smooth things over with the publishers. Even so, lawsuits were filed, despite Google's offer of an "opt-out" option for publishers, allowing them to request that certain titles not be included in the search index. This more or less created an analog to the "implied consent" principle that legitimates search engines caching web pages with "spider" programs that crawl the net looking for new material.
In that case, there is a machine-to-machine communication taking place and web page owners are free to insert programs that instruct spiders not to cache, or can simply place certain content behind a firewall. By offering an "opt-out" option to publishers, Google enables essentially the same sort of communication. Adler's point (and this was echoed more succinctly by a smart question from the audience) was that if Google's fair use claim is so air-tight, then why offer this middle ground? Why all these efforts to mollify publishers without actually negotiating a license? (I am definitely concerned that Google's efforts to quell what probably should have been an anticipated negative reaction from the publishing industry will end up undercutting its legal position.)
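For readers unfamiliar with the web convention the panelists had in mind, the mechanism is the robots exclusion standard: a site owner who does not want pages crawled (or cached copies shown, via the per-page `noarchive` robots meta tag) can say so in a plain-text robots.txt file at the site root. A minimal example:

```
# robots.txt -- placed at the root of the site
# Ask all crawlers to skip this directory entirely:
User-agent: *
Disallow: /private/
```

Google's "opt-out" for publishers is essentially this convention transplanted from web pages to books.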
Crawford came back with some nice points, most significantly that the publishers were trying to make a pretty egregious "double dip" into the value of their books. Google, by creating a searchable digital index of book texts -- "a card catalogue on steroids," as she put it -- and even generating revenue by placing ads alongside search results, is making a transformative use of the published material and should not have to seek permission. Google had a good idea. And it is an eminently fair use.
And it's not Google's idea alone; they just had it first and are using it to gain a competitive advantage over their search engine rivals, who, in turn, have tried to get in on the game with the Open Content Alliance (which, incidentally, has decided not to take a stand on fair use as Google has, and is doing all its scanning and indexing under license agreements). Publishers, too, are welcome to build their own databases and make them crawlable by search engines. Earlier this week, HarperCollins announced it would be doing exactly that with about 20,000 of its titles. Aiken and Adler say that if anyone can scan books and build a search engine, then all hell will break loose and millions of digital copies will leak onto the web. Crawford shot back that this lawsuit is not about net security; it is about fair use.
But once the security cat was let out of the bag, the room turned noticeably against Google (perhaps due to a preponderance of publishing lawyers in the audience). Aiken and Adler worked hard to stir up anxiety about rampant ebook piracy, even as Crawford repeatedly tried to keep the discussion on course. It was very interesting to hear, right from the horse's mouth, that the Authors' Guild and AAP both are convinced that the ebook market, tiny as it currently is, is within a few years of exploding, pending the release of some sort of ipod-like gadget for text. At that point, they say, Google will have gained a huge strategic advantage off the back of appropriated content.
Their argument hinges on the fourth factor of the fair use test, which evaluates "the effect of the use upon the potential market for or value of the copyrighted work." So the publishers are suing because Google might be cornering a potential market!!! (Crawford goes further into this in her wrap-up.) Of course, if Google wanted to go into the ebook business using the material in its database, there would have to be a licensing agreement; otherwise they really would be pirating. But these suits are not about a future market; they are about a search service, which should be ruled fair use. If publishers are so worried about the future ebook market, then they should start planning for that business.
To echo Crawford, I sincerely hope these cases reach the court and are not settled beforehand. Larger concerns about Google's expansionist program aside, I think they have made a very brave stand on the principle of fair use, the essential breathing space carved out within our over-extended copyright laws. Crawford reminded the room that intellectual property is NOT like physical property, over which the owner has nearly unlimited rights. Copyright is a "temporary statutory monopoly" originally granted ("with hesitation," Crawford adds) in order to incentivize creative expression and the production of ideas. The internet scares the old-guard publishing industry because it poses so many threats to the security of their product. These threats are certainly significant, but they are not the subject of these lawsuits, nor are they Google's, or any search engine's, fault. The rise of the net should not become a pretext for limiting or abolishing fair use.
curbside at the WTO 12.14.2005, 5:54 PM
A little while ago I came across this website maintained by a group of journalism students, business writers and bloggers in Hong Kong providing "frontline coverage" of the current WTO meetings. The site provides a mix of on-the-ground reporting, photography, event schedules, and useful digests of global press coverage of the week-long event and surrounding protests. It feels sort of halfway between a citizen journalism site and a professional news outlet. It's amazing how this sort of thing can be created practically overnight.
They have a number of good photo galleries. Here are the Korean farmers jumping into Hong Kong Harbor:
nature magazine says wikipedia about as accurate as encyclopedia britannica 12.14.2005, 3:47 PM
A new and fairly authoritative voice has entered the Wikipedia debate: last week, staff members of the science magazine Nature read through a series of science articles in both Wikipedia and the Encyclopedia Britannica, and decided that Britannica -- the "gold standard" of reference, as they put it -- might not be that much more reliable (we did something similar, though less formal, a couple of months back -- read the first comment). According to an article published today:
Entries were chosen from the websites of Wikipedia and Encyclopaedia Britannica on a broad range of scientific disciplines and sent to a relevant expert for peer review. Each reviewer examined the entry on a single subject from the two encyclopaedias; they were not told which article came from which encyclopaedia. A total of 42 usable reviews were returned out of 50 sent out, and were then examined by Nature's news team. Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively.
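As a back-of-the-envelope check on those counts (the per-article rates below are computed from the figures in the quoted passage, not reported separately here), the averages work out to roughly four errors per Wikipedia article and three per Britannica article:

```python
# Back-of-the-envelope arithmetic on the figures Nature reported:
# 42 usable article pairs, 162 Wikipedia errors, 123 Britannica errors.
reviewed_pairs = 42
errors = {"Wikipedia": 162, "Britannica": 123}

for source, count in errors.items():
    per_article = count / reviewed_pairs
    print(f"{source}: {per_article:.2f} errors per article")
# Wikipedia: 3.86 errors per article
# Britannica: 2.93 errors per article
```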
It's interesting to see Nature coming to the defense of Wikipedia at the same time that so many academics in the humanities and social sciences have spoken out against it: it suggests that the open source culture of academic science has led to a greater tolerance for Wikipedia in the scientific community. Nature's reviewers were not entirely thrilled with Wikipedia: for example, they found the Britannica articles to be much better written and more readable. But they also noted that Britannica's chief problem is the time and effort it takes for the editorial department to update material as a scientific field evolves or changes; Wikipedia updates often occur practically in real time.
One not-so-surprising fact unearthed by Nature's staffers is that the scientific community contains about twice as many Wikipedia users as Wikipedia authors. The best way to ensure that the science in Wikipedia is sound, the magazine argued, is for scientists to commit to writing about what they know.
making games matter 12.14.2005, 3:08 PM
Making Games Matter, a roundtable discussion on the past, present and future of games at Parsons the New School for Design (12/9/05), was a thought-provoking event that brought together an interesting, and heterogeneous, group of experimental game developers, game designers, and seasoned academics. Participants ranged from the creators of Half-Life, Paranoia, and Adventure for the Atari 2600 to theorists of play history and game culture. This meeting was part of DEATHMATCH IN THE STACKS celebrating the launch of The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman, and published by MIT Press. The book is a collection of essays that spans 50 years of game design and game studies.
The need to define the present state of games was central to the conversation. The academics find that games lack a precise vocabulary of their own, and at the same time they question the use of certain terms by game designers. Videogames started outside the academy and exhibit a hybrid nature, especially as they incorporate aspects of many disciplines. Now that they are claiming academic legitimacy, they encounter the "territorial" resistance distinctive of academia. Film or literature, for instance, can be defined in their own terms, but game theory still borrows from other disciplines to define itself. Even though games function as abstract linguistic systems, there is resistance to analyzing and validating them. "Interactive narrative" is a new concept and should be studied as such, not by substituting or superimposing other disciplines onto it.
The term "industry," which kept coming up in the conversation, was questioned by one of the participants, as was the use of the verb "to play" in reference to what one does with a videogame. But do film schools question that film is an industry? What is book publishing, anyway? On the other hand, the interactive nature of games, the fact that the players are part of them, is intimately tied to the notions of pleasure and enjoyment at the core of the concept of playing. New forms of media technology replace each other, but everyone who has played as a child has used some sort of toy, a medium for amusement and imaginative pretense. So, in fact, one does "play" videogames. When these questions were raised, the game designers brought up, as a sort of definer, the differentiation between the industry as producer and the gamer as part of a community. This difference is illustrated in an article by Seth Schiesel, "For the Online Star Wars Game, It's Revenge of the Fans," in The New York Times (12/10/05). He reports that for players of the online Star Wars game, the camaraderie and friendship they developed with other players became far more important than the playing itself, as they formed "relationships that can be hard to replicate in 'real life.'" This affirmation, in itself provocative, raises important questions.
Last month, LucasArts and Sony's online game division, which have run Star Wars Galaxies since its introduction in 2003, unsatisfied with the product's moderate success, radically revamped the game in an attempt to appeal to a younger audience. But to thousands of players, mostly adults, the shifts have meant the destruction of online communities. "We just feel violated," said Carolyn R. Hocke, 46, a marketing Web technician for Ministry Medical Group and St. Michael's Hospital in Stevens Point, Wis. "For them to just come along and destroy our community has prompted a lot of death-in-the-family-type grieving," she said. "They went through the astonishment and denial, then they went to the anger part of it, and now they are going through the sad and helpless part of grieving. I work in the health-care industry, and it's very similar." One of the participants in Making Games Matter referred to games as "stylized social interaction," and Schiesel's report shows a strikingly real side of those interactions.
After the roundtable, there was an event described as "an evening of discussion and playful debate with game critics, game creators, and game players about the past, present, and future of games." The make-up of the group shows a refreshing permeability that academia is reluctant to acknowledge, but that is enriching and opens up all kinds of possibilities for experimentation and innovation well beyond the mere notion of play.
yahoo buys del.icio.us and takes on google? 12.13.2005, 11:01 AM
Just as we were creating a del.icio.us account and linking it to our site, Yahoo announced its purchase of the company. This strategy of buying successful web service start-ups is nothing new for Yahoo (for example, Flickr and eGroups). Del.icio.us's popularity has prompted lots of discussion across the internet, notably on Slashdot as well as on social software blogs.
Del.icio.us started with the simple idea of putting bookmarks on the web. By making them public, it added a social networking component to the experience. Bookmarks, in a way, are an external representation of notable ideas in the mind of the owner.
Yahoo also announced a new partnership with Six Apart, the creators of Movable Type, although it did not purchase the company. Six Apart has optimized its blogging software to work with Yahoo's small business hosting service.
In the end, these strategies make sense for Yahoo and other large media companies, because they are buying proven technologies and a strong user base. Small companies are often more nimble in thought and speed, and thus better able to develop novel technology.
Interestingly, the online discussion seems to frame this event in terms of Yahoo versus Google. Microsoft is noticeably absent from the discussion. Perhaps, as Lisa suggested, they are focused on gaming right now. With each new initiative and acquisition, the debates about the services and strategies of Yahoo and Google sound more like discussions of the competing fall line-ups of ABC, NBC and CBS.
trapped in the closet & the form of the blook 12.12.2005, 6:21 PM
Most of the people reading this blog probably don't give R. Kelly – the R&B singer known for his buttery voice and slippery morals – the attention that I do, which is completely understandable. But unfortunate, because he's very much worth keeping an eye on. For the past six months, he's been engaged in the most formally interesting experiment in pop music in a while. I'm referring, of course, to "Trapped in the Closet". Bear with me a bit: while it might seem like I'm off on a frolic of my own, this will get around to having something to do with the future of the book.
"Trapped in the Closet" is, in brief, Kelly's experiment in making a serialized pop song. The first installment ("Chapter 1") arrived on a CD single last May, squeezed between "Sex in the Kitchen" (a song about sex in the kitchen) and "Sex in the Kitchen (remix)" (another song about sex in the kitchen). It's a three-and-a-half-minute song without a chorus in which Kelly lays out a plot involving multiple adulteries, a closet, and a cell phone that goes off at an inopportune moment. It ends on a cliffhanger – the narrator, hiding in the titular closet, draws his gun as the husband he's cuckolded is about to open the door. Kelly followed this up by releasing four subsequent chapters to the radio – followed shortly by music videos – which, rather than tying up loose ends, drew the plot out wider and wider, piling adultery upon adultery, bringing a gay pastor, a police officer, and leg cramps into the story. All the chapters have the same backing music and run to the same length. And despite revelation after revelation, they all end on a cliffhanger of some sort.
For the next seven chapters, Kelly moved directly to video: he's just released a DVD of the first twelve chapters, in which he and others act out the drama he narrates for thirty-nine minutes. New characters are introduced, the plot becomes steadily more labyrinthine (a midget and an allergy to cherries figure prominently), and it fails to resolve much of anything. Kelly is said to be busy thinking up a dozen more installments to the story. Through it all, the music remains the same; each episode is still a three-minute pop song, and they do get played on the radio as such. Wikipedia has a surprisingly good summary of the twists and turns of Kelly's saga, though it is written in an unfortunate wink-wink-nudge-nudge style. A video of the first chapter is available here; the Web being the Web, there's a lot of so-so derivative work here, and even machinima versions of the videos here.
What's interesting about this to me? Partially the unbridled creativity of the endeavor: to all appearances, R. Kelly would seem to be making up the story as he goes along, happily jumping between media. But the most interesting aspect is that R. Kelly is trying to construct a large story modularly. Each chapter ostensibly should be able to have a life of its own as a pop song. This doesn't quite work, because the plot has become fiendishly complicated, and none save the most devoted can make out exactly what the relationship of Rufus to Bridget might be. Presumably this is why the latest chapters were released straight to DVD, where they play sequentially. But formally each of the chapters remains identical: they all have the same backing music, start with a revelation resolving the previous cliffhanger, and end by setting up a new cliffhanger. These constraints limit what Kelly can do with the song; accordingly, his plots must become progressively more ridiculous to keep the story interesting for his listeners or viewers.
There's an obvious analogy to the serialized novel, a recurring trope around here – we could once again trot out Charles Dickens (to whom Kelly might have been obliquely referring when he explained that " 'Trapped in the Closet' was designed to go around the world sort of like the Ghost of Christmas Past – house to house, this situation to that situation, sometimes exposing people in their regular lives"). But closer at hand, there's clearly a relevant comparison to be made to how entries function within a blog here. Just as "Trapped in the Closet" is composed of modular "chapters", blogs are composed of entries, which are intended to stand by themselves. Kelly's ongoing opera isn't quite a blog, but it's rather similar in structure.
What does it functionally mean to have a serialized narrative? One thing that shouldn't be forgotten when scrutinizing new media forms is that form inevitably inflicts itself on content. Another: the example par excellence of the serialized narrative is the soap opera, unglamorous as that might be. Because R. Kelly has to end each chapter on a cliffhanger, his plot must become ever more convoluted with every chapter. Watching the thirty-nine minutes of Trapped in the Closet Chapters 1–12 is exhausting because of this: a three-minute bon bon of plot becomes cloyingly sweet over time. At thirty-nine minutes, Kelly's DVD should feel like a movie. It doesn't: its repetitiveness makes it feel like something else entirely, something that we haven't quite seen before. Does it work? It's hard to say.
There's no lack of connection between the serialized narrative and the new media forms we survey here (note, for example, Lisa's post from today). I'm most interested in the formal problem that arises from the publishing industry's latest bad idea: making books out of blogs. This does seem appealingly simple: people are writing online; if they're good and they've written enough, you can slap a cover on it and call it a book. It turns out, however, that a book is more than the sum of its parts. I'm willing to give R. Kelly the benefit of the doubt with his strange DVD because it doesn't quite feel like anything else. The problem with blogs presented as books, however, is that we expect them to behave like books, which they don't.
An example at hand: a friend gave my girlfriend a copy of Julie & Julia, the book that was made from Julie Powell's blog, in which she reports on her attempts to cook all of the recipes in Julia Child's Mastering the Art of French Cooking. My girlfriend, a self-identified cookbook snob & long-time devotee of Julia Child, was predictably horrified, and has spent the past week complaining about how dreadful the book is. Part of her anger is an issue of substance: she believes that Julia Child should not be dealt with so flippantly. But part of what makes her angry is how the book is written. It's not quite episodic – the editor wasn't so sloppy as to string together a series of blog posts and call it a book – but it does inherit much of its character from its episodic origin, which is what brings me back around to R. Kelly.
What makes a blog readable isn't the same thing that makes a book readable. The two forms have different concerns: on a blog, an enormous part of the writer's task is to make sure that what's written about is interesting enough that readers keep coming back. A reader might start reading a blog at any point, so this is an ongoing concern. (Thus R. Kelly's cliffhangers.) This isn't nearly as necessary with a physical book: readers still need to be hooked by the concept of the book, but generally you don't need to keep hooking them.
It might be best explained by looking at the difference between Mastering the Art of French Cooking & Powell's book. The former was conceived as a unified whole: it's a single big idea, elucidated in steps, from the simple to the complicated. Later parts of the book build upon the earlier ones: they don't work well by themselves unless you've already absorbed the earlier information. Julie & Julia is constructed as a series of snapshots from the life of the author, each of which seeks to be individually interesting in and of itself. How does this play out in the pages of the book? An easy example: Powell's sex life keeps popping up in a rather gratuitous fashion. The subject isn't without relevance in a culinary work (M.F.K. Fisher could pull off this sort of thing astonishingly well, for example); rather, it's the way in which it's constantly presented in passing. This makes perfect sense for a blog: a dash of sex spices up a blog entry nicely, and will keep the readers coming back for more. A blog is explicitly built on a relationship between the reader and the writer: the writer can respond to the readers. This doesn't work so well in a published book: this sort of interjection, rather than serving to keep the reader hooked, feels like a constant distraction in a book not explicitly about food & sex. The reader's already bought the book. They don't need to be hooked again.
(Something of a counterexample: one of the most vexing things I found about Thomas de Zengotita's Mediated (which we recently discussed at the Institute) was the style in which it's written. Every page or so there's a pithy, one-sentence paragraph. These zingers are employed over the 200 pages of the book; for the reader, it becomes immensely wearing. But just as a thought-experiment: if Zengotita had chopped the book up into page-sized chunks and turned it into a blog, the single-sentence zingers probably wouldn't have been so bothersome; I might not have noticed them enough to comment on them. I've never found Zengotita's much shorter essays in Harper's annoyingly written. Some traits only become visible with length or time.)
While it's very easy to fuse the words blog and book to get blook, that doesn't automatically mean that a successful blog will become a successful book (or vice versa). These are very different forms. What could Julie Powell have learned from R. Kelly, besides any number of things which can't be printed in a family-oriented blog? First, it's a difficult job to make a coherent work out of self-contained pieces. It's possible that R. Kelly could wrap up all of his narrative loose ends in future chapters, but I'm not holding my breath. Something else: even if it lacks any Aristotelian unities, "Trapped in the Closet" is interesting because it's unique. Nobody else is making serialized pop music videos: we have nothing to judge it against, so it has novelty. (Yes, this might be damning with faint praise – that's the other side of the coin.) A blog turned into a book doesn't have that same sort of novelty. We end up judging it against the criteria by which we'd judge any other book – we compare Powell's book to M. F. K. Fisher, though we wouldn't have thought to do that with her blog. The blook inevitably suffers, because the content has been stuffed into a form it doesn't quite fit. Let blogs be blogs.
A question to throw out to end this with: given its modular construction, could a blog develop the sort of big ideas that the physical book excels at moving around?
no more shrek figures with your fries:
Disney wants to digitize and serialize the happy meal giveaway 12.12.2005, 2:14 PM
A Dec 6th article in New Scientist notes that patents filed by Disney last April reveal plans to drip-feed entertainment into the handheld video players of children eating in McDonalds. The patent suggests that instead of giving out toys with Happy Meals, McDonalds might provide installments of a Disney tale: the child would only get the full story by coming back to the restaurant a number of times to collect all the installments. Here's some text from the patent:
...the downloading of small sections or parts of content can be spread out over a long period of time, e.g., 5 days. Each time a different part of the content, such as a movie, is downloaded, until the entire movie is accumulated. Thus, as a promotional program with a venue, such as McDonald's.RTM. restaurant, a video, video game, new character for a game, etc., can be sent to the portable media player through a wireless internet connection... as an alternative to giving out toys with Happy Meals or some other promotion. The foregoing may be accomplished each time the player is within range of a Wi Fi or other wireless access point. The reward for eating at a restaurant, for example, could be the automatic downloading of a segment of a movie or the like...
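The accumulation scheme the patent describes is simple enough to sketch. Here is a hypothetical illustration (all names invented, not taken from the patent): each visit within range of the restaurant's wireless access point delivers one more segment, and the content unlocks only once every segment has arrived.

```python
# Hypothetical sketch of the patent's drip-feed scheme: one content
# segment is delivered per visit, and playback of the whole is possible
# only once all segments have been accumulated.

class SerializedDownload:
    def __init__(self, title, total_segments):
        self.title = title
        self.total_segments = total_segments
        self.received = set()          # segment ids collected so far

    def on_visit(self, segment_id):
        """Called when the player is in Wi-Fi range and a segment arrives."""
        self.received.add(segment_id)

    def is_complete(self):
        return len(self.received) == self.total_segments

movie = SerializedDownload("Example Feature", total_segments=5)
for day in range(5):                   # e.g. five restaurant visits
    movie.on_visit(day)
print(movie.is_complete())             # True only after all five visits
```

The commercial logic is in the incompleteness: the child has a reason to come back until `is_complete()` is true.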
Hmm. Some small issues to be worked through here -- like identifying that elusive target market of parents willing to hand their child a video ipod while he or she is eating a cheeseburger and fries. But if this is a real direction for the future, what might it portend? Will Disney tales distributed on the installment plan capture the interest of children as much as small plastic figurines representing the main characters of their latest Disney experience? And what's ultimately better for the development of a young imagination, a small plastic Shrek or five minutes from a mini-Shrek video (the choice of "neither" is not an option here)? Can we imagine such a distribution method returning us to the nineteenth-century serialization mania prompted by Dickens's chapter-by-chapter account of the death of Little Nell?
image: the death of little nell from Dickens's The Old Curiosity Shop, 1840
wikipedia update: author of seigenthaler smear confesses 12.12.2005, 10:14 AM
According to a Dec 11 New York Times article, Daniel Brandt, a book indexer who runs the site Wikipedia Watch, helped to flush out the man who posted the false biography of USA Today and Freedom Forum founder John Seigenthaler on Wikipedia. After Brandt discovered the post issued from a small delivery company in Nashville, the man in question -- 38-year-old Brian Chase -- sent a letter of apology to Seigenthaler and resigned from his job as operations manager at the company.
According to the Times, Chase claims that he didn't realize that Wikipedia was used as a serious research tool: he posted the information to shock a co-worker who was familiar with the Seigenthaler family. Seigenthaler, who complained in a USA Today editorial last week about the protections afforded to the "volunteer vandals" who post anonymously in cyberspace, told the New York Times that he would not seek damages from Chase.
Responding to the fallout from Seigenthaler's USA Today editorial, Wikipedia founder Jimmy Wales changed Wikipedia's policies so that posters must now all be registered with Wikipedia. But, as Brandt shows, it takes work to remain anonymous in cyberspace. Though I'm not sure that I believe Chase's professed astonishment that anyone would take his post seriously (why else would it shock his co-worker?), it seems clear that he didn't think what he was doing was so outrageous that he ought to make a serious effort to hide his tracks.
Meanwhile, Wales has become somewhat irked by Seigenthaler's continuing attacks on Wikipedia. Posting to the threaded discussion of the issue on the mailing list of the Association for Internet Researchers, Wikipedia's founder expressed exasperation about Seigenthaler's telling the Associated Press this morning that "Wikipedia is inviting [more regulation of the internet] by its allowing irresponsible vandals to write anything they want about anybody." Wales wrote:
*sigh* Facts about our policies on vandalism are not hard to come by. A statement like Seigenthaler's, a statement that is egregiously false, would not last long at all at Wikipedia.
For the record, it is just absurd to say that Wikipedia allows "irresponsible vandals to write anything they want about anybody."
ElectraPress 12.12.2005, 2:36 AM
Kathleen Fitzpatrick has put forth a very exciting proposal calling for the formation of an electronic academic press. Recognizing the crisis in academic publishing, particularly in the humanities, Fitzpatrick argues that:
The choice that we in the humanities are left with is to remain tethered to a dying system or to move forward into a mode of publishing and distribution that will remain economically and intellectually supportable into the future.
i've got my fingers crossed that Kathleen and her future colleagues have the courage to go way beyond PDF and print-on-demand; the more Electrapress embraces new forms of born-digital documents, especially in an open-access publishing environment, the more interesting the new enterprise will be.
class, cheating and gaming 12.09.2005, 3:20 PM
The New York Times reports that a company in China is hiring people to play Massively Multiplayer Online Games (MMOGs), like World of Warcraft or EverQuest. Employees develop avatars (or characters) and earn resources. The company then sells these efforts to affluent online gamers who lack the time or inclination to play the early stages of the games themselves.
Finding hacks or ways to get around the intended game play is nothing new. I will confess that I have used cheat codes and hacks in playing video games. One of the first I ever used was in Super Mario Bros. on the original Nintendo Entertainment System; the multiple 1-Up trick in World 3-1 was a big favorite.
The article also briefly mentions something that I've been fascinated by: selling the results of your game play on auction sites such as eBay. These services have turned game play into a commodity, and we can actually determine the valuations and costs of game play.
It made me think about the character Hiro Protagonist in Neal Stephenson's Snow Crash, a pizza delivery guy in the real world and a lethal warrior in the "Metaverse." He was an exception to the norm: socio-economic status usually carried over into virtual reality because more realistic avatars were expensive. To actually see that happen in the game spaces of MMOGs through the purchase of advanced characters is quite amazing.
Why do I feel that these gamers are cheating? In the era of non-linear information, I select and read only the parts of a text I deem relevant. I've skipped over parts of movies and watched other parts again and again. Isn't this the same thing? The troubling aspect of this phenomenon is that it brings class differentiation into game space. Although gaming itself is a leisure activity, the idea that you can spend your way to success in an MMOG removes the perceived innocence of that game space.
where we've been, where we're going 12.09.2005, 12:54 PM
This past week at if:book we've been thinking a lot about the relationship between this weblog and the work we do. We decided that while if:book has done a fine job reflecting and provoking the conversations we have at the Institute, we wanted to make sure that it also seems as coherent to our readers as it does to us. With that in mind, we've decided to begin posting a weekly roundup of our blog posts, in which we synthesize (as much as possible) what we've been thinking and talking about from Monday to Friday.
So here goes. This week we spent a lot of time reflecting on simulation and virtuality. In part, this reflection grew out of our collective reading of Tom de Zengotita's book Mediated, which discusses (among other things) the link between alienation from the "real" through digital mediation and increased solipsism. Bob seemed especially interested in the dialectical relationship between, on one hand, the opportunity for access afforded by ever-more sophisticated forms of simulation, and, on the other, the sense that something must be lost as the encounter with the "real" recedes entirely.
This, in turn, led to further conversation about what we might think of as the "loss of the real" in the transition from books on paper to books on a computer screen. On one hand, there seems to be a tremendous amount of anxiety that Google Book Search might somehow make actual books irrelevant and thus destroy reading and writing practices linked to the bound book. On the other hand, one could take the position of Cory Doctorow that books as objects are overrated, and challenge the idea that a book needs to be digitally embodied to be "real."
As the debate over Google Book Search continually reminds us, one of the most challenging things in sifting through discussions of emerging media forms is learning to tell the difference between nostalgia and useful critical insight. Often the two are hopelessly intertwined; in this week's debates about Wikipedia, for example, discussion of how to make the open-source encyclopedia more useful was often tempered by the suggestion that encyclopedias of the past were always superior to Wikipedia, an assertion easily challenged by a quick browse through some old encyclopedias.
Finally, I want to mention that we at last got around to setting up a del.icio.us account. A formal link will be up on the blog soon, but you can take a look now. It will expand quickly.
the poetry archive - nice but a bit mixed up 12.09.2005, 11:40 AM
Last week U.K. Poet Laureate Andrew Motion and recording producer Richard Carrington rolled out The Poetry Archive, a free (sort of) web library that aims to be "the world's premier online collection of recordings of poets reading their work" -- "to help make poetry accessible, relevant and enjoyable to a wide audience." The archive naturally focuses on British poets, but offers a significant selection of English-language writers from the U.S. and the British Commonwealth countries. Seamus Heaney is serving as president of the archive.
For each poet, a few streamable mp3s are available, including some rare historic recordings dating back to the earliest days of sound capture, from Robert Browning to Langston Hughes. The archive also curates a modest collection of children's poetry and invites teachers to use these and other recordings in the classroom, providing tips for contacting poets so schools, booksellers and community organizations (again, this is focused on Great Britain) can arrange readings and workshops. While some of this advice seems useful, it reads more like a public relations/education services page on a publisher's website. Is this a public archive or a poets' guild?
The Poetry Archive is a nice resource as both historic repository and contemporary showcase, but the mission seems a bit muddled. They say they're an archive, but it feels more like a CD store.
Throughout, the archive seems an odd mix of public service and professional leverage for contemporary poets. That's all well and good, but it could stand a bit more of the former. Beyond the free audio offerings (which are quite skimpy), CDs are available for purchase that include a much larger selection of recordings. The archive is non-profit, and they seem to be counting in significant part on these sales to maintain operations. Still, I would add more free audio, and focus on selling individual recordings and playlists as downloads -- the iTunes model. Having streaming teasers and for-sale CDs as the only distribution models seems wrong-headed, and a bit disingenuous if they are to call themselves an archive. It would also be smart to sell subscriptions to the entire archive, with institutional rates for schools. Podcasting would also be a good idea -- a poem a day to take with you on your iPod, weaving poetry into daily life.
There's a growing demand on the web for the spoken word, from audiobooks and podcasts to performed poetry. The archive would probably do a lot better if it made more of its collection free, and at the same time provided a greater variety of ways to purchase recordings.
tipping point? 12.08.2005, 7:36 AM
An article by Eileen Gifford Fenton and Roger C. Schonfeld in this morning's Inside Higher Ed claims that over the past year, libraries have accelerated the transition towards purchasing only electronic journals, leaving many publishers of print journals scrambling to make the transition to an online format:
Faced with resource constraints, librarians have been required to make hard choices, electing not to purchase the print version but only to license electronic access to many journals -- a step more easily made in light of growing faculty acceptance of the electronic format. Consequently, especially in the sciences, but increasingly even in the humanities, library demand for print has begun to fall. As demand for print journals continues to decline and economies of scale of print collections are lost, there is likely to be a tipping point at which continued collecting of print no longer makes sense and libraries begin to rely only upon journals that are available electronically.
According to Fenton and Schonfeld, this imminent "tipping point" will be a good thing for larger publishing houses which have already begun to embrace an electronic-only format, but smaller nonprofit publishers might "suffer dramatically" if they don't have the means to convert to an electronic format in time. If they fail, and no one is positioned to help them, "the alternative may be the replacement of many of these journals with blogs, repositories, or other less formal distribution models."
Fenton and Schonfeld's point that electronic distribution might substantially change the format of some smaller journals echoes other expressions of concern about the rise of "informal" academic journals and repositories, mainly voiced by scientists who worry about the decline of peer review. Most notably, the Royal Society of London issued a statement on Nov. 24 warning that peer-reviewed scientific journals were threatened by the rise of "open access journals, archives and repositories."
According to the Royal Society, the main problem in the sciences is that government and nonprofit funding organizations are pressing researchers to publish in open-access journals, in order to "stop commercial publishers from making profits from the publication of research that has been funded from the public purse." While this is a noble principle, the Society argued, it undermines the foundations of peer review and compels scientists to publish in formats that might be unsustainable:
The worst-case scenario is that funders could force a rapid change in practice, which encourages the introduction of new journals, archives and repositories that cannot be sustained in the long term, but which simultaneously forces the closure of existing peer-reviewed journals that have a long-track record for gradually evolving in response to the needs of the research community over the past 340 years. That would be disastrous for the research community.
There's more than a whiff of resistance to change in the Royal Society's citing of 340 years of precedent; more to the point, however, their position statement downplays the depth of the fundamental opposition between the open access movement in science and traditional journals. As Roger Chartier notes in a recent issue of Critical Inquiry, "Two different logics are at issue here: the logic of free communication, which is associated with the ideal of the Enlightenment that upheld the sharing of knowledge, and the logic of publishing based on the notion of author's rights and commercial gain."
As we've discussed previously on if:book, the fate of peer review in the electronic age is an open question: as long as peer review is tied to the logic of publishing, its fate will be determined at least as much by the still-evolving market for electronic distribution as by the needs of the various research communities that have traditionally valued it as a method of assessment.
pulitzers will accept online journalism 12.07.2005, 4:45 PM
Online news is now fair game for all fourteen journalism categories of the Pulitzer Prize (previously only the Public Service category accepted online entries). However, online portions of prize submissions must be text-based, and the only web-exclusive content accepted will be in the breaking news reporting and breaking news photography categories. But this presumably opens the door to some Katrina-related Pulitzers this April. I would put my bets on nola.com, the New Orleans Times-Picayune site that kept reports flying online throughout the hurricane.
Of course, the significance of this is mainly symbolic. When the super-prestigious Pulitzer (that's him to the right) starts to re-align its operations, you know there are bigger plate tectonics at work. This would seem to herald an eventual embrace of blogs, most obviously in the areas of commentary, beat reporting, community service, and explanatory reporting (though investigative reporting may not be far off). The committee would do well to consider adding a "news analysis" category for all the fantastic websites, many of them blogs, that help readers make sense of the news and act as a collective watchdog for the press.
Also, while the Pulitzer changes evince a clear preference for the written word, it seems inevitable that inter-media journalism will continue to gain in both quality and legitimacy. We'll probably look back on all the Katrina coverage as the watershed moment. Newspapers (some of them anyway) will figure out that to stay relevant, and distinctive enough not to be pulled apart by aggregators like Google or Yahoo news search, they will have to weave a richer tapestry of traditional reporting, commentary, features, and rich multimedia: a unique window to the world.
Nola.com didn't just provide good, constant coverage, it saved lives. It was an indispensable, unique portal that could not be matched by any aggregator (though harnessing the power of aggregation is part of what made it successful). The crisis of the hurricane put in relief what could be a more everyday strategy for newspapers. The NY Times is currently experimenting with this, developing a range of multimedia features and cordoning off premium content behind its Select pay wall. While I don't think they've yet figured out the right combination of premium content to attract large numbers of paying web subscribers, their efforts shouldn't necessarily be dismissed.
Discussions on the future of the news industry usually center around business models and the problem of solvency with a web-based model. These questions are by no means trivial, but what they tend to leave out is how the evolving forms of journalism might affect what readers consider valuable. And value is, after all, what you can charge for. It's fatalistic to assume that the web's entropic power will just continue to wear down news institutions until they vanish. The tendency on the web toward fragmentation is indeed strong, but I wouldn't underestimate the attraction of a quality product.
A couple of years ago, file sharing seemed to spell doom for the music industry, but today online music retailers are outselling most physical stores. Perhaps there is a way for news as well, but the news will have to change. Dan Gillmor is someone who has understood this for quite some time, and I quote from a rather prescient opinion piece he wrote back in 1997 when the Pulitzers were just beginning to wonder what to do about all this new media (this came up today on the Poynter Online-News list):
When we take journalism into the digital realm, media distinctions lose their meaning. My newspaper is creating multimedia journalism, including video reports, for our Web site. We strongly believe that the online component of our work augments what we sometimes call the "dead-tree" edition, the newspaper itself. Meanwhile, CNN is running text articles on its Web site, adding context to video reports.
So you have to ask a simple question or two: Online, what's a newspaper? What's a broadcaster?
Suppose CNN posts a particularly fine video report on its Web site, augmented by old-fashioned text and graphics. If the Pulitzer Prizes are open to online content, the CNN report should be just as valid an entry as, say, a newspaper series posted online and augmented with video.
And what about the occasionally exceptional journalism we're seeing from Web sites (or on CD-ROMs) produced by magazines, newsletters, online-only companies or even self-appointed gadflies? Corporate propaganda obviously will fail the Pulitzer test, but is a Microsoft-sponsored expose of venality by a competitor automatically invalid when it's posted on the Microsoft Network news site or MSNBC? Drawing these lines will take serious wisdom, unless the Pulitzer people decide simply to ignore trends and keep the prizes the way they are, in which case the awards will become quaint - or worse, irrelevant.
I'm also intrigued by another change made by the Pulitzer committee (from the A.P.):
In a separate change, the upcoming Pulitzer guidelines for the feature writing category will give ''prime consideration to quality of writing, originality and concision.'' The previous guidelines gave ''prime consideration to high literary quality and originality.''
Drop the "literary" and add "concision." A move to brevity and a more colloquial character are already greatly in evidence in the blogosphere and it's beginning to feed back into the establishment press. Employing once again the trusty old Pulitzer as barometer, this suggests that that most basic of journalistic forms -- "the story" -- is changing.
a new way to pen pal 12.07.2005, 12:07 PM
Here is another example of users latching onto unexpected features of a technology. Time magazine reports on the surprising popularity of their "Skype Me" mode which allows Skype users to basically cold call each other. This feature seems to be popular with people abroad (especially in China) looking for opportunities to practice their English with people in the US, in a way that resembles pen pals.
One might say this is really just one big chat room, although chat rooms are still text-based. The interesting part of the Skype Me community is that it shifts pen pals from a text-based activity to an oral one. The impact that Skype has on pen pals highlights the interplay that digital technology encourages between written and oral forms. Is this a case of the Internet expanding the reach of a community, or another example of technology reducing the emphasis on developing writing skills?
google libraries podcast now available 12.07.2005, 11:33 AM
interview with cory doctorow in openbusiness 12.06.2005, 10:10 AM
There's an interview with Cory Doctorow in Openbusiness this morning. Doctorow, who distributes his books for free on the internet, envisions a future in which writers see free electronic distribution as a valuable component of their writing and publishing process. This means, in turn, that writers and publishers need to realize that ebooks and paper books have distinct differences:
Ebooks need to embrace their nature. The distinctive value of ebooks is orthogonal to the value of paper books, and it revolves around the mix-ability and send-ability of electronic text. The more you constrain an ebook's distinctive value propositions -- that is, the more you restrict a reader's ability to copy, transport or transform an ebook -- the more it has to be valued on the same axes as a paper-book. Ebooks *fail* on those axes.
On first read, I thought that Doctorow, much like Julia Keller in her Nov. 27 Chicago Tribune article, wanted to have it both ways: he acknowledges that, in some ways, ebooks challenge the idea of the paper book, but he also suggests that the paper book will remain unaffected by these challenges. But then I read more of Doctorow's ideas about writing, and realized that, for Doctorow, the malleability of the digital format only draws attention to the fact that books are not always as "congealed" as their material nature suggests:
I take the view that the book is a "practice" -- a collection of social and economic and artistic activities -- and not an "object." Viewing the book as a "practice" instead of an object is a pretty radical notion, and it begs the question: just what the hell is a book?
I like this idea of the book as practice, though I don't think it's an idea that would, or could, be embraced by all writers. It's interesting to ponder the ways in which some writers are much more invested in the "thingness" of books than others -- usually, I find myself thinking about the kinds of readers who tend to be more invested in the idea of books as objects.
google on the air 12.06.2005, 12:34 AM
Open Source's hour on the Googlization of libraries was refreshingly light on the copyright issue and heavier on questions about research, reading, the value of libraries, and the public interest. With its book-scanning project, Google is a private company taking on the responsibilities of a public utility, and Siva Vaidhyanathan came down hard on one of the company's chief legal reps for the mystery shrouding their operations (scanning technology, algorithms and ranking system are all kept secret). The rep reasonably replied that Google is not the only digitization project in town and that none of its library partnerships are exclusive. But most of his points were pretty obvious PR boilerplate about Google's altruism and gosh darn love of books. Hearing the counsel's slick defense, your gut tells you it's right to be suspicious of Google and to keep demanding more transparency, clearer privacy standards and so on. If we're going to let this much information come into the hands of one corporation, we need to be very active watchdogs.
Our friend Karen Schneider then joined the fray and as usual brought her sage librarian's perspective. She's thrilled by the possibilities of Google Book Search, seeing as it solves the fundamental problem of library science: that you can only search the metadata, not the texts themselves. But her enthusiasm is tempered by concerns about privatization similar to Siva's and a conviction that a research service like Google can never replace good librarianship and good physical libraries. She also took issue with the fact that Book Search doesn't link to other library-related search services like Open Worldcat. She has her own wrap-up of the show on her blog.
Rounding out the discussion was Matthew G. Kirschenbaum, a cybertext studies blogger and professor of English at the University of Maryland. Kirschenbaum addressed the question of how Google, and the web in general, might be changing, possibly eroding, our reading practices. He nicely put the question in perspective, suggesting that scattershot, inter-textual, "snippety" reading is in fact the older kind of reading, and that the idea of sustained, deeply immersed involvement with a single text is largely a romantic notion tied to the rise of the novel in the 18th century.
A satisfying hour, all in all, of the sort we should be having more often. It was fun brainstorming with Brendan Greeley, the Open Source "blogger-in-chief," on how to put the show together. Their whole bit about reaching out to the blogosphere for ideas and inspiration isn't just talk. They put their money where their mouth is. I'll link to the podcast when it becomes available.
image: Real Gabinete Português de Literatura, Rio de Janeiro - Claudio Lara via Flickr
thinking about google books: tonight at 7 on radio open source 12.05.2005, 4:58 PM
While visiting the Experimental Television Center in upstate New York this past weekend, Lisa found a wonderful relic in a used book shop in Owego, NY -- a small, leatherbound volume from 1962 entitled "Computers," which IBM used to give out as a complimentary item. An introductory note on the opening page reads:
The machines do not think -- but they are one of the greatest aids to the men who do think ever invented! Calculations which would take men thousands of hours -- sometimes thousands of years -- to perform can be handled in moments, freeing scientists, technicians, engineers, businessmen, and strategists to think about using the results.
This echoes Vannevar Bush's seminal 1945 essay on computing and networked knowledge, "As We May Think", which more or less prefigured the internet, web search, and now, the migration of print libraries to the world wide web. Google Book Search opens up fantastic possibilities for research and accessibility, enabling readers to find in seconds what before might have taken them hours, days or weeks. Yet it also promises to transform the very way we conceive of books and libraries, shaking the foundations of major institutions. Will making books searchable online give us more time to think about the results of our research, or will it change the entire way we think? By putting whole books online, do we begin the steady process of disintegrating the idea of the book as a bounded whole, reducing it to just a sequence of text in a massive database?
The debate thus far has focused too much on the legal ramifications -- helped in part by a couple of high-profile lawsuits from authors and publishers -- failing to take into consideration the larger cognitive, cultural and institutional questions. Those questions will hopefully be given ample air time tonight on Radio Open Source.
more on wikipedia 12.05.2005, 12:41 PM
As summarized by a Dec. 5 article in CNET, last week was a tough one for Wikipedia -- on Wednesday, a USA Today editorial by John Seigenthaler called Wikipedia "irresponsible" for not catching significant mistakes in his biography, and on Thursday, the Wikipedia community got up in arms after discovering that former MTV VJ and longtime podcaster Adam Curry had edited out references to other podcasters in an article about the medium.
In response to the hullabaloo, Wikipedia founder Jimmy Wales now plans to bar anonymous users from creating new articles. The change, which went into effect today, could possibly prevent a repeat of the Seigenthaler debacle; now that Wikipedia would have a record of who posted what, presumably people might be less likely to post potentially libelous material. According to Wales, almost all users who post to Wikipedia are already registered users, so this won't represent a major change to Wikipedia in practice. Whether or not this is the beginning of a series of changes to Wikipedia that push it away from its "hive mind" origins remains to be seen.
I've been surprised at the amount of Wikipedia-bashing that's occurred over the past few days. In a historical moment when there's so much distortion of "official" information, there's something peculiar about this sudden outrage over the unreliability of an open-source information system. Mostly, the conversation seems to have shifted how people think about Wikipedia. Once an information resource developed by and for "us," it's now an unreliable threat to the idea of truth imposed on us by an unholy alliance between "volunteer vandals" (Seigenthaler's phrase) and the outlaw Jimmy Wales. This shift is exemplified by the post that begins a discussion of Wikipedia that took place over the past several days on the Association of Internet Researchers listserv. The scholar who posted suggested that researchers boycott Wikipedia and prohibit their students from using the site as well until Wikipedia develops "an appropriate way to monitor contributions." In response, another poster noted that rather than boycotting Wikipedia, it might be better to monitor the site -- or better still, write for it.
Another comment worthy of consideration from that same discussion: in a post to the same AOIR listserv, Paul Jones notes that in the 1960s World Book Encyclopedia, RCA employees wrote the entry on television -- scarcely mentioning television pioneer Philo Farnsworth, longtime nemesis of RCA. "Wikipedia's failings are part of a public debate," Jones writes. "Such was not the case with World Book to my knowledge." In this regard, the flak over Wikipedia might be considered a good thing: at least it gives those concerned with the construction of facts the opportunity to engage with the issue. I'm just not sure that making Wikipedia the enemy contributes that much to the debate.
i am the person of the year 12.05.2005, 12:33 PM
Time magazine is allowing anyone to submit photos of people they want to be "Person of the Year" to be projected on a billboard in Times Square. However, the website states that what they really want is to have people submit photos of themselves. All the photos that are selected to be projected will be photographed by webcam and their owners will be contacted. The images can be viewed, printed and sent to friends.
If the chance of seeing your image on a giant billboard in Times Square in real time is small, what is the difference between having Time photoshop your face onto its cover and doing it yourself? Is it the idea of projecting your image onto a billboard (which can be simulated as well)?
Is this Time magazine diminishing its role as information filter, or is it an established news outlet recognizing the idea that anyone can be a publisher?
are we real or are we memorex 12.04.2005, 4:01 PM
i saw four live performances and a dozen gallery shows over the past few days; one theme kept coming up-- what is the relationship of simulated reality to reality. here are some highlights and weekend musings.
thursday night: "Supervision," a play by the Builders Association and DBox about the infosphere which seems to know more about us than we do -- among other things "it" never forgets and rarely plays mash-up with our memories the way human brains are wont to do. the play didn't shed much light on what we could or should do about the encroaching infosphere but there was one amazing moment when video started shooting from left to right across the blank wall behind the actors. within moments a complete set was "constructed" out of video projections -- so seamlessly joined at the edges and so perfectly shot for the purpose that you quickly forgot you were looking at video.
friday night: Nu Voices, six guys making amazing house music, including digitized-sounding vocals, entirely with their voices. one of the group, Masai Electro, eerily imitated the sounds laurie anderson makes with her vocoder or that DJs make when they process vocals to sound robotic. the crowd loved it, which made me wonder why we are so excited about hearing a human pretend to be a machine. i asked masai electro why he thinks the audience likes what he does so much. he had never been asked the question before and evidently hadn't thought about it, but then spontaneously answered "because that's where we're going" -- meaning that humans are becoming machines, or at least are becoming "at one" with them.
saturday afternoon: Clifford Ross' very large landscapes (13' x 6') made with a super high resolution surveillance camera. a modern attempt at hudson river school lush landscapes. because of their size and detail, you feel as if you are looking out a window at reality; makes you long for the "natural world" most of us rarely encounter.
left with a bunch of questions
does it make a difference if our experience is "real" or "simulated"? does that way of looking at things even make sense anymore? when we manage to add the smell of fresh air, the sound of the wind, the rustle of the grass, the bird in flight and the ability to walk around in life-size 3-D spaces to the clifford ross photos, what will be the meaningful difference between walking in the countryside and opening the latest "you are there" coffee table book of the future? in a world with limited resources i can see the value of substituting vicarious travel for the real thing (after all if all 7 billion of us traipsed out to the galapagos during our lifetimes, the "original" would be overrun and despoiled, turning it into its opposite). but what does it mean if almost all of our experience is technologically simulated and/or mediated?
Pedro Meyer, in his comment about digitally altered photos, says that all images are subjective, which makes altered/not-altered a moot distinction. up until now the boundary between mediated objects and "reality" was pretty obvious, but i wonder if that changes when the scale is life-like and 3D. the Ross photos and the DBox video projections foreshadow life-size media which involves all the senses. the book of the future may not be something we hold in our hands; it might be a 3-dimensional space we can inhabit. does it make any difference if i'm interacting only with simulacra?
IT IN place: muybridge meets typographic man 12.04.2005, 12:45 PM
Alex Itin, friend and former institute artist-in-residence, continues to reinvent the blog as an art form over at IT IN place. Lately, Alex has been experimenting with that much-maligned motif of the early web, the animated GIF. Above: "My Bridge of Words."
the role of note taking in the information age 12.03.2005, 3:19 PM
An article by Ann Blair in a recent issue of Critical Inquiry (vol 31 no 1) discusses the changing conceptions of the function of note-taking from about the sixth century to the present, and ends with a speculation on the way that textual searches (such as Google Book Search) might change practices of note-taking in the twenty-first century. Blair argues that "one of the most significant shifts in the history of note taking" occurred in the beginning of the twentieth century, when the use of notes as memorization aids gave way to the use of notes as an aid to replace the memorization of too-abundant information. With the advent of the net, she notes:
Today we delegate to sources that we consider authoritative the extraction of information on all but a few carefully specialized areas in which we cultivate direct experience and original research. New technologies increasingly enable us to delegate more tasks of remembering to the computer, in that shifting division of labor between human and thing. We have thus mechanized many research tasks. It is possible that further changes would affect even the existence of note taking. At a theoretical extreme, for example, if every text one wanted were constantly available for searching anew, perhaps the note itself, the selection made for later reuse, might play a less prominent role.
The result of this externalization, Blair notes, is that we come to think of long-term memory as something that is stored elsewhere, in "media outside the mind." At the same time, she writes, "notes must be rememorated or absorbed in the short-term memory at least enough to be intelligently integrated into an argument; judgment can only be applied to experiences that are present to the mind."
Blair's article doesn't say that this bifurcation between short-term and long-term memory is a problem: she simply observes it as a phenomenon. But there's a resonance between Blair's article and Naomi Baron's recent Los Angeles Times piece on Google Book Search: both point to the fact that what we have commonly defined as scholarly reflection has increasingly become a process of database management. Baron seems to see reflection and database management as being in tension, though I'm not completely convinced by her argument. Blair, less apocalyptic than Baron, nonetheless gives me something to ponder. What happens to us if (or when) all of our efforts to make the contents of our extrasomatic memory "present to our mind" happen without the mediation of notes? Blair's piece focuses on the epistemology rather than the phenomenology of note taking -- still, she leads me to wonder what happens if the mediating function of the note is lost, when the triangular relation between book, scholar and note becomes a relation between database and user.
machinima's new wave 12.02.2005, 3:27 PM
"The French Democracy" (also here) is a short film about the Paris riots made entirely inside of a computer game. The game, developed by Peter Molyneux's Lionhead Studios and called simply "The Movies," throws players into the shark pool of Hollywood where they get to manage a studio, tangle with investors, hire and fire actors, and of course, produce and distribute movies. The interesting thing is that the movie-making element has taken on a life of its own as films produced inside the game have circulated through the web as free-standing works, generating their own little communities and fan bases.
This is a fascinating development in the brief history of Machinima, or "machine cinema," a genre of films created inside the engines of popular video games like Halo and The Sims. Basically, you record your game play through a video out feed, edit the footage, and add music and voiceovers, ending up with a totally independent film, often in funny or surreal opposition to the nature of the original game. Bob, for instance, appeared in a Machinima talk show called This Spartan Life, where they talk about art, design and philosophy in the bizarre, apocalyptic landscapes of the Halo game series.
The difference here is that while Machinima is typically made by "hacking" the game engine, "The Movies" provides a dedicated tool kit for making video game-derived films. At the moment, it's fairly primitive, and "The French Democracy" is not as smooth as other Machinima films that have painstakingly fitted voice and sound to create a seamless riff on the game world. The filmmaker is trying to do a lot with a very restricted set of motifs, unable to add his/her own soundtrack and voices, and having only the basic menu of locales, characters, and audio. The final product can feel rather disjointed, a grab bag of film clichés unevenly stitched together into a story. The dialogue comes only in subtitles that move a little too rapidly, Paris looks suspiciously like Manhattan, and the suburbs, with their split-level houses, are unmistakably American.
But the creative effort here is still quite astonishing. You feel you are seeing something in embryo that will eventually come into its own as a full-fledged art form. Already, "The Movies" online community is developing plug-ins for new props, characters, environments and sound. We can assume that the suite of tools, in this game and elsewhere, will only continue to improve until budding auteurs really do have a full virtual film studio at their disposal.
It's important to note that, according to the game's end-user license agreement, all movies made in "The Movies" are effectively owned by Activision, the game's publisher. Filmmakers, then, can aspire to nothing more than pro-bono promotional work for the parent game. So for a truly independent form to emerge, there needs to be some sort of open-source machinima studio where raw game world material is submitted by a community for the express purpose of remixing. You get all the fantastic puppetry of the genre but with no strings attached.
killing the written word? 12.02.2005, 10:41 AM
A November 28 Los Angeles Times editorial by American University linguistics professor Naomi Baron adds another element to the debate over Google Print [now called Google Book Search, though Baron does not use this name]: Baron claims that her students are already clamoring for the abridged, extracted texts and have begun to feel that book-reading is passé. She writes:
Much as automobiles discourage walking, with undeniable consequences for our health and girth, textual snippets-on-demand threaten our need for the larger works from which they are extracted... In an attempt to coax students to search inside real books rather than relying exclusively on the Web for sources, many professors require references to printed works alongside URLs. Now that those "real" full-length publications are increasingly available and searchable online, the distinction between tangible and virtual is evaporating.... Although [the debate over Google Print] is important for the law and the economy, it masks a challenge that some of us find even more troubling: Will effortless random access erode our collective respect for writing as a logical, linear process? Such respect matters because it undergirds modern education, which is premised on thought, evidence and analysis rather than memorization and dogma. Reading successive pages and chapters teaches us how to follow a sustained line of reasoning.
As someone who's struggled to get students to go to the library while writing their papers, I think Baron's making a very important and immediate pedagogical point: what will professors do after Google Book Search allows their students to access bits of "real books" online? Will we simply establish a policy of not allowing the online excerpted material to "count" in our tally of students' assorted research materials?
On the other hand, I can see the benefits of having a student use Google Book Search in their attempt to compile an annotated bibliography for a research project, as long as they were then required to look at a version of the longer text (whether on or off-line). I'm not positive that "effortless random access" needs to be diametrically opposed to instilling the practice of sustained reading. Instead, I think we've got a major educational challenge on our hands whose exact dimensions won't be clear until Google Book Search finally gets going.
Also: thanks to UVM English Professor Richard Parent for posting this article on his blog, which has some interesting ruminations on the future of the book.
insidious tactic #348: charge for web speed 12.02.2005, 8:31 AM
An article in yesterday's Washington Post -- "Executive Wants to Charge for Web Speed" -- brings us back to the question of pipes and the future of the internet. The chief technology officer for BellSouth says telecoms and cable companies ought to be allowed to offer priority deals to individual sites, charging them extra for faster connections. The Post:
Several big technology firms and public interest groups say that approach would enshrine Internet access providers as online toll booths, favoring certain content and shutting out small companies trying to compete with their offerings.
Among these "big technology firms" are Google, Yahoo!, Amazon and eBay, all of whom have pressed the FCC for strong "network neutrality" provisions in the latest round of updates to the 1996 Telecommunications Act. These would forbid discrimination by internet providers against certain kinds of content and services (i.e. the little guys). BellSouth claims to support the provisions, though the statements of its tech officer suggest otherwise.
Turning speed into a bargaining chip will undoubtedly privilege the richer, more powerful companies and stifle competition -- hardly a net-neutral scenario. The providers claim it's no different from an airline offering business class -- it doesn't prevent folks from riding coach and reaching their destination. But we all know how cramped and awful coach is. The truth is that the service providers discriminate against everyone on the web. We're all just freeloaders leeching off their pipes. The only thing that separates Google from the lady blogging about her cat is how much money they can potentially pay for pipe rental. That's where the "priorities" come in.
Moreover, the web is on its way to merging with cable television, and this, in turn, will increase the demand for faster connections that can handle heavy traffic. So "priority" status with the broadband providers will come at an ever increasing premium. That's their ideal business model, allowing them to charge the highest tolls for the use of their infrastructure. That's why the telcos and cable companies want to ensure, through speed-baiting and other screw-tightening tactics, that the net transforms from a messy democratic commons into a streamlined broadcast medium. Alternative media, video blogging, local video artists? These will not be "priorities" in the new internet. Maximum profit for pipe-holders will mean minimum diversity and a one-way web for us.
In a Business Week interview last month, SBC Communications CEO Edward Whitacre expressed what seemed almost like a lust for revenge. Asked, "How concerned are you about Internet upstarts like Google, MSN, Vonage, and others?" he replied:
How do you think they're going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain't going to let them do that because we have spent this capital and we have to have a return on it. So there's going to have to be some mechanism for these people who use these pipes to pay for the portion they're using. Why should they be allowed to use my pipes?
The Internet can't be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!
This makes me worry that discussions about "network neutrality" overlook a more fundamental problem: lack of competition. "That's the voice of someone who doesn't think he has any competitors," says Susan Crawford, a cyberlaw and intellectual property professor at Cardozo Law School who blogs eloquently on these issues. She believes the strategy to promote network neutrality will ultimately fail because it accepts a status quo in which a handful of broadband monopolies dominate the market. "We need to find higher ground," she says.
I think the real fight should be over rights of way and platform competition. There's a clear lack of competition in the last mile -- that's where choice has to exist, and it doesn't now. Even the FCC's own figures reveal that cable modem and DSL providers are responsible for 98% of broadband access in the U.S., and two doesn't make a pool. If the FCC is getting in the way of cross-platform competition, we need to fix that. In a sense, we need to look down -- at the relationship between the provider and the customer -- rather than up at the relationship between the provider and the bits it agrees to carry or block...
...Competition in the market for pipes has to be the issue to focus on, not the neutrality of those pipes once they have been installed. We'll always lose when our argument sounds like asking a regulator to shape the business model of particular companies.
The broadband monopolies have their priorities figured out. Do we?
image: "explosion" (reminded me of fiber optic cable) by The Baboon, via Flickr
open rights group 12.01.2005, 3:04 PM
Becky Hogge writes in Opendemocracy about a new digital rights organization, The Open Rights Group, based in Westminster, Brussels and Geneva. Like the Electronic Frontier Foundation in the United States, the Open Rights Group will address issues such as access, freedom of speech online, and file sharing. Unlike the EFF -- which was initially bankrolled by a small group of believers -- the Open Rights Group was started by a group of 1,000 subscribers who will each pay five pounds a month to get the organization going.
katrina archive on internet archive 12.01.2005, 2:26 PM
The Internet Archive has just established an archive dedicated to preserving the online response to the Katrina catastrophe. According to the Archive:
The Internet Archive and many individual contributors worked together to put together a comprehensive list of websites to create a historical record of the devastation caused by Hurricane Katrina and the massive relief effort which followed. This collection has over 25 million unique pages, all text searchable, from over 1500 sites. The web archive commenced on September 4th.
If you try to link to the Internet Archive today, you might not get through, because everyone is on the site talking about the Grateful Dead's decision to allow free downloading.
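As a side note for anyone who wants to poke at collections like this programmatically: the Internet Archive's Wayback Machine exposes a CDX search API that can list archived snapshots of a given site. This is a later public interface, not something described in the Archive's announcement, and the example site (www.nola.com) is purely illustrative -- a minimal sketch, assuming the documented endpoint and parameters:

```python
# Illustrative sketch (an assumption, not from the Archive's announcement):
# build a query URL against the Wayback Machine's CDX search API to list
# snapshots of a site from the Katrina period. Fetching the URL with any
# HTTP client returns matching capture records.
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def katrina_snapshot_query(url: str) -> str:
    """Return a CDX query URL for snapshots of `url` from September 2005."""
    params = {
        "url": url,              # the site whose captures we want
        "from": "20050904",      # archiving of the collection began September 4th
        "to": "20051001",
        "output": "json",        # JSON rows instead of space-delimited text
        "limit": "10",           # keep the example response small
    }
    return CDX_ENDPOINT + "?" + urlencode(params)

# www.nola.com is a hypothetical example target, not one named by the Archive.
query = katrina_snapshot_query("www.nola.com")
print(query)
```

The function only constructs the URL; actually retrieving it requires network access to web.archive.org.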
freedom forum founder wants less freedom online 12.01.2005, 10:11 AM
For just over four months, a biography of Freedom Forum founder John Seigenthaler that appeared on Wikipedia, Biography.com, and Answers.com claimed -- incorrectly -- that he was once a suspect in the assassinations of both John and Robert Kennedy. Last month, Seigenthaler found out about it, and he got angry. Very angry.
In fact, he got angry enough to write a November 29 editorial in USA Today complaining about Federal laws that protect online corporations like Wikipedia from libel lawsuits and protect the anonymity of the person who published false information about him online.
Don't get me wrong: it's certainly a serious problem that Seigenthaler's biography contained false information (I haven't been able to determine yet whether the assassination rumor is an artifact of the vast Kennedy conspiracy rumor mill, or whether it was a pure invention of the phony biographer -- anyone know?). And the flaws in Wikipedia are a real issue. But I'm still astonished that one of the nation's great free speech advocates seems to be advocating systemic changes to legislation that protects not only prank speech, but political speech online.
Is it that Seigenthaler feels (but does not say) that there is a fundamental difference between print media and online? Or is this a case of someone knowing Seigenthaler's Achilles' heel, and publishing the one rumor about him that would cause him to seemingly contradict his basic principles?
google print on deck at radio open source 12.01.2005, 8:07 AM
Open Source, the excellent public radio program (not to be confused with "Open Source Media") that taps into the blogosphere to generate its shows, has been chatting with me about putting together an hour on the Google library project. Open Source is a unique hybrid, drawing on the best qualities of the blogosphere -- community, transparency, collective wisdom -- to produce an otherwise traditional program of smart talk radio. As host Christopher Lydon puts it, the show is "fused at the brain stem with the world wide web." Or better, it "uses the internet to be a show about the world."
The Google show is set to air live this evening at 7pm (ET) (they also podcast). It's been fun working with them behind the scenes, trying to figure out the right guests and questions for the ideal discussion on Google and its bookish ambitions. My exchange has been with Brendan Greeley, the Radio Open Source "blogger-in-chief" (he's kindly linked to us today on their site). We agreed that the show should avoid getting mired in the usual copyright-focused news peg -- publishers vs. Google etc. -- and focus instead on the bigger questions. At my suggestion, they've invited Siva Vaidhyanathan, who wrote the wonderful piece in the Chronicle of Higher Ed. that I talked about yesterday (see bigger questions). I've also recommended our favorite blogger-librarian, Karen Schneider (who has appeared on the show before), science historian George Dyson, who recently wrote a fascinating essay on Google and artificial intelligence, and a bunch of cybertext studies people: Matthew G. Kirschenbaum, N. Katherine Hayles, Jerome McGann and Johanna Drucker. If all goes well, this could end up being a very interesting hour of discussion. Stay tuned.
UPDATE: Open Source just got a hold of Nicholas Kristof to do an hour this evening on Genocide in Sudan, so the Google piece will be pushed to next week.