the future of new york: can you stand the heat? 06.30.2005, 5:27 PM
Heat and the Heartbeat of the City, a site created by Andrea Polli and commissioned by New Radio and Performing Arts, Inc. for its Turbulence web site, presents a multimedia narrative that imagines the impact global warming will have on the city. The site offers sonifications (sound compositions created by the translation of data to sound) by Polli and a series of video interviews with Dr. Cynthia Rosenzweig regarding the dramatic climate changes that will take place over the next 85 years. The project focuses on Central Park, "one of the country's first locations for climate monitoring. As you listen, you will travel forward in time at an accelerated pace and experience an intensification of heat in sound."
the state of the blog: past, present & future 06.28.2005, 6:37 PM
Since Ben's on vacation (you may have noticed the crickets chirping in his absence), I've been in charge of pruning the comment- and trackback-spam that if:book and the rest of our website generate. Hopefully you haven't noticed much of this around here, but it arrives in ever-increasing volume: lately, we've been getting upwards of twenty spam comments per day. They've also become steadily less coherent: where once they tried to cajole our visitors into trying dubious sexual aids or patronizing online casinos, the latest batch have been streams of random letters linking to websites that don't seem to exist.
To combat the problem (which I imagine is much the same at any blog), we've installed a Movable Type plugin that filters comments and trackbacks. It does a pretty good job: like a spam filter in a mail program, it can guess what spam is, and it learns quickly. One curious piece of its method, however, might have wider repercussions for how we read & use blogs: it automatically suspects comments made on older posts to be comment spam. This is, by and large, correct: there aren't a lot of people finding our old posts and leaving comments on them. But this does feel like we're increasingly killing off old discussions. This ties into my musings from two weeks back, when I wondered how well blogs function as an archive.
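The age heuristic is easy to sketch. The plugin's actual scoring rules aren't documented here, so the function below is a hypothetical illustration (the name, weights, and grace period are all invented): it takes a probability-style score from a text-based filter and nudges it upward the older the post being commented on is.

```python
from datetime import datetime, timedelta

def spam_suspicion(base_score, post_date, now, grace_days=30):
    """Raise a comment's spam score the older the post it targets.

    base_score: probability-like score (0..1) from a text-based filter.
    Comments on posts older than `grace_days` pick up an extra penalty
    that grows with age, capped so age alone never condemns a comment.
    """
    age = now - post_date
    if age <= timedelta(days=grace_days):
        return base_score
    # one tenth of a point per month past the grace period, capped at +0.4
    months_old = (age.days - grace_days) / 30
    return min(1.0, base_score + min(0.4, 0.1 * months_old))
```

A borderline comment (score 0.2) on a week-old post passes through unchanged, while the same comment on a post from last winter climbs toward the spam threshold.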
A discussion at Slashdot zooms out to look at the ever-decreasing signal-to-noise ratio of the soi-disant blogosphere as a whole. Spam blogs – often created simply to drive up Google rankings – are becoming ever more common; just as it's simple for you to create a blog, it's simple for a robot to create a thousand. At what point does the sheer volume of spam start turning users away?
A decent guess, if the history of forms on the web is any indicator, is that something new will arise. Mentioned in the Slashdot discussion is Usenet, the newsgroup-based discussion system. Spam first reared its ugly head on Usenet, and by the late 1990s had almost consumed it. As the level of spam rose, users departed - some, undoubtedly, to the comparatively safer environs of the blogosphere. What comes after blogs?
While we're on the history of blogs: Matt Sharkey has an interesting history of suck.com (here helpfully archived by its creator, Carl Steadman). Suck wasn't a blog as we know them (readers could email the author, but not directly leave comments for others to see), but it did premiere, in 1995, what would become a key concept of the blog: fresh content daily. It also brought snarky semi-anonymous commentators to the Web, and the idea of using hyperlinks for humor. Suck got in five solid years, and the site is arguably an important milestone in the history of how we read online. Browsing through Steadman's archive provides food for thought about archives on the web: while it's still entertaining, you quickly notice that almost every one of the links is broken. Nothing lasts forever.
total recall: managing the memory machine 06.28.2005, 10:24 AM
Bodies in Motion: Memory, Personalization, Mobility and Design, a conference currently taking place at Banff, explores the possibility of "total data memory." The conference gathers nanotechnology researchers, medical researchers, and historians to examine the vast realm of memory materials gathered from increasingly ubiquitous devices: sensors, personal recording devices, and surveillance technologies. The conference imagines a world where information will be gathered by everything around us: our clothes, the walls, perhaps even sensors embedded in our bodies. This plethora of information could be used to construct an exhaustive virtual history. But is that something we want?
What drives the contemporary desire in the technology world for total data memory? How does data memory sit beside new kinds of memory capacities in other materials? Memory is closely linked to histories and the interpretations of history. Some of the best mobile experiences combine local memory, histories and place. What models of memory and mind are used in designing technologies that remember? What are the ethical implications of memory machines? What does this mean in time of war, increased security? How do we include the need, capacity, and desire to forget? How do we include trauma?
Marvelous summary of the questions facing us in the coming age of total recall.
get on your digital soapbox 06.27.2005, 12:37 PM
"What would you say, given one free minute of anonymous, uncensored speech?" the people at One Free Minute want to know. Their project gives you a chance to speak your mind loudly and anonymously in "America's demographically average city: Columbus, Ohio."
According to the site: One Free Minute began as a simple concept: what would happen if the remote speech were connected to public space? Since then it has branched out to be an examination of public speech, an exploration of how cellular technology affects human communication in both negative and positive ways, a hand-made fibreglass sculpture, a web site, a bunch of phone lines, a whole lot of server bandwidth... you get the idea.
The One Free Minute mobile sculpture has a cell phone inside connected to a 200 watt amplifier and speaker. Callers remain connected for exactly one minute and their calls are broadcast through the sculpture's red, Victrola-like speaker. These micro-speeches are either performed live, or broadcast from taped messages. Visit the site to hear examples and to find out how to participate.
who owns ideas? 06.25.2005, 5:26 PM
It is the nature of digital technologies that every use produces a copy. Thus, it is the nature of a copyright regime like the United States', designed to regulate copies, that every use in the digital world produces a copyright question: Has this use been licensed? Is it permitted? And if not permitted, is it "fair"? Thus, reading a book in analog space may be an unregulated act. But reading an e-book is a licensed act, because reading an e-book produces a copy. Lending a book in analog space is an unregulated act. But lending an e-book is presumptively regulated. Selling a book in analog space is an unregulated act. Selling an e-book is not. In all these cases, and many more, ordinary uses that were once beyond the reach of the law now plainly fall within the scope of copyright regulation. The default in the analog world was freedom; the default in the digital world is regulation.
I'm going on a brief hiatus, so that'll be my last link for a little while. But keep checking back - Bob, Kim and Dan will be keeping the home fires burning.
how the web changes your reading habits 06.24.2005, 2:17 PM
An article in yesterday's Christian Science Monitor looks at two research projects currently underway in Palo Alto, California - one at Xerox PARC, the other at Stanford. Both are building tools and devising methods to improve online reading, albeit by different approaches. The PARC project is developing ScentHighlights, an "enhanced skimming" function based on keywords and the associative processes of the human brain. On paper, we highlight important passages, or attach sticky notes, to make them more readily retrievable later on when we're re-reading, studying, or compiling notes. The PARC researchers are taking this a few steps further, exploiting the unique properties (and addressing the unique challenges) of the online reading environment. With ScentHighlights, the computer observes what the reader is highlighting and selects other passages that it thinks might be relevant or useful:
We perform the conceptual highlighting by computing what conceptual keywords are related to each other via word co-occurrence and spreading activation. Spreading activation is a cognitive model developed in psychology to simulate how memory chunks and conceptual items are retrieved in our brain.
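As a rough illustration of the idea (not PARC's actual code; the function names and decay weight are invented), co-occurrence counting plus a single step of spreading activation might look like this:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(sentences):
    """Count how often each pair of words shares a sentence."""
    counts = defaultdict(int)
    for sent in sentences:
        words = sorted(set(sent.lower().split()))
        for a, b in combinations(words, 2):
            counts[(a, b)] += 1
    return counts

def suggest(highlighted, counts, decay=0.5):
    """One step of spreading activation: push energy from the reader's
    highlighted words to their co-occurring neighbours, then rank the
    neighbours by how much energy they received."""
    activation = defaultdict(float)
    for (a, b), n in counts.items():
        if a in highlighted:
            activation[b] += decay * n
        if b in highlighted:
            activation[a] += decay * n
    return sorted((w for w in activation if w not in highlighted),
                  key=lambda w: -activation[w])
```

Given a reader's highlights, the words that most often appear alongside them rise to the top of the suggestion list; a real system would of course filter stopwords and spread activation over several hops.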
While the PARC team is focused on deepening the often fractured experience of reading online, where the amount of text is overwhelming, the Stanford project is experimenting with a method for sustained reading in an environment that can barely handle text at all: the tiny screens of cell phones and mobile devices. Using a technique called RSVP (Rapid Serial Visual Presentation), BuddyBuzz flashes words on the screen one at a time. It takes some getting used to, but apparently, readers can absorb up to 1,000 words per minute. Speed is adjustable, and the program is already set to make the tiny, natural pauses that come at commas and periods. The initial release of BuddyBuzz will syndicate stories from Reuters, CNET and a handful of popular blogs.
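The RSVP technique itself is simple enough to sketch. This is a hypothetical illustration, not BuddyBuzz's code: each word gets a display time derived from the words-per-minute target, with words held longer at commas and sentence-final punctuation.

```python
def rsvp_schedule(text, wpm=400, comma_hold=1.5, period_hold=2.0):
    """Return (word, seconds) pairs for flashing words one at a time.

    The base duration comes from the words-per-minute target; words
    ending a clause or a sentence are held on screen a little longer,
    mimicking the natural pauses at commas and periods.
    """
    base = 60.0 / wpm
    schedule = []
    for word in text.split():
        if word.endswith((".", "!", "?")):
            schedule.append((word, base * period_hold))
        elif word.endswith((",", ";", ":")):
            schedule.append((word, base * comma_hold))
        else:
            schedule.append((word, base))
    return schedule
```

At 600 words per minute, an ordinary word stays on screen for a tenth of a second; a sentence-ending word, twice that.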
american libraries are wired, with doors wide open 06.24.2005, 1:15 PM
From today's NY Times: "Almost All Libraries Offer Free Web Access":
The study, which was conducted by researchers at Florida State University, found that 98.9 percent of libraries offer free public Internet access, up from 21 percent in 1994 and 95 percent in 2002. It also found that 18 percent of libraries have wireless Internet access and 21 percent plan to get it within the next year.
Even in an age of online reading, the library still has tremendous significance as a physical commons. When wi-fi coverage in cities becomes comprehensive, we should still be able to get free access at our local library. Another way that public libraries can stay relevant is to offer free on-site access to pay services: things like Lexis-Nexis, subscription-only web periodicals, and even web-delivered movies and television.
GooglePorn.com? 06.24.2005, 11:07 AM
They say that porn drives technology, but could it possibly figure into Google's expansion into online payment systems? Would that be the end of the cute, cuddly Google we've all come to know and love - our most constant companion on the web? Sam Sugar, author of the adult industry-watching blog SugarBank, says Google would be foolish not to capitalize on this massive underground market, routinely shunned by "respectable" services like PayPal. In an open letter to Google's CEOs, Sugar lays out his arguments and explains how porn could catapult Google to the cutting edge of ecommerce, in much the same way that it helped VHS outmaneuver Betamax.
Banking is a perennial thorn in the side of even the largest and most successful adult websites. All adult companies are overcharged by merchant banks poorly equipped to deal with transactions they consider to be 'high-risk'.
Before PayPal withdrew from offering billing services to adult companies (around the time they were acquired by eBay), they were the preferred customer choice for the websites that offered them as a payment option.
It's hard to justify PayPal's withdrawal on 'moral' grounds given the volume of pornography sold via eBay. The logical assumption is that PayPal's decision to ban adult transactions is due to an inability to handle them well. What is beyond question is that their decision loses them billions a year.
Consumers don't find adult websites easy to trust, and would welcome the ability to buy adult material without sharing their financial information with companies they're unsure of. Google is universally trusted and so, when you launch the Google billing system, the adult industry will rush to use it.
(via Searchblog, who reports that Google already owns GooglePorn.com and similar domains... intrigue!)
google maps with u.s. census data 06.23.2005, 6:31 PM
Another great hack: gCensus.com. The thing I like about this is the fluidity of the data changing across scale and location. As you zoom in and out, or drag across the map, the statistical markers re-cluster, while to the right, "totals for viewable area" (population, housing units, land/water area) shift smoothly. You feel as though you are using a highly sensitive instrument.
Hack I'd like to see: real-time birth/death map (using hospital data).
weaving libraries into the web 06.23.2005, 5:07 PM
A great feature of the Firefox web browser is the little search window built right into the toolbar next to the address field. It's set to Google as a default, but you can add other common search engines or knowledge bases like Yahoo, IMDB, Amazon, eBay, Wikipedia, dictionaries and others - a customized reference suite right in your browser. What if you could put a card catalogue in there too? John Wohlers, of the Todd Library at Waubonsee Community College in Sugar Grove, Illinois, has built a searchlet that effectively does this. It's not like Google Print, where you can actually browse scanned copies of the book, but it takes a step toward integrating libraries with the web - an important move if they are to remain relevant in a world where browsers and search engines are the primary research tools.
Wohlers is also working on building library search into desktop tools. Windows users can find instructions here for putting the Todd Library catalogue into your Microsoft Office 2003 Research Pane.
(via The Shifted Librarian)
wiki wiki: snapshot etymology 06.23.2005, 3:49 PM
Found on Flickr: the famous "wiki wiki" shuttle bus at the Honolulu airport. In Hawaiian pidgin, "wiki wiki" means "quick," or "informal," and is what inspired Ward Cunningham in 1995 to name his new openly editable web document engine "wiki", or the WikiWikiWeb.
(photo by cogdogblog)
"letter to the wikitor" 06.23.2005, 7:52 AM
I'm still a bit irked that the LA Times Editors shut down the Wikitorials community. I started to become engaged in the community and saw promise. They shut it down without warning and without thinking things through to begin with.
So he's leading the charge on a community-penned letter to the editor on (you guessed it) a wiki, to perhaps breathe a little warmth into cold feet.
grant virtual asylum - adopt a chinese blog 06.22.2005, 12:57 PM
People sometimes wonder what would have happened if the Soviet Union had survived long enough to experience the internet. It's a delicious "what if" scenario to contemplate. The USSR was quite skilled at using broadcast and print media to achieve total message discipline (the Bush administration can only dream), but what would have happened if a totally decentralized medium like the web (a control freak's nightmare) had sprung up right under the Kremlin's boots? Would the dissidents have bubbled over into cyberspace in a surging tide too powerful to control? Or would the government have cracked down brutally, or cut off the emerging technology before it could develop, drawing the iron curtain still further over the information commons? Someone should write a novel (à la Thomas Harris, Philip Roth)...
But look to China today, and we can get at least some idea of what might have happened. Granted, China is now a booming frontier of global capitalism, having all but abandoned the communist economic model. But the regime is still quite Soviet in its attitudes toward the media (which it totally controls) and toward expressions of political dissent (which it forbids and punishes). The internet presents a particularly devilish challenge.
In response, the government has set up a "Great Firewall" blocking off certain sections of the web (anything from Google News to Human Rights Watch) that it would rather its citizens didn't see. Not wanting to be shut out of the world's biggest emerging market, American corporations like Yahoo, Google, and most recently Microsoft have complied with state demands that certain services, and even certain terms like "democracy," "freedom" or "human rights," be blocked in Chinese versions of their web applications. In addition, the government recently passed legislation requiring all websites to be registered. Anything deemed inappropriate gets taken off its server. A hundred flowers may bloom on the internet, but not if the government cuts them off at the root.
It's estimated there are about 1 million Chinese blogs, and that number is sure to increase ten, twenty, a hundred fold. Who knows? If it gets to that point, the government probably won't be able to keep up. But for now, bloggers with even slightly controversial politics are in danger of getting shut down. This is why some Chinese bloggers are moving their sites abroad, seeking political haven on western servers. Isaac Mao, a venture capitalist in Shanghai for internet startups, self-professed "meta idea" generator, and one of the first Chinese bloggers, has set up an "adopt-a-blog" program that matches up fellow bloggers with foreigners willing to make a little extra room on their servers. It's a great idea, and a chance for the blogosphere to come together as a global community.
More about Isaac Mao in Wired: "Chinese Blogger Slams Microsoft"
Someone found a way to circumvent Microsoft's block on "freedom," "democracy" etc.: "Loophole lets 'Freedom' ring in Chinese MSN blogs" (with complete instructions here)
The 2005 Computers and Writing Conference 06.22.2005, 12:36 PM
Stanford University hosted the 2005 Computers and Writing conference this past weekend. Each session was rife with "future of the book" food for thought. This is an informal summary, with apologies to all the fabulous presentations that I don't mention (sorry, being only one person, I could not attend them all). Some of the major themes (which dovetail nicely with issues we are exploring at the institute) included: Open Source, new interpretations of literacy and "writing," the changing role of the teacher/student, performance, multimodality, and networked community. It is important to note that these themes often blur together in a complicated interdependence. This thematic interplay was evident in the pre-conference workshops, which included instruction in open source tools and applications like Drupal that allow for multimodality and the creation of communal authoring environments. Workshops in "Reading Images" and "Using Video to Teach Writing" addressed multiple modalities and new concepts of writing.
I was excited to see that the Computers and Writing community understands the potential of, and imperative for, Open Source. Its practical advantages (free and customizable) and its philosophical advantages (community-based and built for sharing rather than for selling) make it ideally suited to the goals of the educational community. Open Source came up over and over during the presentations and was featured in the first town hall session, "Open Source Opens Thinking." The session challenged the Computers and Writing community "to consider a position statement of collective principles and goals in relation to Open Source." Such a statement would be useful and productive; I'm hoping it will materialize.
The changing role of the teacher and student was evident in several presentations: most notably, the pilot program at Penn State (see my earlier post) in which students publish their "papers" on a wiki. The wiki format allows for intensive peer-review and encourages a culture of responsibility.
There was a lot of speculation about how writing will evolve and how other modalities might be incorporated into our notion of literacy. Andrea Lunsford's keynote speech addressed this issue, calling for a return to oral and embodied "performative literacies." She referred to Tara Shankar's MIT dissertation, "Speaking on the Record," which confronts the way we privilege writing above other modalities for knowledge and education. She says: "Reading and writing have become the predominant way of acquiring and expressing intellect in Western culture. Somewhere along the way, the ability to write has become completely identified with intellectual power, creating a graphocentric myopia concerning the very nature and transfer of knowledge. One of the effects of graphocentrism is a conflation of concepts proper to knowledge in general with concepts specific to written expression."
Shankar calls for new practices that embrace oral communication. She introduces a new word: "to provide a counterpart to writing in a spoken modality: speak + write = sprite. Spriting in its general form is the activity of speaking "on the record" that yields a technologically supported representation of oral speech with essential properties of writing such as permanence of record, possibilities of editing, indexing, and scanning, but without the difficult transition to a deeply different form of representation such as writing itself."
The need for a multimodal approach to writing was addressed in the second Town Hall meeting, "Composition Beyond Words." Virginia Kuhn opened by calling for a reconsideration of "writing" and the goals of visual literacy. Bradley Dilger reminded us that literacy goes beyond "the letter"; we need multiple interfaces for the same data because not everyone looks at data the same way. Madeleine Sorapure pointed out that writing with computers is determined by underlying code structures which are, themselves, a form of writing. She quoted Loss Pequeno Glazier: "Code is the writing within the writing that makes the work happen." Gail Hawisher talked about the ten-year process of incorporating multiple modalities into the first-year composition courses at the University of Illinois. Cynthia Selfe addressed this struggle, saying: "colleges are not comfortable with multiple modalities." She advised the C&W community to "think about how to give professional development/support to resistant colleges in ways that are sustainable over time." Stuart Moulthrop also offered some cautionary words of advice. In addition to faculty and administration, Moulthrop says, students are resistant to multimodality. Code, for example, is fatally hard to teach to non-programmers or visually oriented people. "There is a political problem," Moulthrop says, "we are living through a backlash moment. People are very angry about how fast the future has come down on them."
Some participants delivered "papers" that attempted to demonstrate these new multimodal imperatives. Most notable was Todd Taylor's presentation, "The End of Composition," which asked, "Can a paper be a film?" Todd argued "yes" with a cinematic montage of sampled and remixed clips along with original footage, enthusiastically received by the audience (an alternate review appears in the Machina Memorialis blog). Morgan Gresham's Town Hall presentation was a student-produced video and a question to the audience: is this just a remake of a bad commercial, or is it a "paper"? Christine Alfano's presentation experimented with a hypertext, "Choose Your Own Adventure" style that allowed the audience to determine the trajectory of the talk. Once the selection was made, she dropped the other two papers/options to the floor. The choice, unfortunately for me, eliminated the material I most wanted to hear about (Shelley Jackson's Patchwork Girl). Additionally, "virtual" presentations were delivered during an online companion conference, Computers and Writing Online 2005: "When Content Is No Longer King: Social Networking, Community, and Collaboration." This interactive online conference served "as an acknowledgment of the value of social networks in creating discourse of and about scholarly work." CWOnline 2005 made both the submission and presentation process open to public review via the Kairosnews weblog. Despite some flaws, I thought these experimental presentations pushed at the boundaries of academic discourse in a useful way. They reminded us how far we have to go and how difficult the project of putting ideas into practice really is.
Finally, the conference highlighted ways in which computers are being used to cultivate community across cultures and institutions, and between students, teachers, and scholars. Sharing Cultures, a joint project of Columbia College Chicago and Nelson Mandela Metropolitan University in South Africa, "creates two interconnected, on-line writing and learning communities...the project purposely includes students who traditionally have not had access to, or have been actively marginalized from, both digital and international experiences." Virginia Kuhn approached computers and community at the local level, with a service learning class called "Multicultural America," which asked students to write an ebook documenting local history. The finished work is part of an ongoing display at a Milwaukee community center. This project inspired an interesting reversal: community members who worked with students on the project are now (thanks to a generous grant) coming to the University of Milwaukee for supplemental study.

Within the academy there are also exciting opportunities for computer-based community-building. In her Town Hall presentation, Gail Hawisher said that literacy on campus is "usually taken care of by first year composition." If we are to incorporate visual literacy into our definition of literacy then, "Perhaps we should be looking to art and design for literacy instead of just the English dept." This is an incredibly smart idea because, short of requiring composition teachers to have degrees in art, film, AND writing, collaborative efforts with other departments seem to be the best way to ensure a deep and rigorous understanding of the material. I had an interesting conversation with Stuart Moulthrop about this. We imagined a massively multiplayer game environment that would allow scholars from around the world to collaborate on curriculum across institutional and disciplinary boundaries.
Wouldn't it be great, we thought, if someone who wanted to teach an odd combination like film/biology/physics could put a course scenario into the game, where it would be played out by biologists, film scholars, and physicists? In other words, a kind of lifetime learning environment for the experts, a laboratory for the exchange of knowledge across disciplinary boundaries, and a place to weave together different strands of human insight in order to create a more complete "picture" of the universe.
harlequin romances to hit cell phones 06.21.2005, 1:40 PM
Missed this item from last month... This fall, Harlequin, the leading publisher of "women's fiction," will release a series of titles for cell phones through distributor Vocel (which signed a deal with Random House earlier this year).
Harlequin will develop various applications, including daily-serialized novels by bestselling authors, romance-writing seminars and interactive pursuits such as helping to choose male cover models for upcoming novels or even using their camera phones to submit pictures of their own boyfriends as possible cover models.
more than half of journalists use blogs, study shows 06.21.2005, 12:48 PM
The Eleventh Annual Euro RSCG Magnet and Columbia University Survey of Media found that 51% of journalists use blogs, many of them for work, though few are actually writing their own. The study also found that, although more than half admit to using blogs, only 1% find them to be credible. Hmmm... From Business Wire:
The study found that blogs have become a large -- and arguably, increasingly integral -- part of how journalists do their jobs. Indeed, 70% of journalists who use blogs do so for work-related tasks. Most often, those work-related tasks involve finding story ideas, with 53% of journalist respondents reporting using blogs for such purposes. But respondents also turn to blogs for other uses, including researching and referencing facts (43%) and finding sources (36%). Most notable, fully 33% of journalists say they use blogs as a way of uncovering breaking news or scandals. Few blog-using journalists are engaging with this new medium by posting to blogs or publishing their own; such activities might be seen as compromising objectivity and thus credibility.
on second thought... (wikis are hard) 06.21.2005, 10:24 AM
The LA Times has temporarily shelved its plans for running "wikitorials" - editorials that any reader can edit - due to a flood of "inappropriate material." The whole experiment with wikis was a risky move for a well-established newspaper to take, and it's not surprising that they immediately panicked once the riff raff showed up. It's hard to establish an open, collaborative environment from the top down. If, on the other hand, you start from a point of low stakes, with little prestige on the line (as Wikipedia did), then the enterprise can evolve slowly, embarrassing missteps, spam and all.
Someone should start an experiment: dump the LA Times content into a non-affiliated wiki and try the wikitorials there. Give it time, let the community build, work out the hiccups, and then give the LA Times a call.
Gataga - social bookmark search and exploration engine 06.20.2005, 2:43 PM
We came across this the other day - an engine for searching the social bookmarking commons. Gataga allows you to search by tag across several popular web-clipping services including del.icio.us, furl, and others. Gataga's simple interface looks a lot like Google's, but the similarity ends there. The only ranking system is time - the most recent links come up at the top. So Gataga is a nice tool for the moment's glimpse of the links people are saving, but that's about all.
Bit by bit, the web is being catalogued by its users. But at the moment, Gataga (and the rest of these bookmarking tools) works more like a wire service than a library. Tags are sort of like a reporter's "beat" and Gataga provides RSS feeds for all possible queries, so you can track areas of interest. But if you want to use it as an archive, you'll have some pretty serious digging to do.
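Gataga's time-only ranking amounts to something like this (a toy sketch with invented field names, not Gataga's actual implementation):

```python
def search_by_tag(bookmarks, tag):
    """Time is the only rank: return the bookmarks carrying `tag`,
    newest first, the way a Gataga-style tag search orders results."""
    hits = [b for b in bookmarks if tag in b["tags"]]
    return sorted(hits, key=lambda b: b["saved_at"], reverse=True)
```

There's no relevance scoring, no link analysis, no editorial weighting: whatever was saved most recently wins, which is why the tool feels more like a wire service than a library.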
In the early days of the web, sites sprang up like Voice of the Shuttle (VOS) that thoughtfully catalogued interesting links. The fact that there was a single editor ensured that things stayed fairly organized, that broken links were repaired, and dead ones pruned. But as the web grew, the model quickly became unmanageable. Alan Liu, who single-handedly managed VOS from 1994-1999, said it came to the point where he was spending 2-3 hours per night simply combing for dead links. VOS allowed the community to suggest sites, but the burden of organizing, annotating, and "weeding" fell solely on Liu. The rise of blogs made it easier and less stressful to gather links, but ensured that it was a casual affair - a kind of day-to-day grazing. Of course, all blogs have archives, but they are not terribly useful (Dan talks about this here).
With social bookmarking, we seem to be laying the foundation for something more sustainable - "the only group that can organize everything is everybody." The next step is for librarians, archivists, and new kinds of editors and curators to start making sense of this wilderness of tags.
book returned to library 78 years late 06.20.2005, 2:36 AM
The Oakland Public Library announced Friday that a man returned an overdue book -- 78 years after his now-deceased aunt checked it out of the Melrose Branch.
Networked Pedagogies: Opensourcing the Writing Classroom 06.18.2005, 4:45 PM
Penn State has initiated a pilot program of 10 wiki-based composition classes. Richard Doyle, Jeff Pruchnic, and Trey Conner, instructors in the pilot program, discussed their experiences this morning at the Computers & Writing Conference at Stanford. They found that students produce better work in a peer-reviewed environment: grammar and mechanics are contextualized, and there is greater motivation to create error-free work. Students read each other's work, which forces them to consider their arguments carefully in order to avoid repeating someone else's point.
They also found that the self-governing ecology of the networked wiki format creates a fruitful environment for discussion and debate. The wiki places control over the direction and duration of the discussion in the students' hands. Richard Doyle also pointed out that there has not been a single editing war in the years he has been teaching the course. He attributes the lack of unproductive "flame wars" to the amount of work his students have. Each student produces about 100 pages of material and must read, comment on, and GRADE their fellow students' work. This is a learner-centered environment where, as Richard Doyle puts it, "the teacher acts as coach or zen master, making periodic interventions." Doyle also points out that in these wiki-based courses, "students are learning how to interact in an information dense environment responsibly. They are being trained to deal with the fluid environments they are going to find themselves in."
transliteracies: the politics of online reading 06.18.2005, 2:33 PM
Warren Sack presented two interesting diagrams yesterday at Transliteracies. The first was a map of how political conversations happen in newsgroups:
The work is that of John Kelly, Danyel Fisher, and Marc Smith; it shows conversations on the newsgroup alt.politics.bush. Blue dots are left-leaning participants in the newsgroup; red dots are right-leaning participants. Lines between dots indicate conversations. Here, it's clear that conversation predominantly takes place across political lines: people are arguing with each other.
The second is a map of how conversations (represented by links) happen on political blogs in the United States:
This is the work of Lada Adamic and Natalie Glance and it shows connections between political blogs. Blue dots are leftist blogs; red dots are rightist blogs. One notes here that the left-leaning blogs and right-leaning blogs tend to link among themselves, not across the political divide. People are reinforcing their own beliefs.
Obviously, it's a stretch to claim that American politics became more polarized and civics died a death because internet conversations moved from newsgroups to blogs. But it's clear from these diagrams that the way in which different forms of online reading take place (and the communities that are formed by this online reading) has political ramifications of which we need to be conscious.
serendipity 06.18.2005, 1:16 PM
The pinpoint accuracy of computer searches leaves those of us lucky enough to have spent time in library stacks nostalgic for the unexpected discovery of something we didn't know we were looking for, but which just happened, serendipitously, to be on a nearby shelf. George Legrady, artist and professor at UC Santa Barbara, just showed a project he is working on for the new public library in Seattle that offered a first glimpse of serendipity in online library searching: it lets you see all the books that have recently been checked out on a particular subject. Beautiful and exciting.
blog reading: what's left behind 06.17.2005, 5:29 PM
The basement of the Harvard Bookstore in Cambridge sells used books. There's an enormous market for used books in Cambridge, and anything interesting that winds up there tends to be immediately snapped up. The past few times I've gone to look at the fiction shelves, I've been struck by a big color-coded section in the middle that doesn't change - a dozen or so books from Jerry Jenkins & Tim LaHaye's phenomenally popular Left Behind series, a shotgun wedding of Tom Clancy and the Book of Revelation carried out over thirteen volumes (so far). About half the books on the shelf are the first volume. None of them look like they've been read. They're quite cheap.
Since the books started coming out (in 1996), there's been an almost complete absence of discussion of them in the mainstream media, save the occasional outburst about this very lack of discussion ("These books have sold 60,000,000 copies! And nobody we know reads them!"). I suspect my attitude towards the books is similar to that of many blue-state readers: we know these books are enormously popular in the middle of the country, and it's clearly our cultural/political duty to find out why . . . but flipping through the first one in the basement of the Harvard Bookstore, I'm struck by the wooden prose. I can't read this. Also, there's the matter of time: I still haven't finished Proust. The same sort of thing seems to happen to other civic-minded would-be readers.
And then, on the Internet, Fred Clark's blog Slacktivist gallops in to save the day. For the past year and a half, Mr. Clark has been engaged in a close reading of the series, explicating the text and the issues it raises in an increasingly fundamentalist America. This project isn't a full-time project; his blog has other commentary, but once a week, he stops to analyze a few pages of Left Behind. It helps that Mr. Clark is a fine writer; his commentary is funny, personal - recollections from a Christian childhood pop up from time to time - and he has enough of a theological background to elucidate telling details and the history behind Jenkins & LaHaye's particular brand of end-times fever.
It's an admirable project as well because of the sheer magnitude of it. In his first year and a half, he's made it through 105 pages, working at the rate of roughly six days a page. By my calculations, it will take him eighty more years to finish the 4,900 pages of the series, and additional prequels have been announced, which will push the total somewhere over a century. Lengthwise, his commentary seems to be running about neck-and-neck with the text itself, though it's hard to tell on the screen. This can't help but remind one of "On Exactitude in Science," the parable by Jorge Luis Borges & Adolfo Bioy Casares about the map that became the size of the territory it set out to survey. And of course, when a map gets this big, you're going to have issues with organization.
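The back-of-the-envelope arithmetic behind that estimate, using only the figures quoted above (the day counts are rough approximations, not Mr. Clark's actual posting schedule):

```python
# Rough check of the pace and the projected finish date,
# using the figures quoted in the post.
pages_done = 105
days_elapsed = 18 * 30                     # roughly a year and a half
pace = days_elapsed / pages_done           # just over five days per page

pages_left = 4900 - pages_done
years_left = pages_left * 6 / 365          # at the quoted six days a page
print(round(pace, 1), round(years_left))   # → 5.1 79
```

At six days a page, the remaining 4,795 pages come to about 79 years - "eighty more years," near enough.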
How do we start reading something like this? I was forwarded a link to the blog itself - http://slacktivist.typepad.com - and found the top entry dealing with Left Behind. Not all of Slacktivist deals with Left Behind - but enough of it does that Mr. Clark has made a separate category for it, http://slacktivist.typepad.com/slacktivist/left_behind. Clicking on that gets you a single page with all of the Left Behind posts, from newest to oldest. Being interested (and a fast reader) I decided to read the whole thing. To do this, you have to start at the bottom, scroll down a little bit (these are long posts), and then scroll up to get to the next chronological post. This does become, at length, tiring.
One point that's important to remember here: the Left Behind component of Slacktivist differs from the majority of blogs in that its information is not especially time-sensitive. While there are references to ongoing current events (the Iraq war, for example, not without relevance to the text under discussion), these references don't need to be read in real time. A reader could start reading his close reading at any time without much loss. (Granted, there is the question of relevance: it would be nice if in ten years nobody remembered Left Behind, but that probably won't be the case: Clark points out Hal Lindsey's The Late Great Planet Earth from the 1970s as prefiguring the series - and, it's worth noting, it still sells frighteningly well.)
A further complication for the would-be reader: Mr. Clark's posts, while they form the spine of his creation, are not the whole of it: his writing has attracted an enormous number of comments from his readers - somewhere over thirty comments for each of his recent posts, occasionally more than sixty. These comments, as you might expect, are all over the place - some are brilliant glosses, some are from confused Left Behind followers who have stumbled in, some declare the confused Left Behind followers to be idiots, and there's the inevitable comment-spam, scourge of the blog-age. Some have fantastic archived conversations of their own. Some are referenced in later posts by Mr. Clark, and become part of the main text. It's almost impossible to read all the comments because there are so many of them; it's hard to tell from the "Comments (33)" link if the thirty-three comments are worth reading. It's also much more difficult to read the comments chronologically: some older posts are still, a year later, generating comments, becoming weird zombie conversations.
What can be done to make this a more pleasant reading experience? Because blogs keep their entries in a database, it shouldn't be that hard to make a front end webpage that displays the entries in chronological order. It also wouldn't be hard to paginate the entries so that Mr. Clark's more than 50,000 words are in more digestible chunks. I'm not sure what could be done about the comments, though. Seventy-five posts have generated 1738 comments, scattered in time. Here's a rough diagram of how everything is connected:
The bottom row of blue dots represents Mr. Clark's posts over time (from earliest to most recent). One post leads linearly to the next. The rows above represent comments: the first red row shows the comments on the first post (hence the arrow leading down to it), which are frequent at first and then tail off. The same pattern holds for the comments on every other post. Comments tend to influence the comments that follow them (though not necessarily). But unless you have eagle-eyed commentators who click on every comment link every day, different comment streams will probably not influence each other over time. The conversation has forked, and will continue forking.
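The chronological front end imagined above is easy to sketch. This is a minimal illustration, not Typepad's actual data model - the post records and the word-count page size are hypothetical stand-ins for whatever the blog's database stores:

```python
# Order blog posts oldest-first and split them into digestible pages,
# as the post suggests a front end to a blog database could do.
def paginate_chronologically(posts, words_per_page=2000):
    """Group posts, sorted oldest-first, into pages of roughly
    words_per_page words each."""
    pages, current, count = [], [], 0
    for post in sorted(posts, key=lambda p: p["date"]):
        current.append(post)
        count += len(post["body"].split())
        if count >= words_per_page:
            pages.append(current)
            current, count = [], 0
    if current:
        pages.append(current)
    return pages

# ISO date strings sort correctly as plain text
archive = [
    {"date": "2004-05-01", "body": "word " * 1500},
    {"date": "2003-11-01", "body": "word " * 1500},
    {"date": "2004-08-01", "body": "word " * 300},
]
pages = paginate_chronologically(archive)
```

The comments are the harder problem. Each post's own thread could be sorted by timestamp and appended after it, but as the diagram suggests, the forked streams admit no single reading order.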
A recent study seems to indicate that the success of a blog (as measured by advertising) is directly related to the feeling of community engendered, in no small part, by the ability to comment and discuss. But that ability to comment and discuss seems to get lost with time. What's happening here might be an inherent limitation in the form of the blog: while they're not strictly time-sensitive, they end up being that way. This could perhaps be changed if there were better ways into the archives, or if notifications were sent to the author and commentators on posts as new comments were posted. But: especially when dealing with an enormous volume of comments, as is the case at Slacktivist, the dialogue becomes increasingly asynchronous as time goes on.
We don't think of physical books as having this problem because we assume that we can't directly interact with the author and don't expect to be able to do so. With electronic media, the boundaries are still unclear: we expect more.
the cramped root: worshipping the artifact 06.17.2005, 2:08 PM
A plant in a container grows differently than a plant in open soil. The roots conform to the shape of the pot. Similarly, our very notions of reading, of books, of knowledge classification are defined by the pot in which they grew. The texture of paper, the topography of the library, the entire university system - these were defined by constraints: physical, economic, and so on. And to a significant extent they are artifacts of their times. An example: the act of reading in bed, as Dan mentions, is frequently invoked as the ideal, as the supreme pleasure of reading, something that computers could never match. But this supine, passive reading stance is not pre-ordained. It is in many ways an artifact of the growth of the novel - a grand, fictional creation to be read in leisure settings. Lying down works well. It's pleasurable. You get lost in rich, immersive worlds. But there are immersive worlds that require a different posture. And there are kinds of reading that are more active.
The computer, too, in its current stage of development, is an artifact of the paper book, the typewriter, and the supercomputer terminal. These define the "pot" in which the computer has grown. And so far, the questions about online "reading" are defined by this cramped root structure. Even though the pot has shattered, we continue to grow as though the walls were there.
Another analogy: the horseless carriage. For years after its invention, the automobile was known as "the horseless carriage." People could define it only in terms of what had come before. You could say that online reading is the territory of "the horseless book."
transliteracies: the pleasure of the text 06.17.2005, 2:07 PM
Two books on my bookshelf: the first, a Penguin paperback of The Recognitions by William Gaddis, the spine reinforced with tape, almost every one of the 976 pages covered with annotations in several different colors of ink, some pages torn, many dogeared, some obvious coffee stains. It's a survivor of a misbegotten thesis project. The second, an old copy of Grace Metalious's soapy Peyton Place which I found on 6th Avenue two years ago & read cover to cover over the course of six delirious hours when I had taken more DayQuil than I should have. It's a cheap paperback from the late 1950s, and its yellow pages have clearly passed through any number of hands, but they're almost entirely unmarked. (God only knows why I decided that I needed to read Peyton Place. I can't recommend it.)
One of the themes that arose in the first session of Transliteracies was that there are several different types of reading. When academics talk about reading, they tend to mean an intensive activity; there's typically a lot of writing involved. A great deal of reading, however, isn't anywhere near as intensive: like my copy of Peyton Place, the text escapes unmarked by the pen. When we talk about moving reading from the printed page to the screen, this is an important consideration: the screen needs to accommodate both of these. Why can't we curl up with an electronic book? has been a persistent question since electronic reading became a possibility, but it misses the important point that we don't want to curl up with every book we read. We can only curl up with something if we're reading it - to some degree - passively.
transliteracies begins.. reading is complex 06.17.2005, 11:57 AM
Over the next couple of days, we'll be posting live from the Transliteracies conference..
The conference kicked off with a rich historical lecture by Adrian Johns, a professor at the University of Chicago and author of The Nature of the Book. Johns examined three revolutionary moments in the development of scientific knowledge - Galileo, Newton, and James Clerk Maxwell - and their relationship to the evolving print medium and the social practices of interpretation and transmission that were then developing. He began with the iconoclastic moment of Galileo's theological collision with the Catholic church, moved through Newton and the incipient system of journal production - the "philosophical transaction" - in authoritative matrices like the Royal Society, and ended with Maxwell at Cambridge University, his breakthroughs on electricity and magnetism, and the development of written examinations. The overriding lesson: reading is complex. We should not overestimate the power of the book purely as the "container of meaning." The surrounding social reading practices, the charismatic human deliverers of certain texts, are no less important. Each book has a sort of periodical system that follows from it - its ideas move through local systems of perusal, reinterpretation and dissemination. It gets continually "re-published" through this human ecology.
Then there is the scientific revolution going on today: information technology and medical information. Medical error - diagnostic and prescriptive - kills thousands each year, largely due to interruptions in information flow. Info tech could create seamless systems that greatly reduce error. But Johns points out that a good half of the systems implemented so far fail to solve the problems. In fact, all of them create new kinds of errors - confusions between the different groups in the massive medical tangle. So here we have a kind of online reading that has been tested in a highly consequential setting. Johns suggests that medical reading is more like literary reading than we think. For instance, physicians and pharmacists read differently. They have differences in training, worldview, sense of self. Seemingly cosmetic features of the text - fonts, color, layout - are of great consequence.
transliteracies: research in the technological, social, & cultural practices of online reading 06.16.2005, 10:06 AM
Bob's post last week about changing patterns of media consumption kicked off an interesting discussion, one that leads up perfectly to the "Transliteracies" conference we are attending this weekend at UC Santa Barbara.
Alan Liu, director of the Transliteracies project, posted this response, which very elegantly lays out some of the important questions. He's allowed me to re-post it here..
BEGIN: The relationship between "browsing" and the "sheer volume" of information is complicated. To start with, I think there is much to be gained in complicating our usually uniform concepts of "browsing" (all shallow, fragmented, attention-deficient) and "volume" ("sheer," as in a towering, monolithic cliff).
We get a sense of the hidden complexity I indicate if we think historically. Below is a passage from Roger Chartier -- the leading scholar in the "history of the book" field -- that should give us pause about making any quick associations between browsing and today's information glut:
"Does this reaction toward the end of the [18th] century indicate a consciousness that reading styles had changed, that the elites in western Europe had passed from intensive and reverent reading to a more extensive, nonchalant reading style, and that such a change called for correction? . . . In the older style: (1) Readers had the choice of only a few books, which perpetuated texts of great longevity. (2) Reading was not separated from other cultural activities such as listening to books read aloud time and again in the bosom of the family, the memorization of such texts . . . , or the recitation of texts read aloud and learned by heart. (3) The relation of reader to book was marked by a weighty respect and charged with a strong sense of the sacred character of printed matter. (4) The intense reading and rereading of the same texts shaped minds that were habituated to a particular set of references and inhabited by the same quotations. It was not until the second half of the eighteenth century in Germany and the beginning of the nineteenth century in New England that this style of reading yielded to another style, based on the proliferation of accessible books, on the individualization of the act of reading, on its separation from other cultural activities, and on the desacralization of the book. Book reading habits became freer, enabling the reader to pass from one text to another and to have a less attentive attitude toward the printed word, which was less concentrated in a few privileged books." -- Roger Chartier, "Urban Reading Practices, 1660-1770," in his The Cultural Uses of Print in Early Modern France, trans. Lydia G. Cochrane (Princeton: Princeton Univ. Press, 1987), pp. 222-24
It's pretty certain that browsing in the face of sheer volume was a deep habit of literacy (specifically, of high print literacy). By contrast, one might ask: who read so intensely and deeply -- to instance the extreme -- that they only really read one book? There were probably just three classes of such people: the very poor (I remember, but can't find at present, an essay by Chartier about people in the past who owned just one book, which was found on their body after a coach accident in Paris), the extremely pious (who read the Bible), and the "genius" author. (Think of Blake, for example: no matter how many books he read, he really only had one or two books on his mental bookshelf: the Bible and Milton.) Everyone else browsed.
Mass literacy in the twentieth century, perhaps, may be a phenomenon of browsing. Think of Reader's Digest. After my family immigrated to the U.S. in my childhood, we were a kind of microcosm of assimilation (into English literacy) in this regard. There were two major investments in books in my household: the Reader's Digest series of condensed books (a kind of packaged browsing) and The World Book encyclopedia (a veritable lesson in reading as browsing-cum-volume). I drank deeply from both founts as a child, since these were the main books in the house. I was intense in my browsing.
So now let's snap back to the present and the act of browsing cyber- or multi-media volumes of information. I've started a project (combining humanists, social scientists, and computer scientists) called Transliteracies to look into "online reading." It's my hypothesis that there are hidden complexities and intelligences in low-attention modes of browsing/surfing that we don't yet know how to chart. Google, after all, is making a fortune from algorithms enacting this hypothesis. Or to cite a historical googler: Dr. Johnson, sage of the Age of Reason, was famous for "devouring" books just by browsing them instead of reading "cover to cover." (To allude to the titles of the two serial magazines he was involved with, he would have called browsing Rambling or Idling [The Rambler, The Idler].)
Just as "browsing" is complex, so I think that there are hidden complexities in the notion of "sheer volume." Some of the digital artists I know -- e.g., George Legrady, Pockets Full of Memories -- are "database artists" whose work asks the question, in essence: what happens to the notion of art when we gaze not at one work in rapt wonder but at several thousand works -- when, in other words, the "work" is "volume"? What if quantity were a matter of quality? Aren't there different kinds of "volume," some more intelligent, beautiful, kinder, humane (not to mention efficient and flexible, the usual postindustrial desiderata) than others?
I'd better stop, since this comment is too long. As Blake said about volume: "Enough! or too much."
reading over your shoulder 06.16.2005, 9:09 AM
A particularly offensive section of the Patriot Act was slapped down yesterday in Congress. From Reuters:
The U.S. House of Representatives on Wednesday defied President Bush by approving a measure making it harder for federal agents to secretly gather information on people's library reading habits and bookstore purchases.
pay for the service, not the copy 06.14.2005, 12:51 PM
The other day, I came across an interesting experiment with a new model of distribution and ownership on the web, something that writers, publishers and journalists should pay attention to. KeepMedia charges $4.95 a month for unlimited access to 200 mainstream periodicals (see list) spanning the last 12 years up to the present day. That's significantly less than what I pay annually for my handful of print periodical subscriptions, and gives me access to much more material (kind of like LexisNexis for the masses). Plus, you do get to "keep" material - that's part of how it works (indeed, their logo is a kangaroo with a stack of magazines stuffed in her pouch). KeepMedia allows you to attach notes to articles and to store away "clippings." It also makes it easy to track subjects across publications, and has automated recommendations for related stories. I assume that stored articles will get caged off if you stop subscribing. That's what makes me nervous about the pay-for-the-service model. You don't actually get to keep anything for the long haul, unless you print it out. But KeepMedia suggests one way that newspapers and publishers might adapt to the digital age.
Right now, publishers are still stuck on the idea of individual "copies." The web - an enormous, interconnected copying machine - is inherently hostile to this idea. So publishers generally insist on digital rights management (DRM) - coded controls that restrict what you can do with a piece of media. This, almost invariably, is infuriating, and ends up unfairly punishing people who have willingly paid a fair price for an item. Pay-for-the-service models won't solve the problem entirely, but they do get away from the idea of "copies." On the web, copies are cheap, or free. But access to a library or database is valuable. It's not about how many copies are sold, it's about how many people are reading. So charge at the gate. Once people are inside, it's all you can eat. This is nothing new. People pay a flat rate for cable television, which is essentially a bundle of publications. You pay extra for premium channels, or pay-per-view special features, but your basic access is assured. What and how much you watch is up to you. Yahoo! is trying this right now for music. Why not do the same for newspapers, or for books? The web is combining publishing with broadcasting. Publishers and broadcasters need to adapt.
reading manga on Sony Librie 06.13.2005, 12:57 PM
Came across this Flickr photoset of Japanese comics on a Librie - Sony's electronic-ink ebook reader. Even in a photo, the reflective, print-like quality of the screen is striking. People have generally raved about the Librie's display, but are outraged by its senseless DRM policies: books self-destruct after 60 days. (discussed here and here)
Once E ink enters the mainstream, people might flock to electronic books as rapidly and enthusiastically as they did to digital photography. Screen display technology will undoubtedly advance. The DRM problem is trickier.
(Incidentally, I found this image while browsing recent blog posts under the "ebook" tag on Technorati. Flickr images tagged with "ebook" are placed alongside. An example of how these social tagging systems are becoming interconnected.)
LA Times to run "wikatorials" 06.13.2005, 10:52 AM
Next week, as part of a general reworking of its editorial page, the LA Times is starting "wikatorials" - "an online feature that will empower you to rewrite Los Angeles Times editorials."
(via Dan Gillmor)
UPDATE: "Upheaval on Los Angeles Times Editorial Pages" in NY Times.
web news as gated community 06.10.2005, 10:25 AM
Just found out about this on diglet.. Launched in April, the National Digital Newspaper Program (NDNP) is a joint effort of the Library of Congress and the National Endowment for the Humanities to create a comprehensive web archive of the nation's public domain newspapers.
Ultimately, over a period of approximately 20 years, NDNP will create a national, digital resource of historically significant newspapers from all the states and U.S. territories published between 1836 and 1922. This searchable database will be permanently maintained at the Library of Congress (LC) and be freely accessible via the Internet.
(A similar project is getting underway in France.)
It's frustrating that this online collection will stop at 1922. Ordinary libraries maintain up-to-date periodical archives and make them available to anyone willing to make the trip. But if they put those collections on the web, they'll be sued. Archives are one of the few ways newspapers have figured out to make money on the web, so they're not about to let libraries put their microfilm and periodical reading rooms online. The paradigm has flipped.. in print, you pay for the current day's edition, but the following day it ends up in the trash, or wrapping a fish. The passage of 24 hours makes it worthless. On the web, most news is free. It's the fish wrap that costs you.
The web has utterly changed what things are worth. Most people, when a news site asks them to pay, hightail it out of there and never look back. Even being asked to register is enough to deter many readers. But come September, the New York Times will start charging a $50 annual fee for what it considers its most distinctive commodities - editorials, op-eds, and selected other features. Is a full subscription site not far off? With its prestige and vast readership, the Times might be able to pull it off. But smaller papers are afraid to start charging, even as they watch their print circulation numbers plummet. A paper that puts up a tollbooth instantly becomes irrelevant to millions of readers. There will always be a public highway somewhere nearby.
A friend at the Columbia School of Journalism told me that the only way newspapers can be profitable on the web is if they all join together in some sort of league and charge bulk subscription fees for universal access. If there's a wholesale move to the pay model, then readers will have no choice but to shell out. It will be like paying for cable service, where each newspaper is a separate channel. The only time you register is when you pay the initial fee. From then on, it's clear sailing.
It's a compelling idea, but could just be collective suicide for the newspapers. There will always be free news on offer somewhere. Indian and Chinese wire services might claim the market while the prestigious western press withers away. Or people will turn to state-funded media like the BBC or Xinhua. Then again, people might be willing to pay if it means unfettered access to high quality, independent journalism. And with newspapers finally making money on web subscriptions, maybe they'd start loosening up about their archives.
uncyclopedia: the inevitable wikipedia parody 06.09.2005, 5:02 PM
"an invaluable resource that they had an extremely limited role in creating" 06.09.2005, 2:11 PM
Good piece today in Wired on the transformation of scientific journals. There's a general feeling that commercial publishers like Reed Elsevier enjoy unreasonable control over an evolving body of research that should be freely available to the public. With exorbitant subscription fees, affordable only for large institutions, most journals are effectively inaccessible, and the authors retain few or no reproduction rights. Recently, however, free article databases have sprung up on the web - The Public Library of Science (PLoS), BioMed Central, and NIH's PubMed - some of which, like PLoS, have begun publishing their own journals. It's a welcome change, considering how much labor and treasure is poured into scientific publications (from funders, private and public, and from the scientists themselves), and yet how little is gotten in return. Shifting to a non-profit model, as PLoS has done, preserves much of the financial architecture that supports the production of journals, but totally revolutionizes the distribution.
PLoS journals are free and allow authors to retain their copyrights, as long as they allow their work to be freely shared and distributed (with full credit given, naturally). They also require that authors pay $1,500 from their grants, or directly from their sponsors or institutions, to have their work published. These groups pay the bulk of the $10 billion that goes to scientific and medical publishers each year, and what do they get in return? Limited access to the research they funded, and no right to reuse the information.
"It's ridiculous to give publishers complete control of an invaluable resource that they had an extremely limited role in creating," said Michael Eisen, a geneticist and co-founder of PLoS.
But what is in many ways the tougher question is how to shift the architecture of prestige - peer review - to these new kinds of journals.
Something is happening here but you don't know what it is, do you, Mr. Jones?
-- me either for that matter. 06.08.2005, 11:18 AM
Came across this on a web-site i'd never heard of while searching for audio samples of a sound artist i'd never heard of (Todd Dockstader) who was referenced in a copy of a magazine called The Wire that i purchased for the first time.
Stacked in almost innumerable dusty piles around my room are the incoming CDs of many a publicist's hard work & toil. And for reasons that have more to do with esoteric alignments of the stars than any particular dislike, they often remain untouched & unheard for far, far too long. This very column is somewhat of an attempt to remedy this situation while also commenting on the sheer volume of music, especially electronic music, that continues to be released. It's a deluge of expression via our machines, which has resulted in an inverse response of criticism, a lack of perspective, an inability to perfect the zoom-out on the overall picture of what is being produced by this wired and wireless culture. . .
-- tobias c. van veen in cut-up
reminded me for the 323rd time in the past several months that something profound is happening relative to the "sheer volume" of media being produced and new (online) distribution patterns. would love to start to understand the ramifications. here's one i see in my own behavior -- and you can't imagine how painful it is to own up to this:
in 2001, 2 and 3 i made a scrapbook of things i collected on the web. i included in the scrapbook a record of all the books i read cover-to-cover. each year the number was at least 24. suddenly in 2004 the number went to ONE, and that was a graphic novel that i read in a few hours.
i'm still reading quite a bit but most of it is online and in much smaller chunks than books or even long articles. but also, with the advent of big notebook computers with dvd drives and large screens, some of my reading time has been supplanted by watching time as i've begun to absorb TV series (sopranos, 24, Six Feet Under) -- viewing all the segments in as few sittings as possible, much like the experience of a page-turner novel.
i'm also browsing quite a bit more. when i was a teenager i went to the record store (yes, i'm that old) and would spend quite a long time choosing one or maybe two records to buy. then i would bring those home and listen to them over and over and over. now i find i hardly ever have to come out of browsing mode. between othermusic.com, earplug, etc. etc. a scary amount of my conscious music listening can be subsumed by surfing for new sounds.
i'd like to find a way to get people to talk about their media consumption so that we can begin to understand what actually is happening, not just quantitatively, but qualitatively.
(image by Gregory Vershbow)
multimedia promo or prose poem? 06.07.2005, 9:35 AM
Novelist Micheline Aharonian Marcom has created a website that presents brief excerpts from her novel, Three Apples Fell From Heaven, as audio flash drawings that combine animated text, audio and illustration. The illustration, a red line that scribbles randomly on the screen, subtly alludes to the violence in the text, and the voice-overs add depth to an already intense story. Although this site was only intended as a promotional piece for Marcom's first two novels, the excerpts she chose and the impact of the sound and illustration make them feel more like prose poems.
the 2005 computers and writing online conference 06.06.2005, 3:30 PM
The Institute for the Future of the Book is presenting a paper at the 2005 Computers and Writing Online Conference. Our presentation, entitled "Sorting the Pile: Making Sense of A Networked Archive," discusses our experience building a networked archive for our Gates Memory Project and the insights it provided regarding the evolution of books in the networked environment.
The conference began on Tuesday, May 31, and runs through Monday, June 13. It is an online conference that is open-access, Creative Commons-licensed, and hosted on a weblog. Drawing upon the conference's theme of exploring the increasing value of the network and collaborative practices within it, presenters examine the role(s) played by social networking applications and other technologies that are intended to foster social interaction, community, and collaboration. Alongside studying the technologies themselves, presenters will observe and describe the ways that writers and users are engaging the technologies and how such engagement is changing our ideas about writing and teaching writing, and, more broadly, the concepts of rhetoric and composition themselves. We very much hope you'll get involved by leaving your comments, or, if you prefer, respond on your own weblog and leave a trackback! Or write a response on your wiki! Or tag presentations on your del.icio.us or de.lirio.us list! You get the idea. This conference is meant to be networked.
The presentations are accessible to anyone with an internet connection, and anyone with an account at Kairosnews (registration is free) can leave comments. For more information, visit the CW Online 2005 weblog.
book DJs: hear penguin, sample penguin, remix penguin 06.06.2005, 2:57 PM
First there was the DJ, then the VJ; now Penguin audio books is sponsoring "penguin remixed," a contest that might spawn a whole new genre--are you ready for the BJ?
According to the website, "thirty of the best spoken word samples from some of the greatest books of all time and the finest actors around" are available for remix. "Download the samples, use them in your music, submit your tracks. The ten top tracks, as voted by you, will be turned into a Penguin digital audiobook, which will be available through the Audible.co.uk store and via iTunes UK."
Just to get your creative juices flowing, here is one of the samples available for remix. It's from Lewis Carroll's "Alice in Wonderland," read by Susan Jameson.
building frontier networks 06.06.2005, 2:17 PM
The $100 laptop project - the MIT-led initiative to distribute cheap, network-enabled computers to schools throughout the developing world - is moving ahead, but it's far from clear whether it will succeed. Today Wired discusses some of the daunting physical challenges of deploying technology in places where there isn't even electricity, let alone a wireless broadband network. As far as energy is concerned, the MIT team is trying to make the computers as self-sustaining as possible, experimenting with hand cranks (like a wind-up watch) and "parasitic power," where the user's typing constantly charges the battery. Then there is the problem of networks. The vision driving the project is one of delivering the resources of the web to communities that are cut off from libraries and the general flow of information. But extending the gossamer strands of the web requires robust architecture. Dumping cheap laptops in village schools won't achieve much if you can't connect the dots.
Wired mentions geekcorps, a group that coordinates skilled technology volunteers around the world "to teach communities how to use innovative and affordable information and communication technologies to solve development problems." One of their trademark innovations is the "BottleNet" - a method for setting up improvised Wi-Fi relay networks with "do-it-yourself antennas," first employed in geekcorps's Mali project:
The do-it-yourself (DIY) antenna designs were based on information gathered from numerous sources, including standard ham radio operator reference manuals, books on building wireless community networks, numerous DIY wireless sites on the Internet, and from the past experiences of GCM volunteers with wireless antennas. Changes to the designs were made to incorporate materials that are easily available in Mali (plastic water bottles, used valve stems from motorbikes, window screen mesh, television and low cost coaxial cables, etc.) to minimize the technical skills needed to build an antenna and to reduce costs.
Something about these ad hoc creations, patched together with junk - the scraps of western industry - speaks eloquently of the fragility of our grand networked enterprise.
images: (left) kids with Panasonic Toughbooks at the Nicholas and Elaine Negroponte School in Cambodia (from Wired); (right) BottleNet antenna in Mali
Bayesian news by email 06.06.2005, 10:42 AM
Another interesting prototype from BBC Backstage: news feeds delivered by email with Bayesian filtering. In other words, you can flag the kind of messages you want to receive more of, and the kind you want to receive less of, purifying the signal, as it were. This kind of filtering was first developed to deal with spam. Here's what it looks like in your mail viewer:
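As a rough sketch of the Bayesian idea behind such a filter (not the BBC's actual implementation, which isn't public): each word's history of "more, please" and "less, please" flags is turned into a probability, and a message's words vote on which way it leans. All the training data below is invented for illustration.

```python
# A toy Bayesian filter in the spirit of the BBC prototype: score each
# message by how its words have been flagged ("more" vs. "less") before.
# All training data here is invented for illustration.
from collections import Counter

class BayesianFilter:
    def __init__(self):
        self.more = Counter()  # word counts from messages flagged "more like this"
        self.less = Counter()  # word counts from messages flagged "less like this"

    def train(self, text, wanted):
        bucket = self.more if wanted else self.less
        bucket.update(text.lower().split())

    def score(self, text):
        # Naive-Bayes-style score: product of per-word probabilities,
        # with add-one smoothing so unseen words don't zero things out.
        n_more = sum(self.more.values()) or 1
        n_less = sum(self.less.values()) or 1
        p_more = p_less = 1.0
        for word in text.lower().split():
            p_more *= (self.more[word] + 1) / (n_more + 2)
            p_less *= (self.less[word] + 1) / (n_less + 2)
        return p_more / (p_more + p_less)  # above 0.5 leans "more of this"

f = BayesianFilter()
f.train("science journals open access research", wanted=True)
f.train("celebrity gossip casino offers", wanted=False)
print(f.score("open access research news") > 0.5)   # True: leans "more"
print(f.score("casino gossip roundup") > 0.5)       # False: leans "less"
```

The same math is what mail programs use against spam; here the "spam" is just news you've said you care less about.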
a literary map of manhattan 06.04.2005, 5:04 PM
Maps maps maps. Everyone's playing with maps as interface (see here and here). Check out this multimedia feature at the NY Times. Doesn't go very deep, but fun all the same. Each item was reader-submitted over the past month - a collective effort to map the rich fictional life of Manhattan. They should do one of these for Brooklyn.
Reminds me of Mr. Beller's Neighborhood. Each of the red dots below links to a story or article set in that location.
remixing the news 06.03.2005, 5:07 PM
There's been an explosion of creative tinkering since the BBC opened up its API (applications programming interface) last month. An API is a window into a site's code and content allowing techie types to build new applications with BBC material. It's really worth going over to the BBC Backstage blog to take a look at the first batch of prototypes and demos. The majority are clever splicings of BBC data - news, traffic reports, images etc. - with Google Maps (everyone's favorite lately), not unlike chicagocrime.org. Other notable examples: an RSS feed of BBC complaints; a feature that allows you to tag articles and read tags left by other readers; and a nice "tag soup" visualization of financial news.
Correction: A reader kindly pointed out that BBC Backstage hasn't actually released APIs yet (though they intend to soon). The projects I've referenced use BBC feeds, or have scraped content directly from the BBC site. APIs are to follow soon (more info here). When they do, the scraping process will become much cleaner. For now, the BBC welcomes projects that "use our stuff to build your stuff" the rough-and-tumble way, and is happy to showcase them on the Backstage site.
The API is becoming a powerful tool for creative reinvention of the web. Back in April, I wrote about Dan Gillmor's piece on "Web 3.0." Web 1.0 was the early web, a place you went to read - a series of interconnected brochures. Web 2.0 is the "read-write" web - it's a place you go to interact. Web 3.0 is where we start weaving the disparate pieces into new forms. APIs let you do this. You take one application and design a new front end that shows your point of view. Or you take two applications and mix them together, creating something new and illuminating. Right now, Web 2.0 is pretty well in place. The tools for self-expression and interaction are pretty accessible - email, chat, blogs, etc. But the weaving tools required for 3.0 are available only to advanced users. We'll see if that changes.
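To make the "weaving" concrete, here is a toy splice of two RSS feeds into one stream - the same basic move the Backstage prototypes make when they mix news with traffic or maps. The feed XML below is invented for illustration; a real mashup would fetch live feeds over HTTP.

```python
# A toy "Web 3.0" splice: merge items from two RSS feeds into a single
# stream with a new front end. The feed contents are invented.
import xml.etree.ElementTree as ET

NEWS = """<rss><channel>
  <item><title>Rail strike ends</title><category>news</category></item>
  <item><title>Markets rally</title><category>news</category></item>
</channel></rss>"""

TRAFFIC = """<rss><channel>
  <item><title>M25 delays cleared</title><category>traffic</category></item>
</channel></rss>"""

def items(feed_xml):
    # Pull out each <item>'s title and category as a plain dict.
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield {"title": item.findtext("title"),
               "category": item.findtext("category")}

# The "remix": one merged list, tagged by source, ready for a new front end.
merged = list(items(NEWS)) + list(items(TRAFFIC))
for entry in merged:
    print(f"[{entry['category']}] {entry['title']}")
```

Everything interesting in a mashup happens after the merge - plotting the items on a map, filtering by tag, and so on - but the underlying gesture is just this: two feeds in, one new view out.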
Here are grabs from four of the map prototypes at BBC Backstage:
For more analysis, check out this article on O'Reilly Radar.
80 years of the New Yorker on disc 06.02.2005, 5:19 PM
The New Yorker has never seemed terribly interested in going digital. Despite maintaining the obligatory website, with a smattering of free content and online features, the magazine exists somewhat apart from the daily swarm of the web. The print format still works quite well for them, and they have the legions of loyal subscribers to prove it.
But their latest publishing project does take them into digital territory. This October, in a big legacy move, the venerable weekly will release 4,109 issues - every single page from the February 1925 founding through this year's 80th anniversary issue - on an eight-DVD set. "The Complete New Yorker" (see NY Times story) will go for about $100 (though Walmart is already listing it for $59.22), and will also contain a 123-page book with an introduction by editor David Remnick. A big improvement on microfilm, the discs will allegedly be searchable by computer, though how granular the search is remains to be seen. For it to be more than just a collector's item, it should be fully structured and offer fine-toothed find functionality. Remnick confirms, however, that readers will have the option of browsing just the cartoons (as many of us do).
visual bookmarks 06.02.2005, 12:21 PM
Wists is a visual bookmarking system for the web, doing for images what del.icio.us does for web pages. It's like browsing the web with a camera, or creating your own hand-selected Google image search. Find an image you want to keep track of and Wists will create a thumbnail for you, linking back to the original site. If it's a whole page you want to capture, Wists will take an automatic screenshot of the entire page. Add a title, tags and description and it goes into the system - a photo album of the web. Much like del.icio.us, Wists arranges popular tags on the sidebar and allows you to browse the latest entries. It also enables you to add other users' bookmarks to your own gallery, clearing the slate for your own tags and descriptions. Best of all, it keeps track of people you've taken items from, and people who have taken items from you. Trails become apparent and the archive becomes interconnected. Here's a grab of my "jaws" tag page - combing around for images, I found an amusing juxtaposition.
These are the kind of basic curatorial tools that would be great on Flickr. Currently, you are only able to apply tags to your own photos, or those of friends, family or mutual contacts. But part of the fun of Flickr is browsing the photos of total strangers. You can comment on any photo or mark it as a favorite, but there is no way to curate your own collection of images from the community at large. Wists suggests how the gap between del.icio.us and Flickr might be bridged.
useful fun with Technorati tags 06.01.2005, 1:41 PM
You may have noticed a new line of metadata at the bottom of posts on if:book - Technorati tags. Technorati is perhaps the most dynamic blog-tracking site on the web, scanning over 10 million weblogs and ranking their authority according to the number of links they receive from around the blogosphere. Technorati tags are socially constructed classification terms - keywords or categories that authors apply to their entries so that they show up in Technorati searches. Taken together, these thousands of tags are what make up the Technorati folksonomy - a taxonomic system created by users from the bottom up, instead of by an information architect (like a librarian) from the top down. Folksonomies are less rigid than shelf-based hierarchies (see "the only group that can organize everything is everybody"). They can cope with subtle but crucial differences between synonyms like movies, films, flicks, and cinema - or devilish distinctions like art versus entertainment. Tags can help bloggers reach small niche areas of interest, trickling content down into the hard-to-reach corners. But being highly idiosyncratic, folksonomic tags tend to proliferate rapidly. Most are too obscure or peculiarly worded to become widely adopted points of reference. Right now, sites like Technorati or Flickr deal with this problem by ranking. The irony is that, for all the promise of personal expression through folksonomy, the tags that make it to the top of the pile tend to be pretty conventional. Less formal than a library catalogue, to be sure, but nothing terribly colorful (nuance fares better in personal bookmarking systems like del.icio.us). And again, we are stuck with this problem, endemic on the web, of authority meaning simply who's popular. In that regard, the web is still a lot like high school.
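The ranking problem is easy to see in miniature. Here is a sketch of popularity-based tag ranking - the tag counts are invented for illustration - showing how synonymous tags split the vote, so only the most conventional term floats to the top:

```python
# A sketch of why popularity-based tag ranking flattens nuance: synonyms
# split the vote. The tag data here is invented for illustration.
from collections import Counter

tags = (["movies"] * 40 + ["films"] * 12 + ["flicks"] * 3 +
        ["cinema"] * 9 + ["art"] * 20)
ranked = Counter(tags).most_common()
print(ranked)
# -> [('movies', 40), ('art', 20), ('films', 12), ('cinema', 9), ('flicks', 3)]
# Grouped by hand, the four film synonyms total 64 uses - far ahead of
# "art" at 20 - but ranked separately, only "movies" looks dominant.
```

A smarter ranking would merge synonyms before counting, but deciding that movies, films, flicks and cinema "mean the same thing" is exactly the top-down judgment folksonomy was supposed to avoid.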
(Mechanics: we're able to ping specific tags with the great Technorati Tag plugin for Movable Type)
Google Print gets its own address 06.01.2005, 11:04 AM
Google Print is a book marketing program, not an online library, and as such your entire book will not be made available online unless you expressly permit it.
If you reach your limit of permitted pages you get this:
a computer to withstand the wind and rain 06.01.2005, 8:20 AM
This expensive new tablet PC is built to withstand "rain, snow, wind, dust, vibration, shock, chemical exposure, and temperature extremes, from minus 4 to 140 degrees" (see article).