library wisdom 08.31.2006, 12:54 AM
Bob and I have been impressed with what we've been reading on a series of sites maintained by Joyce Valenza, a teacher-librarian at the Springfield Township High School Library in Erdenheim, Pennsylvania. Of particular interest is a chart she's put together entitled "30 Years of Information and Educational Change: How should our practice respond?" which records the dramatic technological shifts that have taken place since she began studying library science nearly three decades ago, and how her thinking has evolved:
I graduated with an MLS in 1977 and had to return and redo most of the credits in 1987/1988 to get education credentials. While I learned programming the first time around and personal computer applications the second time around, the rate of change has dramatically altered the landscape.
I see an urgent need for librarians to retool. We cannot expect to assume a leadership role in information technology and instruction, we cannot claim any credibility with students, faculty, or administrators if we do not recognize and thoughtfully exploit the paradigm shift of the past two years. Retooling is essential for the survival of the profession.
The role of the librarian has traditionally been to guide the user into a dense grove of knowledge, instructing them in how best to penetrate, navigate and reference a relatively stable corpus. But with the explosion of personal computers and networks comes the explosion of the library. The librarian becomes a strategic advisor at the gateway to a much larger and continually shifting array of resources and tools that extends well beyond the physical boundaries of the library. The user no longer needs to be guided inward, but guided outward, and in multiple directions. The librarian in an academic or school setting must help students and scholars to match up the right materials with the right modes of communication, while also fostering a critical and ethical outlook in a world awash in information. The librarian is more crucial than ever.
The physical space of the library is still vital too, Valenza argues, and nowhere is this better conveyed than in this charming "virtual library" page she has constructed for the library's home page (that's her standing by the reference desk):
It seems almost too obvious to use the physical library as an interface, but I was immediately struck by how intuitive and useful this page is, and how, so simply and with such spirit, it creates an almost visceral link between the physical library and its online dimensions.
(Also check out Valenza's blog, NeverEnding Search.)
google offers public domain downloads 08.30.2006, 5:41 PM
Google announced today that it has made free downloadable PDFs available for many of the public domain books in its database. This is a good thing, but there are several problems with how they've done it. The main problem is that these PDFs aren't actually text; they're simply strings of images from the scanned library books. As a result, you can't select and copy text, nor can you search the document, unless, of course, you do it online in Google. So while public access to these books is a big win, Google still has us locked into the system if we want to take advantage of these books as digital texts.
A small note about the public domain. Editions are key. A large number of books scanned so far by Google have contents in the public domain, but are in editions published after the cut-off (I think we're talking 1923 for most books). Take this 2003 Signet Classic edition of Darwin's The Origin of Species. Clearly, a public domain text, but the book is in "limited preview" mode on Google because the edition contains an introduction written in 1958. Copyright experts out there: is it just this that makes the book off limits? Or is the whole edition somehow copyrighted?
documentary licensed through creative commons to play in second life 08.30.2006, 7:34 AM
Route 66: An American Bad Dream is an independent documentary film starring three Germans road-tripping across the legendary US highway. What makes this film notable is that they released it under a Creative Commons license. Also, it had its premiere in the virtual world of Second Life on Aug 10th. The success of that showing prompted them to host an additional viewing this Thursday, August 31, at 4 PM SL in Kula 4, which will be presented by its creator Gonzo Oxberger. In the open source spirit of this project, they are making the video and audio project files available to anyone with a serious interest in remixing the film.
wikipedia thread 08.30.2006, 12:00 AM
There's a really interesting post and related comments on KairosNews about the addition of an administrative layer to Wikipedia.
book trailers, but no network 08.29.2006, 9:30 AM
We often conceive of the network as a way to share culture without going through the traditional corporate media entities. The topology of the network is created out of the endpoints; that is where the value lies. This story in the NY Times prompted me to wonder: how long will it take media companies to see the value of the network?
The article describes a new tool that publishers are adding to their marketing arsenal: the trailer. As in a movie trailer, or sometimes an infomercial, or a DVD commentary track.
"The video formats vary as widely as the books being pitched. For well-known authors, the videos can be as wordy as they are visual. Bantam Dell, a unit of Random House, recently ran a series in which Dean Koontz told funny stories about the writing and editing process. And Scholastic has a video in the works for "Mommy?," a pop-up book illustrated by Maurice Sendak that is set to reach stores in October. The video will feature Mr. Sendak against a background of the book's pop-ups, discussing how he came up with his ideas for the book."
Who can fault them for taking advantage of the Internet's distribution capability? It's cheap, and it reaches a vast audience, many of whom would never pick up the Book Review. In this day and age, it is one of the most cost-effective methods of marketing to a wide audience. By changing the format of the ad from a straight marketing message to a more interesting video experience, the media companies hope to excite more attention for their new releases. "You won't get young people to buy books by boring them to death with conventional ads," said Jerome Kramer, editor in chief of The Book Standard.
But I can't help but notice that they are only working within the broadcast paradigm, where advertising, not interactivity, is still king. All of these forms (trailer, music video, infomercial) were designed for use with television; their appearance in the context of the Internet further reinforces the big media view of the 'net as a one-way broadcast medium. A book is a naturally more interactive experience than watching a movie. Unconventional ads may bring more people to a product, but this approach ignores one of the primary values of reading. What if they took advantage of the network's unique virtues? I don't have the answers for this, but only an inkling that publishing companies would identify successes sooner and mitigate flops earlier, that the feedback from the public would benefit the bottom line, and that readers would be more engaged with the publishing industry. But the first step is recognizing that the network is more than a less expensive form of television.
discursions II: networked architecture, a networked book 08.28.2006, 6:09 AM
I'm pleased to announce a new networked book project the Institute will begin working on this fall. "Discursions, II" will explore the history and influence of the Architecture Machine Group, the amazing research collective of the late 60s and 70s that later morphed into the MIT Media Lab. The book will be developed in collaboration with Kazys Varnelis, an architectural historian whom we met this past year at the Annenberg Center at USC, when he was a visiting fellow leading the "Networked Publics" research project.
As its name suggests, the Architecture Machine Group was originally formed to explore how computers might be used in the design of architecture. From there, it went on to make history, inventing many of the mechanisms and metaphors of human-machine interaction that we live, work and play with to this day. Lately, Kazys' focus has been on contemporary architecture and urbanism in the context of network technologies, and how machine-mediated interactions are becoming a key feature of human environments. So he's pretty uniquely positioned to weave together the diverse threads of this history. Most important from the Institute's perspective, he's interested in playing around with the form and feel of publication.
And good news. Kazys recently resettled here on the east coast, where he will be heading up the new Network Architecture Lab (NetLab) at Columbia's Graduate School of Architecture, Planning, and Preservation. One of the lab's first projects will be this joint venture with the Institute. Unlike Without Gods and GAM3R 7H30RY, both of which are print-network hybrids, "Discursions, II" will grow one hundred percent on the network, beginning from its initial seeds: a dozen videos of seminal ARCMac demos, originally published on a video disc called "Discursions". The book will also go much further into collaborative methods of work, and into blurring the boundaries of genre and media form, employing elements of documentary film, textual narrative, and oral history (and other strategies yet to be determined).
From the NetLab press release (AUDC, mentioned below, is Kazys' nonprofit architectural collective):
Formed in 2001, AUDC [Architecture Urbanism Design Collaborative] specializes in research as a form of practice. The AUDC Network Architecture Lab is an experimental unit at Columbia University that embraces the studio and the seminar as venues for architectural analysis and speculation, exploring new forms of research through architecture, text, new media design, film production and environment design.
Specifically, the Network Architecture Lab investigates the impact of computation and communications on architecture and urbanism. What opportunities do programming, telematics, and new media offer architecture? How does the network city affect the building? Who is the subject and what is the object in a world of networked things and spaces? How do transformations in communications reflect and affect the broader socioeconomic milieu? The NetLab seeks to both document this emergent condition and to produce new sites of practice and innovative working methods for architecture in the twenty-first century. Using new media technologies, the lab aims to develop new interfaces to both physical and virtual space. This unit is consciously understood as an interdisciplinary entity, establishing collaborative relationships with other centers both at Columbia and at other institutions.
The NetLab begins operations in September 2006 with "Discursions, II" an exploration of history of architecture, computation, and new media interfaces at the Architecture Machine Group at MIT done in collaboration with the Institute for the Future of the Book.
For a better idea of Kazys' interests and voice, take a look at this fascinating and wide-ranging interview published recently on BLDGBLOG. Here, he talks a bit more about what we're hoping to do with the book:
The goal, then, is to create a new form of media that we're calling the Networked Book. It's a multimedia book, if you will, that can evolve on the internet and grow over time. We're now hoping to get the original players involved, and to get commentary in there. The project won't be just the voice of one author but the voices of many, and it won't be just one form of text but, rather, all sorts of media. We don't really know where it will go, in fact, but that's part of the project: to let the material take us; to examine the past, present, and future of the computer interface; and to do something that's really bold. It's not that we don't know what we're doing [laughter] - it's that we have a wide variety of options.
Congratulations, Kazys, on the founding of the NetLab. We can't wait to move forward with this project.
showtiming our libraries 08.25.2006, 6:55 PM
Google's contract with the University of California to digitize library holdings was made public today after pressure from The Chronicle of Higher Education and others. The Chronicle discusses some of the key points in the agreement, including the astonishing fact that Google plans to scan as many as 3,000 titles per day, and its commitment, at UC's insistence, to always make public domain texts freely and wholly available through its web services.
But there are darker revelations as well, and Jeff Ubois, a TV-film archivist and research associate at Berkeley's School of Information Management and Systems, homes in on some of these on his blog. Around the time that the Google-UC deal was first announced, Ubois compared it to Showtime's now-infamous compact with the Smithsonian, which caused a ripple of outrage this past April. That deal, the details of which are secret, basically gives Showtime exclusive access to the Smithsonian's film and video archive for the next 30 years.
The parallels to the Google library project are many. Four of the six partner libraries, like the Smithsonian, are publicly funded institutions. And all the agreements, with the exceptions of U. Michigan's and now UC's, are non-disclosure. Brewster Kahle, leader of the rival Open Content Alliance, put the problem clearly and succinctly in a quote in today's Chronicle piece:
We want a public library system in the digital age, but what we are getting is a private library system controlled by a single corporation.
He was referring specifically to sections of this latest contract that greatly limit UC's use of Google's copies and would bar UC from pooling them in cooperative library systems. I vocalized these concerns rather forcefully in my post yesterday, and may have gotten a couple of details wrong, or slightly overstated the point about librarians ceding their authority to Google's algorithms (some of the pushback in comments and on other blogs has been very helpful). But the basic points still stand, and the revelations today from the UC contract serve to underscore that. This ought to galvanize librarians, educators and the general public to ask tougher questions about what Google and its partners are doing. Of course, all these points could be rendered moot by one or two bad decisions from the courts.
the children's machine 08.25.2006, 7:27 AM
Why is it that the publicity images of these machines are always like this? Ghostly showroom white and all the kids crammed inside. What might it mean? I get the feeling that we're looking at the developers' fantasy. All this well-intentioned industry and aspiration poured into these little day-glo machines. But totally decontextualized, in a vacuum.
This earlier one was supposed to show poor, brown hands reaching for the stars, but it looked more to me like children sinking in quicksand.
Indian Education Secretary Sudeep Banerjee, explaining last month why his country would not be placing an order for Negroponte's machines, put it more bluntly. He called the laptops "pedagogically suspect."
An exchange in the comments below made me want to clarify my position here. Bleak humor aside, I really hope that the laptop project succeeds. From the little I've heard, it appears that the developers have some really interesting ideas about the kind of software that'll go into these things.
Dan, still reeling from three days of Wikimania earlier this month, as well as other meetings concerning OLPC, relayed the fact that the word processing software being bundled into the laptops will all be wiki-based, putting the focus on student collaboration over mesh networks. This may not sound like such a big deal, but just take a moment to ponder the implications of having all class writing assignments carried out on wikis, and the different sorts of skills and attitudes that collaborating on everything might nurture. There are a million things that could go wrong with the One Laptop Per Child project, but you can't accuse its developers of lacking bold ideas about education.
Still, I'm skeptical that those ideas will connect successfully to real classroom situations. For instance, we're not really hearing anything about teacher training. One hopes that community groups will spring into action to help develop and implement new pedagogical strategies that put the Children's Machines to good use. But can we count on this happening? I'm afraid this might be the fatal gap in this otherwise brilliant project.
librarians, hold google accountable 08.24.2006, 8:09 AM
I'm quite disappointed by this op-ed on Google's library initiative in Tuesday's Washington Post. It comes from Richard Ekman, president of the Council of Independent Colleges, which represents 570 independent colleges and universities in the US (and a few abroad). Generally, these are mid-tier schools -- not the elite powerhouses Google has partnered with in its digitization efforts -- and so, being neither a publisher, nor a direct representative of one of the cooperating libraries, I expected Ekman might take a more measured approach to this issue, which usually elicits either ecstatic support or vociferous opposition. Alas, no.
To the opposition, namely, the publishing industry, Ekman offers the usual rationale: Google, by digitizing the collections of six of the English-speaking world's leading libraries (and, presumably, more are to follow) is doing humanity a great service, while still fundamentally respecting copyrights -- so let's not stand in its way. With Google, however, and with his own peers in education, he is less exacting.
The nation's colleges and universities should support Google's controversial project to digitize great libraries and offer books online. It has the potential to do a lot of good for higher education in this country.
Now, I've poked around a bit and located the agreement between Google and the U. of Michigan (freely available online), which affords a keyhole view onto these grand bargains. Basically, Google makes scans of U. of M.'s books, giving them images and optical character recognition files (the texts gleaned from the scans) for use within their library system, keeping the same for its own web services. In other words, both sides get a copy, both sides win.
If you're not Michigan or Google, though, the benefits are less clear. Sure, it's great that books now come up in web searches, and there's plenty of good browsing to be done (and the public domain texts, available in full, are a real asset). But we're in trouble if this is the research tool that is to replace, by force of market and by force of users' habits, online library catalogues. That's because no sane librarian would outsource their profession to an unaccountable private entity that refuses to disclose the workings of its system -- in other words, how does Google's book algorithm work, how are the search results ranked? And yet so many librarians are behind this plan. Am I to conclude that they've all gone insane? Or are they just so anxious about the pace of technological change, driven to distraction by fears of obsolescence and diminishing reach, that they are willing to throw their support uncritically behind a company that, like a frontier huckster, promises miracle cures and grand visions of universal knowledge?
We may be resigned to the steady takeover of college bookstores around the country by Barnes and Noble, but how do we feel about a Barnes and Noble-like entity taking over our library systems? Because that is essentially what is happening. We ought to consider the Google library pact as the latest chapter in a recent history of consolidation and conglomeratization in publishing, which, for the past few decades (probably longer, I need to look into this further) has been creeping insidiously into our institutions of higher learning. When Google struck its latest deal with the University of California, and its more than 100 libraries, it made headlines in the technology and education sections of newspapers, but it might just as well have appeared in the business pages under mergers and acquisitions.
So what? you say. Why shouldn't leaders in technology and education seek each other out and forge mutually beneficial relationships, relationships that might yield substantial benefits for large numbers of people? Okay. But we have to consider how these deals among titans will remap the information landscape for the rest of us. There is a prevailing attitude today, evidenced by the simplistic public debate around this issue, that one must accept technological advances on the terms set by those making the advances. To question Google (and its collaborators) means being labeled reactionary, a dinosaur, or technophobic. But this is silly. Criticizing Google does not mean I am against digital libraries. To the contrary, I am wholeheartedly in favor of digital libraries, just the right kind of digital libraries.
What good is Google's project if it does little more than enhance the world's elite libraries and give Google the competitive edge in the search wars (not to mention positioning them in future ebook and print-on-demand markets)? Not just our little institute, but larger interest groups like the CIC ought to be voices of caution and moderation, celebrating these technological breakthroughs, but at the same time demanding that Google Book Search be more than a cushy quid pro quo between the powerful, with trickle-down benefits that are dubious at best. They should demand commitments from the big libraries to spread the digital wealth through cooperative web services, and from Google to abide by certain standards in its own web services, so that smaller libraries in smaller ponds (and the users they represent) can trust these fantastic and seductive new resources. But Ekman, who represents 570 of these smaller ponds, doesn't raise any of these questions. He just joins the chorus of approval.
What's frustrating is that the partner libraries themselves are in the best position to make demands. After all, they have the books that Google wants, so they could easily set more stringent guidelines for how these resources are to be redeployed. But why should they be so magnanimous? Why should they demand that the wealth be shared among all institutions? If every student can access Harvard's books with the click of a mouse, then what makes Harvard Harvard? Or Stanford Stanford?
Enlightened self-interest goes only so far. And so I repeat, that's why people like Ekman, and organizations like the CIC, should be applying pressure to the Harvards and Stanfords, as should organizations like the Digital Library Federation, which the Michigan-Google contract mentions as a possible beneficiary, through "cooperative web services," of the Google scanning. As stipulated in that section (4.4.2), however, any sharing with the DLF is left to Michigan's "sole discretion." Here, then, is a pressure point! And I'm sure there are others that a more skilled reader of such documents could locate. But a quick Google search (acceptable levels of irony) of "Digital Library Federation AND Google" yields nothing that even hints at any negotiations to this effect. Please, someone set me straight, I would love to be proved wrong.
Google, a private company, is in the process of annexing a major province of public knowledge, and we are allowing it to do so unchallenged. To call the publishers' legal challenge a real challenge is to misidentify what really is at stake. Years from now, when Google, or something like it, exerts unimaginable influence over every aspect of our informated lives, we might look back on these skirmishes as the fatal turning point. So that's why I turn to the librarians. Raise a ruckus.
UPDATE (8/25): The University of California-Google contract has just been released. See my post on this.
the wisdom of fortune cookies: "your reputation is your wealth" 08.23.2006, 6:40 AM
Over cold jasmine tea and quartered oranges in Chinatown, I got this little gem of a fortune. I chuckled at its relevance to our work at the institute. With the rise of self-publishing (blogs, wikis, and POD), being Google-searchable, and content being freely given away, I wonder what our readers think about reputations being our wealth. Is this truth, nothing new, tomfoolery, or just a fad? Has the concept of "reputation" changed? Have you and your work felt an effect as well? If so, how? I'm looking forward to hearing your thoughts.
speed dating sophie 08.22.2006, 9:06 AM
Last Tuesday I was formally introduced to Sophie. Our first date left me dazed and confused. She is a powerful multimedia application from New York, well funded and growing under healthy cosmopolitan influences, while I am a digitally challenged graduate student with a dreadful Third World education. Despite the obvious mismatch, Sophie was surprisingly responsive. For a program that is still a month away from even entering beta purgatory, to freeze up once in a while is perfectly normal. My reaction, on the other hand, was childish and immature. I protested out loud, argued with developers, worried about details, and became permanently infatuated. Now I can't stop thinking about Sophie.
The problem is that she lies at the core of everything I want to do. During the next couple of decades I would like to participate in the collaborative development of multimedia ecosystems. Ok, that sounds awfully pretentious. What I really want is to work and play with a bunch of friends in a huge toy factory. My favorite toys are multimedia creatures.
For a while (and halfway-tongue-in-cheek) I have been training myself to think about all kinds of cultural artifacts in evolutionary terms. When I play around with a good old printed book, for example, I try to think about it as a potentially feature-rich creature that, so far as I am concerned, is working very well in frozen text mode. All other noisier and flashier possible forms of behavior have been muted, so to speak, in order to maximize the cultural value of the reading experience.
I think Sophie fascinates me precisely because her future depends so much on achieving a creative balance between simplicity and complexity. If everything goes well, Sophie will be able to handle very intricate tasks in rather plain terms. The program already has an unobtrusive but intuitive interface that would allow first time users to assemble rich multimedia documents in a matter of minutes. A highly sophisticated Sophie document can be embedded as a whole into another Sophie document. Placing an entire library of interconnected multimedia artifacts in a corner of a page within a Sophie "book" would only take a few mouse clicks.
An open source multimedia assembling program is always welcome. Sophie will be particularly good at doing difficult things the easy way, and that is a bonus in an industry cluttered with "advanced" applications that seem to be going in the opposite direction. Given the proper planetary alignment, a nurturing community could grow around the development of extensions and additions to the program. Eventually, Sophie would be unrecognizable, and that is the best thing that can happen to an evolving living thing.
Did I mention that the application has also been conceived as a platform-independent tool for collaborative multimedia assembling? That's right; Sophie would eventually allow people to join efforts in authoring and managing complex documents over a network. These are my kind of toys: evolving multimedia artifacts, born on a network, raised by a virtual village, and assembled with a tool that is being developed along similar principles. Very cool stuff.
Strategically speaking, however, the development of Sophie, and the model of collaborative multimedia creation in general, could be better implemented using the notion of software as a service. Downloading an application that resides on the desktop and then using it to handle files over a network is relatively cumbersome. This model requires periodic updating of the program and a high volume of general traffic up and down the servers.
Under the current paradigm, Sophie is being developed just like Microsoft Word, but I would rather work on something more along the lines of Writely. An Ajax-based version of Sophie within a regular web browser like Firefox would maximize the networking capabilities of the application. Full assembly functionality could be hard to achieve this way, but in a tradeoff between fancy multimedia features and wider potential for collaboration I would tend to favor the latter. The evolutionary success of a networked book will depend on the qualities of the network rather than the features of the book.
Online collaboration can be achieved more efficiently by sideloading rather than constantly uploading and downloading files. In an ideal world we would only need to upload original raw files, and only once. Everything else would happen at the server level. Every user would have access to every file and any combination of files at every step during the assembling process, from any computer connected to the internet.
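To make the sideloading idea concrete, here is a toy sketch of the model in Python. This is purely illustrative, not Sophie's actual architecture or API: the `AssetStore` and `Document` classes are hypothetical names I've invented to show the principle that raw files are uploaded once and all subsequent assembly happens on the server by reference.

```python
class AssetStore:
    """Server-side store: each raw media file is uploaded exactly once."""
    def __init__(self):
        self._assets = {}      # asset_id -> raw bytes
        self._next_id = 0

    def upload(self, data: bytes) -> int:
        asset_id = self._next_id
        self._assets[asset_id] = data
        self._next_id += 1
        return asset_id

    def get(self, asset_id: int) -> bytes:
        return self._assets[asset_id]


class Document:
    """A composition is just an ordered list of asset references --
    no file travels back down to the client during assembly."""
    def __init__(self, store: AssetStore):
        self.store = store
        self.parts = []        # ordered list of asset_ids

    def append(self, asset_id: int):
        self.parts.append(asset_id)

    def render(self) -> bytes:
        # Only at render time does the server touch the raw bytes.
        return b"".join(self.store.get(i) for i in self.parts)


store = AssetStore()
video = store.upload(b"<video bytes>")
text = store.upload(b"<chapter text>")

# Two collaborators assemble different documents from the same uploads.
doc_a = Document(store)
doc_a.append(text)
doc_a.append(video)

doc_b = Document(store)
doc_b.append(video)            # reuses the upload -- nothing re-transferred
```

The point of the sketch is that `doc_b` reuses the video upload without any second transfer; every collaborator works against the same server-side pool of assets from any machine.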
This late in its development, altering an application like Sophie at so radical a level is too difficult. Perhaps the best way to go about it is to release a beta version of the program, in order to broaden its community of developers, and hope that a team of Ajax-savvy people decides to create a browser-based alternative interface for Sophie. In the meantime, she should consider setting up a series of dates with the guys at Ajax13. I promise I won't be jealous.
"highbrow" video games? 08.22.2006, 8:57 AM
Recently on the gaming blog Gamasutra, Ernest Adams asks "why aren't there highbrow video games?" His article comes one month after an Esquire article in which Chuck Klosterman wondered why there isn't good video game criticism and claimed that video games need their own Lester Bangs. As the video game market grows, it is not surprising that fans and advocates of gaming want the form to grow and mature as well.
Adams' call for "highbrow" games is rooted in a desire to add credibility and legitimacy to video games. As someone who has dedicated his career to making and writing about video games, he must find it frustrating to endure the never-ending criticism of the violence in games by various groups looking for easy political targets. I can appreciate the motivations behind Adams' conclusion; however, his description of highbrow video games is ultimately too narrowly defined and overlooks impressive experiments in video games.
I hesitate to even try to deem games "high" or "low" because the terms are not that useful. Adams specifically points out that the films he aspires for video games to emulate are not "art films," which he describes as "short low-budget titles filled with impenetrable weirdness." His definition of highbrow therefore edges toward the problematic "I know it when I see it" definition of art. Further, we can gain insight into culture and ourselves by interacting with both high and low culture, and valuing one form over another is problematic.
From his description of a highbrow video game, I think what Adams is really asking for is better interactive narratives in gaming. He alludes to the films of Ismail Merchant and James Ivory, who are best known for adapting the novels of E.M. Forster, often with screenwriter Ruth Prawer Jhabvala. Their films tend to be beautiful, well-crafted analyses of class, though they are not generally known for pushing the boundaries of film.
Last year, Adams gave a talk, which he published on his website, in which he assesses the state of interactive narrative. It provides more insight into his train of thought. In it, one of his references in video game scholarship is Janet Murray's Hamlet on the Holodeck, which uses a theatrical frame of reference in postulating the future of interactive narrative. Adams also offers a model of a "structured" approach to the narrative of video games and reveals that he is particularly wedded to the idea of single-player games over the shared gaming experience of MMORPGs, which are increasingly popular. In his current essay, he states that the highbrow video game "would reward close attention and playing more than once." This implies that he still leans toward single-player role playing games in his conceptualization of highbrow games.
However, the video games pushing the form in more "artistic" ways are emerging outside the bounds of the single-player game. For example, we-make-money-not-art reported on The Endless Forest, a gorgeous MMORPG in which players assume the identity of a deer. Developed by the Belgian studio Tale of Tales, The Endless Forest has an elegant interface and darkly rich art direction. Although it lacks an explicit narrative, the gameplay engages users without the typical violence and sexually charged themes of many games. The Endless Forest also limits the use of language: it has no chat function, and players are "named" with pictograms rather than words. Even as more of these kinds of games are created, however, they are unlikely to lessen the criticism of the negative social effects of video games.
As for criticism, the notion of elevating video game criticism to a higher form is rather ironic, as it comes at the same time that the New York Times critic A.O. Scott finds himself defending film criticism. Though a film rather than a music critic, Scott describes the same predicament: movies that critics pan are often still huge box office successes. Media critics want the new and interesting, which is to be expected if your job is to watch and write about movies, music, or video games every day; their standards are quite different from the typical audience member's. Lester Bangs was a polarizing figure who wanted to raise the standards of writing about music, and he appeared at a time when people were ready for such standards. It may be that a critical mass of readers for a similar kind of gaming criticism is beginning to emerge.
As previously stated, most gamers will still want "mainstream" titles. Because games are expensive, they will still rely on the criticism Klosterman dismisses as "customer advice." That is, many gamers, if not most, will still mostly be interested in reviews that describe gameplay, graphics, and sound design rather than themes and questions of meaning. Many gamers don't like the academic scholarly writing on video games, which exists in abundance but is not what Klosterman wants to read. We learned about their attitudes from the initial reactions and comments posted across the gaming blogosphere about our project "GAM3R 7H30RY." It's not clear to me what is bad about gaming publications serving the desires of the video game playing community.
My guess is that both boundary-pushing video games and criticism will begin to get more exposure fairly soon. For the games themselves, I would look toward Europe and Asia, where more government funding exists for these kinds of endeavors. I don't expect many of the big gaming companies in the US to create experimental games of this nature, although they might in the future, once their economic viability is proven; major movie studios, after all, started funding smaller films only after they saw the successful crossover of films of the Merchant Ivory variety. That said, Rockstar (the maker of Grand Theft Auto) has the upcoming and already controversial game Bully, in which you must navigate a boarding school as a new student. The New York Times described it as having "an open world for the players to explore, tightly defined and memorable characters, a strong story line, [and] high-end voice acting," which is precisely what Adams calls for in his article.
Regardless of who moves video games and their coverage forward, it's bound to happen, although these new forms may not look exactly as Adams and Klosterman describe or wish. Media take time to evolve: compare the "highbrow" television series HBO produces with the rather "lowbrow" television of the 1950s. (I will admit that I don't prefer one over the other.) For a great example of how a medium transforms the perspective of an artist, see Scott McCloud's description of the movement from comic book fan to student to professional to genre-pushing pioneer in Understanding Comics. If someone really wants to write video game criticism in the style of Lester Bangs, the current low barriers to entry in electronic self-publishing allow her to do so. Creating video games, of course, requires far more resources. However, in closing, Adams states, "maybe I'll design one myself, just for the fun of it."
call for papers: what to do with a million books 08.21.2006, 3:42 PM
The Humanities Division at the University of Chicago and the College of Science and Letters at the Illinois Institute of Technology are hosting an intriguing colloquium on the future of research in the humanities in response to the rapid growth of digital archives. They are currently accepting paper proposals, which are due at the end of August.
Here is the call for papers:
What to Do with a Million Books: Chicago Colloquium on Digital Humanities and Computer Science
Sponsored by the Humanities Division at the University of Chicago and the College of Science and Letters at the Illinois Institute of Technology.
Chicago, November 5th & 6th, 2006
Submission Deadline: August 31st, 2006
The goal of this colloquium is to bring together researchers and scholars in the Humanities and Computer Sciences to examine the current state of Digital Humanities as a field of intellectual inquiry and to identify and explore new directions and perspectives for future research.
In the wake of recent large-scale digitization projects aimed at providing universal access to the world's vast textual repositories, humanities scholars, librarians and computer scientists find themselves newly challenged to make such resources functional and meaningful.
As Gregory Crane recently pointed out (1), digital access to "a million books" confronts us with the need to provide viable solutions to a range of difficult problems: analog to digital conversion, machine translation, information retrieval and data mining, to name a few. Moreover, mass digitization leads not just to problems of scale: new goals can also be envisioned, for example, catalyzing the development of new computational tools for context-sensitive analysis. If we are to build systems to interrogate usefully massive text collections for meaning, we will need to draw not only on the technical expertise of computer scientists but also learn from the traditions of self-reflective, inter-disciplinary inquiry practiced by humanist scholars.
The book as the locus of much of our knowledge has long been at the center of discussions in digital humanities. But as mass digitization efforts accelerate a change in focus from a print-culture to a networked, digital-culture, it will become necessary to pay more attention to how the notion of a text itself is being re-constituted. We are increasingly able to interact with texts in novel ways, as linguistic, visual, and statistical processing provide us with new modes of reading, representation, and understanding. This shift makes evident the necessity for humanities scholars to enter into a dialogue with librarians and computer scientists to understand the new language of open standards, search queries, visualization and social networks.
Digitizing "a million books" thus poses far more than just technical challenges. Tomorrow, a million scholars will have to re-evaluate their notions of archive, textuality and materiality in the wake of these developments. How will humanities scholars, librarians and computer scientists find ways to collaborate in the "Age of Google?"
November 5th & 6th, 2006
The University of Chicago
Ida Noyes Hall
1212 East 59th Street
Chicago, IL 60637
Greg Crane (Professor of Classics, Tufts University) has been engaged since 1985 in planning and development of the Perseus Project, which he directs as the Editor-in-Chief. Besides supervising the Perseus Project as a whole, he has been primarily responsible for the development of the morphological analysis system which provides many of the links within the Perseus database.
Ben Shneiderman is Professor in the Department of Computer Science, founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and Member of the Institute for Advanced Computer Studies and the Institute for Systems Research, all at the University of Maryland. He is a leading expert in human-computer interaction and information visualization and has published extensively in these and related fields.
John Unsworth is Dean of the Graduate School of Library and Information Science and Professor of English at the University of Illinois at Urbana-Champaign. Prior to that, he was on the faculty at the University of Virginia where he also led the Institute for Advanced Technology in the Humanities. He has published widely in the field of Digital Humanities and was the recipient last year of the Lyman Award for scholarship in technology and humanities.
Prof. Helma Dik, Department of Classics, University of Chicago
Dr. Catherine Mardikes, Bibliographer for Classics, the Ancient Near East, and General Humanities, University of Chicago
Prof. Martin Mueller, Department of English and Classics, Northwestern University
Dr. Mark Olsen, Associate Director, The ARTFL Project, University of Chicago
Prof. Shlomo Argamon, Computer Science Department, Illinois Institute of Technology
Prof. Wai Gen Yee, Computer Science Department, Illinois Institute of Technology
Call for Participation
Participation in the colloquium is open to all. We welcome submissions for:
1. Paper presentations (20 minute maximum)
2. Poster sessions
3. Software demonstrations
Suggested submission topics
* Representing text genealogies and variance
* Automatic extraction and analysis of natural language style elements
* Visualization of large corpus search results
* The materiality of the digital text
* Interpreting symbols: textual exegesis and game playing
* Mashup: APIs for integrating discrete information resources
* Intelligent Documents
* Community based tagging / folksonomies
* Massively scalable text search and summaries
* Distributed editing & annotation tools
* Polyglot Machines: Computerized translation
* Seeing not reading: visual representations of literary texts
* Schemas for scholars: field and period specific ontologies for the humanities
* Context sensitive text search
* Towards a digital hermeneutics: data mining and pattern finding
Please submit a (2 page maximum) abstract in either PDF or MS Word format to email@example.com.
Deadline for Submissions: August 31st
Notification of Acceptance: September 15th
Full Program Announcement: September 15th
General Inquiries: firstname.lastname@example.org
Mark Olsen, email@example.com, Associate Director, ARTFL Project, University of Chicago.
Catherine Mardikes, firstname.lastname@example.org, Bibliographer for Classics, the Ancient Near East, and General Humanities, University of Chicago.
Arno Bosse, email@example.com, Director of Technology, Humanities Division, University of Chicago.
Shlomo Argamon, firstname.lastname@example.org, Department of Computer Science, Illinois Institute of Technology.
kairos turns ten 08.21.2006, 9:06 AM
The issue contains interviews with people reflecting on their experiences with rhetoric/composition, digital technology, and Kairos. Jim Kalmbach also gives a good overview of the scholarship that has appeared in Kairos over the past ten years. He concludes with the following statement:
"...we do not need ever more stunning hypermediated essays; we need new forms of scholarship; we need to think about new ways of using digital writing spaces to make meaning."
His statement is a good transition to a new section in Kairos called Inventio, which will publicly track an article through the editorial process from inception to publication. As described, the goals of this section are:
(a) to provide a publication venue for experimental scholarly texts that push technological boundaries, and (b) to make Kairos' editorial and peer-review decisions for innovative scholarly webtexts more explicit.
I'm looking forward to the first article from this section, which will appear in the fall of next year. It is another project along the same lines as our own, MediaCommons. People coming from many directions are clearly realizing the potential of pursuing new ways to develop and share academic scholarship. I'm excited to see that it is starting to occur on many fronts, and I am happy that the institute, as well as this pioneering journal, is part of it.
shifting forms of graffiti 08.18.2006, 7:27 AM
A few weekends ago, I was returning to Manhattan from upstate New York. Coming down the FDR along the East River, we passed Keith Haring's "Crack is Wack" mural at 128th and 2nd. I remember the first time I saw it in the 80s on a family day trip into the city. The work strikes me as quite extraordinary, even 20 years after its creation in 1986. By that time, Haring was established in the art world, having already shown at the Venice Biennale and the Whitney Biennial and had solo shows at the Tony Shafrazi Gallery and the Leo Castelli Gallery. Even though Haring was part of that contemporary/highbrow art world, he maintained a connection to his roots, skirting the lines between public street art and illegal graffiti. "Crack is Wack" mixed graffiti street culture, political and social messages, and high art, though the mural was quickly placed under the protection and jurisdiction of the City Department of Parks.
Haring took cues from graffiti, among other influences, and created his own style and form. Revisiting "Crack is Wack" got me thinking about graffiti and how it has evolved over the past few decades. The funny thing about living in New York is that, after a while, you start seeing through the visual chaos of your surroundings. When you take a moment to stop and look, it is amazing what you can actually see. The things you walk by every day especially stand out.
Graffiti, which had faded into the background visual noise of New York, was back on my radar screen. It was, of course, everywhere, but it had also changed since I really paid attention to it. Ben posted about a show on graffiti at the Brooklyn Museum, and that was just one aspect of how graffiti has expanded beyond the traditional notions of the form. At the institute, we spend a lot of time thinking about the evolution of media, and it seems that graffiti is no different.
On a side street near Little Italy, there used to be an advertisement for the Sony PSP done by the graffiti artists Tats Cru. Now the brick space holds a placeholder advertisement for these graffiti artists for hire. An interesting comment was left by a rival tagger: "sell out." And then someone else left their tag over the unsolicited commentary. I love the ongoing asynchronous dialogue occurring on the brick wall of this corner deli. It is not surprising that others would be upset at Tats Cru getting paid by advertisers for marketing. Their website shows a piece they did for BP; working for the oil industry will certainly raise doubts among purists about their authenticity and street credibility.
Perhaps the work of Tats Cru has not branched off into a new genre of graffiti but circled back to another form. Take this painted billboard for the debut solo record by Radiohead frontman Thom Yorke, which appeared in Williamsburg in the weeks leading up to its release. Within moments, my initial thought that it was some hardcore fan's ode to British rock was replaced by the realization that it was paid for by a record label.
Referencing graffiti in advertising is nothing new; turning actual graffiti into advertising was the obvious next step. If graffiti is paid for and created for marketing purposes, at what point does it stop being graffiti? Has it turned into something else? Is it just a style of art using spray paint to create forms referencing hip-hop?
Sometime after seeing Haring's mural, I passed this stoop a few blocks north of Chinatown. The work was similar to another piece by the same artist I had seen on the Lower East Side, though I couldn't find that one again.
But the work kept reappearing. Then I noticed another one a few blocks from the institute's office in Brooklyn. I probably walked past it hundreds of times before stopping to notice it. You can see where someone tried to tear it off because, in fact, the figures are not drawn on the wall but on paper, and the image is then transferred to the wall. Is this cheating? Suddenly, form and material are being challenged.
I finally stumbled upon Ping Magazine when a friend sent me an article from the site, and learned that these pieces were created by an artist who goes by Swoon. In an interview with the New York Times, she describes herself as a street artist but considers her work graffiti. More importantly, she does not dwell on the legal status or materiality of her work, but focuses on its location in public spaces and its direct interaction with people.
Keith Haring ended up being a great starting point, because his work is a hybrid of many forms and influences, including graffiti but also things beyond it. His mural reminded me that graffiti has embodied a range of politics, materials, and cultures for decades. Forms of expression emerge, branch off, and circle back, and subsequent generations focus on different areas, be they monetary or expressive. Today, art and advertising are often re-appropriations of each other, as forms blend into one another. Empty spaces are filled with media by both artists and advertisers. The arts organization the Wooster Collective shows how broadly the concept of street art can be extended. Trying to restrict these forms to bounded definitions is marginally useful, and often futile.
In this investigation, I was surprised at what I found, and amused at how often I circled back to the question of what graffiti is. The process of re-seeing something is itself not that surprising, particularly in the context of our work at the institute. Although we spend more time on textual media, many of the questions remain the same. As we witness the evolving forms of text and the book, we can learn from other forms that turn into something slightly familiar but also remarkably new.
google on mars 08.17.2006, 1:43 AM
Apparently, this came out in March, but I've only just stumbled on it now. Google has a version of its maps program for the planet Mars, or at least the part of it explored and documented by the 2001 NASA Mars Odyssey mission. It's quite spectacular, especially the psychedelic elevation view:
There's also various info tied to specific coordinates on the map: locations of dunes, craters, plains, etc., as well as stories from the Odyssey mission, mostly descriptions of the Martian landscape. It would be fun to do an anthology of Mars-located science fiction with the table of contents mapped, or an edition of Bradbury's Martian Chronicles. Though I suppose there'd be a fair bit of retrofitting of the atlas to tales written out of pure fancy and not much knowledge of Martian geography (Marsography?). If nothing else, there are the seeds of a great textbook here. Does the Google Maps API extend to Mars, or is it just an earth thing?
can advertising liberate textbooks? 08.16.2006, 10:38 AM
The aptly named Freeload Press is giving away free PDFs (free as in free beer, or free market) of over 100 textbook titles (mostly in business and finance, though more are planned). All students have to do is fill out an online survey and the download is theirs, to use on a computer or to print out. Where does the money come from? Ads. Ads in the pages of the textbooks.
An ad for FedEx Kinkos in a sample Freeload textbook. Hmmm, wonder where I should get this thing printed?
Ads in textbooks are undoubtedly a depressing thought. Even more depressing, though, is the outlandish cost of textbooks, and the devious, often unethical ways that textbook publishers seek to thwart the used book market. This Washington Post story gives a quick overview of the problem and profiles the St. Paul, Minnesota-based Freeload.
Though making textbooks free to students is an admirable aim, simply shifting the cost to advertisers is not a good long-term solution, further eroding as it does the already much-diminished borderline between business and education (I suppose, though, that ads in business ed. textbooks in some ways enact the underlying precepts being taught). There are far better ideas out there for, as Freeload promises, "liberating the textbook" (a slogan that conjures the Cheney-esque: the textbooks will greet us as liberators).
One of them comes from Adrian Lopez Denis, a PhD candidate in Latin American history at UCLA. I'm reproducing a substantial chunk of a brilliant comment he posted last month to the Chronicle of Higher Ed's Wired Campus blog in response to their coverage of our announcement of MediaCommons. We just met with Adrian while in Los Angeles and will likely be collaborating with him on a project based on the ideas below. Basically, his point is that teachers and students should collaborate on the production of textbooks.
Students are expected to produce a certain amount of pages that educators are supposed to read and grade. There is a great deal of redundancy and waste involved in this practice. Usually several students answer the same questions or write separately on the same topic, and the valuable time of the professionals that read these essays is wasted on a rather repetitive task.
As long as essay writing remains purely an academic exercise, or an evaluation tool, students would be learning a deep lesson in intellectual futility along with whatever other information the course itself is trying to convey. Assuming that each student is writing 10 pages for a given class, and each class has an average of 50 students, every course is in fact generating 500 pages of written material that would eventually find its way to the campus trashcans. In the meantime, the price of college textbooks is rising four times faster than the general inflation rate.
The solution to this conundrum is rather simple. Small teams of students should be the main producers of course material and every class should operate as a workshop for the collective assemblage of copyright-free instructional tools. Because each team would be working on a different problem, single copies of library materials placed on reserve could become the main source of raw information. Each assignment would generate a handful of multimedia modular units that could be used as building blocks to assemble larger teaching resources. Under this principle, each cohort of students would inherit some course material from their predecessors and contribute to it by adding new units or perfecting what is already there. Courses could evolve, expand, or even branch out. Although centered on the modular production of textbooks and anthologies, this concept could be extended to the creation of syllabi, handouts, slideshows, quizzes, webcasts, and much more. Educators would be involved in helping students to improve their writing rather than simply using the essays to gauge their individual performance. Students would be encouraged to collaborate rather than to compete, and could learn valuable lessons regarding the real nature and ultimate purpose of academic writing and scholarly research.
Online collaboration and electronic publishing of course materials would multiply the potential impact of this approach.
What's really needed is for textbooks to be liberated from textbook publishers. Let schools produce their own knowledge, and spread the wealth.
encouraging 08.15.2006, 8:51 PM
The following was posted on Sunday by Mitch Stephens on Without Gods (for those of you still unfamiliar with it, Without Gods is the public work diary for Mitch's forthcoming history of atheism, which we've been hosting for the past eight months -- wow, has it been that long?!).
The quality of the comments here lately has seemed, to me, extraordinarily high.
One of the purposes of blogging a book as it is being written is to have ideas tested and, possibly, sharpened, transformed or overturned. This has repeatedly occurred -- although I have not often weighed in with comments of my own acknowledging that. Please take this as a blanket acknowledgement and expression of appreciation.
GAM3R 7H30RY may be flashier, and more technically ambitious, but in many ways Without Gods has been a more revelatory experiment in networked writing. As Mitch acknowledges, the sustained activity, and quality, of the comment streams has been impressive, and above all, interesting to read. It's fascinating to follow this evolving collaboration between author and reader, and to watch Mitch come into his own as a skilled moderator of blog-based discussion. It remains to be seen how these conversations will end up shaping the finished book, but for some examples of a tangible collaboration taking place, take a look at these recent "Author Needs Advice" posts (part 1, part 2), in which Mitch asks for feedback on specific sections of the work-in-progress. Whatever the outcome, it's clear that this reconfiguration of the writing process is yielding real rewards.
notable wikipedians 08.15.2006, 7:21 AM
I just came across this story in The Toronto Globe and Mail about a young man from Ottawa by the name of Simon Pulsifer who, under the moniker SimonP, is Wikipedia's most prolific contributor, "with 78,000 entries edited and 2,000 to 3,000 new articles to his name. He can't remember the exact number."
Pulsifer is also the subject of an article in Wikipedia, which, like many of the vanity stubs devoted to the encyclopedia's editors, was nominated for deletion, only to be voted a keeper after some discussion. Justin Hall, a colleague of ours at USC, often cited as the first blogger, and a distinguished Wikipedian in his own right, offered the following in defense of the Pulsifer page:
As Wikipedia grows in importance and global reach, the most passionate participants in this collective editing experiment become important global intellectuals. Simon Pulsifer is one of the first public Wikipedians - with a great number of articles, a passion for editing under-developed subjects, and a strong sense of the mission of Wikipedia. He might not care to have an article about him here, but already mainstream media outlets (a Canadian newspaper) and online news sites (digg.com) have saluted his work. That attention and importance is only likely to increase. Let's keep this article because Simon Pulsifer has already reached a greater number of people than many of the "historic" individuals described on Wikipedia.
Both Pulsifer and Hall are members of what could be considered the Wikipedia elite, the "notable Wikipedians," many of whom probably deserve a good share of the credit for Wikipedia's success. Now, though, I'm more interested in how Wikipedia's corps of editors might gradually expand to include a greater slice of the public: teachers, students, and people from all walks of life.
Zealous Wikipedia hobbyists like Pulsifer, god love 'em, will hopefully, over time, be considered the exceptions that prove the general rule of participation: editing as a more modest pursuit that one builds into one's intellectual life and lifelong learning regimen. If enough people begin to take part in this way, Wikipedia could become more diverse, more exhaustive, and more accurate than it already is. The Pulsifers and Halls might end up being its governors, its civil servants, its politicians. Of course, it is the process that is most important: the kind of civic participation and engagement over points of dissent that collaborating on Wikipedia entails. Bob explained this eloquently last week. Or, as Pulsifer describes it:
You write an article and you think you've made it as good as it can be and then you put it out there for everyone to see and edit. And within just a few minutes, you have started a dialogue over how best to represent a subject.
pinkwater dips his toes (and quill) into the web 08.14.2006, 9:35 AM
The Institute is back in Los Angeles at USC, our home away from home in academe, where, for the next two days, we're holding an introductory "boot camp" session with a small group of professors who will begin using Sophie in their classes this fall. USC is just southwest of downtown LA, right near the La Brea Tar Pits, which, incidentally, is the starting point of the latest book by one of my favorite childhood writers, Daniel Manus Pinkwater, who, I just read in Publishers Weekly, is publishing his newest book online.
Pinkwater, author of Lizard Music, The Hoboken Chicken Emergency, the Snarkout Boys novels, and many, many others, is publishing his newest effort, The Neddiad, "a rip-roaring, foot-stomping, blood-curdling adventure, with station stops in Chicago, Flagstaff, and Hollywood, California," free on his website as a serial.
With the blessing of his publisher, Houghton Mifflin, Pinkwater has set up a simple, very readable little site, where readers can imbibe the book, in slightly raw form, one chapter per week.
What we are presenting is the original author's manuscript. There are some typos, and editorial corrections, and changes by me are not included. So the published book will be slightly different. I am a careful writer, and worked with a fine editor, so the differences are not great, but I thought it might be of interest for some to see what the book was like when handed in.
In many ways, this is a very Pinkwater move -- plugging his book into an electrical socket and watching it glow. There's also a discussion forum, so it's something of a networked book:
Readers are welcome to post comments, criticisms, complaints, and exchange remarks--a link will be provided, and I may periodically chime in to discuss and argue with the posters.
Pinkwater told PW:
When I was younger a circus hand showed me how they let kids sneak into the circus. If they were bold enough to try, they got to stay. I'm trying to keep that feeling for kids with this project. It lets kids sneak into the tent. We're deliberately keeping it from looking slick; there are no ads. Of course, it's with Houghton Mifflin's kind permission that we can offer this, but it's still a bit of homebrew, slightly different from the finished version. We hope that the readers who enjoy what they find online will want to buy the book, too.
If nothing else, Pinkwater has grasped an important (and counterintuitive) principle of web publishing: that giving stuff away can help sell books. It helps facilitate a discussion about that stuff, and can make readers feel better disposed toward you and your work (i.e. more likely to buy it in print). One chapter per week is a rather dribbling pace, however (recall the somewhat disingenuous serialization of Pulse by FSG), and might be a bit like Chinese water torture for Pinkwater's ardent fans. But we'll see.
the trouble with wikis in china 08.11.2006, 7:35 AM
I've just been reading about this Chinese online encyclopedia, modeled after Wikipedia, called "e-Wiki," which last month was taken offline by its owner under pressure from the PRC government. Reporters Without Borders and The Sydney Morning Herald report that it was articles on Taiwan and the Falun Gong (more specifically, an article on an activist named James Lung with some connection to FG) that flagged e-Wiki for the censors.
Meanwhile, "Baidupedia," the user-written encyclopedia run by leading Chinese search engine Baidu is thriving, with well over 300,000 articles created since its launch in April. Of course, "Baidu Baike," as the site is properly called, is heavily censored, with all edits reviewed by invisible behind-the-scenes administrators before being published.
Wikipedia's article on Baidu Baike points out the following: "Although the earlier test version was named 'Baidu WIKI', the current version and official media releases say the system is not a wiki system." Which all makes sense: to an authoritarian, wikis, or anything that puts that much control over information in the hands of the masses, are anathema. Indeed, though I can't read Chinese, pages on Baidu Baike do not appear to have the customary "edit" links alongside sections of text; rather, there's a text entry field at the bottom of the page with what seems to be a submit button. There's a big difference between a system in which edits are submitted for moderation and a totally open system where changes have to be managed, in the open, by the users themselves.
All of which underscores how astonishingly functional Wikipedia is despite its seeming vulnerability to chaotic forces. Wikipedia truly is a collectively owned space. Seeing how China is dealing with wikis -- or at least with their most visible cultural deployment, the collective building of so-called "reliable knowledge" in encyclopedias -- highlights the political implications of this oddly named class of web pages.
Dan, still reeling from three days of Wikimania, as well as other meetings concerning MIT's One Laptop Per Child initiative, relayed the fact that the word processing software being bundled into the 100-dollar laptops will be wiki-based, putting the focus on student collaboration over mesh networks. This may not sound like such a big deal, but just take a moment to ponder the implications of having all class writing assignments carried out on wikis, and the different sorts of skills and attitudes that collaborating on everything might nurture. There are a million things that could go wrong with the One Laptop Per Child project, but you can't accuse its developers of lacking bold ideas about education.
But back to the Chinese. An odd thing noted on the talk page of the Wikipedia article is that Baidu Baike actually has an article about Wikipedia that includes more or less truthful information about Wikipedia's blockage by the Great Firewall in October '05, as well as other reasonably accurate, and even positive, descriptions of the site. Wikipedia contributor Miborovsky notes:
Interestingly enough, it does a decent explanation of WP:NPOV (Wikipedia's Neutral Point of View policy) and paints Wikipedia in a positive light, saying "its activities precisely reflects the web-culture's pluralism, openness, democractic values and anti-authoritarianism."
But look for Wikipedia on Baidu's search engine (or on Google, Yahoo and MSN's Chinese sites for that matter) and you'll get nothing. And there's no e-Wiki to be found.
u.c. offers up stacks to google 08.10.2006, 6:49 AM
Less than two months after reaching a deal with Microsoft, the University of California has agreed to let Google scan its vast holdings (over 34 million volumes) into the Book Search database. Google will undoubtedly dig deeper into the holdings of the ten-campus system's 100-plus libraries than Microsoft, which is a member of the more copyright-cautious Open Content Alliance, and will focus primarily on books unambiguously in the public domain. The Google-UC alliance comes as major lawsuits against Google from the Authors Guild and Association of American Publishers are still in the evidence-gathering phase.
Meanwhile, across the drink, French publishing group La Martinière in June brought suit against Google for "counterfeiting and breach of intellectual property rights." Pretty much the same claim as the American industry plaintiffs'. Later that month, however, German publishing conglomerate WBG dropped a petition for a preliminary injunction against Google after a Hamburg court told them that they probably wouldn't win. So what might the future hold? The European crystal ball is murky at best.
During this period of uncertainty, the OCA seems content to let Google be the legal lightning rod. If Google prevails, however, Microsoft and Yahoo will have a lot of catching up to do in stocking their book databases. But the two efforts may not be in such close competition as it would initially seem.
Google's library initiative is an extremely bold commercial gambit. If it wins its cases, it stands to make a great deal of money, even after the tens of millions it is spending on scanning and indexing billions of pages, off a tiny commodity: the text snippet. But far from being the seed of a new literary remix culture, as Kevin Kelly would have us believe (and John Updike would have us lament), the snippet is simply an advertising hook for a vast ad network. Google's not the Library of Babel, it's the most sublimely sophisticated advertising company the world has ever seen (see this funny reflection on "snippet-dangling"). The OCA, on the other hand, is aimed at creating a legitimate online library, where books are not a means for profit, but an end in themselves.
Brewster Kahle, the founder and leader of the OCA, has a rather immodest aim: "to build the great library." "That was the goal I set for myself 25 years ago," he told The San Francisco Chronicle in a profile last year. "It is now technically possible to live up to the dream of the Library of Alexandria."
So while Google's venture may be more daring, more outrageous, more exhaustive, more -- you name it -- the OCA may, in its slow, cautious, more idealistic way, be building the foundations of something far more important and useful. Plus, Kahle's got the Bookmobile. How can you not love the Bookmobile?
clifford lynch takes on computation and open access 08.09.2006, 7:33 AM
Academic Commons mentions that Clifford Lynch has written a chapter, entitled "Open Computation: Beyond Human-Reader-Centric Views of Scholarly Literatures," in an upcoming book on open access edited by Neil Jacobs of the Joint Information Systems Committee. His chapter, which is available online, looks at the computational analyses that could be performed by collecting scholarly literature into a digital repository. These "large scholarly literature corpora" would be openly accessible and would support new branches of research currently not possible.
He takes cues from current work in text mining and large-scale collections of scholarly documents, such as the Perseus Digital Library hosted by Tufts University. Lynch also acknowledges the skepticism many scholars hold toward the value of text mining analysis in the humanities. Further, he discusses the limitations that current intellectual property regimes place on the creation of large, accessible scholarly corpora. Although many legal and technical obstacles exist, his proposal does seem more feasible than something like Ted Nelson's Project Xanadu, because the corpora he describes have boundaries, as well as supporters who believe that these bodies of literature should be accessible.
Small-scale examples show the challenges Lynch's proposal faces. I am reminded of the development of meta-analysis in the field of statistics. Although the term meta-analysis is much older, the contemporary usage refers to statistical techniques developed in the 1970s to aggregate results from a group of studies. These techniques are particularly popular in medical research and the public health sciences (often because individual data sets are small). Thirty years on, these methods are frequently used and their results published. However, the methods are still questioned in certain circles.
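For readers unfamiliar with the technique, here is a minimal sketch of its simplest form, fixed-effect meta-analysis by inverse-variance weighting (the study numbers below are invented for illustration): each study reports an effect size and a standard error, and more precise studies get proportionally more weight in the pooled estimate.

```python
# A minimal sketch of fixed-effect meta-analysis via inverse-variance
# weighting (the study numbers are invented for illustration).

def fixed_effect_meta(effects, std_errs):
    """Pool per-study effect sizes into one estimate.

    Each study is weighted by 1/SE^2, so more precise studies
    (smaller standard error) count for more.
    Returns (pooled_effect, pooled_std_err).
    """
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical small studies of the same treatment effect
effects = [0.30, 0.10, 0.25]
std_errs = [0.10, 0.20, 0.15]
pooled, pooled_se = fixed_effect_meta(effects, std_errs)
# The pooled estimate lies between the individual study estimates,
# and its standard error is smaller than any single study's.
```

The aggregation itself is trivial; the controversy Glass addresses is over whether the studies being averaged are comparable enough for the pooled number to mean anything.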
Gene Glass gives a good overview of meta-analysis, concluding with a reflection on how the criticisms of its use reveal fundamental problems with research in his field of education research. He notes the difference in the "fundamental unit" of his research, which is the study, versus that of physics, which is lower-level, more accessible and more generalizable. Here, even taking a small step back reveals new insights on the fundamentals of his scholarship.
Lynch speculates on how the creation of corpora might play out, but he doesn't dwell on the macro questions that we might investigate. Perhaps it is premature to think about these ideas, but the possible directions of inquiry are what lingered in my mind after reading Lynch's chapter.
I am struck by the challenge of graphically representing the analysis of these corpora. Like the visualizations of the blogosphere, these technologies could analyze not only the network of citations, but also word choice and textual correlations. Moreover, how does the body of literature change over time and space as ideas and thoughts emerge or fall out of favor? In the humanities, can we graphically represent theoretical shifts from structuralist to post-structuralist thought, or the evolution from pre-feminist to feminist to post-feminist thought? What effect did each of these movements have on each other over time?
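As a toy illustration of the kind of analysis imagined here (all paper names and dates below are invented), one could bucket citations of each work by decade, producing exactly the sort of rise-and-fall trend lines a visualization would plot:

```python
# A toy sketch of trend analysis over a citation corpus: count how
# often each work is cited per decade. All names and years invented.

from collections import defaultdict

citations = [
    # (citing paper, cited work, year of the citing paper)
    ("paper_a", "structuralism_1966", 1975),
    ("paper_b", "structuralism_1966", 1978),
    ("paper_c", "poststructuralism_1979", 1985),
    ("paper_d", "poststructuralism_1979", 1990),
    ("paper_e", "poststructuralism_1979", 1992),
]

by_decade = defaultdict(lambda: defaultdict(int))
for citing, cited, year in citations:
    by_decade[cited][year // 10 * 10] += 1

# Plain dicts, ready to plot as one line per school of thought
trend = {work: dict(decades) for work, decades in by_decade.items()}
```

Real corpora would of course need the textual analysis too (word choice, shared vocabulary), but even this crude citation bucketing shows how one school of thought displaces another over time.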
The opportunity also exists to explore possible ways of navigating corpora of this size. Using the metaphor of Google Earth, where one can zoom in from the entire Earth down to a single home, what can we gain from being able to view the sphere of scholarly literature in such a way? Glass took one step back to analyze groups of studies, and found insight into the nature of education research. What insights might we gain from viewing the entire corpus of scholarly knowledge from above?
Lynch describes expanding our analysis beyond the human scale. Even if his proposal never reaches fruition, his thought experiments revealed (at least to me) how knowledge acquisition occurs over a multidimensional spectrum. You can have a close reading of a text or merely skim the first sentence of each paragraph. Likewise, you can read an encyclopedia entry on a field of study or spend a year reading 200 books to prepare for a doctoral qualifying exam. However, as people, we have limits to the amount of information we can comprehend and analyze.
Purists will undoubtedly frown upon the use of computation that cannot be replicated by humans in scholarly research. Another example is the use of computation to solve proofs in mathematics, which is still controversial. The humanities will be no different, if not more resistant. A close reading of certain texts will always be important; however, the future that Lynch offers just may give that close reading an entirely new context and understanding. One of the great things about inquiry is that sometimes you do not know where you will end up until you get there.
jaron lanier's essay on "the hazards of the new online collectivism" 08.08.2006, 9:37 AM
In late May John Brockman's Edge website published an essay by Jaron Lanier -- "Digital Maoism: The Hazards of the New Online Collectivism". Lanier's essay caused quite a flurry of comment both pro and con. Recently someone interested in the work of the Institute asked me my opinion. I thought that in light of Dan's reportage from the Wikimania conference in Cambridge I would share my thoughts about Jaron's critique of Wikipedia . . .
I read the article the day it was first posted on The Edge and thought it so significant and so wrong that I wrote Jaron asking if the Institute could publish a version in a form similar to Gamer Theory that would enable readers to comment on specific passages as well as on the whole. Jaron referred me to John Brockman (publisher of The Edge), who acknowledged the request but never got back to us with an answer.
From my perspective there are two main problems with Jaron's outlook.
a) Jaron misunderstands the Wikipedia. In a traditional encyclopedia, experts write articles that are permanently encased in authoritative editions. The writing and editing goes on behind the scenes, effectively hiding the process that produces the published article. The standalone nature of print encyclopedias also means that any discussion about articles is essentially private and hidden from collective view. The Wikipedia is a quite different sort of publication, which frankly needs to be read in a new way. Jaron focuses on the "finished piece", i.e. the latest version of a Wikipedia article. In fact what is most illuminating is the back-and-forth that occurs between a topic's many author/editors. I think there is a lot to be learned by studying the points of dissent; indeed the "truth" is likely to be found in the interstices, where different points of view collide. Network-authored works demand a mode of reading that attends to the process as well as the end product.
b) At its core, Jaron's piece defends the traditional role of the independent author, particularly the hierarchy that renders readers as passive recipients of an author's wisdom. Jaron is fundamentally resistant to the new emerging sense of the author as moderator -- someone able to marshal "the wisdom of the network."
I also think it is interesting that Jaron titles his article Digital Maoism, hoping to tar the Wikipedia with the brush of bottom-up collectivism. My guess is that Jaron is unaware of Mao's famous quote: "truth emerges in the course of struggle [around ideas]". Indeed, what I prize most about the Wikipedia is that it acknowledges the messiness of knowledge and the process by which useful knowledge and wisdom accrete over time.
harpercollins takes on online book browsing 08.08.2006, 8:48 AM
In general, people in the US do not seem to be reading a lot of books, with one study citing that 80% of US families did not buy or read a book last year. People are finding their information in other ways. Therefore it is not surprising that HarperCollins announced its "Browse Inside" feature, which allows people to view selected pages from books by ten leading authors, including Michael Crichton and C.S. Lewis. They compare this feature with "Google Book Search" and Amazon's "Search Inside."
The feature is much closer to "Search Inside" than to "Google Book Search," although Amazon.com has a nice feature, "Surprise Me," which comes closer to replicating the experience of flipping randomly to a page in a book off the shelf. Of course "Google Book Search" actually lets you search the book, and comes the closest to giving people the experience of browsing through books in a physical store.
In the end, HarperCollins' feature is more like a movie trailer. That is, readers get to view a selection of pre-determined pages. This is nothing like the experience of randomly opening a book, or going to the index to make sure the book covers the exact information you need. The press release from HarperCollins states that they will be rolling out additional features and content for registered users soon. However, for now, without any unique features, it is unclear to me why someone would go to the HarperCollins site to get a preview of only their books, rather than go to Amazon and get previews across many more publishers.
This initiative is a small step in the right direction. At the end of the day, it's a marketing tool, and limits itself to that. Because they added links to various booksellers on the page, they can potentially reap the benefits of the long tail by assisting readers in finding the more obscure titles in their catalogue. However, their focus is still on selling the physical book. They specifically stated that they do not want to become booksellers. (Although through their "Digital Media Cafe," they are experimenting with selling digital content through their website.)
As readers increasingly want to interact with their media and text, a big question remains. Are HarperCollins and the publishing industry ready to release the control they have traditionally held and reinterpret their purpose? With POD, search engines, and emergent communities, we are seeing the formation of new authors, filters, editors and curators, playing the roles that publishers once traditionally filled. It will be interesting to see how far HarperCollins goes with these initiatives. For instance, HarperCollins also intends to start working with myspace and facebook to add links to books on their sites. Are they prepared for negative commentary associated with those links? Are they ready to allow people to decide which books get attention?
If traditional publishers do not provide media (including text) in ways we are increasingly accustomed to receiving it, their relevance is at risk. We see them slowly trying to adapt to the shifting expectations and behaviors of people. However, in order to maintain that relevance, they need to deeply rethink what a publisher is today.
controversy in a MMORPG 08.07.2006, 7:12 AM
Henry Jenkins gives a fascinating account of an ongoing controversy in a MMORPG in the People's Republic of China, the fastest growing market for these online games. Operated by Netease, Fantasy Westward Journey (FWJ) has 22 million users, with an average of over 400,000 concurrent players. Last month, game administrators locked down the account of an extremely high-ranking character for having an anti-Japanese name, as well as leading a 700-member guild with a similarly offensive name. The character would be "jailed" and his guild dissolved unless he changed his character's and guild's names. The player didn't back down and went public with accusations of ulterior motives by Netease. Rumors flew across FWJ about its purchase by a Japanese firm which was dictating policy decisions. A few days later, an alarming nationalist protest broke out, consisting of 80,000 players on one of the gaming servers, four times the typical number of players on a server.
The ongoing incidents are important for several reasons. One is that they are another demonstration of how people (from any nation) bring their conceptualization of the real world into the virtual space. Sino-Japanese relations are historically tense. In particular, memories of war and occupation by Japan during World War II are still fresh and volatile in the PRC. In a society whose current calendar year is 4703, the passage of seventy years accounts for a relatively short amount of time. Here, political and racial sentiment seamlessly interweaves between the real and the virtual. However, these spaces and the servers which house them are privately owned.
The second point is that concentrations of economic and cultural production are being redistributed across the globe. The points where the real and the virtual worlds become porous are likewise spreading to places throughout Asia. Therefore, coverage of these events outside of Asia should not be considered fringe; I see important incentives to track, report and discuss these events as I would local and regional phenomena.
wikimania: the importance of naming things 08.05.2006, 3:53 PM
I'll write up what happened on the second day of Wikimania soon – I saw a lot of talks about education – but a quick observation for now. Brewster Kahle delivered a speech after lunch entitled "Universal Access to All Knowledge", detailing his plans to archive just about everything ever & the various issues he's confronted along the way, not least Jack Valenti. Kahle learned from Valenti: it's important to frame the terms of the debate. Valenti explained filesharing by declaring that it was Artists vs. Pirates, an obscuring dichotomy, but one that keeps popping up. Kahle was happy that he'd succeeded in creating a catch phrase in naming "orphan works" – a term no less loaded – before the partisans of copyright could.
Wikimania is dominated by Wikipedia, but it's not completely about Wikipedia – it's about wikis more generally, of which Wikipedia is by far the largest. There are people here using wikis to do immensely different things – create travel guides, create repositories of lesson plans for K–12 teachers, maintain the State Department's repositories of information. Many of these are built using MediaWiki, the software that runs Wikipedia, but not all by any means. All sorts of different platforms have been made to create websites that can be edited by users. All of these fall under the rubric "wiki". We could just as accurately refer to wikis as "collaboratively written websites", the least common denominator of all of these sites. I'd argue that the word has something to do with the success of the model: nobody would feel any sense of kinship about making "collaboratively written websites" – that's a nebulous concept – but when you slap the name "wiki" on it, you have something easily understood, a form about which people can become fanatical.
wikimania day 1: wrap up 08.05.2006, 9:32 AM
There was something of a valedictory feeling around Wikimania yesterday, springing perhaps from Jimmy Wales's plenary talk: the feeling that a magnificent edifice had been constructed, and all that remained was to convince people to actually use it. If we build it, they will come & figure it out. Wales declared that it was time to stop focusing on quantity in Wikipedia and to start focusing on quality: Wikipedia has pages for just about everything that needs a page, although many of the pages aren't very good. I won't disagree with that, but there's something else that needs to happen: the negotiation involved as this new technology increasingly hits the rest of the world.
This was the narrative arc traced by Larry Lessig in his plenary: speaking about how he got more and more enthusiastic about the potential of freely shared media before running into the brick wall of the Supreme Court. At that point, he realized, it was time to regroup and assess what would be politically & socially necessary to bring free media to the masses. There's something similar going on in the wiki community as a whole. It's a tremendously fertile time technologically, but there are increasingly social issues that scream for engagement.
One of the most interesting presentations that I saw yesterday afternoon was Daniel Caeton's presentation on negotiating truth. Caeton's talk was based on his upcoming book entitled The Wild, Wild Wiki: Unsettling the Frontiers of Cyberspace. Caeton teaches writing at California State University, Fresno; he experimented with having students explore & contribute to the Wikipedia. The issues that arose surprised him. His talk focused on the experiences of Emina, a Bosnian Muslim student: she looked at how Bosnian Muslims were treated in the Wikipedia and found immensely diverging opinions. She found herself in conversation with other contributors about the meaning of the word "Bosniak". In doing so she found herself grappling with the core philosophy of Wikipedia: that truth is never objective, always in negotiation. Introducing this sort of thinking is something that needs to be taught just as much as wiki markup syntax, though it hasn't had nearly as much attention.
Today there's a whole track on using Wikis in education: I'll be following & reporting back from that.
transmitting live from cambridge: wikimania 2006 08.04.2006, 1:29 PM
I'm at the Wikimania 2006 conference at Harvard Law School, from where I'll be posting over the course of the three-day conference (schedule). The big news so far (as has already been reported in a number of blogs) came from this morning's plenary address by Jimmy Wales, when he announced that Wikipedia content was going to be included in the Hundred Dollar Laptop. Exactly what "Wikipedia content" means isn't clear to me at the moment – Wikipedia content that's not on a network loses a great deal of its power – but I'm sure details will filter out soon.
This move is obvious enough, perhaps, but there are interesting ramifications. Some of these were brought out during the audience question period of the next panel I attended, featuring Alex Halavais, who talked about the problems of evaluating Wikipedia's topical coverage, and Jim Giles, writer of the Nature study comparing the Wikipedia & the Encyclopædia Britannica. The subtext of both was the problem of authority and how it's perceived. We measure the Wikipedia against five hundred years of English-language print culture, which the Encyclopædia Britannica represents to many. What happens when the Wikipedia is set loose in a culture that has no print or literary tradition? The Wikipedia might assume immense cultural importance. The obvious point of comparison is the Bible. One of the major forces behind creating Unicode – and fonts to support the languages used in the developing world – is SIL, founded with the aim of printing the Bible in every language on Earth. It will be interesting to see if Wikipedia gets as far.
three glimpses at the future of television 08.03.2006, 9:30 AM
1. When radio was the main electronic media source, families would gather around the radio and listen to music, news, or entertainment programming, not unlike traditional television viewing. Today, radio listening habits have shifted, and I only hear the radio in cars and offices. Television viewing (if you can even call it that) is experiencing a similar shift, as people multitask at home with the television playing in the background. With the roll out of Digital Multimedia Broadcasting (DMB) in South Korea last year, the use of television is starting to resemble radio even more. DMB is a digital radio transmission system which allows television signals to play on mobile devices. Since its 2005 debut, a slew of DMB-capable devices, such as GPS units and the PM80 PDA from LG, have been released in Korea. DMB systems are being planned throughout Europe and Asia, which may make mobile television viewing ubiquitous and the idea of a family sitting in front of a television at home seem quaint.
2. I recently posted on a partnership between youtube and NBC, which will create a channel on the video sharing site to promote new shows from NBC this autumn. NBC seems to have taken the power of youtube to heart and is producing new episodes of the failed WB pilot "Nobody's Watching," which never aired. The pilot was leaked to youtube and viewed by over 450,000 people. I'm waiting to see how far NBC is willing to experiment proactively with youtube and its community to create better programming.
3. In the US, the shifting of television from large boxes residing in living rooms to desktops, laptops, and portable media players has often meant viewing pirated programming uploaded to video sharing sites like youtube, or downloading files over bittorrent. For those who don't want to break the law, Jeff Jarvis reports that legal streamed and downloaded content will be helped by an announcement from ABC that 87% of viewers of its streamed video were able to recall its advertising, over three times the average recall of standard television advertising. While legal content is important, I hope it doesn't kill remix culture or the anyone-can-be-a-star ability that youtube provides.
review: the access principle 08.02.2006, 7:34 AM
In his book "The Access Principle: The Case for Open Access to Research and Scholarship," John Willinsky, from the University of British Columbia, tackles the idea that scholarship needs to be more open and accessible than it currently is. He offers a comprehensive and persuasive argument that covers the ethical, political and economic reasons for making scholarship accessible to both scholars and the public. He lives by his words, as a full-text version is now available for download on the MIT Press website. The book is an important resource for anyone who is concerned with scholarly communication. We were also fortunate to have his attendance at our meeting on the formation of a scholarly press.
Many people have spoken to the fact that rising journal subscription costs and shrinking library acquisition budgets are quickly reaching the limits of feasibility; now Willinsky provides, in one place, a clear depiction of the status quo and how it arrived there. He then takes the argument for open access deeper by widening the discussion to address the developing world and the general public.
Willinsky documents a promising trend: several large institutions, including the NIH, and prestigious journals, such as the New England Journal of Medicine, are making their research available. They use different models for releasing the research; for example, the NEJM makes articles accessible six months after paid publication. To encourage this trend toward open access to scholarly work, Willinsky devotes much of the book to documenting the business models of scholarly publishing, showing in detail the economic feasibility of open access publishing. He clearly maintains that making scholarship accessible is not necessarily making it free. Walking through the current economic models of academic publishing, Willinsky gives a good overview of the range of publishing models with varying degrees of accessibility. As well, he devotes an entire chapter to an intriguing model of how a journal could be operated by scholars as a cooperative.
To coincide with this argument for the open access of scholarship, Willinsky also works with a group of developers on a free, open source publication platform called Open Journal Systems. OJS gives journals a way to reduce their costs by providing digital tools for editing, management and distribution. It is clear that scholars and publishers still hold on to print as the ideal medium, even as it becomes increasingly economically infeasible to maintain. However, when the breaking point eventually comes to pass -- the point when shrinking library budgets and rising subscription rates become unworkable -- viable options will fortunately already exist. A sample list of journals using OJS shows the breadth of subject matter and international use of the tool.
It is the last chapters of the book, "Reading," "Indexing" and "History," that leave the biggest impact. In "Reading," Willinsky explores how the way people read is already being influenced by screen-based text. Initially, digital publishing matters to his analysis and proposals because its efficiencies can be used to balance the costs of print publishing. However, in the shift to digital online publishing, he notes, there exists an opportunity to aid readers' comprehension that is unrelated to the economics and ethics of access.
He uses the example of how students read a primary history text very differently than historians do. A historian quickly jumps from top to bottom looking for clues concerning geography, the time of the events depicted and the time the document was written, in order to understand the historical context of the document. On the other hand, a student will typically read a document from start to finish, with less emphasis on building a context for the document.
Scholars' readings of journal articles have similarities to the way historians read their source documents. Just as there are techniques to teach students of history how to read, there are also ways to assist the reading of all scholarly work. Most importantly, these techniques can be integrated into the reading environment of the open and online journal. Addressing and utilizing the potential of digital and networked text, in the end, reinforces Willinsky's overall argument. Because Willinsky comes from an education and pedagogy background, it is not surprising that he uses a "scaffolding" approach to support learning and reading. In this context, scaffolding refers to the pedagogical idea that knowledge transfer is increased when readers (or learners) are given tools and resources to support their learning experience with the main text.
Currently, there are of course features in print journal publishing to aid the reader; he cites abstracts, footnotes and citations as ubiquitous examples. In the online environment, these tools can be expanded even further. While Willinsky acknowledges that open access will change the readership of scholarly publishing and that the medium must adapt for these new readers, he does not mean that the level of writing itself necessarily has to change. Scholars should still write to expand their field.
One very basic feature included in Open Journal Systems is the ability to comment. This simple feature can narrow the gap between author and reader, although as far as I can tell it is not often used. Also included are "Reading Tools," basic but significant additions to the reading experience, which currently provide supporting information by searching open access databases with author-prescribed keywords. Willinsky states these tools are still under development, which is not surprising because our understanding of digital networked text is still in a formative stage as well. Because OJS is open source, new feature sets can be added to the system as new forms of reading are understood and can be applied on a large scale. Radical experimentation is not always appropriate; just getting the journals into an online environment is a significant achievement. It is telling that the default setting for "Reading Tools" is off, although the feature is being used by some journals.
The chapter "Indexing" flips the analysis to look at how openness and online accessibility will change how scholarship is stored, indexed and retrieved on the publisher side. Willinsky notes that in places such as Bangalore, universities cannot even afford the collected abstracts of journals, let alone subscriptions to the journals themselves. However, the developing world is starting to benefit from growing open indexes such as PubMed, ERIC, CiteSeer.IST and HighWire.
He goes deeper into the issues of indexing by exploring how the indexing of scholarly literature can be "more comprehensive, integrated and automated" while remaining open and accessible. Collaborative indexing is one such route to explore, one that begins to blur the lines between publisher, author and reader. Willinsky has documented how fragmented current indexing services are, which leads to overlap and confusion over where journals are indexed. He aptly points out that indexing needs to evolve in step with open access, because the amount of information to search vastly increases. Information that cannot be located, even if it is openly accessible, has limited social value.
The Access Principle closes with a wonderful look at the historical relationship between scholarship and publishing in the aptly named chapter, "History." In the early years of the printing press, scholars were often found at the presses themselves, working with printers to produce their work. Once the printing press matured, a disconnect between the scholar and the press developed: intermediaries emerged to handle subscriptions, texts were sent off to publishers and editors, and scholars moved further away from the physical press. Today, the shift to the digital has allowed the scholar to redevelop a closer relationship with the entire process of publishing. Blogging, print on demand, wikis, online journals and tagging tools are a few examples of how scholars now interact with "not only fonts and layout, but to the economics of distribution and access."
It's important that the book closes here, because it illuminates how publishing technology has always been a disruptive force in the way knowledge is stored and shared. Willinsky's concern is not only to argue for open access but also to show how interrelated the digital is with that access. Further, there is the opportunity to "improve the quality and value of that access."
Our work at the institute, including Sophie, MediaCommons, Gamer Theory, and next\text, all points toward these new directions that Willinsky shares, which not surprisingly makes his book particularly relevant to me. But Willinsky describes something relevant to all scholars as well.
working in the open 08.01.2006, 4:59 PM
From 1984 to 1996 i had the good fortune to be a part of Voyager, an innovative publisher known for The Criterion Collection (which started in 1984 as a series of laser videodiscs), groundbreaking cd-roms, the first credible electronic books in the Expanded Books series, and even a few landmarks on the web, including the first audio webcast to field questions from remote listeners. We were a wide-eyed group inventing things as we went along. Nothing happened without intense in-house discussion and debate over the complex new relationships between form and content afforded by new technologies. Realistically, though, the discussion was limited to the hundred or so people involved at any one time in developing Voyager's titles.
Through another stroke of luck i've managed to be part of a second wonderfully creative group which is having as much fun navigating uncharted waters as we did at Voyager. This time, however, thanks to the network, my colleagues and i are working out in the open. And because others are able to listen in as we "think out loud" and then "chime in" if they have something to contribute, the discussion is ever so much broader, deeper and more fundamentally useful.
This thought came to me this morning when i looked at the discussion on if:book about MediaCommons and realized how remarkable a group of people had contributed so far, and how much more quickly the discussion is developing than it ever could have if it had just been my colleagues and i talking around a table.
now playing: academics in the role of the public intellectual 08.01.2006, 6:37 AM
Last week, in light of Middle East expert and blogger Juan Cole's recent experience with the hiring process of Yale University, the Chronicle of Higher Education posted commentary on the career risks of academic blogging from several well-known academic bloggers, including:
The last comment is from Juan Cole himself, and he closes with:
"The role of the public intellectual is my career. And it is a hell of a career. I recommend it."
It appears that Juan Cole has few regrets. Although not getting the position at Yale is certainly disappointing, he can still teach, carry on with his formal scholarly research, and of course blog, at the University of Michigan. His ability to be a public intellectual has not suffered. (Due to the nature of tenure and the university system, his public courting by a potential competing employer will have a much less adverse effect than it would have if he were employed in the private sector.)
By the nature of his area of expertise, his ideas were bound to have detractors; anyone who writes on the Middle East is destined to be decried as either too pro-Israel or too pro-Arab. Cole could have remained behind the protective walls of the academy that tenure affords, but he made a decision to blog and seems satisfied with the outcomes.
Clearly, he views the role of public intellectual as part of his job, although some of his fellow bloggers do not take the same view. This discrepancy leads to the question: what is the job description of the higher-education professor? More specifically, if outreach to the public is part of the job, how is the role of the academic public intellectual evaluated in the hiring and promotion process?
J. Bradford DeLong provides a good list of the possible activities of academics:
"A great university has faculty members who do a great many things -- teaching undergraduates, teaching graduate students, the many things that are "research," public education, public service, and the turbocharging of the public sphere of information and debate that is a principal reason that governments finance and donors give to universities. Web logs may well be becoming an important part of that last university mission."
Of course, academics are involved in these areas to varying degrees. I do not mean to suggest that every professor needs to blog. However, on the whole, university presidents and department heads need to acknowledge that scholars have an obligation to make their scholarship accessible to the public. Scholarship for its own sake, or for its own isolated community, has little or no social value.
Therefore the public university, which receives funding from the state government, has a responsibility to return to society the results of the resources invested in it. We also, as a society, give private higher ed schools protective benefits (such as special tax status) because of the implied idea that they provide a service to the overall community. One can thus argue that higher education's duty includes not only teaching and scholarship, but outreach as well. Some professors will have a natural inclination toward outreach and the role of public intellectual, and universities need to support those activities as part of the reason such professors were hired in the first place.
The difficulty has arisen because within the academy there is a history of a certain disdain toward those who pursue the role of public intellectual. Drezner mentions how television appearances were once regarded with similar negativity. However, the web is a much more disruptive force than television in this regard, in that it has dramatically changed how the university public intellectual can reach people. Blogging specifically has lowered the barrier to entry for academics (and anyone else, for that matter) to interact with the public. They no longer need to rely on traditional media outlets to reach a mass audience. The biggest cost, then, is the considerable time required of the professor.
Siva Vaidhyanathan states, "There has never been a better time to be a public intellectual, and the Web is the big reason why...
"I'm thrilled to see the membrane between the academy and the public more permeable and transparent than ever."
If direct outreach is an essential part of the professional duty of the academic (which I argue it is), then the academy needs to understand how to evaluate the medium. Blogging is not scholarly publishing, and it needs to be interpreted with an understanding of the form. Because the hiring and tenure process is often closed, it is not clear whether and how committees are evaluating academic blogging, as Ann Althouse notes:
"Those who are making a judgment about whether to offer a blogger a new career opportunity ought to have the sense to recognize satire and hyperbole and to understand that blog writing is done quickly, instinctively, and without an editor. But surely they are entitled to look at it as evidence of the quality of the blogger's mind."
In the short term, Yale is free to hire whomever it chooses, as Erin O'Connor correctly asserts. However, there are long-term effects to such decisions. The academy needs to be careful to ensure that it remains relevant to society. Cole's blog gets 200,000 visitors a month; people are obviously interested in what he has to say. Playing it safe is a precarious position, because universities may isolate themselves into obsolescence -- particularly since (for better or worse) our society is increasingly business/ results/ ROI oriented.
Daniel W. Drezner states:
"Blogs and prestigious university appointments do not mix terribly well. That is because top departments are profoundly risk-averse when it comes to senior hires. In some ways, that caution is sensible -- hiring a senior professor is the equivalent of signing a baseball player to a lifetime contract without any ability to release or trade him. In such a situation, even small doubts about an individual become magnified."
In general, innovation tends to occur on the fringe. Being on the fringe often means organizations or individuals are unencumbered and freer to take risks, so it's not surprising that Drezner finds top-tier institutions to be more conservative. At some point, what was once fringe gains acceptance and becomes mainstream. The acceptance of academic blogging as part of a professor's job will likewise start at the fringes and move toward the mainstream, eventually reaching the top-tier universities. However, if they are too slow to adapt, they will ironically risk losing the very reputations they are seeking to protect.
The spectrum of reactions given by the commentators shows that the academy does not yet know how to handle blogging. Is it a personal activity, a professional pursuit, or something in between? Not all of the commentators would agree that their blogging is a formal part of their job as academics, and their reactions to Juan Cole's blogging and his experience with Yale show where they fall in that range. An interesting follow-up question to pose to them is, "Why blog?" Their range of reactions and opinions also points to the overall lack of guidelines on how to treat blogging, for both academics and hiring committees. This is very different from the usual "rules" for promotion and hiring, which are very well defined.
As stated previously, a university can create its own criteria for whom it hires. Tenure and promotion are quite different situations, because by the time promotion is at issue the faculty member is already employed by the organization. Even within a field, departments at an individual school will have specific guidelines on their expectations for teaching, research, grants and publishing.
With promotion, guidelines are even more crucial, because under the current system junior faculty's energy is so focused on progressing through the tenure track. If this ambiguity continues, we are bound to hear of new additions to the list of faculty denied jobs and promotions. That could lead academics to abandon blogging, which would be a great loss for both the public and the academy.