on appropriation 04.27.2006, 3:50 PM
The Tate Triennial 2006, showcasing new British Art, brings together thirty-six artists who explore the reuse and reshaping of cultural material. Curated by Beatrix Ruf, director of the Kunsthalle Zürich, the exhibition includes artists from different generations who investigate reprocessing and repetition through painting, drawing, sculpture, photography, film, installations and live work.
Historically, the appropriation of images and other cultural matter has been practiced by societies as the reiteration, reshuffling, and eventual transformation of artistic and intellectual human manifestations. It covers a vast range, from tribute to pastiche. When visual codes are combined, the end product is either a cohesive whole, where influences connect into new and very personal languages, or a disparate combination, where influences compete and clash. In today's art, the different guises of repetition, from collage and montage to file sharing and digital reproduction, highlight the existing codes or reveal the artificiality of the object. Today's combination of codes alludes to a collective sense of memory in a moment when memories have become literally photographic.
One comes out of this exhibition thinking about Duchamp's "readymades," Rauschenberg's "combines," and other forms of conceptual "gluing" (the literal meaning of the word "collage") as precursors and/or manifestations of the postmodern condition. This show is a perfect representation of our moment. As Beatrix Ruf says in the catalogue: "Artists today are forging new ways of making sense of reality, reworking ideas of authenticity, directness and social relevance, looking again into art practices that emerged in the previous century."
We have artists like Michael Fullerton, who paints contemporary figures in the style of Gainsborough, and Luke Fowler, who uses archive material to explore the history of Cornelius Cardew's Scratch Orchestra. Repetition goes beyond inter-referentiality in the work of Marc Camille Chaimowicz, who combines works he made in the 70s with projected images of himself as a young man and as an adult, within a space where a vase of flowers set on a Marcel Breuer table and a pendulum swinging back and forth position the images of the past solidly in the present. In "Twelve Angry Women," Jonathan Monk affixes to the wall twelve found drawings by an unknown artist from the 20s, using differently colored pins that work as earrings. Mark Leckey uses Jeff Koons' silver bunny as a mirror reflecting his studio, much as 17th-century masters painted mirrors reflecting theirs. Liam Gillick creates sculptures of hanging texts made out of factory signage.
Art itself is cumulative. Different generations build upon previous ones in a game of action and reaction. One interesting development in art today is the collective: groups of artists coming together in couples, teams, or cyberspace communities, sometimes under the identity of a single person, sometimes a single person assuming multiple identities. Collectives seem to be a new phenomenon, but their roots go back to the workshops of antiquity, where artistic collaboration and copying from casts of sculptural masterpieces were the norm. The notion of the individual artist producing radically new and original art belongs to modernity. The return to collectives in the second half of the 20th century, and again now, has a lot to do with the nature of representation, with the desire to go beyond the limits of artistic mimesis or individual interpretation.
On the other hand, appropriation as a form of artistic expression is a postmodern phenomenon. Appropriation is the language of today. Never before the advent of the Internet did people appropriate knowledge, spaces, concepts, and images as we do today. To cite, to copy, to remix, to modify are part of our everyday communication. The difference between appropriation in the 70s and 80s and appropriation today resides in the historical moment. As Jan Verwoert says in the Triennial 2006 catalogue:
The standstill of history at the height of the Cold War had, in a sense, collapsed the temporal axis and narrowed the historical horizon to the timeless presence of material culture, a presence that was exacerbated by the imminent prospect that the bomb could wipe everything out at any time. To appropriate the fetishes of material culture, then, is like looting empty shops at the eve of destruction. It is the final party before doomsday. Today, on the contrary, the temporal axis has sprung up again, but this time a whole series of temporal axes cross global space at irregular intervals. Historical time is again of the essence, but this historical time is not the linear or unified timeline of steady progress imagined by modernity: it is a multitude of competing and overlapping temporalities born from the local conflicts that the unresolved predicaments of the modern regimes still produce.
Today, the challenge is to rethink the meaning of appropriation in a moment when capitalist commodity culture has become the determinant of our daily lives. The Internet is perhaps our potential Utopia (though "dystopian" seems to be the adjective of choice now). But can it be called upon to fulfill the unfulfilled promises of the 20th century's utopias? To appropriate is to resist the notion of ownership; to appropriate the products of today's culture is to expose the unresolved questions of a world shaped by the information era. The disparities between those entering the technology era and those forced to remain in the conditions of early industrialization are more pronounced than ever. As opposed to the Cold War, when history was at a standstill, we live in a time of extreme historicity. Permanence is constantly challenged; how to grasp it all remains the elusive task.
how people read online 04.27.2006, 9:43 AM
There's a series of recent posts (1, 2, 3, 4) up at Ron Silliman's blog where he analyzes a recent study (by Simmons B. Buntin of terrain.org) of how people read and write poetry online. This is of interest even to those uninterested in poetry: Silliman is doing some very careful work in scrutinizing how and why people read online. In doing so, he's touching on a number of things we're interested in here, not least the roles of reputation, legitimization, and distribution in electronic reading and writing.
The study Silliman examines was mostly answered by those who write as well as read poetry, so there's a certain amount of bias in the responses. But this selective skew provides a useful look at cutting-edge attitudes. While respondents read a wide variety of online poetry and criticism, word of mouth remains a primary method of finding new things to read: social interaction seems to be critical. Of particular interest are the different roles he sees assigned to print and online publication: most respondents found no difference in quality between print and online work, although there was the perception that online work took more risks and was generally more experimental (there seem to be broader extremes in online publication).
What do people like about publishing online? First (by a wide margin), the accessibility it affords; second, the possibility of real-time interaction. Cost comes in third: it's interesting that, again, the perceived need for social interaction shows itself. It's also interesting (and not tremendously surprising) that the efforts with the most money behind them (Poetry, which recently received an enormous $100 million bequest, has spent heavily on its website) don't seem to be the most influential – blogs and forums, which are more interaction-based, come out ahead.
What doesn't work about online publishing? The look and feel of online work, as well as poorly designed websites, was the most frequent complaint. The ephemerality of the web is another issue: many websites seem to disappear as soon as they spring up, and Silliman suggests that archiving online work is a problem that needs to be resolved. A number of respondents complained about devices, arguing that it's not as pleasant to read on a screen as on a page – which Silliman, who's done a fair amount of reading on a Palm Pilot, qualifies by arguing that this seems to be more a software problem than a hardware problem.
an interview with bruno pellegrini 04.26.2006, 11:48 AM
Last week I posted about Le mie elezioni, a film about the recent Italian elections constructed from footage shot by the general public, mostly members of Italy's videoblogging community. Le mie elezioni will be released on the 15th of May: according to an article in Il manifesto, more than 150 videobloggers have submitted more than 50 hours of material, which is being furiously edited right now. You can watch a rough cut of the trailer here.
The footage is being put together into an hour-long film by the website Nessuno.TV, a portal for Italian videobloggers run by Bruno Pellegrini, who also teaches the sociology of communications at the Università di Architettura in Rome. I sent him an email asking about the project; Bruno happened to be in the U.S. last week, but his (and our) travel schedules didn't allow him to stop by the Institute. Nonetheless, we conducted an interview via email. My questions are in bold; his responses are indented.
Can you describe how the project came about?
It was born in a very natural way, as some of the vloggers who had already participated in BlogTV (the first ever TV station broadcasting user-generated content) suggested covering the election together. Then the idea of the movie came up.
Is this project part of a larger Italian web response to media consolidation? How widely do people share your perception that big media has failed to cover things it should be covering?
I do not think this project is specific to Italy. Although the situation in my country is embarrassing, I believe it is only a little ahead of what is going on abroad. Big media has failed (sometimes deliberately) to cover things all over the world for decades, and the web has given people a chance to re-balance the power. Wherever and whenever there are major needs for a democracy, you can be sure something is going to happen . . . With regard to people, only a small part is conscious of what is going on, and the others are not helped by mass information . . . In general there is a common sense of distrust of politics, media and power.
In the U.S., the 2004 elections brought out a huge number of political bloggers; this seemed to be the first time the blogosphere registered in the mainstream media, and there's the perception that the U.S. blogosphere exploded at that point. Have these elections done the same thing in Italy? Or did people turn to the Internet earlier?
Not at all, unfortunately. The Italian blogosphere is not as mature as it is in the U.S. We still lack a common identity and, most of all, consciousness of the power of being media . . . Hopefully this will happen in the next political campaign, and I suspect it will come not from the classic political separation (left and right) but from an increasing fight between young and old people, with the latter trying to keep their undeserved privileges . . .
How big is the videoblogging community in Italy? We periodically look in on it in the U.S., and while everyone loves Rocketboom, it doesn't really seem to have taken off here as much as everyone expected (although maybe things like YouTube and Google Video are changing that). Did you find people getting interested in videoblogging because of the project, or was there already a vibrant community?
Vibrant is not exactly the right word, maybe promising would be more appropriate. I think it is a matter of critical mass and once reached it will grow exponentially like all the network related trends.
The question of copyright: watching the clips, I couldn't help but notice the music - songs by the Arctic Monkeys & Caparezza playing in the background - as well as video clips from the news and, I think, a couple of press photographs. In the U.S., documentary filmmakers increasingly have problems with clearance issues - the owners of the songs charge thousands of dollars for the rights to use even a few seconds of them. We've been covering this issue of fair use rather closely because it seems to figure in many of the things you can do with multimedia. I'm curious how much of a problem it is in Italy - is this something you worry about there?
So far it is not a problem at all and we can deal with the fair use regulation. I believe the majors will play harder in the future, especially with music and movies, but there are already good open access libraries and the Creative Commons movement is getting stronger in Italy too.
Many thanks to Bruno Pellegrini for being so generous with his time. If people have more questions about this project, don't hesitate to leave them in the comments section.
questions on libraries, books and more 04.26.2006, 6:08 AM
Last week, Vince Mallardi contacted me for commentary for a program he is developing for the Library Binding Institute in May. I suggested that he send me some questions, and that I would take a pass at them and post them on the blog. My hope is that Vince, as well as our colleagues and readers, will comment upon the admittedly rough thoughts I have sketched out in response to his rather interesting questions.
1. What is your vision of the library of the future if there will be libraries?
Needless to say, I love libraries, and have been an avid user of both academic and public libraries since the time I could read. Libraries will be in existence for a long time. If one looks at the various missions of a library, including the archiving, categorization, and sharing of information, these themes will only be more relevant in the digital age, for both print and digital text. There is text whose meaning is fundamentally tied to its medium; therefore, the creation, and thus preservation, of physical books (and not just their digitization) is still important. Of course, libraries will look and function very differently from how we conceptualize them today.
As much as I love walking through library stacks, I realize that it is a luxury of the North, something made clearer to me at the recent Access to Knowledge conference my colleague and I were fortunate enough to attend. In the global economic divide between North and South, the importance of access to knowledge supersedes my affinity for paper books. I realize that in the South, digital libraries are a much more efficient use of resources to promote sustainable growth in knowledge and, hopefully, in economies.
2. How much will self-publishing benefit book manufacturers, indeed save them?
Recently, I have been very intrigued by the notion of Print On Demand (POD) for books. My hope is that the stigma will be removed from the so-called "vanity press." Start-up ventures such as LuLu.com have the potential to let voices flourish that in the past lacked access to traditional book publishing and manufacturing.
Looking at the often-cited observation that 57% of Amazon book sales come from books in the Long Tail (here defined as the catalogue beyond the 100,000 books typically found in a B&N superstore), I wonder if the same economic effect could be reaped on the publishing side. The increasing efficiency of digital production, communication, and storage relieves the economic pressures of small-run printing. With print on demand, costs such as maintaining inventory are removed, and the risk involved in estimating demand for first runs is reduced. Similarly, as I stated in my first response, the landscape of book manufacturing will have to adapt as well. However, I do see potential for the creation of more books rather than fewer.
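To make the inventory argument concrete, here is a back-of-the-envelope sketch in Python. Every number in it (setup fee, unit costs, storage cost, sales figures) is invented purely for illustration; the point is the shape of the comparison, not the specific figures.

    # Hypothetical comparison: offset printing has a low unit cost but
    # demands an up-front run and inventory; print on demand costs more
    # per copy but prints only what actually sells. All numbers invented.

    def offset_cost(run_size: int, copies_sold: int, setup: float = 2000.0,
                    unit: float = 2.50, storage_per_unsold: float = 1.00) -> float:
        """Total cost of a traditional print run, including unsold stock."""
        unsold = max(run_size - copies_sold, 0)
        return setup + run_size * unit + unsold * storage_per_unsold

    def pod_cost(copies_sold: int, unit: float = 6.00) -> float:
        """Total cost when each copy is printed only after it is ordered."""
        return copies_sold * unit

    # A Long Tail title that sells 150 copies against a hopeful 1,000-copy run:
    print(offset_cost(1000, 150))  # 5350.0 -- most of it sunk in unsold stock
    print(pod_cost(150))           # 900.0  -- no inventory, no first-run guess

Under these invented numbers, POD wins until sales grow large enough to amortize the setup cost of a press run, which is exactly the low-volume regime the Long Tail describes.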
3. What co-existence do you foresee between the printed and electronic book, as co-packaged, interactive via barcodes or steganography, etc.?
Paper-based books will still have their role in communication in the future. Paper is still a great technology for communication. For centuries, paper and books were the dominant medium because they were the best technology available. However, with film, television, radio and now digital forms, that is no longer always true. Thus the use of print must rest on the author's decision that paper is the best medium for her creative purposes. Moving books into the digital realm allows for forms that cannot exist as a paper book, for instance the inclusion of audio and video. I can easily see a time when an extended analysis of a Hitchcock movie will be an annotated movie, with voice-over commentary, text annotation and visual overlays. These features cannot be reproduced in traditional paper books.
Rather than try to predict specific applications, products or outcomes, I would prefer to open the discussion to a question of form. There is fertile ground to explore in the relationship between paper and digital books; however, it is too early for me to state exactly what that will entail. I look forward to seeing what creative interplay of print and digital text authors will produce in the future. The co-existence of the print and electronic book in a co-packaged form will only be useful and relevant if the author consciously writes and designs her work to require both forms. Creating a PDF of Proust's Swann's Way is not going to replace the print version. Likewise, printing out Moulthrop's Victory Garden does not make sense either.
4. Can there be literacy without print? To the McLuhan Gutenberg Galaxy proposition.
Print will not fade out of existence, so the question is a theoretical one. Although I'm not an expert on McLuhan, I feel that literacy will still be as vital in the digital age as it is today, if not more so. The difference between the pre-movable-type age and the electronic age is that we will still have the advantages of mass reproduction and storage that people in an oral culture did not have. In fact, because the marginal cost of digital reproduction is basically zero, the amount of information we are subjected to will only increase. This massive amount of information, which we will need to process and understand, will only heighten the need not just for literacy but for media literacy as well.
a2k wrap-up 04.25.2006, 9:35 AM
I'm back from the A2K conference. The conference focused on intellectual property regimes and the international development issues associated with access to medical, health, science, and technology information. Many of the plenary panels dealt specifically with the international IP regime, currently enshrined in several treaties and bodies: WIPO, TRIPS, the Berne Convention, and a few more (more from Ray on those). But many others, instead of relying on the language of the treaties, focused on developing new language for advocacy based on human rights: access to knowledge as an issue of justice and human dignity, not just an issue of intellectual property or infrastructure. The Institute is an advocate of open access, transparency, and sharing, so we share the mentality of most of the participants, even if we choose to assail the status quo from the grassroots rather than from the high halls of policy. Most of the discussions and presentations about international IP law were generally outside the scope of our work, but many of the smaller panels dealt with issues that, for me, illuminated our work in a new light.
In the Peer Production and Education panel, two organizations caught my attention: Taking IT Global and the International Institute for Communication and Development (IICD). Taking IT Global is an international youth community site, notable for its success with cross-cultural projects, and for the fact that it has been translated into seven languages—by volunteers. The IICD trains trainers in Africa. These trainers then go on to help others learn the technological skills necessary to obtain basic information and to empower them to participate in creating information to share.
The ideology of empowerment ran thick in the plenary panels. Ronaldo Lemos, in the Political Economy of A2K panel, showed just how productive communities outside the scope and target of traditional development can be. He talked about communities at the edge, peripheries that are using technology to transform cultural production, and he dropped a few figures that staggered the crowd: last year Hollywood produced 611 films, but Nigeria, a country with only ONE movie theater (in the whole nation!), released 1,200 films. How? No copyright law, inexpensive technology, and low budgets (to say the least). He also mentioned the music industry in Brazil, where cultural production through mainstream corporations amounts to about 52 CDs a year by Brazilian artists across all genres, while in the favelas they are releasing about 400 albums a year. It's cheaper, and it's what people want to hear (mostly baile funk).
We also heard the empowerment theme, and A2K framed as "a demand of justice," from Jack Balkin, Yochai Benkler, and Nagla Rizk of Egypt, and from John Howkins, who cast the A2K movement as primarily an issue of freedom to be creative.
The panel on Wireless ICTs (and the accompanying wiki page) made it abundantly obvious that access isn't only about IP law and treaties: it's also about physical access, computing capacity, and training. This was a continuation of the Network Neutrality panel, and carried through later with a rousing presentation by Onno W. Purbo on how he has been teaching people to "steal" last-mile infrastructure from the frequencies in the air.
Finally, I went to the Role of Libraries in A2K panel. The panelists spoke on several different topics which were familiar territory for us at the Institute: the role of commercialized information intermediaries (Google, Amazon), fair use exemptions for digital media (including video and audio), the need for Open Access (we only have 15% of peer-reviewed journals available openly), ways to advocate for increased access, better archiving, and enabling A2K in developing countries through libraries.
The name of the movement, Access to Knowledge, was chosen because, at the highest levels of international politics, it was the one phrase that everyone supported and no one opposed. It is an undeniable umbrella movement, under which different channels of activism, across multiple disciplines, can marshal their strength. The panelists raised important issues about development and capacity, but with a focus on human rights, justice, and dignity through participation. It was challenging, but reinvigorating, to hear some of our own rhetoric at the Institute repeated in the context of this much larger movement. We at the Institute are concerned with the uses of technology, whether in the US or internationally, and we'll continue, in our own way, to embrace development with the goal of creating a future where technology serves to enable human dignity, creativity, and participation.
access to the a2k conference 2006 04.21.2006, 1:22 PM
Jesse and I have just arrived at Yale University to find police barricades, blocked-off streets, busloads of demonstrators, and general confusion. I wish I could say it was in support of protecting open and accessible knowledge, as we are here to attend the Access to Knowledge (A2K) conference. However, the crowds of Falun Gong supporters (with a few Free Tibet activists in the mix) were protesting the arrival of President Hu Jintao of China. Wandering the streets of New Haven to find an unblocked entrance to the law school, Jesse and I reflected a bit on the irony of the difficulty of physically "accessing" the building where we will hear current thinking and planning on making knowledge accessible.
The conference's stated goal is to "bring together leading thinkers and activists on access to knowledge policy from North and South, in order to generate concrete research agendas and policy solutions for the next decade...The A2K Conference aims to help build an intellectual framework that will protect access to knowledge both as the basis for sustainable human development and to safeguard human rights." Sessions will cover peer production, economics of a2k, copyright, access to science and medicine, network neutrality and privacy.
We are very excited to be here, as presenters include some of our favorite IP / Copyright / Open Content thinkers: Yochai Benkler, Eric von Hippel, Susan Crawford, and Terry Fisher. We're sure that by Sunday, we'll have more to add to the list.
Stay tuned for more.
google scholar 04.20.2006, 3:45 PM
Google has announced a change to Google Scholar that improves search results: results can now be ordered by a confluence of citations, date of publication, and keyword relevance, instead of just the latter. From the Official Google Blog:
It's not just a plain sort by date, but rather we try to rank recent papers the way researchers do, by looking at the prominence of the author's and journal's previous papers, how many citations it already has, when it was written, and so on. Look for the new link on the upper right for "Recent articles" -- or switch to "All articles" for the full list.
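Google hasn't published the actual formula, so the following Python sketch is purely hypothetical: it only illustrates how signals like the ones the post names (existing citations, recency, the prominence of the author and journal, keyword relevance) might be blended into a single score. The weights, field names, and decay constant are all assumptions.

    import math
    from dataclasses import dataclass

    @dataclass
    class Paper:
        title: str
        citation_count: int       # citations the paper has already accrued
        age_years: float          # years since publication
        keyword_relevance: float  # 0..1 text-match score for the query
        venue_prominence: float   # 0..1 proxy for author/journal track record

    def recency(paper: Paper, half_life: float = 3.0) -> float:
        """Exponentially decay a paper's freshness as it ages."""
        return math.exp(-paper.age_years * math.log(2) / half_life)

    def score(paper: Paper) -> float:
        """Blend the signals; log-damp citations so classics don't swamp new work."""
        return (0.4 * math.log1p(paper.citation_count)
                + 0.3 * recency(paper)
                + 0.2 * paper.keyword_relevance
                + 0.1 * paper.venue_prominence)

    results = [Paper("Old classic", 5000, 20.0, 0.9, 0.9),
               Paper("Hot new preprint", 12, 0.5, 0.8, 0.6)]
    for p in sorted(results, key=score, reverse=True):
        print(f"{score(p):.2f}  {p.title}")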
Another feature, which I wasn't aware of, is the "group of X" link located at the end of each result. It points to papers that are very similar in topic; researchers can use it to delve deeper into a subject rather than skipping across its surface. This reflects the deep user-centered thinking that went into the design of the results, which is broken down in more detail here.
Though many professors lament students' use of Google as their first and last research resource, the continual improvements to Google Scholar and the Google Book project (when combined with the access rights afforded by a university library) provide an increasingly potent research environment. Google Scholar, by displaying the citation count, provides a significant piece of secondary data that improves decision making dramatically compared to unguided topic searches in the library. By selecting uncredited quotations and searching for them in the Google Book project, students can get information on the primary text, read a little of the surrounding context, and decide whether or not to procure the book from the library. I feel like I'm overselling Google, but my real point has nothing to do with any specific corporation. The real point is: in the future, all the value is in the network.
da vinci, copyright of non-fiction, and intelligent search 04.20.2006, 8:32 AM
Two of the authors of "Holy Blood, Holy Grail" recently lost their copyright infringement suit against genre-thriller author Dan Brown over his book "The Da Vinci Code." Brown relied heavily on the theories of the secret lineage of Jesus found in Holy Blood (a best seller in its own time). Both books were published by Random House, but that did not stop Michael Baigent and Richard Leigh from suing their own publisher for copyright infringement. The judge found that Baigent and Leigh could not prove (or even define) that the central themes of their book were stolen, and further did not think it a good idea to have authors of "pretend historical books" scour works of fiction looking for stolen ideas.
Mark Stephens, a media lawyer for the losing side, stated:
"Whilst the decision shows that he didn't infringe copyright, his moral behavior is more, in my view, open to question. It's clear that he used the fundamental themes and ideas of 'Holy Blood, Holy Grail,' and many people will think that morally, Dan Brown owes a debt to Baigent, Leigh and Lincoln."
Of course, Dan Brown owes a "creative" debt to the authors of Holy Blood, Holy Grail, just as all fiction authors who use non-fiction (and in this case I'm using the word loosely) owe a debt to the research they do. Claiming compensation for it goes against centuries-old traditions of how culture is created.
An aside: in my original search for my post on the Da Vinci Code, I mistyped "Devinci Code" into the nytimes.com search and came up with this:
The nytimes.com article search engine couldn't find anything. Amusingly, it suggested I was searching for "deviancy code." Maybe it knows something I do not. And of course, the sponsored ads at the top and along the right-hand side correctly identified and displayed relevant links.
learning to read 04.20.2006, 8:00 AM
Somebody interviewed Bob for a documentary a few months ago. I don't remember who this was, because I was in the other room busy with something else, but I was half-listening to what was being discussed: how the book is changing, what precisely the Institute does, in short, what we discuss from day to day on this blog. One statement captured my ear: Bob offhandedly declared that "we don't really know how to read Wikipedia yet". I made a note of it at the time; since then I've been periodically pulling his statement out at idle moments and rolling it over and over in my mind like a pebble in my pocket, trying to decide exactly what it could mean.
There's something appealing to me about the flatness of the statement: "We don't really know how to read Wikipedia yet." It's obvious but revelatory: the reason that we find the Wikipedia frustrating is that we need to learn how to read it. (By we I mean the reading public as a whole. Perhaps you have; judging from the arguments that fly back and forth, it would seem that the majority of us haven't.) The problem is, of course, that so few people actually bother to state this sort of thing directly and then to unpack the repercussions of it.
What's there to learn in reading the Wikipedia? Let's start with a sample sentence from the entry on Marcel Proust:
In addition to the grief that attended his mother's death, Proust's life changed due to a very large inheritance (in today's terms, a principal of about $6 million, with a monthly income of about $15,000).
Criticizing the Wikipedia for being poorly written is like shooting fish in a barrel, but bear with my lack of sportsmanship for a second. Imagine that you found the above sentence in a printed reference work. A printed reference book that seems to be written in the voice of a sixth grade student deeply interested in matters financial might worry you. It would worry me. It's worried many critics of the Wikipedia, who point out that this clearly isn't the sort of manicured prose we're used to reading in books and magazines.
But this prose is also conceptually different. A Wikipedia article is not constructed in the same way that a magazine article is written. Nor is the content of a Wikipedia article at one particular instant in time - content that has probably been different, and will almost certainly change - analogous to the content of a print magazine article, which is always, from the moment of printing, exactly the same. If we are to keep using the Wikipedia, we'll have to get used to the solecisms endemic there; we'll also need to readjust the way we give credence to media. (Right now I'm going to tiptoe around the issue of text and authority, which is of course an enormous can of worms that I'd prefer not to open.) But there's a reason that the above quotation shouldn't be that worrying: it's entirely possible, and increasingly probable as time goes on, that when you click the link above, you won't be able to find the sentence I quoted.
This faith in the long run isn't an easy thing, however. When we read Wikipedia we tend to apply to it the standards of judgment that we would apply to a book or magazine, and it often fails by these standards, as might be expected. When we're judging Wikipedia this way, we presuppose that we know what it is formally: that it's the same sort of thing as the texts we know. This seems arrogant: why should we assume that we already know how to read something that clearly behaves differently from the text we're used to? We shouldn't, though we do: it's a human response to compare something new to something we already know, but often when we do this, we miss major formal differences.
This isn't the best way to read something new. It's akin to the "horseless carriage" analogy that Ben's used: when you think of a car as a carriage without a horse, you miss whatever it is that makes a car special. But there's a problem with that metaphor, in that it carries with it ideas of displacement. Evolution is often perceived as being transformative: one thing turns into, and is then replaced by, another, as the horse was replaced by the car for purposes of transportation. But it's usually more of a splitting: there's a new species as well as the old species from which it sprang. The old species may go extinct, or it may not. To finish that example: we still have horses.
Figuratively, what's happened with the Wikipedia is that a new species of text has arisen and we're still wondering why it won't eat the apples we're proffering it. The Wikipedia hasn't replaced print encyclopedias; in all probability, the two will coexist for a while. But I don't think we yet know how to read Wikipedia. We judge it by what we're used to, and everyone loses. Were you to judge a car by a horse's attributes, you wouldn't expect to have an oil crisis in a century.
Perhaps a useful way to think about this: a few paragraphs of Proust, found on a trip through In Search of Lost Time with Bob's statement bouncing around my head. The Guermantes Way, the third part of the book, feels like the longest: much of this volume is about failing to recognize how things really are. Proust's hapless narrator alternately recognizes his own mistakes of judgment and makes new ones for six hundred pages, with occasional flashes of insight, like this reflection:
. . . . There was a time when people recognized things easily when they were depicted by Fromentin and failed to recognize them at all when they were painted by Renoir.
Today people of taste tell us that Renoir is a great eighteenth-century painter. But when they say this they forget Time, and that it took a great deal of time, even in the middle of the nineteenth century, for Renoir to be hailed as a great artist. To gain this sort of recognition, an original painter or an original writer follows the path of the occultist. His painting or his prose acts upon us like a course of treatment that is not always agreeable. When it is over, the practitioner says to us, "Now look." And at this point the world (which was not created once and for all, but as often as an original artist is born) appears utterly different from the one we knew, but perfectly clear. Women pass in the street, different from those we used to see, because they are Renoirs, the same Renoirs we once refused to see as women. The carriages are also Renoirs, and the water, and the sky: we want to go for a walk in a forest like the one that, when we first saw it, was anything but a forest – more like a tapestry, for instance, with innumerable shades of color but lacking precisely the colors appropriate to forests. Such is the new and perishable universe that has just been created. It will last until the next geological catastrophe unleashed by a new painter or writer with an original view of the world.
(The Guermantes Way, pp.323–325, trans. Mark Treharne.) There's an obvious comparison to be made here, which I won't belabor. Wikipedia isn't Renoir, and its entry for poor Eugène Fromentin, whose paintings are probably better left forgotten, is cribbed from the 1911 Encyclopædia Britannica. But like the gallery-goers who needed to learn to look at Renoir, we need to learn to read Wikipedia, to read it as a new form that certainly inherits some traits from what we're used to reading, but one that differs in fundamental ways. That's a process that's going to take time.
dyson weighs in on goodmail 04.20.2006, 7:32 AM
Here is a follow-up on our post about Goodmail. A few weeks ago Esther Dyson wrote an op-ed piece in the New York Times in support of the Goodmail service, a start-up that charges senders of email a fee to ensure delivery. She argues that services like Goodmail's are inevitable and will provide value to email recipients by eliminating spam. While I agree that customers should be allowed to choose whatever services they want, many of Dyson's claims need to be examined further to see if they make sense.
"I agree that pretty soon sending most e-mail will cost money, but I think that's only right. It costs money to guarantee quality and safety. Moreover, I think the market will work, and that it will not shut out deserving senders, if we only let it work freely..."
"...In the short run, AOL and others will serve as the recipients' proxies. If they don't do a good job of ensuring that customers get the mail they want, even from nonpaying senders, they will lose their customers...
I'm not clear on how market competition requires additional cost to users. People are already free to choose email providers based on their spam filters, and email earns revenue for providers; even Yahoo, Gmail and Hotmail make money through ads and sponsored links. Adding another layer of fees will only give email providers an economic incentive to abandon their spam filters, which will make senders of email feel obligated to pay for these extra services. Email providers currently have an incentive to continually improve their spam filters, which work rather well. Strategies along the lines of improved mechanisms for users to report spam seem a fairer distribution of the costs of ensuring the delivery of email.
"And in the long run, recipients will be able to use services like Goodmail to set their own prices for receiving mail.
In my case, I'd have a list. I'd charge nothing for people I know, 50 cents for anyone new (though if I add the sender to my list after reading the mail, I'll cancel the 50 cents) and $3 for random advertisers. Ex-boyfriends pay $10."
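Reduced to code, her list amounts to a lookup table plus a refund rule. The Python sketch below is only an illustration of the scheme as she describes it; the addresses, data structures, and function names are all invented.

    # Illustrative only: the price tiers come from Dyson's op-ed, but the
    # whitelist structure and refund logic are invented for this sketch.

    KNOWN = {"friend@example.com"}        # people I know: free
    EX_BOYFRIENDS = {"ex@example.com"}    # ex-boyfriends: $10
    AD_DOMAINS = {"ads.example"}          # random advertisers: $3

    def price_to_reach_me(sender: str) -> float:
        """Fee (in dollars) a sender would pay for guaranteed delivery."""
        if sender in KNOWN:
            return 0.00
        if sender in EX_BOYFRIENDS:
            return 10.00
        if sender.split("@")[-1] in AD_DOMAINS:
            return 3.00
        return 0.50  # anyone new

    def settle(sender: str, added_to_list: bool) -> float:
        """If I add a new sender to my list after reading, cancel the 50 cents."""
        fee = price_to_reach_me(sender)
        if added_to_list and fee == 0.50:
            KNOWN.add(sender)
            return 0.00
        return fee

    print(settle("stranger@mail.example", added_to_list=True))  # 0.0, refunded

Even this toy version hints at the bookkeeping burden: every refund depends on the recipient actually reading the message and updating a list.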
I doubt the practicality of this scheme. It would only work if people received email, opened it, and approved new addresses in a timely manner. People do not read all their email; I've seen inboxes with thousands of messages, many of them unopened. People forget that they have requested information, which would lead to disputed charges. People abandon email addresses, and they accidentally erase email from servers. People change their minds about what spam is. Sometimes people sign up to receive email updates that later become spam to them. And if a firm buys an email list in good faith, is it open to these charges as well? Resolving all these disputes over who wanted what email would be a bureaucratic mess.
"If people like those little stamps that mark their mail as safe and wanted or as commercial transactions, then let the customers have them. And let other companies compete with Goodmail to offer better and less expensive service."
Goodmail passes costs not to receivers but to senders. The organizations that will feel the effect most are the ones with the fewest resources - for example, as she notes, non-profit organizations. Why this has to be true is still unclear to me.
wealth of networks 04.19.2006, 9:02 AM
I was lucky enough to be at The Wealth of Networks: How Social Production Transforms Markets and Freedom book launch at Eyebeam in NYC last week. After a short introduction by Jonah Peretti, Yochai Benkler got up and gave us his presentation. The talk was really interesting, covering the basic ideas in his book and delivered with the energy and clarity of a true believer. We are, he says, in a transitional period, during which we have the opportunity to shape our information culture and policies, and thereby the future of our society. From the introduction:
This book is offered, then, as a challenge to contemporary legal democracies. We are in the midst of a technological, economic and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will in some significant measure depend on policy choices that we make over the next decade or so. To be able to understand these choices, to be able to make them well, we must recognize that they are part of what is fundamentally a social and political choice—a choice about how to be free, equal, productive human beings under a new set of technological and economic conditions.
During the talk Benkler claimed an optimism for the future, with full faith in the strength of individuals and loose networks to increasingly contribute to our culture and, in certain areas, replace the moneyed interests that exist now. This is the long-held promise of the Internet, open-source technology, and the information commons. But what I'm looking forward to, treated at length in his book, is the analysis of the struggle between the contemporary economic and political structure and the unstructured groups enabled by technology. In one corner there is the system of markets in which individuals, government, mass media, and corporations currently try to control various parts of our cultural galaxy. In the other corner there are individuals, non-profits, and social networks sharing with each other through non-market transactions, motivated by uniquely human emotions (community, self-gratification, etc.) rather than profit. Benkler's claim is that current and future technologies enable richer non-market, public-good-oriented development of intellectual and cultural products. He also claims that this does not preclude the development of marketable products from these public ideas. In fact, he sees an economic incentive for corporations to support and contribute to the open-source/non-profit sphere. He points to IBM's Global Services division: the largest part of IBM's income comes from consulting fees collected from services related to open-source software implementations. [I have not verified whether this is an accurate portrayal of IBM's Global Services, but this article suggests that it is. Anecdotally, as a former IBM co-op, I can say that Benkler's idea has been widely adopted within the organization.]
Further discussion of the book will have to wait until I've read more of it. As an interesting addition, Benkler has put up a wiki to accompany his book. Kathleen Fitzpatrick has just posted about this. She brings up a valid criticism of the wiki: why isn't the text of the book included on the page? Yes, you can download the pdf, but the texts are in essentially the same environment—yet they are not together. This is one of the things we were trying to overcome with the Gamer Theory design. This separation highlights a larger issue, one that preoccupies us at the institute: how can we shape technology to let us handle text collaboratively and socially, yet still maintain an author's unique voice?
the networked book: an increasingly contagious idea 04.18.2006, 10:34 AM
Farrar, Straus and Giroux have ventured into waters pretty much uncharted by a big commercial publisher, putting the entire text of one of their latest titles online in a form designed to be read inside a browser. "Pulse," a sweeping, multi-disciplinary survey by Robert Frenay of "the new biology" -- "the coming age of systems and machines inspired by living things" -- is now available to readers serially via blog, RSS or email: two installments per day on weekdays and one per day on weekends.
Naturally, our ears pricked up when we heard they were calling the thing a "networked book" -- a concept we've been developing for the past year and a half, starting with Kim White's original post here on "networked book/book as network." Apparently, the site's producer, Antony Van Couvering, had never come across if:book and our mad theories before another blogger drew the connection following Pulse's launch last week. So this would seem to be a case of happy synergy. Let a hundred networked books bloom.
The site is nicely done, employing most of the standard blogger's toolkit to wire the book into the online discourse: comments, outbound links (embedded by an official "linkologist"), tie-ins to social bookmarking sites, a linkroll to relevant blog carnivals etc. There are also a number of useful tools for exploring the book on-site: a tag cloud, a five-star rating system for individual entries, a full-text concordance, and various ways to filter posts by topic and popularity.
My one major criticism is that the Pulse site is perhaps a little over-accessorized, its design informed less by the book's inherent structure and themes than by a general enthusiasm for Web 2.0 tools. Pulse clearly was not written for serialization and does not always break down well into self-contained units, so is a blog the ideal reading environment or just the one most readily at hand? Does the abundance of tools overcrowd the text and intimidate the reader? There has been very little reader commenting or rating activity so far.
But this could all be interpreted as a clever gambit: perhaps FSG is embracing the web with a good faith experiment in sharing and openness, and at the same time relying on the web's present limitations as a reading interface (and the dribbling pace of syndication -- they'll be rolling this out until November 6) to ultimately drive readers back to the familiar print commodity. We'll see if it works. In any event, this is an encouraging sign that publishers are beginning to broaden their horizons -- light years ahead of what Harper Collins half-heartedly attempted a few months back with one of its more beleaguered titles.
I also applaud FSG for undertaking an experiment like this at a time when the most aggressive movements into online publishing have issued not from publishers but from the likes of Google and Amazon. No doubt, Googlezon's encroachment into electronic publishing had something to do with FSG's decision to go ahead with Pulse. Van Couvering urges publishers to take matters into their own hands and start making networked books:
Why get listed in a secondary index when you can be indexed in the primary search results page? Google has been pressuring publishers to make their books available through the Google Books program, arguing (basically) that they'll get more play if people can search them. Fine, except Google may be getting the play. If you're producing the content, better do it yourself (before someone else does it).
I hope that Pulse is not just the lone canary in the coal mine but the first of many such exploratory projects.
Here's something even more interesting. In a note to readers, Frenay talks about what he'd eventually like to do: make an "open source" version of the book online (incidentally, Yochai Benkler has just done something sort of along these lines with his new book, "The Wealth of Networks" -- more on that soon):
At some point I'd like to experiment with putting the full text of Pulse online in a form that anyone can link into and modify, possibly with parallel texts or even by changing or adding to the wording of mine. I like the idea of collaborative texts. I also feel there's value in the structure and insight that a single, deeply committed author can bring to a subject. So what I want to do is offer my text as an anchor for something that then grows to become its own unique creature. I like to imagine Pulse not just as the book I've worked so hard to write, but as a dynamic text that can continue expanding and updating in all directions, to encompass every aspect of this subject (which is also growing so rapidly).
This would come much closer to the networked book as we at the institute have imagined it: a book that evolves over time. It also chimes with Frenay's theme of modeling technology after nature, repurposing the book as its own intellectual ecosystem. By contrast, the current serialized web version of Pulse is still very much a pre-network kind of book, its structure and substance frozen and non-negotiable; more an experiment in viral marketing than a genuine rethinking of the book model. Whether the open source phase of Pulse ever happens, we have yet to see.
But taking the book for a spin in cyberspace -- attracting readers, generating buzz, injecting it into the conversation -- is not at all a bad idea, especially in these transitional times when we are continually shifting back and forth between on and offline reading. This is not unlike what we are attempting to do with McKenzie Wark's "Gamer Theory," the latest draft of which we are publishing online next month. The web edition of Gamer Theory is designed to gather feedback and to record the conversations of readers, all of which could potentially influence and alter subsequent drafts. Like Pulse, Gamer Theory will eventually be a shelf-based book, but with our experiment we hope to make this networked draft a major stage in its growth, and to suggest what might lie ahead when the networked element is no longer just a version or a stage, but the book itself.
funding serious games 04.17.2006, 7:26 AM
In his recent article "Why We Need a Corporation for Public Gaming," David Rejeski proposes the creation of a government-funded entity for gaming, modeled after the Corporation for Public Broadcasting (CPB). He compares the early days of television to the early days of video gaming: twenty years after the birth of commercial broadcast television, he notes, the Lyndon Johnson administration created the CPB to combat the "vast wasteland" of television. The CPB started with an initial $15 million budget (which has since grown to $300 million), and Rejeski proposes a similar initial budget for a Corporation for Public Gaming (CPG). For Rejeski, video games are no longer sequestered in the bedrooms of teenage boys; they are as important a medium in our culture as television. He notes "that the average gamer is 30 years old, that over 40 percent are female, and that most adult gamers have been playing games for 12 years." He also cites examples of how a small but growing movement of "serious games" is being used toward educational and humanitarian ends. By claiming that a diversity of video games is important for the public good, and therefore important for the government to fund, he implies that these serious games are good for democracy.
Rejeski raises an important idea, with which I agree: gaming has more potential than saving princesses or shooting everything in sight. Fortunately, he acknowledges that government-funded game development will not cure all the ills he describes; after all, CPB-funded television programs did not fix television programming, and they have their own biases. Rejeski admits that ultimately "serious games, like serious TV, are likely to remain a sidebar in the history of mass media." My main contention with Rejeski's call is his focus on the final product, or content - in this case, comparing a video game with a television program. His analogy fails to recognize the equally important components of the medium: production and distribution. If we look at video games in terms of production and distribution as well as content, the allocation of government resources suggests a different outcome. In this analysis, funds would more efficiently be geared toward creating tools for making games and ensuring fair and open access to the network, with less emphasis on funding the creation of actual games.
Perhaps, rather than television, a better analogy is the creation of the Internet, which supports many-to-many communication and production. What started as a military project under DARPA became a set of protocols and networks that people used for academic, commercial, and individual purposes. A similar argument could be made for the creation of a freely distributed game development environment. Although the costs associated with computation and communication are decreasing, high-end development budgets for titles such as The Sims Online and Halo 2 are estimated to run in the tens of millions of dollars. That level of support is required to create sophisticated 3D and AI game engines.
Educators have been modding games of this caliber. For example, the Education Arcade's game Revolution, which teaches American history, was created using the Neverwinter Nights game engine. However, problems arise because the available character actions are often geared toward violence, and the male and female models are not representative of real people. Therefore, rather than focusing on the funding of games, creating a game engine and other game production tools, made open source and freely distributed, would provide an important resource for the non-commercial gaming community.
There are funders who support the creation of non-commercial games; however, as with most non-commercial ventures, resources are scarce. A game development environment released under a GPL-type license would thus allow serious game developers to spend their resources on design and game play, and potentially to address issues that may be too controversial for the government to fund. The issue of government funding for controversial content, be it television or games, is addressed further below.
In Rejeski's television analogy, he focuses on the content of the one-to-many broadcast model. One result of this focus is the lack of discussion of the equally important use of CPB funds to support the Public Broadcasting Service (PBS), which airs CPB-funded programs. By supporting PBS, an additional voice was added to the three television networks, which in theory is good for a functioning democracy. The one-to-many model also discounts the power of the many-to-many model that a fairly accessible network enables.
In the analogy of television and games, airwaves and cables are tightly controlled through spectrum allocation and private ownership of cable wires. Individual production of television programming is limited to public-access cable. Producing and distributing on-air television content is extremely expensive, and the costs do not scale down: a two-minute on-air clip is still expensive to produce and air, whereas small-scale games can be created and distributed with limited resources. In the many-to-many production model, supporting issues such as network neutrality or municipal broadband (along with new tools) would allow serious games to grow in sophistication, especially as games increasingly rely on the network not only for distribution but for game play as well. A Corporation for Public Gaming does not need to pay for municipal broadband networks. However, legislative backers of a CPG need to recognize that an open network is as tightly linked to non-commercial content as PBS is to the CPB. Again, keeping the network open will allow more resources to go toward content.
The problem with government-funded content, whether television programs or video games, is that the content will always be under the influence of mainstream cultural shifts. It may be hard to challenge the purpose of creating games that teach children with diabetes to manage their glucose levels, or that teach the balancing of state budgets. However, games that teach people about HIV/AIDS, evolution or religion are harder for the government to fund. Or better yet, take Rejeski's example of the United Nations World Food Programme game on resource allocation for disaster relief: what happens when that simulation is expanded to include issues like religious conflict, population control, and international favoritism?
Further, looking at the CPB example, it is important to acknowledge the commercial interests in CPB-funded programs. Programs broadcast on PBS receive funding from the CPB, private foundations, and corporate sponsorship, often all three for a single program. It becomes increasingly hard to defend children's television as "non-commercial" when one considers the proliferation of products based on CPB-funded children's educational shows, such as Sesame Street's "Tickle Me Elmo" dolls. We therefore need to be careful when we describe CPB and PBS programs as "non-commercial."
Even "public television," then, is produced with commercial interests involved, and is affected by them, if to a lesser degree than commercial network programming. Investment in fair distribution of and access to the network, as well as the development of accessible tools for game production, would allow more opportunity for the democratization of game development that Rejeski is suggesting.
Currently, many of the serious games being created are niche games with a very specific, at times small, audience. Digital technologies excel in this many-to-many model. As opposed to the one-to-many communication model of television, the many-to-many production of DIY game design allows for many more voices. Some segment of federal grants supporting these games will fall prey to criticism if the content strays too far from the current mainstream. The vital question, then, is how we support the diversity of voices needed to maintain a democracy in the gaming world, given the scarce resource of federal funding. Allocating resources toward tools and access may be more effective overall in supporting the creation of serious games. Although I agree with Rejeski's intentions, I suggest the idea of government-funded video games needs to expand to include production and distribution, along with limited support of content for serious games.
italian videobloggers create open source film 04.16.2006, 12:12 PM
An article in today's La Repubblica reports that Italian videobloggers are at work creating an "open source film" about the recent election there. A website called Nessuno.TV is putting together a project called Le mie elezioni ("My Elections"). Visitors to the site were invited to submit their own short films. Director Stefano Mordini plans to weld them together into an hour-long documentary in mid-May.
The raw materials are already on display: they've acquired an enormous number of short films which provide an interesting cross section of Italian society. Among many others, Davide Preti interviews a husband and wife about their opposing views on the election. Stiletto Paradossale's series "That Thing Called Democracy" interviews people on the street in the small towns of Serrapetrona and Caldarola about what's important about democracy. In a neat twist, Donne liberta di stampa interview a reporter from the BBC about what she thinks about the elections. And Robin Good asks the children what they think.
Not all the films are interviews. Maurizio Dovigi presents a self-filmed open letter to Berlusconi. ComuniCalo eschews video in "Una notta terribile!" ("A Terrible Night!"), a slideshow of images from the long night in Rome spent waiting for results. And Luna di Velluto offers a sped-up self-portrait of her reaction to the news on that same night.
It's immediately apparent that most of these films come from the left. This isn't an isolated occurrence: the Italian left seems to have understood that the network can be a political force. In January, I noted the popularity of comic Beppe Grillo's blog. Since then, it's only become more popular: recent entries have averaged around 3,000 comments each (this one, from four days ago, has 4,123). Nor is he limiting himself to the blog: there are weekly PDF summaries of issues, MeetUp groups, and a blook/DVD combo. Compare this hyperactivity to the staid websites of Berlusconi's Forza Italia party and the Silvio Berlusconi Fans Club.
The Italian left's embrace of the Internet has been partly out of necessity: since Berlusconi owns most of the Italian media, views that counter his have been largely absent. The perception is that the mainstream media has stagnated, while there is clearly a thirst for intelligent commentary: an astounding five million viewers tuned in to an appearance by Umberto Eco on TV two months ago. Bruno Pellegrini, who runs Nessuno.TV, suggests that the Internet can offer a corrective alternative:
We want to be a TV "made by those who watch it": a participatory TV, in which the spectators actively contribute to the construction of the programming schedule. We are riding the tendencies of the moment, using the technologies available at the lowest cost, and involving young people who are convinced that an alternative to regular TV can be constructed. And we're starting to build it.
They're off to an impressive start, and I'll be curious to see how far they get with this. One nagging thought: most of these videos would have copyright issues in the U.S. Many use background music that almost certainly hasn't been cleared by the owners. Some use video clips and photos that are probably owned by the mainstream press. The dread hand of copyright enforcement isn't as strong in Italy as it is in the U.S., but it still exists. It would be a shame if rights issues brought down such a worthy community project.
GAM3R 7H30RY: part 4 04.13.2006, 5:51 PM
We've moved past the design stage with the GAM3R 7H30RY blog and forum. We're releasing the book in two formats: all at once (date to be decided soon) in the page card format, and through RSS syndication. We're collecting user input and feedback in two ways: comments submitted through the page card interface, and user posts in the forum.
The idea is to nest Ken's work in the social network that surrounds it, made visible in the number of comments and topics posted. This accomplishes something fairly radical, shifting the focus from an author's work towards the contributions of a work's readers. The integration between the blog and forums, and the position of the comments in relation to the author's work emphasizes this shift. We're hoping that the use of color as an integrating device will further collapse the usual distance between the author and his reading (and writing) public.
To review all the stages this project has been through before it arrived at this point, check out Part I, Part II, and Part III. The design changes show the evolution of our thinking and our recognition of the different problems we faced: screen real estate, reading environment, maintaining the author's voice while introducing the public, and making it fun. The basic interaction design emerged from those constraints. The page card concept arose from both the form of Ken's book—a regimented number of paragraphs of limited length—and the constraints of screen real estate (1024x768). The overlapping arose from the physical handling of the 'Oblique Strategies' cards, and helps to present all the information on a single screen. The count of pages (five per section, five sections per chapter) is a further expression of the structure that Ken wrote into the book. Comments were lifted from their usual inglorious spot under the writer's post to sit right beside the work, which lends them some additional weight.
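For the structurally minded, here is a rough sketch of that hierarchy as data. The names are mine, made up for illustration; they are not the site's actual code.

```python
# A rough sketch of the page card structure described above.
# All names are invented for illustration, not taken from the site's code.
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class PageCard:
    paragraph: str                      # one regimented paragraph of the book
    comments: list[Comment] = field(default_factory=list)  # shown beside the card, not beneath it

@dataclass
class Section:
    cards: list[PageCard]               # five cards per section

@dataclass
class Chapter:
    sections: list[Section]             # five sections per chapter
```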
We've also reimagined the entry point for the forums with the topic pool. It provides a dynamic view of the forums, raising the traditional list into the realm of something energetic, more accurately reflecting the feel of live conversation. It also helps clarify the direction of a topic discussion with a first-post/last-post view (visible in the design's mouseover state). This simple preview lets users know whether a discussion has kept tightly to the subject or spun out of control into trivialities.
We've been careful with the niceties: the forum indicator bars turned on their sides to resemble video game power ups; the top of the comments sitting at the same height as the top of their associated page card; the icons representing comments and replies (thanks to famfamfam).
Each of the designed pages changed several times. The page cards have been the most drastically and frequently changed, but the home page also went through a significant series of edits in a matter of a few days. As with many things related to design, I took several missteps before alighting on something that seems, in retrospect, perfectly obvious. Although the table of contents is traditionally an integrated part of a bound volume, I tried (and failed) to give it a different alignment and layout. I'm not sure why; it seemed like a good idea at the time. I also wanted to include a hint of the pages to come, but unfortunately it just made it difficult for the eye to move smoothly across the page. Finally I settled on a simpler concept, one that harmonized with the other layouts, and it all snapped into place.
With that we began the production stage, and we're making it all real. Next update will be a pre-launch announcement.
privacy matters 2: delicious privacy 04.12.2006, 3:46 PM
Social bookmarking site del.icio.us announced last month that it will give people the option to make bookmarks private -- for "those antisocial types who don't like to share their toys." This is a sensible layer to add to the service. If del.icio.us really is to take over the function of local browser-based bookmarks, there should definitely be a "don't share" option. A next, less antisocial, step would be to add a layer of semi-private sharing within defined groups -- family, friends, or something resembling Flickr Groups.
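To make the layering concrete, here is a minimal sketch of how such visibility rules might work -- my own illustration, not del.icio.us's actual data model.

```python
# A minimal sketch of three visibility layers: private, group, public.
# Purely illustrative; the real implementation is unknown to me.
def can_see(bookmark: dict, viewer: str) -> bool:
    """Owner always sees a bookmark; groups gate the semi-private layer."""
    if viewer == bookmark["owner"]:
        return True
    if bookmark["visibility"] == "public":
        return True
    if bookmark["visibility"] == "group":
        return viewer in bookmark.get("group_members", set())
    return False  # "private": owner only

link = {"owner": "alice", "visibility": "group", "group_members": {"bob", "carol"}}
assert can_see(link, "bob")
assert not can_see(link, "dave")
```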
Of course, considering that del.icio.us is now owned by Yahoo, the question of layers gets trickier. There probably isn't a "don't share" option for them.
privacy matters 04.11.2006, 6:02 PM
In a recent post, Susan Crawford magisterially weaves together a number of seemingly disparate strands into a disturbing picture of the future of privacy, first looking at the still under-appreciated vulnerability of social networking sites. Recently ratcheted-up scrutiny on MySpace and other similar episodes suggest to Crawford that some sort of privacy backlash is imminent -- a backlash, however, that may come too late.
The "too late" part concerns the all too likely event of a revised Telecommunications bill that will give internet service providers unprecedented control over what data flows through their pipes, and at what speed:
...all of the privacy-related energy directed at the application layer (at social networks and portals and search engines) may be missing the point. The real story in this country about privacy will be at a lower layer - at the transport layer of the internet. The pipes. The people who run the pipes, and particularly the last mile of those pipes, are anxious to know as much as possible about their users. And many other incumbents want this information too, like law enforcement and content owners. They're all interested in being able to look at packets as they go by their routers, something that doesn't traditionally happen on the traditional internet.
...and looking at them makes it possible for much more information to be available. Cisco, in particular, has a strategy it calls the "self-defending network," which boils down to tracking much more information about who's doing what. All of this plays on our desires for security - everyone wants a much more secure network, right?
Imagine an internet without spam. Sounds great, but at what price? Manhattan is a lot safer these days (for white people, at least), but we know how Giuliani pulled that one off: by talking softly and carrying a big broom, the Disneyfication of Times Square, etc. In some ways, Times Square is the perfect analogy for what America's net could become if deregulated.
And we don't need to wait for Congress for the deregulation to begin. Verizon was recently granted exemption from rules governing business broadband service (price controls and mandated network-sharing with competitors) when a deadline passed for the FCC to vote on a 2004 petition from Verizon to entirely deregulate its operations. It's hard to imagine how such a petition must have read:
"Dear FCC, please deregulate everything. Thanks. --Verizon"
And harder still to imagine that such a request could be even partially granted simply because the FCC was slow to come to a decision. These people must be laughing very hard in a room very high up in a building somewhere. Probably Times Square.
Last month, when a federal judge ordered Google to surrender a sizable chunk of (anonymous) search data to the Department of Justice, the public outcry was predictable. People don't like it when the government starts snooping and treading on their civil liberties, hence the ongoing kerfuffle over wiretapping. What fewer people question is whether Google should have all this information in the first place. Crawford picks up on this:
...three things are working together here, a toxic combination of a view of the presidency as being beyond the law, a view by citizens that the internet is somehow "safe," and collaborating intermediaries who possess enormous amounts of data.
The recent Google subpoena case fits here as well. Again, the government was seeking a lot of data to help it prove a case, and trying to argue that Google was essential to its argument. Google justly was applauded for resisting the subpoena, but the case is something of a double-edged sword. It made people realize just how much Google has on hand. It isn't really a privacy case, because all that was sought were search terms and URLs stored by Google -- no personally-identifiable information. But still this case sounds an alarm bell in the night.
New tools may be in the works that help us better manage our online identities, and we should demand that networking sites, banks, retailers and all the others that handle our vital stats be more up front about their procedures and give us ample opportunity to opt out of certain parts of the data-mining scheme. But the question of pipes seems to trump much of this. How to keep track of the layers...
Another layer coming soon to an internet near you: network data storage. Online services that do the job of our hard drives, storing and backing up thousands of gigabytes of material that we can then access from anywhere. When this becomes cheap and widespread, it might be more than our identities getting snooped.
Amazon's new S3 service charges 15 cents per gigabyte of storage per month, and 20 cents per gigabyte of data transferred. To the frequently asked question "how secure is my data?" they reply:
Amazon S3 uses proven cryptographic methods to authenticate users. It is your choice to keep your data private, or to make it publicly accessible by third parties. If you would like extra security, there is no restriction on encrypting your data before storing it in S3.
Yes, it's our choice. But what if those third parties come armed with a court order?
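Amazon's caveat is worth making concrete. Here is a minimal sketch of encrypt-before-upload, using the boto3 and cryptography Python packages (my choices for illustration; any cipher and upload path would do):

```python
# A minimal sketch of client-side encryption before storing data in S3.
# boto3 and cryptography are assumed here purely for illustration.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this somewhere private; losing it loses the data
cipher = Fernet(key)

with open("diary.txt", "rb") as f:
    ciphertext = cipher.encrypt(f.read())   # encrypted locally, before S3 ever sees it

s3 = boto3.client("s3")
s3.put_object(Bucket="my-backups", Key="diary.txt.enc", Body=ciphertext)

# Amazon -- or anyone arriving with a court order -- now holds only ciphertext;
# decryption requires the key, which never left your machine.
```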
blogburst 04.10.2006, 4:14 PM
A small Austin, TX-based company called Pluck is launching a new blog aggregation service called BlogBurst that will filter postings from hundreds of approved bloggers and syndicate their content to major news services (and eventually to smaller niche publications as well). Tomorrow, BlogBurst lets rip its fire hose of content at a handful of major newspapers, including USA Today publisher Gannett Co., The Washington Post, The San Francisco Chronicle, and local pubs The Austin American-Statesman and the San Antonio Express-News. Some are calling this a further blurring of the boundary between mainstream and independent media. Seems to me more like an expansion of the umbrella of the former and a buttressing of the oft-lamented "power law" with regard to the latter (how the most popular blogs get entrenched in an "A-list" in spite of the popular belief in a level playing field). The AP has more.
Any blogger can sign up with BlogBurst, but some editorial body there decides which blogs go into the syndication feed. Presumably, if the thing takes off, they'll start breaking it up into multiple feeds -- some generalized, some specialized. Participating publishers are provided with "editorial management tools" called the "publisher workbench." So if I'm a newspaper, I receive a daily dump of thousands of blog postings, broken down into different topic areas. I fiddle around with those in the workbench, choose the ones I want, and then plug them into various slots in my paper. Technically, it works like this (warning: acronym blitz):
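As a stand-in for the acronyms, here is my own minimal sketch of the kind of filter-and-choose pipeline being described. Every name in it is invented; BlogBurst's actual system surely differs.

```python
# A hypothetical sketch of a publisher-side "workbench": take the daily dump
# of syndicated posts, filter by topic, and keep a few picks per slot.
from dataclasses import dataclass

@dataclass
class Post:
    blog: str
    title: str
    topic: str      # e.g. "sports", "tech", "opinion"
    body: str

def workbench(dump: list[Post], wanted: set[str], per_topic: int) -> dict[str, list[Post]]:
    """Group the day's dump by topic, keeping the first few picks per topic."""
    picks: dict[str, list[Post]] = {t: [] for t in wanted}
    for post in dump:
        if post.topic in wanted and len(picks[post.topic]) < per_topic:
            picks[post.topic].append(post)
    return picks
```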
Incidentally, the name blogburst is a bit of co-opted net jargon describing any coordinated effort by bloggers to flood the web with postings on a particular topic -- usually some hot-button issue like the Jyllands-Posten Muhammad cartoons. Search "blogburst" today on Technorati and you'll find a slew of right-wing bloggers on a "guard the borders" rhetorical rampage (ha! idealistic me, I initially thought they meant the borders between mainstream and grassroots media!).
Meanwhile, as I write, thousands march down Broadway in New York -- blogging, as it were, with their feet -- in support of America's illegal immigrants.
I wonder how the two-capital-Bs BlogBurst will deal with the political polarization of blogs.
corporate creep 04.06.2006, 8:03 AM
A short article in the New York Times (Friday, March 31, 2006, pg. A11) reported that the Smithsonian Institution has made a deal with Showtime in the interest of gaining an "active partner in developing and distributing [documentaries and short films]." The deal creates Smithsonian Networks, which will produce documentaries and short films to be released on an on-demand cable channel. Smithsonian Networks retains the right of first refusal to "commercial documentaries that rely heavily on Smithsonian collection or staff." Ostensibly, this means that interviews with top personnel on broad topics are OK, but it may be difficult to get access to the paleobotanist to discuss the Mesozoic era. The most troubling part of this deal is that it extends to the Smithsonian's collections as well. Tom Hayden, general manager of Smithsonian Networks, said the "collections will continue to be open to researchers and makers of educational documentaries." So at least they are not trying to shut down educational uses of these public cultural and scientific artifacts.
Except they are. The right of first refusal essentially takes the public institution and artifacts off the shelf, to be doled out only on approval. "A filmmaker who does not agree to grant Smithsonian Networks the rights to the film could be denied access to the Smithsonian's public collections and experts." Additionally, the qualifications for access are ill-defined: if you are making a commercial film, which may also be a rich educational resource, well, who knows if they'll let you in. This is a blatant example of the corporatization of our public culture, and one that frankly seems hard to comprehend. From the Smithsonian's mission statement:
The Smithsonian is committed to enlarging our shared understanding of the mosaic that is our national identity by providing authoritative experiences that connect us to our history and our heritage as Americans and to promoting innovation, research and discovery in science.
Hayden stated that the reason for forming Smithsonian Networks is to "provide filmmakers with an attractive platform on which to display their work." Yet Linda St. Thomas, a spokeswoman for the Smithsonian, stated plainly: "if you are doing a one-hour program on forensic anthropology and the history of human bones, that would be competing with ourselves, because that is the kind of program we will be doing with Showtime On Demand." Filmmakers are not happy, and this seems like the opposite of "enlarging our shared understanding." It must have been quite a coup for Showtime to end up with stewardship of one of America's treasured archives.
The application of corporate control over public resources follows the long-running trend toward privatization that began in the 80s. Privatization assumes that the market, measured by profit and share price, provides an accurate barometer of success. But the corporate mentality toward profit doesn't necessarily serve the best interest of the public. In "Censoring Culture: Contemporary Threats to Free Expression" (New Press, 2006), André Schiffrin contributes an essay outlining the effects that market orientation has had on the publishing industry:
As one publishing house after another has been taken over by conglomerates, the owners insist that their new book arm bring in the kind of revenue their newspapers, cable television networks, and films do....
To meet these new expectations, publishers drastically change the nature of what they publish. In a recent article, the New York Times focused on the degree to which large film companies are now putting out books through their publishing subsidiaries, so as to cash in on movie tie-ins.
The big publishing houses have edged away from variety and moved towards best-sellers. Books, traditionally the movers of big ideas (not necessarily profitable ones), have been homogenized. It's likely that what comes out of the Smithsonian Networks will have high production values. This is definitely a good thing. But it also seems likely that the burden of the bottom line will inevitably drag the films down from a public education role to that of entertainment. The agreement may keep some independent documentaries from being created; at the very least it will have a chilling effect on the production of new films. But in a way it's understandable. This deal comes at a time of financial hardship for the Smithsonian. I'm not sure why the Smithsonian didn't try to work out some other method of revenue sharing with filmmakers, but I am sure that Showtime is underwriting a good part of this venture with the Smithsonian. The rest, of course, is coming from taxpayers. By some twist of profiteering logic, we are paying twice: once to have our resources taken away, and then again to have them delivered, on demand. Ironic. Painfully, heartbreakingly so.
the age of amphibians 04.05.2006, 1:44 AM
Momus is a Scottish pop musician, based in Berlin, who writes smart and original things about art and technology. His wonderful blog, Click Opera, is some of the best reading on the web. He wears an eye patch. And he is currently doing a stint as an "unreliable tour guide" at the Whitney Biennial, roving through the galleries, sneaking up behind museum-goers with a bullhorn.
A couple of weeks ago, Dan had the bright idea of inviting Momus -- seeing as he is currently captive in New York and interested, like us, in the human migration from analog to digital -- to visit the institute. Knowing almost nothing about who we are or what we do, he bravely accepted the offer and came over to Brooklyn on one of the Whitney's dark days and lunched at our table on the customary menu of falafel and babaganoush. Yesterday, he blogged some thoughts about our meeting.
Early on, as happens with most guests, Momus asked something along the lines of: "so what do you mean by 'future of the book?'" Always an interesting moment, in a generally blue-sky, thinky endeavor such as ours, when you're forced to pin down some specifics (though in other areas, like Sophie, it's all about specifics). "Well," (some clearing of throats) "what we mean is..." "Well, you see, the thing you have to understand is..." ...and once again we launch into a conversation that seems to lap at the edges of our table with tide-like regularity. Overheard:
"Well, we don't mean books in the literal sense..."
"The book at its most essential: an instrument for moving big ideas."
"A sustained chunk of thought."
And so it goes... In the end, though, it seems that Momus figured out what we were up to, picking up on our obsession with the relationship between books and conversation:
It seems they're assuming that the book itself is already over, and that it will survive now as a metaphor for intelligent conversation in networks.
It's always interesting (and helpful) to hear our operation described by an outside observer. Momus grasped (though I don't think totally agreed with) how the idea of "the book" might be a useful tool for posing some big questions about where we're headed -- a metaphorical vessel for charting a sea of unknowns. And yet also a concrete form that is being reinvented.
Another choice tidbit from Momus' report -- the hapless traveler's first encounter with the institute:
I found myself in a kitchen overlooking the sandy back courtyard of a plain clapperboard building on North 7th Street. There were about six men sitting around a kidney-shaped table. One of them was older than the others and looked like a delicate Vulcan. "I expect you're wondering why you're here?" he said. "Yes, I've been very trusting," I replied, wondering if I was about to be held hostage by a resistance movement of some kind.
Well, it turned out that the Vulcan was none other than Bob Stein, who founded the amazing Voyager multi-media company, the reference for intelligent CD-ROM publishing in the 90s.
He took a lovely picture of the office for his post. He also had this to say about the "blook":
What is a blook? It's a blog that turns into a book, the way, in evolution, mammals went back into the sea and became fish again. Except they didn't really do that, although undoubtedly some of us still enjoy a good swim.
And expanding upon this in a comment further down:
...the cunning thing about the concept of the blook is that it posits the book as coming after the blog, not before it, as some evolutionist of media forms would probably do. In this reading, blogs are the past of the book, not its future.
To be that evolutionist for a moment, the "blook" is indeed a curious species, falling somewhere under the genus "networked book," but at the same time resisting cozy classification, wriggling off the taxonomic hook by virtue of its seemingly regressive character: moving from bits back to atoms; live continuous feedback back to inert bindings and glue. I suspect that "the blook" will be looked back upon as an intriguing artifact of a transitional period, a time when the great apes began sprouting gills.
If we are in fact becoming "post-book," might this be a regression? A return to an aquatic state of culture, free-flowing and gradually accreting like oral tradition, away from the solid land of paper, print and books? Are we living, then, in an age of amphibians? Hopping in and out of the water, equally at home in both? Is the blog that tentative dip in the water and the blook the return to terra firma?
But I thought the theory of evolution had broken free of this kind of directionality: the Enlightenment idea of progress, the great chain gang of being. Isn't it all just a long meander, full of forks, leaps and mutations? And so isn't the future of the book also its past? Might we move beyond the book and yet also stay with it, whether as some defined form or an actual thing in our (webbed) hands? No progress, no regress, just one long continuous motion? Sounds sort of like a conversation...
open source DRM? 04.04.2006, 1:04 AM
A couple of weeks ago, Sun Microsystems released specifications and source code for DReaM, an open-source, "royalty-free digital rights management standard" designed to operate on any certified device, licensing rights to the user rather than to any particular piece of hardware. DReaM (Digital Rights Management -- everywhere available) is the centerpiece of Sun's Open Media Commons initiative, announced late last summer as an alternative to Microsoft, Apple, and other content protection systems. Yesterday, it was the subject of Eliot Van Buskirk's column in Wired:
Sun is talking about a sea change on the scale of the switch from the barter system to paper money. Like money, this standardized DRM system would have to be acknowledged universally, and its rules would have to be easily converted to other systems (the way U.S. dollars are officially used only in America but can be easily converted into other currency). Consumers would no longer have to negotiate separate deals with each provider in order to access the same catalog (more or less). Instead, you -- the person, not your device -- would have the right to listen to songs, and those rights would follow you around, as long as you're using an approved device.
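The core idea -- rights keyed to an authenticated user rather than to a device -- can be sketched in a few lines. This is purely my illustration, not the DReaM specification.

```python
# An illustrative sketch of user-bound (rather than device-bound) rights.
# The license store and rights vocabulary here are invented, not DReaM's.
LICENSES = {
    ("alice", "song-42"): {"play", "quote", "classroom-copy"},
}

def may(user: str, work: str, use: str) -> bool:
    """Any certified device can run this check once the user authenticates."""
    return use in LICENSES.get((user, work), set())

# The same rights follow Alice to any approved device she signs in on:
assert may("alice", "song-42", "play")
assert not may("bob", "song-42", "play")
```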
The OMC promises to "promote both intellectual property protection and user privacy," and certainly DReaM, with its focus on interoperability, does seem less draconian than today's prevailing systems. Even Larry Lessig has endorsed it, pointing with satisfaction to a "fair use" mechanism built into the architecture, ensuring that certain uses like quotation, parody, or copying for the classroom are not blocked. Van Buskirk points out, however, that the fair use protection is optional and left to the discretion of the publisher (not a promising sign). Interestingly, the debate over DReaM has caused a rift among copyright progressives. Van Buskirk points to an August statement from the Electronic Frontier Foundation criticizing DReaM for not going far enough to safeguard fair use, and for falsely donning the mantle of openness:
Using "commons" in the name is unfortunate, because it suggests an online community committed to sharing creative works. DRM systems are about restricting access and use of creative works.
True. As terms like "commons" and "open source" seep into the popular discourse, we should be increasingly on guard against their co-option. Yet I applaud Sun for trying to tackle the interoperability problem, shifting control from the manufacturers to an independent standards body. But shouldn't mandatory fair use provisions be a baseline standard for any progressive rights scheme? DReaM certainly looks like less of a nightmare than plain old DRM, but does it go far enough?
on the importance of the collective in electronic publishing 04.03.2006, 12:04 AM
(The following polemic is cross-posted from the planning site for a small private meeting the Institute is holding later this month to discuss the possible establishment of an electronic press. Also posted on The Valve.)
One of the concerns that often gets raised early in discussions of electronic scholarly publishing is that of business model -- how will the venture be financed, and how will its products be, to use a word I hate, monetized? What follows should not at all suggest that I don't find such questions important. Clearly, they're crucial; unless an electronic press is in some measure self-sustaining, it simply won't last long. Foundations might be happy to see such a venture get started, but nobody wants to bankroll it indefinitely.
I also don't want to fall prey to what has been called the "paper = costly, electronic = free" fallacy. Obviously, many of the elements of traditional academic press publishing that cost -- whether in terms of time, or of money, or both -- will still exist in an all-electronic press. Texts still must be edited and transformed from manuscript to published format, for starters. Plus, there are other costs associated with the electronic -- computers and their programming, to take only the most obvious examples -- that don't exist in quite the same measure in print ventures.
But what I do want to argue for, building off of John Holbo's recent post, is the importance of collective, cooperative contributions of academic labor to any electronic scholarly publishing venture. For a new system like the one we're hoping to build with ElectraPress to succeed, we need a certain amount of buy-in from those who stand to benefit from it, a commitment to get the work done and to make the form succeed.
I've been thinking about this need for collectivity through a comparison with the model of open-source software. Open source has succeeded, in large part, due to the commitments that hundreds of programmers have made, not just to their individual projects but to the system as a whole. Most of these programmers work regular, paid gigs on corporate projects, all the while reserving some measure of their time and devotion for non-profit, collective projects. That time and devotion are given freely because of a sense of the common benefits that all will reap from the project's success.
So with academics. We are paid, by and large, and whether we like it or not, for delivering certain kinds of knowledge-work to paying clients. We teach, we advise, we lecture, and so forth, and all of this is primarily done within the constraints of someone else's needs and desires. But the job also involves, or allows, to varying degrees, reserving some measure of our time and devotion for projects that are just ours, projects whose greatest benefits are to our own pleasure and to the collective advancement of the field as a whole.
If we're already operating to that extent within an open-source model, what's to stop us from taking a further plunge, opening publishing cooperatives, and thereby transforming academic publishing from its current (if often inadvertent) non-profit status to an even lower-cost, collectively underwritten financial model?
I can imagine two possible points of resistance among traditional humanities scholars toward such a plan, points that originate in individualism and technophobia.
Individualism, first: it's been pointed out many times that scholars in the humanities have strikingly low rates of collaborative authorship. Politically speaking, this is strange. Even as many of us espouse communitarian (or even Marxist) ideological positions, and even as we work to break down long-held bits of thinking like the "great man" theory of history, or of literary production, we nonetheless cling to the notion that our ideas are our own, that scholarly work is the product of a singular brain. Of course, when we stop to think about it, we're willing to admit that it's not true -- that, of course, is what the acknowledgments and footnotes of our books are for -- but venturing into actual collaborations remains scary. Moreover, many of us seem to have the same kinds of nervousness about group projects that our students have: What if others don't pull their weight? Will we get stuck with all of the work, but have to share the credit?
I want to answer that latter concern by suggesting, as John has, that a collective publishing system might operate less like those kinds of group assignments than like food co-ops: in order to be a member of the co-op -- and membership should be required in order to publish through it -- everyone needs to put in a certain number of hours stocking the shelves and working the cash register. As to the first mode of this individualist anxiety, though, I'm not sure what to say, except that no scholar is an island, that we're all always working collectively, even when we think we're most alone. Hand off your manuscript to a traditional press, and somebody's got to edit it, and typeset it, and print it; why shouldn't that somebody be you?
Here's where the technophobia comes in -- or perhaps it's just a desire to have someone else do the production work, masquerading as a kind of technophobia -- because many of the responses to that last question revolve around either not knowing how to do this kind of publishing work or not wanting to take on the burden of figuring it out. But I strongly suspect there will come a day, in the not too distant future, when we look back on those of us who handed our manuscripts over to presses for editing, typesetting, printing, and dissemination in much the same way that I currently look back on those emeriti who had their secretaries -- or better still, their wives -- type their manuscripts for them. For better or for worse, word processing has become part of the job; with the advent of the web and various easily learned authoring tools, editing and publishing are becoming part of the job as well.
I'm strongly of the opinion that, if academic publishing is going to survive into the next decades, we need to stop thinking about how it's going to be saved, and instead start thinking about how we are going to save it. And a business model that relies heavily on the collective -- particularly, on labor that is shared for everyone's benefit -- seems to me absolutely crucial to such a plan.