Listing entries tagged with writing
robert frost's digital disciple 02.05.2008, 9:58 AM
Via Ron Silliman, an interesting profile of Edmund Skellings, poet laureate of Florida since 1980 and newly appointed professor of humanities at Florida Tech. A New Englander, Skellings started off as a poet in the Robert Frost mould, and even studied under Frost at the University of Iowa in the late 50s. Around that time, however, he started experimenting with sound recordings on magnetic tape and later published a book of poems, Duels and Duets, whose covers were two vinyl recordings of Skellings's voice. In 1978, Skellings discovered computers and thence embarked on a long career as an electro-poetic experimenter, combining audio recordings with digital animations of imagery and text, all the while retaining a poetic style as accessible and unadorned as Frost's (or so the Florida Today article asserts). You can view some of his digital creations on his web site. Skellings isn't necessarily the electronic poet (or animator) for me, but his life is an interesting case study of literary and technological flux.
developing books in networked communities: a conversation with don waters 02.04.2008, 2:22 AM
Two weeks ago, when the blog-based peer review of Noah Wardrip-Fruin's Expressive Processing began on Grand Text Auto, Bob sent a note about the project to Don Waters, the program officer for scholarly communications at the Andrew W. Mellon Foundation -- someone very much at the forefront of developments in the digital publishing arena. He wrote back intrigued but slightly puzzled as to the goals, scope and definitions of the experiment. We forwarded the note to Noah and to Doug Sery, Noah's editor at MIT Press, and decided that we would each write some clarifying responses from our different perspectives: book author/blogger (Noah), book editor (Doug), and web editor (myself). The result is an interesting exchange about networked publishing and a useful meta-document about the project. As our various responses, and Don's subsequent reply, help to articulate, playing with new forms of peer review is only one aspect of this experiment, and maybe not even the most interesting one. The exchange is reproduced below (a couple of names mentioned have been made anonymous).
Don Waters (Mellon Foundation):
Thanks, Bob. This is a very interesting idea. In reading through the materials, however, I did not really understand how, if at all, this "experiment" would affect MIT Press behavior. What are the hypotheses being tested in that regard? I can see, from one perspective, that this "experiment" would result purely in more work for everyone. The author would get the benefit of the "crowd" commenting on his work, and revise accordingly, and then the Press would still send the final product out for peer review and copy editing prior to final publication.
Ben Vershbow (Institute for the Future of the Book):
There are a number of things we set out to learn here. First, can an open, Web-based review process make a book better? Given the inherently inter-disciplinary nature of Noah's book, and the diversity of the Grand Text Auto readership, it seems fairly likely that exposing the manuscript to a broader range of critical first-responders will bring new things to light and help Noah to hone his argument. As can be seen in his recap of discussions around the first chapter, there have already been a number of incisive critiques that will almost certainly impact subsequent revisions.
Second, how can we use available web technologies to build community around a book, or to bring existing communities into a book's orbit? "Books are social vectors, but publishers have been slow to see it," writes Ursula K. Le Guin in a provocative essay in the latest issue of Harper's. For the past three years, the Institute for the Future of the Book's mission has been to push beyond the comfort zone of traditional publishers, exploring the potential of networked technologies to enlarge the social dimensions of books. By building a highly interactive Web component to a text, where the author and his closest peers are present and actively engaged, and where the entire text is accessible with mechanisms for feedback and discussion, we believe the book will occupy a more lively and relevant place in the intellectual ecology of the Internet and probably do better overall in the offline arena as well.
The print book may have some life left in it yet, but it now functions within a larger networked commons. To deny this could prove fatal for publishers in the long run. Print books today need dynamic windows into the Web and publishers need to start experimenting with the different forms those windows could take or else retreat further into marginality. Having direct contact with the author -- being part of the making of the book -- is a compelling prospect for the book's core audience and their enthusiasm is likely to spread. Certainly, it's too early to make a definitive assessment about the efficacy of this Web outreach strategy, but initial indicators are very positive. Looked at one way, it certainly does create more work for everyone, but this is work that has to be done. At the bare minimum, we are building marketing networks and generating general excitement about the book. Already, the book has received a great deal of attention around the blogosphere, not just because of its novelty as a publishing experiment, but out of genuine interest in the subject matter and author. I would say that this is effort well spent.
It's important to note that, despite CHE's lovely but slightly sensational coverage of this experiment as a kind of mortal combat between traditional blind peer review and the new blog-based approach, we view the two review processes as complementary, not competitive. At the end, we plan to compare the different sorts of feedback the two processes generate. Our instinct is that it will suggest hybrid models rather than a wholesale replacement of one system with another.
That being said, our instincts tell us that open blog-based review (or other related forms) will become increasingly common practice among the next generation of academic writers in the humanities. The question for publishers is how best to engage with, and ideally incorporate, these new practices. Already, we see a thriving culture of pre-publication peer review in the sciences, and major publishers such as Nature are beginning to build robust online community infrastructures so as to host these kinds of interactions within their own virtual walls. Humanities publishers should be thinking along the same lines, and partnerships with respected blogging communities like GTxA are a good way to start experimenting. In a way, the MIT-GTxA collab represents an interface not just of two ideas of peer review but between two kinds of publishing imprints. Both have built a trusted name and become known for a particular editorial vision in their respective (and overlapping) communities. Each excels in a different sort of publishing, one print-based, the other online community-based. Together they are greater than the sum of their parts and suggest a new idea of publishing that treats books as extended processes rather than products. MIT may regard this as an interesting but not terribly significant side project for now, but it could end up having a greater impact on the press (and hopefully on other presses) than they expect.
All the best,
Noah Wardrip-Fruin (author, UC San Diego):
Hi Bob -
Yesterday I went to meet some people at a game company. There's a lot of expertise there - and actually quite a bit of reflection on what they're doing, how to think about it, and so on. But they don't participate in academic peer review. They don't even read academic books. But they do read blogs, and sometimes comment on them, and I was pleased to hear that there are some Grand Text Auto readers there.
If they comment on the Expressive Processing manuscript, it will create more work for me in one sense. I'll have to think about what they say, perhaps respond, and perhaps have to revise my text. But, from my perspective, this work is far outweighed by the potential benefits: making a better book, deepening my thinking, and broadening the group that feels academic writing and publishing is potentially relevant to them.
What makes this an experiment, from my point of view, is the opportunity to also compare what I learn from the blog-based peer review to what I learn from the traditional peer review. However, this will only be one data point. We'll need to do a number of these, all using blogs that are already read by the audience we hope will participate in the peer review. When we have enough data points perhaps we'll start to be able to answer some interesting questions. For example, is this form of review more useful in some cases than others? Is the feedback from the two types of review generally overlapping or divergent? Hopefully we'll learn some lessons that presses like MITP can put into practice - suggesting blog-based review when it is most appropriate, for example. With those lessons learned, it will be time to design the next experiment.
Doug Sery (MIT Press):
I know Don's work in digital libraries and preservation, so I'm not surprised at the questions. While I don't know the breadth of the discussions Noah and Ben had around this project, I do know that Noah and I approached this in a very casual manner. Noah has expressed his interest in "open communication" any number of times and when he mentioned that he'd like to "crowd-source" "Expressive Processing" on Grand Text Auto I agreed to it with little hesitation, so I'm not sure I'd call it an experiment. There are no metrics in place to determine whether this will affect sales or produce a better book. I don't see this affecting the way The MIT Press will approach his book or publishing in general, at least for the time being.
This is not competing with the traditional academic press peer-review, although the CHE article would lead the reader to believe otherwise (Jeff obviously knows how to generate interest in a topic, which is fine, but even a game studies scholar, in a conversation I had with him today, laughingly called the headline "tabloidesque"). While Noah is posting chapters on his blog, I'm having the first draft peer-reviewed. After the peer-reviews come in, Noah and I will sit down to discuss them to see if any revisions to the manuscript need to be made. I don't plan on going over the GTxA comments with Noah, unless I happen to see something that piques my interest, so I don't see any additional work having to be done on the part of MITP. It's a nice way for Noah to engage with the potential audience for his ideas, which I think is his primary goal for all of this. So, I'm thinking of this more as an exercise to see what kind of interest people have in these new tools and/or mechanisms. Hopefully, it will be a learning experience that MITP can use as we explore new models of publishing.
Hope this helps and that all's well.
Don Waters (Mellon Foundation):
Thanks, Bob (and friends) for this helpful and informative feedback.
As I understand the explanations, there is a sense in which the experiment is not aimed at "peer review" at all in the sense that peer review assesses the qualities of a work to help the publisher determine whether or not to publish it. What the exposure of the work-in-progress to the community does, besides the extremely useful community-building activity, is provide a mechanism for a function that is now all but lost in scholarly publishing, namely "developmental editing." It is a side benefit of current peer review practice that an author gets some feedback on the work that might improve it, but what really helps an author is close, careful reading by friends who offer substantive criticism and editorial comments. Most accomplished authors seek out such feedback in a variety of informal ways, such as sending out manuscripts in various stages of completion to their colleagues and friends. The software that facilitates annotation and the use of the network, as demonstrated in this experiment, promise to extend this informal practice to authors more generally. I may have the distinction between peer review and developmental editing wrong, or you all may view the distinction as mere quibbling, but I think it helps explain why CHE got it so wrong in reporting the experiment as a struggle between peer review and the blog-based approach. Two very different functions are being served, and as you all point out, these are complementary rather than competing functions.
I am very intrigued by the suggestions that scholarly presses need to engage in this approach more generally, and am eagerly learning from this and related experiments, such as those at Nature and elsewhere, more about the potential benefits of this kind of approach.
Great work and many thanks for the wonderful (and kind) responses.
expressive processing meta 01.29.2008, 2:20 PM
To mark the posting of the final chunk of chapter 1 of the Expressive Processing manuscript on Grand Text Auto, Noah has kicked off what will hopefully be a revealing meta-discussion to run alongside the blog-based peer review experiment. The first meta post includes a roundup of comments from the first week and invites readers to comment on the process as a whole. As you'll see, there's already been some incisive feedback and Noah is mulling over revisions. Chapter 2 starts tomorrow.
In case you missed it, here's an intro to the project.
expressive processing: an experiment in blog-based peer review 01.22.2008, 5:30 AM
An exciting new experiment begins today, one which ties together many of the threads begun in our earlier "networked book" projects, from Without Gods to Gamer Theory to CommentPress. It involves a community, a manuscript, and an open peer review process -- and, very significantly, the blessing of a leading academic press. (The Chronicle of Higher Education also reports.)
The community in question is Grand Text Auto, a popular multi-author blog about all things relating to digital narrative, games and new media, which for many readers here probably needs no further introduction. The author is Noah Wardrip-Fruin, a professor of communication at UC San Diego, a writer/maker of digital fictions, and, of course, a blogger at GTxA. His book, which starting today will be posted in small chunks, open to reader feedback, every weekday over a ten-week period, is called Expressive Processing: Digital Fictions, Computer Games, and Software Studies. It probes the fundamental nature of digital media, looking specifically at the technical aspects of creation -- the machines and software we use, the systems and processes we must learn and employ in order to make media -- and how this changes how and what we create. It's an appropriate guinea pig, when you think about it, for an open review experiment that implicitly asks: how does this new technology (and the new social arrangements it makes possible) change how a book is made?
The press that has given the green light to all of this is none other than MIT, with whom Noah has published several important, vibrantly inter-disciplinary anthologies of new media writing. Expressive Processing, his first solo-authored work with the press, will come out some time next year, but now is the time when the manuscript gets sent out for review by a small group of handpicked academic peers. Doug Sery, the editor at MIT, asked Noah who would be the ideal readers for this book. To Noah, the answer was obvious: the Grand Text Auto community, which encompasses not only many of Noah's leading peers in the new media field, but also a slew of non-academic experts -- writers, digital media makers, artists, gamers, game designers etc. -- who provide crucial alternative perspectives and valuable hands-on knowledge that can't be gotten through more formal channels. Noah:
Blogging has already changed how I work as a scholar and creator of digital media. Reading blogs started out as a way to keep up with the field between conferences -- and I soon realized that blogs also contain raw research, early results, and other useful information that never gets presented at conferences. But, of course, that's just the beginning. We founded Grand Text Auto, in 2003, for an even more important reason: blogs can create community. And the communities around blogs can be much more open and welcoming than those at conferences and festivals, drawing in people from industry, universities, the arts, and the general public. Interdisciplinary conversations happen on blogs that are more diverse and sustained than any I've seen in person.
Given that ours is a field in which major expertise is located outside the academy (like many other fields, from 1950s cinema to Civil War history) the Grand Text Auto community has been invaluable for my work. In fact, while writing the manuscript for Expressive Processing I found myself regularly citing blog posts and comments, both from Grand Text Auto and elsewhere....I immediately realized that the peer review I most wanted was from the community around Grand Text Auto.
Sery was enthusiastic about the idea (although he insisted that the traditional blind review process proceed alongside it) and so Noah contacted me about working together to adapt CommentPress to the task at hand.
The challenge technically was to integrate CommentPress into an existing blog template, applying its functionality selectively -- in other words, to make it work for a specific group of posts rather than for all content in the site. We could have made a standalone web site dedicated to the book, but the idea was to literally weave sections of the manuscript into the daily traffic of the blog. From the beginning, Noah was very clear that this was the way it needed to work, insisting that the social and technical aspects of the review process were inseparable. I've since come to appreciate how crucial this choice was for making a larger point about the value of blog-based communities in scholarly production, and moreover how elegantly it chimes with the central notions of Noah's book: that form and content, process and output, can never truly be separated.
Up to this point, CommentPress has been an all or nothing deal. You can either have a whole site working with paragraph-level commenting, or not at all. In the technical terms of WordPress, its platform, CommentPress is a theme: a template for restructuring an entire blog to work with the CommentPress interface. What we've done -- with the help of a talented WordPress developer named Mark Edwards, and invaluable guidance and insight from Jeremy Douglass of the Software Studies project at UC San Diego (and the Writer Response Theory blog) -- is to make CommentPress into a plugin: a program that enables a specific function on demand within a larger program or site. This is an important step for CommentPress, giving it a new flexibility that it has sorely lacked and acknowledging that it is not a one-size-fits-all solution.
Just to be clear, these changes are not yet packaged into the general CommentPress codebase, although they will be before too long. A good test run is still needed to refine the new model, and important decisions have to be made about the overall direction of CommentPress: whether from here it definitively becomes a plugin, or perhaps forks into two paths (theme and plugin), or somehow combines both options within a single package. If you have opinions on this matter, we're all ears...
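For readers who don't live inside WordPress, the theme-versus-plugin distinction above can be sketched in the abstract. What follows is a hypothetical illustration in Python -- not the actual CommentPress code, which is a PHP/WordPress project -- showing the key difference: a theme-style approach restructures the rendering of every post, while a plugin-style approach hooks into rendering and applies paragraph-level commenting only to posts that opt in (here, via a made-up "manuscript" tag).

```python
# Hypothetical sketch, NOT real CommentPress/WordPress code.
# A "theme" would replace rendering for every post; a "plugin"
# intervenes only for posts that opt in.

def split_paragraphs(body):
    """Break a post body into paragraphs, the unit commented on."""
    return [p for p in body.split("\n\n") if p.strip()]

def render_plain(post):
    """Default rendering: the post body untouched."""
    return post["body"]

def render_commentable(post):
    """Paragraph-level rendering: each paragraph gets its own anchor."""
    paras = split_paragraphs(post["body"])
    return "\n".join(
        f'<p id="para-{i}">{p}</p>' for i, p in enumerate(paras, 1)
    )

def render(post, plugin_enabled=True):
    """Plugin-style dispatch: only tagged posts get the special treatment."""
    if plugin_enabled and "manuscript" in post.get("tags", []):
        return render_commentable(post)
    return render_plain(post)

# A manuscript section gets per-paragraph anchors; an ordinary
# blog post flows through the site unchanged.
section = {"body": "First para.\n\nSecond para.", "tags": ["manuscript"]}
note = {"body": "Just a quick link roundup.", "tags": []}
assert "para-2" in render(section)
assert render(note) == "Just a quick link roundup."
```

The design point the sketch tries to capture is the one made above: selective application lets manuscript sections live inside the blog's daily traffic rather than on a separate, all-CommentPress site.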
But the potential impact of this project goes well beyond the technical.
It represents a bold step by a scholarly press -- one of the most distinguished and most innovative in the world -- toward developing new procedures for vetting material and assuring excellence, and more specifically, toward meaningful collaboration with existing online scholarly communities to develop and promote new scholarship.
It seems to me that the presses that will survive the present upheaval will be those that learn to productively interact with grassroots publishing communities in the wild of the Web and to adopt the forms and methods they generate. I don't think this will be a simple story of the blogosphere and other emerging media ecologies overthrowing the old order. Some of the older order will die off to be sure, but other parts of it will adapt and combine with the new in interesting ways. What's particularly compelling about this present experiment is that it has the potential to be (perhaps now or perhaps only in retrospect, further down the line) one of these important hybrid moments -- a genuine, if slightly tentative, interface between two publishing cultures.
Whether the MIT folks realize it or not (their attitude at the outset seems to be respectful but skeptical), this small experiment may contain the seeds of larger shifts that will redefine their trade. The most obvious changes wrought on publishing by the Internet, and the ones that get by far the most attention, are in the area of distribution and economic models. The net flattens distribution, making everyone a publisher, and radically undercuts the heretofore profitable construct of copyright and the whole system of information commodities. The effects are less clear, however, in those hardest-to-pin-down yet most essential areas of publishing -- the territory of editorial instinct, reputation, identity, trust, taste, community... These are things that the best print publishers still do quite well, even as their accounting departments and managing directors descend into panic about the great digital undoing. And these are things that bloggers and bookmarkers and other web curators, archivists and filterers are also learning to do well -- to sift through the information deluge, to chart a path of quality and relevance through the incredible, unprecedented din.
This is the part of publishing that is most important, that transcends technological upheaval -- you might say the human part. And there is great potential for productive alliances between print publishers and editors and the digital upstarts. By delegating half of the review process to an existing blog-based peer community, effectively plugging a node of his press into the Web-based communications circuit, Doug Sery is trying out a new kind of editorial relationship and exercising a new kind of editorial choice. Over time, we may see MIT evolve to take on some of the functions that blog communities currently serve, to start providing technical and social infrastructure for authors and scholarly collectives, and to play the valuable (and time-consuming) roles of facilitator, moderator and curator within these vast overlapping conversations. Fostering, organizing, designing those conversations may well become the main work of publishing and of editors.
I could go on, but better to hold off on further speculation and to just watch how it unfolds. The Expressive Processing peer review experiment begins today (the first actual manuscript section is here) and will run for approximately ten weeks and 100 thousand words on Grand Text Auto, with a new post every weekday during that period. At the end, comments will be sorted, selected and incorporated and the whole thing bundled together into some sort of package for MIT. We're still figuring out how that part will work. Please go over and take a look and if a thought is provoked, join the discussion.
the year of the author 01.07.2008, 9:16 PM
Natalie Merchant, one of my favorite artists, was featured in The New York Times today. She is back after a long hiatus, but if you want to hear her new songs you'd better stand in line for a ticket to one of her shows, because she doesn't plan to release an album anytime soon. She appeared this weekend at the Hiro Ballroom in New York City. According to the Times, when a voice in the crowd asked when Ms. Merchant would release a new album, she said with a smile that she was awaiting "a new paradigm for the recording industry."
Hmm, well, the good news is that the paradigm is shifting, fast. But we don't yet know if this will be a good thing or a bad thing. It's certainly a bad thing for the major labels, who are losing market share faster than polar bears are losing their ice (sorry for the awful metaphor). But as they continue to shrink, so do the services and protections they offer to the artists. And the more content moves online, the less customers are willing to pay for it. Radiohead's recent experiment proves that.
But artists are still embracing new media and using it to take matters into their own hands. In the music industry, a long-tail entrepreneurial system supported by online networks and e-commerce is beginning to emerge. Sites like nimbit empower artists to manage their own sales and promotion, bypassing itunes, which takes a hefty 50% off the top and, unlike record labels, does nothing to shape or nurture an artist's career.
Now, indulge me for a moment while I talk about the Kindle as though it were the ipod of ebooks. It's not, for lots of reasons. But it does have one thing in common with its music industry counterpart: it allows authors to upload their own content and sell it on amazon. That is huge. That alone might be enough to start a similar paradigm shift in publishing. In this week's issue of Publishers Weekly, Mike Shatzkin predicts it will.
So why have I titled this "the year of the author"? (I borrowed that phrase from Mike Shatzkin's prediction #3, btw.) I'm not trying to say it will be a great year for authors. New media is going to squeeze them as it is squeezing musicians and striking Writers Guild members. It is the year of the author because authors will be the ones who drive the paradigm shift. They may begin to use online publishing and distribution tools to bypass traditional publishers and put their work out there en masse. Or they will opt out of the internet's "give-up-your-work-for-free" model and create a new model altogether. Natalie Merchant is opting to (temporarily, I hope) bring back the troubadour tradition in the music biz. It will be interesting to see what choices authors make as the publishing industry's ice begins to shift.
a few rough notes on knols 12.17.2007, 5:06 PM
Think you've got an authoritative take on a subject? Write up an article, or "knol," and see how the Web judgeth. If it's any good, you might even make a buck.
Google's new encyclopedia will go head to head with Wikipedia in the search rankings, though in format it more resembles other ad-supported, single-author info sources like About.com or Squidoo. The knol-verse (how the hell do we speak of these things as a whole?) will be a Darwinian writers' market where the fittest knols rise to the top. Anyone can write one. Google will host it for free. Multiple knols can compete on a single topic. Readers can respond to and evaluate knols through simple community rating tools. Content belongs solely to the author, who can license it in any way he/she chooses (all rights reserved, Creative Commons, etc.). Authors have the option of having contextual ads run to the side, revenues from which are shared with Google. There is no vetting or editorial input from Google whatsoever.
Except... Might not the ads exert their own subtle editorial influence? In this entrepreneurial writers' fray, will authors craft their knols for AdSense optimization? Will they become, consciously or not, shills for the companies that place the ads (I'm thinking especially of high impact topic areas like health and medicine)? Whatever you may think of Wikipedia, it has a certain integrity in being ad-free. The mission is clear and direct: to build a comprehensive free encyclopedia for the Web. The range of content has no correlation to marketability or revenue potential. It's simply a big compendium of stuff, the only mention of money being a frank electronic tip jar at the top of each page. The Googlepedia, in contrast, is fundamentally an advertising platform. What will such an encyclopedia look like?
In the official knol announcement, Udi Manber, a VP for engineering at Google, explains the genesis of the project: "The challenge posed to us by Larry, Sergey and Eric was to find a way to help people share their knowledge. This is our main goal." You can see embedded in this statement all the trademarks of Google's rhetoric: a certain false humility, the pose of incorruptible geek integrity and above all, a boundless confidence that every problem, no matter how gray and human, has a technological fix. I'm not saying it's wrong to build a business, nor that Google is lying whenever it talks about anything idealistic, it's just that time and again Google displays an astonishing lack of self-awareness in the way it frames its services -- a lack that becomes especially obvious whenever the company edges into content creation and hosting. They tend to talk as though they're building the library of Alexandria or the great Encyclopédie, but really they're describing an advanced advertising network of Google-exclusive content. We shouldn't allow these very different things to become as muddled in our heads as they are in theirs. You get a worrisome sense that, like the Bushies, the cheerful software engineers who promote Google's products on the company's various blogs truly believe the things they're saying. That if we can just get the algorithm right, the world can bask in the light of universal knowledge.
The blogosphere has been alive with commentary about the knol situation throughout the weekend. By far the most provocative thing I've read so far is by Anil Dash, VP of Six Apart, the company that makes the Movable Type software that runs this blog. Dash calls out this Google self-awareness gap, or as he puts it, its lack of a "theory of mind":
Theory of mind is that thing that a two-year-old lacks, which makes her think that covering her eyes means you can't see her. It's the thing a chimpanzee has, which makes him hide a banana behind his back, only taking bites when the other chimps aren't looking.
Theory of mind is the awareness that others are aware, and its absence is the weakness that Google doesn't know it has. This shortcoming exists at a deep cultural level within the organization, and it keeps manifesting itself in the decisions that the company makes about its products and services. The flaw is one that is perpetuated by insularity, and will only be remedied by becoming more open to outside ideas and more aware of how people outside the company think, work and live.
He gives some examples:
Connecting PageRank to economic systems such as AdWords and AdSense corrupted the meaning and value of links by turning them into an economic exchange. Through the turn of the millennium, hyperlinking on the web was a social, aesthetic, and expressive editorial action. When Google introduced its advertising systems at the same time as it began to dominate the economy around search on the web, it transformed a basic form of online communication, without the permission of the web's users, and without explaining that choice or offering an option to those users.
He compares the knol enterprise with GBS:
Knol shares with Google Book Search the problem of being both indexed by Google and hosted by Google. This presents inherent conflicts in the ranking of content, as well as disincentives for content creators to control the environment in which their content is published. This necessarily disadvantages competing search engines, but more importantly eliminates the ability for content creators to innovate in the area of content presentation or enhancement. Anything that is written in Knol cannot be presented any better than the best thing in Knol. [his emphasis]
And lastly concludes:
An awareness of the fact that Google has never displayed an ability to create the best tools for sharing knowledge would reveal that it is hubris for Google to think they should be a definitive source for hosting that knowledge. If the desire is to increase knowledge sharing, and the methods of compensation that Google controls include traffic/attention and money/advertising, then a more effective system than Knol would be to algorithmically determine the most valuable and well-presented sources of knowledge, identify the identity of authorities using the same journalistic techniques that the Google News team will have to learn, and then reward those sources with increased traffic, attention and/or monetary compensation.
For a long time Google's goal was to help direct your attention outward. Increasingly we find that they want to hold onto it. Everyone knows that Wikipedia articles place highly in Google search results. Makes sense then that they want to capture some of those clicks and plug them directly into the Google ad network. But already the Web is dominated by a handful of mega sites. I get nervous at the thought that www.google.com could gradually become an internal directory, that Google could become the alpha and omega, not only the start page of the Internet but all the destinations.
It will be interesting to see just how, and to what extent, knols start creeping up the search results. Presumably they will be ranked according to the same secret metrics that measure all pages in Google's index, but given the opacity of Google's operations, who's to say that subtle or unconscious rigging won't occur? Will community ratings factor into search rankings? That would seem to present a huge conflict of interest. Perhaps top-rated knols will be displayed in the sponsored links area at the top of results pages. Or knols could be listed in order of community ranking on a dedicated knol search portal, providing something analogous to the experience of searching within Wikipedia as opposed to finding articles through external search engines. Returning to the theory of mind question, will Google develop enough awareness of how it is perceived and felt by its users to strike the right balance?
One last thing worth considering about the knol (apart from its being possibly the worst Internet neologism in recent memory) is its author-centric nature. It's interesting that in order to compete with Wikipedia, Google has consciously not adopted Wikipedia's model. The basic unit of authorial action in Wikipedia is the edit. Edits by multiple contributors are combined, through a complicated consensus process, into a single amalgamated product. On Google's encyclopedia the basic unit is the knol. For each knol (god, it's hard to keep writing that word) there is a one-to-one correspondence with an individual, identifiable voice. There may be multiple competing knols, and by extension competing voices (you have this on Wikipedia too, but it's relegated to the discussion pages).
Viewed in this way, Googlepedia is perhaps a more direct rival to Larry Sanger's Citizendium, which aims to build a more authoritative Wikipedia-type resource under the supervision of vetted experts. Citizendium is a strange, conflicted experiment, a weird cocktail of Internet populism and ivory tower elitism, and by the look of it, not going anywhere terribly fast. If knols take off, could they be the final nail in the coffin of Sanger's awkward dream? Bryan Alexander wonders along similar lines.
While not explicitly employing Sanger's rhetoric of "expert" review, Google seems to be banking on its commitment to attributed solo authorship and its ad-based incentive system to lure good, knowledgeable authors onto the Web, and to build trust among readers through the brand-name credibility of authorial bylines and brandished credentials. Whether this will work remains to be seen. I wonder... whether this system will really produce quality. Whether there are enough checks and balances. Whether the community rating mechanisms will be meaningful and confidence-inspiring. Whether self-appointed experts will seem authoritative in this context or shabby, second-rate and opportunistic. Whether this will have the feeling of an enlightened knowledge project or of sleazy intellectual link farming (or something perfectly useful in between).
The feel of a site, the values it exudes, is an important factor though. This is why I like, and in an odd way trust, Wikipedia. Trust not always to be correct, but to be transparent, to wear its flaws on its sleeve, and to be working for a higher aim. Google will probably never inspire that kind of trust in me, certainly not while it persists in its dangerous self-delusions.
A lot of unknowns here. Thoughts?