Listing entries tagged with encyclopedia
nicholson baker on the charms of wikipedia 03.07.2008, 2:43 AM
I finally got around to reading Nicholson Baker's essay in the New York Review of Books, "The Charms of Wikipedia," and it's... charming. Baker has a flair for idiosyncratic detail, which makes him a particularly perceptive and entertaining guide through the social and procedural byways of the Wikipedia molehill. Of particular interest are his delvings into the early Wikipedia's reliance on public domain reference works, most notably the famous 1911 Encyclopedia Britannica: "The fragments from original sources persist like those stony bits of classical buildings incorporated in a medieval wall."
Baker also has some smart things to say on the subject of vandalism:
Wikipedians see vandalism as a problem, and it certainly can be, but a Diogenes-minded observer would submit that Wikipedia would never have been the prodigious success it has been without its demons.
This is a reference book that can suddenly go nasty on you. Who knows whether, when you look up Harvard's one-time warrior-president, James Bryant Conant, you're going to get a bland, evenhanded article about him, or whether the whole page will read (as it did for seventeen minutes on April 26, 2006): "HES A BIG STUPID HEAD." James Conant was, after all, in some important ways, a big stupid head. He was studiously anti-Semitic, a strong believer in wonder-weapons -- a man who was quite as happy figuring out new ways to kill people as he was administering a great university. Without the kooks and the insulters and the spray-can taggers, Wikipedia would just be the most useful encyclopedia ever made. Instead it's a fast-paced game of paintball.
Not only does Wikipedia need its vandals -- up to a point -- the vandals need an orderly Wikipedia, too. Without order, their culture-jamming lacks a context. If Wikipedia were rendered entirely chaotic and obscene, there would be no joy in, for example, replacing some of the article on Archimedes with this:
Archimedes is dead.
Other people will also die.
All hail chickens.
The Power Rangers say "Hi"
Even the interesting article on culture jamming has been hit a few times: "Culture jamming," it said in May 2007, "is the act of jamming tons of cultures into 1 extremely hot room."
a few rough notes on knols 12.17.2007, 5:06 PM
Think you've got an authoritative take on a subject? Write up an article, or "knol," and see how the Web judgeth. If it's any good, you might even make a buck.
Google's new encyclopedia will go head to head with Wikipedia in the search rankings, though in format it more resembles other ad-supported, single-author info sources like About.com or Squidoo. The knol-verse (how the hell do we speak of these things as a whole?) will be a Darwinian writers' market where the fittest knols rise to the top. Anyone can write one. Google will host it for free. Multiple knols can compete on a single topic. Readers can respond to and evaluate knols through simple community rating tools. Content belongs solely to the author, who can license it in any way he/she chooses (all rights reserved, Creative Commons, etc.). Authors have the option of having contextual ads run to the side, revenues from which are shared with Google. There is no vetting or editorial input from Google whatsoever.
Except... Might not the ads exert their own subtle editorial influence? In this entrepreneurial writers' fray, will authors craft their knols for AdSense optimization? Will they become, consciously or not, shills for the companies that place the ads (I'm thinking especially of high-impact topic areas like health and medicine)? Whatever you may think of Wikipedia, it has a certain integrity in being ad-free. The mission is clear and direct: to build a comprehensive free encyclopedia for the Web. The range of content has no correlation to marketability or revenue potential. It's simply a big compendium of stuff, the only mention of money being a frank electronic tip jar at the top of each page. The Googlepedia, in contrast, is fundamentally an advertising platform. What will such an encyclopedia look like?
In the official knol announcement, Udi Manber, a VP for engineering at Google, explains the genesis of the project: "The challenge posed to us by Larry, Sergey and Eric was to find a way to help people share their knowledge. This is our main goal." You can see embedded in this statement all the trademarks of Google's rhetoric: a certain false humility, the pose of incorruptible geek integrity, and, above all, a boundless confidence that every problem, no matter how gray and human, has a technological fix. I'm not saying it's wrong to build a business, nor that Google is lying whenever it talks about anything idealistic; it's just that time and again Google displays an astonishing lack of self-awareness in the way it frames its services -- a lack that becomes especially obvious whenever the company edges into content creation and hosting. They tend to talk as though they're building the Library of Alexandria or the great Encyclopédie, but really they're describing an advanced advertising network of Google-exclusive content. We shouldn't allow these very different things to become as muddled in our heads as they are in theirs. You get a worrisome sense that, like the Bushies, the cheerful software engineers who promote Google's products on the company's various blogs truly believe the things they're saying. That if we can just get the algorithm right, the world can bask in the light of universal knowledge.
The blogosphere has been alive with commentary about the knol situation throughout the weekend. By far the most provocative thing I've read so far is by Anil Dash, VP of Six Apart, the company that makes the Movable Type software that runs this blog. Dash calls out this Google self-awareness gap, or as he puts it, its lack of a "theory of mind":
Theory of mind is that thing that a two-year-old lacks, which makes her think that covering her eyes means you can't see her. It's the thing a chimpanzee has, which makes him hide a banana behind his back, only taking bites when the other chimps aren't looking.
Theory of mind is the awareness that others are aware, and its absence is the weakness that Google doesn't know it has. This shortcoming exists at a deep cultural level within the organization, and it keeps manifesting itself in the decisions that the company makes about its products and services. The flaw is one that is perpetuated by insularity, and will only be remedied by becoming more open to outside ideas and more aware of how people outside the company think, work and live.
He gives some examples:
Connecting PageRank to economic systems such as AdWords and AdSense corrupted the meaning and value of links by turning them into an economic exchange. Through the turn of the millennium, hyperlinking on the web was a social, aesthetic, and expressive editorial action. When Google introduced its advertising systems at the same time as it began to dominate the economy around search on the web, it transformed a basic form of online communication, without the permission of the web's users, and without explaining that choice or offering an option to those users.
He compares the knol enterprise with GBS:
Knol shares with Google Book Search the problem of being both indexed by Google and hosted by Google. This presents inherent conflicts in the ranking of content, as well as disincentives for content creators to control the environment in which their content is published. This necessarily disadvantages competing search engines, but more importantly eliminates the ability for content creators to innovate in the area of content presentation or enhancement. Anything that is written in Knol cannot be presented any better than the best thing in Knol. [his emphasis]
And lastly concludes:
An awareness of the fact that Google has never displayed an ability to create the best tools for sharing knowledge would reveal that it is hubris for Google to think they should be a definitive source for hosting that knowledge. If the desire is to increase knowledge sharing, and the methods of compensation that Google controls include traffic/attention and money/advertising, then a more effective system than Knol would be to algorithmically determine the most valuable and well-presented sources of knowledge, identify the identity of authorities using the same journalistic techniques that the Google News team will have to learn, and then reward those sources with increased traffic, attention and/or monetary compensation.
For a long time Google's goal was to help direct your attention outward. Increasingly we find that they want to hold onto it. Everyone knows that Wikipedia articles place highly in Google search results. Makes sense, then, that Google wants to capture some of those clicks and plug them directly into its own ad network. But already the Web is dominated by a handful of mega sites. I get nervous at the thought that www.google.com could gradually become an internal directory, that Google could become the alpha and omega, not only the start page of the Internet but all the destinations.
It will be interesting to see just how, and to what extent, knols start creeping up the search results. Presumably, they will be ranked according to the same secret metrics that measure all pages in Google's index, but given the opacity of Google's operations, who's to say that subtle or unconscious rigging won't occur? Will community ratings factor into search rankings? That would seem to present a huge conflict of interest. Perhaps top-rated knols will be displayed in the sponsored links area at the top of results pages. Or knols could be listed in order of community ranking on a dedicated knol search portal, providing something analogous to the experience of searching within Wikipedia as opposed to finding articles through external search engines. Returning to the theory of mind question, will Google develop enough awareness of how it is perceived and felt by its users to strike the right balance?
One last thing worth considering about the knol -- apart from its being possibly the worst Internet neologism in recent memory -- is its author-centric nature. It's interesting that, in order to compete with Wikipedia, Google has consciously not adopted Wikipedia's model. The basic unit of authorial action in Wikipedia is the edit. Edits by multiple contributors are combined, through a complicated consensus process, into a single amalgamated product. On Google's encyclopedia the basic unit is the knol. For each knol (god, it's hard to keep writing that word) there is a one-to-one correspondence with an individual, identifiable voice. There may be multiple competing knols, and by extension competing voices (you have this on Wikipedia too, but it's relegated to the discussion pages).
Viewed in this way, Googlepedia is perhaps a more direct rival to Larry Sanger's Citizendium, which aims to build a more authoritative Wikipedia-type resource under the supervision of vetted experts. Citizendium is a strange, conflicted experiment, a weird cocktail of Internet populism and ivory tower elitism -- and by the look of it, not going anywhere terribly fast. If knols take off, could they be the final nail in the coffin of Sanger's awkward dream? Bryan Alexander wonders along similar lines.
While not explicitly employing Sanger's rhetoric of "expert" review, Google seems to be banking on its commitment to attributed solo authorship and its ad-based incentive system to lure good, knowledgeable authors onto the Web, and to build trust among readers through the brand-name credibility of authorial bylines and brandished credentials. Whether this will work remains to be seen. I wonder... whether this system will really produce quality. Whether there are enough checks and balances. Whether the community rating mechanisms will be meaningful and confidence-inspiring. Whether self-appointed experts will seem authoritative in this context or shabby, second-rate and opportunistic. Whether this will have the feeling of an enlightened knowledge project or of sleazy intellectual link farming (or something perfectly useful in between).
The feel of a site -- the values it exudes -- is an important factor, though. This is why I like, and in an odd way trust, Wikipedia. Trust not always to be correct, but to be transparent and to wear its flaws on its sleeve, and to be working for a higher aim. Google will probably never inspire that kind of trust in me, certainly not while it persists in its dangerous self-delusions.
A lot of unknowns here. Thoughts?
the encyclopedia of life 05.21.2007, 1:53 AM
E. O. Wilson, one of the world's most distinguished scientists, professor and honorary curator in entomology at Harvard, promoted his long-cherished idea of The Encyclopedia of Life as he accepted the 2007 TED Prize.
The reason behind his project is the catastrophic human threat to our biosphere. For Wilson, our knowledge of biodiversity is so abysmally incomplete that we are at risk of losing a great deal of it before we even discover it. In the US alone, of the 200,000 known species, only about 15% have been studied well enough to evaluate their status. In other words, we are "flying blindly into our environmental future." If we don't explore the biosphere properly, we won't be able to understand it and manage it competently. To do this, we need to work together to create the key tools needed to inspire the preservation of biodiversity. This vast enterprise, the equivalent of the Human Genome Project, is possible today thanks to scientific and technological advances. The Encyclopedia of Life is conceived as a networked project to which thousands of scientists, and amateurs, from around the world can contribute. It comprises an indefinitely expandable page for each species, with the hope that all key information about life can be made accessible to anyone, anywhere in the world. In Wilson's dream, this aggregation, expansion, and communication of knowledge will address transcendent qualities in the human consciousness and will transform the science of biology in ways of obvious benefit to humans, inspiring present and future biologists to continue the search for life, to understand it, and, above all, to preserve it.
The first big step in that dream came true on May 9th when major scientific institutions, backed by a funding commitment led by the MacArthur Foundation, announced a global effort to launch the project. The Encyclopedia of Life is a collaborative scientific effort led by the Field Museum, Harvard University, Marine Biological Laboratory (Woods Hole), Missouri Botanical Garden, Smithsonian Institution, and Biodiversity Heritage Library, and also the American Museum of Natural History (New York), Natural History Museum (London), New York Botanical Garden, and Royal Botanic Garden (Kew). Ultimately, the Encyclopedia of Life will provide an online database for all 1.8 million species now known to live on Earth.
As we ponder the meaning, and the ways, of the network -- a collective place that fosters new kinds of creation and dialogue, a place that dehumanizes, a place of destruction or reconstruction of memory where time is not lost because it is always available -- we begin to wonder about the value of having all that information at our fingertips. Was going to the library, searching the catalog, looking for the books, piling them on a table, and leafing through them in search of information that one copied by hand, or photocopied to read later, a more meaningful exercise? Because I wrote my dissertation at the library, though I then went home and painstakingly used a word processor to compose it, I am not sure which process is better, or worse. For Socrates, as Dan cites him, we, people of the written word, are forgetful, ignorant, filled with the conceit of wisdom. However, we still process information. I still need to read a lot to retain a little, but that little guides my future search. It seems that E. O. Wilson's dream, in all its ambition but also its humility, is a desire to use the Internet's capacity for information sharing and accessibility to make us more human. Looking at the demonstration pages of The Encyclopedia of Life took me to one of my early botanical interests, mushrooms, and to the species that most attracted me when I first "discovered" it: the deadly poisonous Amanita phalloides, related to Alice in Wonderland's fly agaric, Amanita muscaria, which I adopted as my pen name for a while. Those fabulous engravings that mesmerized me as a child, brought me understanding as a youth, and pleasure as a grown-up, all came back to me this afternoon, thanks to a combination of factors that, somehow, the Internet catalyzed for me.
an encyclopedia of arguments 02.21.2007, 6:41 AM
I just came across this, though apparently it's been up and running since last summer. Debatepedia is a free, wiki-based encyclopedia where people can collaboratively research and write outlines of arguments on contentious subjects -- stem cell research, same-sex marriage, how and when to withdraw from Iraq (it appears to be focused, in practice if not in policy, on US issues) -- assembling what are essentially roadmaps to important debates of the moment. Articles are organized in "logic trees," a two-column layout in which pros and cons, fors and againsts, yeas and nays are placed side by side for each argument and its attendant sub-questions. A fairly strict citations policy ensures that each article also serves as a link repository on its given topic.
This is an intriguing adaptation of the Wikipedia model -- an inversion you could say, in that it effectively raises the "talk" pages (discussion areas behind an article) to the fore. Instead of "neutral point of view," with debates submerged, you have an emphasis on the many-sidedness of things. The problem of course is that Debatepedia's format suggests that all arguments are binary. The so-called "logic trees" are more like logic switches, flipped on or off, left or right -- a crude reduction of what an argument really is.
I imagine they used the two-column format for simplicity's sake -- to create a consistent and accessible form throughout the site. It's true that representing the full complexity of a subject on a two-dimensional screen lies well beyond present human capabilities, but still there has to be some way to present a more shaded spectrum of thought -- to triangulate multiple perspectives and still make the thing readable and useful (David Weinberger has an inchoate thought along similar lines w/r/t NPR stories and research projects for listeners -- taken up by Doc Searls).
I'm curious to hear what people think. Pros? Cons? Logic tree anyone?
ecclesiastical proust archive: starting a community 02.09.2007, 7:46 AM
(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about starting a community with one of my projects, the Ecclesiastical Proust Archive. I heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last summer, when I noticed the archive was mentioned in a comment on Sarah Northmore's post regarding Hurricane Katrina and print publishing infrastructure. The Institute is at the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we're all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
While my main interest was in starting a community, I had other ideas about making the archive more editable by readers that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of À la recherche du temps perdu (In Search of Lost Time). It's a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it's tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study of the nature of narrative.
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif which some might think too narrow actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars' projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens' Without Gods and Holy of Holies, a move away from the 'lone scholar in the archive' model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is "my reading" of Proust. It might be difficult to get others on board if their readings on gender, phenomenology, synaesthesia, or whatever else would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I'm really struck by the notion of a "reading" as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That's a good thing, of course, because many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives such as the NINES project also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let's pretend for the moment that copyright infringement doesn't exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project -- doing focused literary criticism and a study of narrative -- might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, the translation of some of the site into French, rewriting the search engine in PHP/MySQL, creating a folksonomic functionality for passages and images, and creating commentary space within the search results (and making that searchable). That's some heavy work, and a grant would probably go a long way toward attracting collaborators.
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused and led project with more-or-less particular roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned. Both ways offer different levels of open-endedness. And both ways dovetail with the very interesting discussion that has been happening around Ben's recent post on the million penguins collaborative wiki-novel.
Right now I'm trying to get feedback on the archive in order to develop the best plan possible. I'll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me at firstname.lastname@example.org. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive's development as well.
Thanks for reading this very long post. It's difficult to do anything small-scale with Proust!
scholarpedia: sharpening the wiki for expert results 12.27.2006, 12:39 PM
Eugene M. Izhikevich, a Senior Fellow in Theoretical Neurobiology at The Neurosciences Institute in San Diego, wants to see if academics can collaborate to produce a peer reviewed equivalent to Wikipedia. The attempt is Scholarpedia, a free peer reviewed encyclopedia, entirely open to public contributions but with editorial oversight by experts.
At first, this sounded to me a lot like Larry Sanger's Citizendium project, which will attempt to add an expert review layer to material already generated by Wikipedia (they're calling it a "progressive fork" off of the Wikipedia corpus). Sanger insists that even with this added layer of control the open spirit of Wikipedia will live on in Citizendium while producing a more rigorous and authoritative encyclopedia.
It's always struck me more as a simplistic fantasy of ivory tower-common folk détente than any reasoned community-building plan. We'll see if Walesism and Sangerism can be reconciled in a transcendent whole, or if intellectual class warfare (of the kind that has already broken out on multiple occasions between academics and general contributors on Wikipedia) -- or more likely inertia -- will be the result.
The eight-month-old Scholarpedia, containing only a few dozen articles and restricted for the time being to three neuroscience sub-fields, already feels like a more plausible proposition, if for no other reason than that it knows who its community is and that it establishes an unambiguous hierarchy of participation. Izhikevich has appointed himself editor-in-chief and solicited full articles from scholarly peers around the world. First the articles receive "in-depth, anonymous peer review" by two fellow authors, or by other reviewers who measure sufficiently high on the "scholar index." Peer review, it is explained, is employed not only "to insure the accuracy and quality of information" but also "to allow authors to list their papers as peer-reviewed in their CVs and resumes" -- a marriage of pragmatism and idealism in Mr. Izhikevich.
After this initial vetting, the article is officially part of the Scholarpedia corpus and is hence open to subsequent revisions and alterations suggested by the community, which must in turn be accepted by the author, or "curator," of the article. The discussion, or "talk" pages, familiar from Wikipedia are here called "reviews." So far, however, it doesn't appear that many of the approved articles have received much of a public work-over since passing muster in the initial review stage. But readers are weighing in (albeit in modest numbers) in the public election process for new curators. I'm very curious to see if this will be treated by the general public as a read-only site, or if genuine collaboration will arise.
It's doubtful that this more tightly regulated approach could produce a work as immense and varied as Wikipedia, but it's pretty clear that this isn't the goal. It's a smaller, more focused resource that Izhikevich and his curators are after, with an eye toward gradually expanding to embrace all subjects. I wonder, though, if the site wouldn't be better off keeping its ambitions concentrated, renaming itself something like "Neuropedia" and looking simply to inspire parallel efforts in other fields. One problem of open source knowledge projects is that they're often too general in scope (Scholarpedia says it all). A federation of specialized encyclopedias, produced by focused communities of scholars both academic and independent -- and with some inter-disciplinary porousness -- would be a more valuable, if less radical, counterpart to Wikipedia, and more likely to succeed than the Citizendium chimera.