Listing entries tagged with science
SciVee: web video for the sciences 08.20.2007, 2:25 PM
Via Slashdot, I just came across what could be a major innovation in science publishing. The National Science Foundation, the Public Library of Science and the San Diego Supercomputer Center have joined forces to launch SciVee, an experimental media-sharing platform that allows scientists to sync short video lectures with paper outlines:
SciVee, created for scientists, by scientists, moves science beyond the printed word and lecture theater taking advantage of the internet as a communication medium where scientists young and old have a place and a voice.
The site is in alpha and has only a handful of community submissions, but it's enough to give a sense of how profoundly useful this could become. Video entries can be navigated internally by topic segments, and are accompanied by a link to the full paper, jpegs of figures, tags, a reader rating system and a comment area.
Peer networking functions are supposedly also in the works, although this seems geared solely as a dissemination and access tool for already vetted papers, not a peer-to-peer review forum. It would be great, within this model, to open submissions to material other than papers: documentaries, simulations, teaching modules, etc. It has the potential to grow into a resource not just for research but for pedagogy and open access curriculum building.
It's very encouraging to see web video technologies evolving beyond the generalized, distractoid culture of YouTube and being adapted to the needs of particular communities. Scholars in the humanities, film and media studies especially, should take note. Imagine a more advanced version of the In Media Res feature we have running over at MediaCommons, where in addition to basic blog-like commenting you could have audio narration of clips, video annotation with time code precision, football commentator-style drawing over the action, editing tools and easy mashup capabilities—all of it built on robust archival infrastructure of the kind that underlies SciVee.
nature opens slush pile to the world 06.20.2007, 6:16 AM
This is potentially a big deal for scholarly publishing in the sciences. Inspired by popular "preprint" servers like the Cornell-hosted arXiv.org, the journal Nature just launched a site, "Nature Precedings", where unreviewed scientific papers can be posted under a CC license, then discussed, voted upon, and cited according to standards usually employed for peer-reviewed scholarship.
Over the past decade, preprint archives have become increasingly common as a means of taking the pulse of new scientific research before official arbitration by journals, and as a way to plant a flag in front of the gatekeepers' gates in order to outmaneuver competition in a crowded field. Peer-reviewed journals are still the sine qua non in terms of the institutional warranting of scholarship, and in the process of academic credentialing and the general garnering of prestige, but the Web has emerged as the arena where new papers first see the light of day and where discussion among scholars begins to percolate. More and more, print publication has been transforming into a formal seal of approval at the end of a more unfiltered, networked process. Clearly, Precedings is Nature's effort to claim some of the Web territory for itself.
From a cursory inspection of the site, it appears that they're serious about providing a stable open access archive, referenceable in perpetuity through broadly accepted standards like the DOI (Digital Object Identifier) and the Handle System (the general-purpose persistent identifier infrastructure on which the DOI system is built, which keeps a citation resolvable even as the paper behind it is revised or moved). They also seem earnest about hosting an active intellectual community, providing features like scholar profiles and a variety of feedback mechanisms. This is a big step for Nature, especially following their tentative experiment last year with opening up peer review. At that time they seemed almost keen to prove that a re-jiggering of the review process would fail to yield interesting results, and they stacked their "trial" against the open approach by not actually altering the process, or ultimately the stakes, of the closed-door procedure. Not surprisingly, few participated and the experiment was declared an interesting failure. Obviously their thinking on this matter did not end there.
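(For the curious, the "perpetuity" of a DOI comes from a very simple contract: a DOI is just a two-part name, a registrant prefix beginning with "10." and an object suffix, which the global proxy at doi.org resolves to the document's current location. The sketch below illustrates that structure; the function names are invented for this example, and the DOI string is illustrative, not a real Precedings identifier.)

```python
# Minimal sketch of DOI name structure: '<prefix>/<suffix>', where the
# prefix begins with the directory indicator '10.'. Function names are
# illustrative, not part of any DOI library.

def parse_doi(doi: str) -> tuple[str, str]:
    """Split a DOI into (registrant prefix, object suffix)."""
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10.") or not suffix:
        raise ValueError(f"not a well-formed DOI: {doi!r}")
    return prefix, suffix

def resolver_url(doi: str) -> str:
    """Return the URL at which the public DOI proxy resolves this name."""
    parse_doi(doi)  # validate before building the URL
    return f"https://doi.org/{doi}"

prefix, suffix = parse_doi("10.1038/npre.2007.1")
print(prefix)   # 10.1038 -- Nature's registrant prefix
print(resolver_url("10.1038/npre.2007.1"))
```

The point of the indirection is that the citation names the object, not its address: publishers update the resolver's record when a paper moves, and every citation in the wild keeps working.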
Hosting community-moderated works-in-development might just be a new model for scholarly presses, and Nature might just be leading the way. We'll be watching this one.
More on David Weinberger's blog.
the encyclopedia of life 05.21.2007, 1:53 AM
E. O. Wilson, one of the world's most distinguished scientists, professor and honorary curator in entomology at Harvard, promoted his long-cherished idea of the Encyclopedia of Life as he accepted the 2007 TED Prize.
The reason behind his project is the catastrophic human threat to our biosphere. For Wilson, our knowledge of biodiversity is so abysmally incomplete that we are at risk of losing a great deal of it even before we discover it. In the US alone, of the 200,000 known species, only about 15% have been studied well enough to evaluate their status. In other words, we are "flying blindly into our environmental future." If we don't explore the biosphere properly, we won't be able to understand it and manage it competently. To do this, we need to work together to create the key tools needed to inspire the preservation of biodiversity. This vast enterprise, the equivalent of the Human Genome Project, is possible today thanks to scientific and technological advances. The Encyclopedia of Life is conceived as a networked project to which thousands of scientists, and amateurs, from around the world can contribute. It will comprise an indefinitely expandable page for each species, with the hope that all key information about life can be accessible to anyone, anywhere in the world. In Wilson's dream, this aggregation, expansion, and communication of knowledge will address transcendent qualities in the human consciousness and will transform the science of biology in ways of obvious benefit to humans, inspiring present and future biologists to continue the search for life, to understand it, and, above all, to preserve it.
The first big step in that dream came true on May 9th when major scientific institutions, backed by a funding commitment led by the MacArthur Foundation, announced a global effort to launch the project. The Encyclopedia of Life is a collaborative scientific effort led by the Field Museum, Harvard University, Marine Biological Laboratory (Woods Hole), Missouri Botanical Garden, Smithsonian Institution, and Biodiversity Heritage Library, and also the American Museum of Natural History (New York), Natural History Museum (London), New York Botanical Garden, and Royal Botanic Garden (Kew). Ultimately, the Encyclopedia of Life will provide an online database for all 1.8 million species now known to live on Earth.
As we ponder the meaning, and the ways, of the network (a collective place that fosters new kinds of creation and dialogue, a place that dehumanizes, a place of destruction or reconstruction of memory where time is not lost because it is always available), we begin to wonder about the value of having all that information at our fingertips. Was going to the library, searching the catalog, looking for the books, piling them on a table, and leafing through them in search of information that one copied by hand, or photocopied to read later, a more meaningful exercise? Because I researched my dissertation at the library, though I then went home and painstakingly used a word processor to compose it, I am not sure which process is better, or worse. For Socrates, as Dan cites him, we, people of the written word, are forgetful, ignorant, filled with the conceit of wisdom. Yet we still process information. I still need to read a lot to retain a little. But that little guides my future search. It seems that E. O. Wilson's dream, in all its ambition but also its humility, is a desire to use the Internet's capacity for information sharing and accessibility to make us more human. Looking at the demonstration pages of The Encyclopedia of Life took me to one of my early botanical interests, mushrooms, and to the species that most attracted me when I first "discovered" it: the deadly poisonous Amanita phalloides, related to Alice in Wonderland's fly agaric, Amanita muscaria, which I adopted as my pen name for a while. Those fabulous engravings that mesmerized me as a child, brought me understanding as a youth, and pleasure as an adult, all came back to me this afternoon, thanks to a combination of factors that, somehow, the Internet catalyzed for me.
perelman's proof / wsj on open peer review 09.18.2006, 11:05 AM
Last week got off to an exciting start when the Wall Street Journal ran a story about "networked books," the Institute's central meme and very own coinage. It turns out we were quoted in another WSJ item later that week, this time looking at the science journal Nature, which over the summer has been experimenting with opening up its peer review process to the scientific community (unfortunately, this article, like the networked books piece, is subscriber only).
I like this article because it smartly weaves in the story of Grigory (Grisha) Perelman, which I had meant to write about earlier. Perelman is a Russian topologist who last month shocked the world by turning down the Fields medal, the highest honor in mathematics. He was awarded the prize for unraveling a famous geometry problem that had baffled mathematicians for a century.
There's an interesting publishing angle to this, which is that Perelman never submitted his groundbreaking papers to any mathematics journals, but posted them directly to ArXiv.org, an open "pre-print" server hosted by Cornell. This, combined with a few emails notifying key people in the field, guaranteed serious consideration for his proof, and led to its eventual warranting by the mathematics community. The WSJ:
...the experiment highlights the pressure on elite science journals to broaden their discourse. So far, they have stood on the sidelines of certain fields as a growing number of academic databases and organizations have gained popularity.
One Web site, ArXiv.org, maintained by Cornell University in Ithaca, N.Y., has become a repository of papers in fields such as physics, mathematics and computer science. In 2002 and 2003, the reclusive Russian mathematician Grigory Perelman circumvented the academic-publishing industry when he chose ArXiv.org to post his groundbreaking work on the Poincaré conjecture, a mathematical problem that has stubbornly remained unsolved for nearly a century. Dr. Perelman won the Fields Medal, for mathematics, last month.
(Warning: obligatory horn toot.)
"Obviously, Nature's editors have read the writing on the wall [and] grasped that the locus of scientific discourse is shifting from the pages of journals to a broader online conversation," wrote Ben Vershbow, a blogger and researcher at the Institute for the Future of the Book, a small Brooklyn, N.Y., nonprofit, in an online commentary. The institute is part of the University of Southern California's Annenberg Center for Communication.
Also worth reading is this article by Sylvia Nasar and David Gruber in The New Yorker, which reveals Perelman as a true believer in the gift economy of ideas:
Perelman, by casually posting a proof on the Internet of one of the most famous problems in mathematics, was not just flouting academic convention but taking a considerable risk. If the proof was flawed, he would be publicly humiliated, and there would be no way to prevent another mathematician from fixing any errors and claiming victory. But Perelman said he was not particularly concerned. "My reasoning was: if I made an error and someone used my work to construct a correct proof I would be pleased," he said. "I never set out to be the sole solver of the Poincaré."
Perelman's rejection of all conventional forms of recognition is difficult to fathom at a time when every particle of information is packaged and owned. He seems almost like a kind of mystic, a monk who abjures worldly attachment and dives headlong into numbers. But according to Nasar and Gruber, both Perelman's flouting of academic publishing protocols and his refusal of the Fields medal were conscious protests against what he saw as the petty ego politics of his peers. He claims now to have "retired" from mathematics, though presumably he'll continue to work on his own terms, in between long rambles through the streets of St. Petersburg.
Regardless, Perelman's case is noteworthy as an example of the kind of critical discussions that scholars can now orchestrate outside the gate. This sort of thing is generally more in evidence in the physical and social sciences, but ought also to be of great interest to scholars in the humanities, who have only just begun to explore the possibilities. Indeed, these are among our chief inspirations for MediaCommons.
Academic presses and journals have long functioned as the gatekeepers of authoritative knowledge, determining which works see the light of day and which ones don't. But open repositories like ArXiv have utterly changed the calculus, and Perelman's insurrection only serves to underscore this fact. Given the abundance of material being published directly from author to public, the critical task for the editor now becomes that of determining how works already in the daylight ought to be received. Publishing isn't an endpoint, it's the beginning of a process. The networked press is a guide, a filter, and a discussion moderator.
Nature seems to grasp this and is trying with its experiment to reclaim some of the space that has opened up in front of its gates. Though I don't think they go far enough to effect serious change, their efforts certainly point in the right direction.
some thoughts on mapping 09.15.2006, 9:12 AM
Mapping is a useful abstraction for exploring ideas, and not just for navigation through the physical world. A recent exhibit, Places & Spaces: Mapping Science (at the New York Public Library's Science, Industry and Business Library), presented maps that render the invisible path of scientific progress using metaphors of cartography. The maps ranged in inventiveness: several imitated traditional geographical and topographical maps, while others were built on nodal presentation—tree maps and hyperbolic radial maps. Nearly all relied on citation analysis for the data points. Two interesting projects: Brad Paley's TextArc Visualization of "The History of Science", which maps scientific progress as described in the book "The History of Science"; and Ingo Gunther's Worldprocessor Globes, which are perfectly idiosyncratic in their focus.
But, to me, the exhibit highlighted a fundamental drawback of maps. Every map is an incomplete view of a place or a space. The cartographer makes choices about what information to include and, more significantly, what information to leave out. Each map is a reflection of the cartographer's point of view on the world in question.
Maps serve to guide—whether from home to a vacation house in the next state, or from the origins of genetic manipulation through to the current replication practices of stem-cell research. In physical space, physical objects circumscribe your movement through that space. In mental space, those constraints are missing. How much more important is it, then, to trust your guide, and understand the motivations behind your map? I found myself thinking that mapping as a discipline has the same lack of transparency as traditional publishing.
How do we, in the spirit of exploration, maintain the useful art of mapping, yet expand and extend mapping for the networked age? The network is good at bringing information to people, and collecting feedback. A networked map would have elements of both information sharing, and information collection, in a live, updateable interface. Jeff Jarvis has discussed this idea already in his post on networked mapping. Jarvis proposes mashing up Google maps (or open street map) with other software to create local maps, by and for the community.
This is an excellent start (and I hope we'll see integration of mapping tools in the near future), but does this address the limitations of cartographic editing? What I'm thinking about is something less like a Google map, and more like an emergent terrain assembled from ground-level and satellite photos, walks, contributed histories, and personal memories. Like the Gates Memory Project we did last year, this space would be derived from the aggregate, built entirely without the structural impositions of a predetermined map. It would have a Borgesian flavor; this derived place does not have to be entirely based on reality. It could include fantasies or false memories of a place, descriptions that exist only in dreams. True, creating a single view of such a map would come up against the same problems as other cartographic projects. But a digital map has the ability to reveal itself in layers (like old acetate overlays did for elevation, roads, and buildings). Wouldn't it be interesting to see what a collective dreamscape of New York looked like? And then to peel back the layers down to the individual contributions? Instead of finding meaning through abstraction, we find meaningful patterns by sifting through the pile of activity.
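(Structurally, the acetate-overlay idea is simple: each contribution carries a layer tag, the composite view merges every layer, and any single layer can be peeled back and read alone. A minimal sketch, with all names and coordinates invented for illustration:)

```python
# Illustrative sketch of a layered, contribution-built map: the
# composite view aggregates all layers; peel() isolates one of them.
from collections import defaultdict

class LayeredMap:
    def __init__(self):
        # layer name -> list of (lat, lon, note) contributions
        self._layers = defaultdict(list)

    def contribute(self, layer: str, lat: float, lon: float, note: str):
        """File one ground-level contribution under a named layer."""
        self._layers[layer].append((lat, lon, note))

    def composite(self):
        """The aggregate view: every contribution from every layer."""
        return [(layer, *entry)
                for layer, entries in self._layers.items()
                for entry in entries]

    def peel(self, layer: str):
        """Peel back to a single layer's contributions."""
        return list(self._layers[layer])

nyc = LayeredMap()
nyc.contribute("memory", 40.7648, -73.9721, "saffron gates in Central Park")
nyc.contribute("dream", 40.7484, -73.9857, "a staircase that never ends")
print(len(nyc.composite()))   # 2 -- both layers merged
print(nyc.peel("dream"))      # just the dreamscape layer
```

The interesting editorial questions all live above this data structure, of course: who tags the layers, and which ones the default composite chooses to show.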
We may never be able to collect maps of this scale and depth, but we will be able to see what a weekend of collective psychogeography can produce at the Conflux Festival, which opened yesterday in locations around NYC. The Conflux Festival (formerly the Psychogeography Festival) is "the annual New York festival for contemporary psychogeography, the investigation of everyday urban life through emerging artistic, technological and social practice." It challenges notions of public and private space, and seeks out areas of exploration within and at the edges of our built environment. It also challenges us, as citizens, to be creative and engaged with the space we inhabit. With events going on in the city simultaneously at various locations, and a team of students from Carleton College recording them, I hope we'll end up with a map composed of narrative as much as place. Presented as audio- and video-rich interactions within specific contexts and locations in the city, I think it will give us another way to think about mapping.
on the future of peer review in electronic scholarly publishing 06.28.2006, 7:08 AM
Over the last several months, as I've met with the folks from if:book and with the quite impressive group of academics we pulled together to discuss the possibility of starting an all-electronic scholarly press, I've spent an awful lot of time thinking and talking about peer review -- how it currently functions, why we need it, and how it might be improved. Peer review is extremely important -- I want to acknowledge that right up front -- but it threatens to become the axle around which all conversations about the future of publishing get wrapped, like Isadora Duncan's scarf, strangling any possible innovations in scholarly communication before they can get launched. In order to move forward with any kind of innovative publishing process, we must solve the peer review problem, but in order to do so, we first have to separate the structure of peer review from the purposes it serves -- and we need to be a bit brutally honest with ourselves about those purposes, distinguishing between those purposes we'd ideally like peer review to serve and those functions it actually winds up fulfilling.
The issue of peer review has of course been brought back to the front of my consciousness by the experiment with open peer review currently being undertaken by the journal Nature, as well as by the debate about the future of peer review that the journal is currently hosting (both introduced last week here on if:book). The experiment is fairly simple: the editors of Nature have created an online open review system that will run parallel to its traditional anonymous review process.
From 5 June 2006, authors may opt to have their submitted manuscripts posted publicly for comment.
Any scientist may then post comments, provided they identify themselves. Once the usual confidential peer review process is complete, the public 'open peer review' process will be closed. Editors will then read all comments on the manuscript and invite authors to respond. At the end of the process, as part of the trial, editors will assess the value of the public comments.
As several entries in the web debate that is running alongside this trial make clear, though, this is not exactly a groundbreaking model; the editors of several other scientific journals that already use open review systems to varying extents have posted brief comments about their processes. Electronic Transactions in Artificial Intelligence, for instance, has a two-stage process: a three-month open review stage, followed by a speedy up-or-down refereeing stage (with some time for revisions, if desired, in between). This process, the editors acknowledge, has produced some complications in the notion of "publication," as the texts in the open review stage are already freely available online; in some sense, the journal itself has become a vehicle for re-publishing selected articles.
Peer review is, by this model, designed to serve two different purposes -- first, fostering discussion and feedback amongst scholars, with the aim of strengthening the work that they produce; second, filtering that work for quality, such that only the best is selected for final "publication." ETAI's dual-stage process makes this bifurcation in the purpose of peer review clear, and manages to serve both functions well. Moreover, by foregrounding the open stage of peer review -- by considering an article "published" during the three months of its open review, but then only "refereed" once anonymous scientists have held their up-or-down vote, a vote that comes only after the article has been read, discussed, and revised -- this kind of process seems to return the center of gravity in peer review to communication amongst peers.
I wonder, then, about the relatively conservative move that Nature has made with its open peer review trial. First, the journal is at great pains to reassure authors and readers that traditional, anonymous peer review will still take place alongside open discussion. Beyond this, however, there seems to be a relative lack of communication between those two forms of review: open review will take place at the same time as anonymous review, rather than as a preliminary phase, preventing authors from putting the public comments they receive to use in revision; and while the editors will "read" all such public comments, it appears that only the anonymous reviews will be considered in determining whether any given article is published. Is this caution about open review an attempt to avoid throwing out the baby of quality control with the bathwater of anonymity? In fact, the editors of Atmospheric Chemistry and Physics present evidence (based on their two-stage review process) that open review significantly increases the quality of articles a journal publishes:
Our statistics confirm that collaborative peer review facilitates and enhances quality assurance. The journal has a relatively low overall rejection rate of less than 20%, but only three years after its launch the ISI journal impact factor ranked Atmospheric Chemistry and Physics twelfth out of 169 journals in 'Meteorology and Atmospheric Sciences' and 'Environmental Sciences'.
These numbers support the idea that public peer review and interactive discussion deter authors from submitting low-quality manuscripts, and thus relieve editors and reviewers from spending too much time on deficient submissions.
By keeping anonymous review and open review separate, without giving the open process any precedence, Nature is allowing itself to avoid asking any risky questions about the purposes of its process, and is perhaps inadvertently maintaining the focus on peer review's gatekeeping function. The result of such a focus is that scholars are less able to learn from the review process, less able to put comments on their work to use, and less able to respond to those comments in kind.
If anonymous, closed peer review processes aren't facilitating scholarly discourse, what purposes do they serve? Gatekeeping, as I've suggested, is a primary one; as almost all of the folks I've talked with this spring have insisted, peer review is necessary to ensuring that the work published by scholarly outlets is of sufficiently high quality, and anonymity is necessary in order to allow reviewers the freedom to say that an article should not be published. In fact, this question of anonymity is quite fraught for most of the academics with whom I've spoken; they have repeatedly responded with various degrees of alarm to suggestions that their review comments might in fact be more productive delivered publicly, as part of an ongoing conversation with the author, rather than as a backchannel, one-way communication mediated by an editor. Such a position may be justifiable if, again, the primary purpose of peer review is quality control, and if the process is reliably scrupulous. However, as other discussants in the Nature web debate point out, blind peer review is not a perfect process, subject as it is to all kinds of failures and abuses, ranging from flawed articles that nonetheless make it through the system to ideas that are appropriated by unethical reviewers, with all manner of cronyism and professional jealousy in between.
So, again, if closed peer review processes aren't serving scholars in their need for feedback and discussion, and if they can't be wholly relied upon for their quality-control functions, what's left? I'd argue that the primary purpose that anonymous peer review actually serves today, at least in the humanities (and that qualifier, and everything that follows from it, opens a whole other can of worms that needs further discussion -- what are the different needs with respect to peer review in the different disciplines?), is that of institutional warranting: of conveying to college and university administrations that the work their employees are doing is appropriate and well-thought-of in its field, and thus that these employees are deserving of ongoing appointments, tenure, promotions, raises, and what have you.
Are these the functions that we really want peer review to serve? Vast amounts of scholars' time is poured into the peer review process each year; wouldn't it be better to put that time into open discussions that not only improve the individual texts under review but are also, potentially, productive of new work? Isn't it possible that scholars would all be better served by separating the question of credentialing from the publishing process, by allowing everything through the gate, by designing a post-publication peer review process that focuses on how a scholarly text should be received rather than whether it should be out there in the first place? Would the various credentialing bodies that currently rely on peer review's gatekeeping function be satisfied if we were to say to them, "no, anonymous reviewers did not determine whether my article was worthy of publication, but if you look at the comments that my article has received, you can see that ten of the top experts in my field had really positive, constructive things to say about it"?
Nature's experiment is an honorable one, and a step in the right direction. It is, however, a conservative step, one that foregrounds the institutional purposes of peer review rather than the ways that such review might be made to better serve the scholarly community. We've been working this spring on what we imagine to be a more progressive possibility, the scholarly press reimagined not as a disseminator of discrete electronic texts, but instead as a network that brings scholars together, allowing them to publish everything from blogs to books in formats that allow for productive connections, discussions, and discoveries. I'll be writing more about this network soon; in the meantime, however, if we really want to energize scholarly discourse through this new mode of networked publishing, we're going to have to design, from the ground up, a productive new peer review process, one that makes more fruitful interaction among authors and readers a primary goal.