Listing entries tagged with web_2.0
future of flickr 06.20.2006, 12:31 PM
Wired News reported last week that some Flickr users were upset by the enforcement of a rarely mentioned policy: non-photographic images are made unavailable to the public if an account does not mostly contain photographs. Although Flickr is best known as a photo-sharing site, people often post all kinds of digitized images there, including our collaborator, Alex Itin. Users of Second Life are currently receiving particular attention under Flickr's posting policies.
The article quotes Stewart Butterfield saying, "the rationale is that when people do a global search on Flickr, they want to find photos."
I can appreciate that Flickr wants to maintain a clear brand identity. They have created one of the most successful open photo-sharing websites to date, and they don't want to dilute their brand. However, isn't this just a tagging issue? It is ironic that Flickr, one of the pioneering Web 2.0 apps, whose success relies heavily on the power of folksonomy, misses this point. Flickr was one of the primary ways the general public figured out how tagging works, and its users should be able to figure out how to select the kinds of images they want.
How much of a stretch would it be for Flickr to become an image-sharing website, with tags for photographs, scanned analog images, and born-digital images?
Finally, Second Life recently held an event tied to a virtual X-Men movie premiere, and its images made their way into Flickr. Asked to comment, Butterfield said, "Flickr wasn't designed for Universal or Sony to promote their movie. Flickr is very explicitly for personal, noncommercial use" rather than "using a photo as a proxy for an ad."
Again, I appreciate the sentiment. However, is there a feasible way to enforce this kind of policy? Is it OK for me to post a picture from my trip to Seattle, wearing an Izod shirt, holding a Starbucks cup, in front of the Space Needle? Isn't this a proxy for an ad? As we have noted before, architectural works such as Disneyland, the Chrysler Building, and the Space Needle are all copyrighted. Our clothes are plastered with icons and slogans. Food and drinks are covered with logos. We are a culture of brands, and increasingly everything in our lives is branded. It should come as no surprise that the media we, as a culture, produce reflects these brands, corporate identities, and commercial bodies.
The decreasing cost of digital production tools has vastly increased amateur media production. Flickr provides a great service by supporting the sharing of all the media people are creating. But Flickr created something bigger than it originally intended. Rather than limiting itself to photo sharing, there is much more potential in creating a space for sharing, and building community around, all digital images.
if not rdf, then what? 03.28.2006, 11:35 AM
I posted about RDF and the difficulty the web development community has had fully adopting RDF and ontologies as a method of metadata organization. I said that one of the reasons was the relative complexity of RDF and the cost of generating useful metadata (as opposed to just enough information to solve the current problem). Simon St. Laurent has a nice redux of the matter. I won't try to duplicate that, but I do want to explain some of the details about RDF. Though I made a case for how complex RDF is when used to create fully relational data sets, I didn't do a very good job of explaining how simple RDF is in principle. RDF proponents believe they are building the future. I'm not entirely convinced, but I want to take a close look at RDF before I consider other solutions.
RDF seems overwhelming, but in the inimitable words of Squire Patsy, "It's only a model!" A model, in this case, that can represent digital and real things and their relationships. The promise of RDF is that it can describe everything using a combination of unique identifiers, properties, and property values.
The heart of RDF is the unique identifier. Your name is a unique identifier, but only as long as there is no one else in the room who answers to [your-name-here]. This, clearly, is not a good way to create a universal identification system. Your social security number is a unique identifier in this country, but it doesn't signify much in China, and the system is not extensible (we'd run out of numbers if we tried to give SSNs to all of China). Your email address is a unique identifier on the Internet, and it works pretty well. A Uniform Resource Identifier (URI) is a little more extensible and, since it's longer than an email address, can carry more information. You can use a URI to identify something even if it can't be retrieved through the web. A product at Amazon.com, for example, could have a unique URI, even though you still need a truck to bring it to you.
If we look at objects in the real world, they have physical properties, like size, color, and hardness. An example: my kitchen table. It's a three-dimensional object, so it has height, width, and length. It's made of wood and has been stained. It also has informational properties: the date I purchased it, the person who sold it to me, the area of the country it came from, the level of personal attachment I have for the thing. Each of these properties can be put into RDF by linking it to a schema that defines the property in a normative fashion. It'll make a little more sense when I give an example. But for that to happen I need to describe...
Property values are the names, numbers, and dates that make properties make sense. My kitchen table is 78" long x 28" wide x 34" tall, dark-walnut stained, and soft (as wood goes). I bought it in February, 2002 from Joe Komenda, and I'm never going to part with it (even though it isn't really NYC apartment sized). Property values are the easy part of the metadata. Associating property values with properties, and properties with normative schemas -- that's when things get tricky.
Here's the example I promised (bound in an XML format):
<rdf:Description rdf:about="http://www.jdwilbur.fake/furniture/kitchen-table">
  <kt:height>34</kt:height>
  <kt:seller rdf:resource="http://www.komenda.fake/Joseph%20Komenda#" />
  <kt:sellit>Never ever ever</kt:sellit>
</rdf:Description>
http://www.jdwilbur.fake/furniture/kitchen-table: The URI of my kitchen table
kt:height: The property height from my schema defined here: http://www.jdwilbur.fake/furniture#
34: The property value that tells me how tall my table is. I would infer from the schema that the value is in inches, not millimeters or light years
For the purposes of this example, I've made up my own fake schema (which would be a bunch of lines of XML similar to the example above) and included three real ones: Dublin Core dc, Geomap 2d geom2d for mapping coordinates, and map to relate the coordinates to physical locations. My schema, kt (which stands for kitchen table), includes some special properties like seller and sellit. The seller, Joe Komenda, has his own URI (it appears after rdf:resource). The others are fairly standard, but have a specific meaning in my personal context. The only other tricky part is the geographic coordinates, because I'm using three different schemas to define a geographic point. (It's just an example taken from mapbureau. It could resolve to the middle of the Pacific Ocean for all I know.)
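Underneath the XML syntax, everything above boils down to (subject, property, value) triples. Here is a minimal sketch of that model in plain Python, using the fake kitchen-table URIs from the example; this is an illustration of the data model only, not real RDF tooling (a production stack would use a dedicated library):

```python
# RDF's data model in miniature: every statement is a
# (subject, property, value) triple. URIs are the fake ones
# from the kitchen-table example above.

KT = "http://www.jdwilbur.fake/furniture#"            # my made-up schema
TABLE = "http://www.jdwilbur.fake/furniture/kitchen-table"

triples = [
    (TABLE, KT + "height", "34"),
    (TABLE, KT + "seller", "http://www.komenda.fake/Joseph%20Komenda#"),
    (TABLE, KT + "sellit", "Never ever ever"),
]

def values(subject, prop):
    """Return every value asserted for (subject, prop)."""
    return [v for s, p, v in triples if s == subject and p == prop]

print(values(TABLE, KT + "height"))   # ['34']
```

The interesting part is what the model buys you: because both the table and its seller are URIs, a third party could assert new triples about either one, and the statements would merge cleanly into the same graph.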
The obvious point here is that writing RDF is hard. We need automated tools to help us compose in this syntax, which is convoluted but requires perfection to work. Humans are not perfect; RDF is not our language. RDF also requires front-loading: developing schemas, choosing terms and URIs, and finding prior art so that terms can be reused. We need tools to help us manage that aspect. And we need applications that demand RDF. Currently, the demand for RDF is low because it is mostly for the sake of maintaining the richness of a data set for some future application -- not the ones I work with every day.
So if RDF, syntactically difficult but conceptually easy, cannot get adopted, what is the alternative? The web API. A wide variety of new web applications and services are accompanied by an API; it seems like you can hardly be part of Web 2.0 without one. What does the API have that RDF doesn't? Simplicity. Familiarity. You cannot interact with an API unless you follow the rules. Fine. Same with RDF. But the rules of an API fall into the familiar realm of setting parameters, calling previously named functions, and following the documentation. This is like a caffeinated beverage for developers: they instinctively know how to consume it. More than that, APIs mean that people can innovate at the interface level, even if they don't have serious coding chops. I've seen the Google API implemented in twenty minutes. This is a more fluid way to develop, one that feels more comfortable even if it sacrifices information richness. We'll get to RDF one day, maybe in Web 3.5, but until then we will take small steps toward data sharing and interoperability with APIs.
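To make the contrast concrete, here is a sketch of what API-style development feels like. The endpoint and parameter names below are invented for illustration, but the pattern, setting a few documented parameters and calling a named method, is the whole learning curve, with no schema design up front:

```python
# Hypothetical sketch of consuming a Web 2.0-style search API.
# The endpoint and parameter names are made up; the point is that
# the developer only fills in documented parameters.
from urllib.parse import urlencode

def build_search_request(query, per_page=10):
    """Build the request URL for a made-up photo-search API."""
    base = "http://api.example.fake/rest/"
    params = {"method": "photos.search", "text": query, "per_page": per_page}
    return base + "?" + urlencode(params)

print(build_search_request("kitchen table"))
# http://api.example.fake/rest/?method=photos.search&text=kitchen+table&per_page=10
```

Compare this with the front-loading RDF requires: no schema, no URIs to mint, no prior art to survey; the trade-off is that the response means only what this one service says it means.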
hmmm... online word processing 11.21.2005, 6:18 PM
Not quite sure what I think of this new web-based word processor, Writely. Cute Web 2.0ish name, "beta" to the hilt. It's free and quite easy to get started. I guess it falls into that weird zone of transitional unease between desktop computing and the wide open web, where more and more of our identity and information resides. Some of the tech specifics: Writely saves documents in Word and (as of today) Open Office formats, outputs as RSS and to some blogging platforms (not ours), and can also be saved as a simple web page (here's the Writely version of this post). A key feature is that Writely documents can be written and edited by multiple authors, like SubEthaEdit only totally net-based. It feels more or less like a disembodied text editor for a wiki.
I'm trying to think about what's different about writing online. Movable Type, our blogging software, is essentially an ultra-stripped-down, web-based text editor, and it's no fun to work in. That's partly because the text field is about the size of a mail slot, but writing online can be annoying for other reasons, chief among them that you have to be online to work, and that you are susceptible to the chance mishaps of the browser (accidentally navigating back and losing everything, the browser crashing, forgetting to pay Time Warner so they turn off the web, etc.). But with a conventional word processor you're vulnerable to the mishaps of the machine (the hard drive dies and you didn't back it up, it crashes and you didn't save, coffee spills...). Writely saves everything automatically as you go, maintaining a revision history and tracking changes -- a very nice feature.
They say this is the future of software, at least for the simple everyday kind of stuff: web-based tool suites and tons of online data storage. I guess it's nice not having to be tied to one machine. Your work is just out there, waiting for you to log in. But then again, your work is just out there...
Posted by ben vershbow at 06:18 PM
| Comments (2)
tags: Social Software , Web2.0 , document , ebook , open_office , openoffice , social_software , web_2.0 , word , word_processing , word_processor , writely , writing
questions and answers 10.25.2005, 10:16 AM
in 1980 and 81 i had a dream job -- charlie van doren, the editorial director of Encyclopedia Britannica, hired me to think about the future of encyclopedias in the digital era. i parlayed that gig into an eighteen-month stint with Alan Kay when he was the chief scientist at Atari. Alan had read the paper i wrote for britannica -- EB and the Intellectual Tools of the Future -- and in his enthusiastic, impulsive style, said, "this is just the sort of thing i want to work on, why not join me at Atari."
while we figured that the future encyclopedia should at the least be able to answer most any factual question someone might have, we really didn't have any idea of the range of questions people would ask. we reasoned that while people are curious by nature, they fall out of the childhood habit of asking questions about anything and everything because they get used to the fact that no one in their immediate vicinity actually knows or can explain the answer, and that the likelihood of finding the answer in a readily available book isn't much greater.
so, as an experiment we gave a bunch of people tape recorders and asked them to record any question that came to mind during the day -- anything. we started collecting question journals in which people whispered their wonderings -- both the mundane and the profound. michael naimark, a colleague at Atari, was particularly fascinated by this project, and he went to the philippines to gather questions from a mountain tribe.
anyway, this is a long intro to the realization that between wikipedia and google, alan's and my dream of a universal question/answer machine is actually coming into being. although we could imagine what it would be like to have the ability to get answers to most any question, we assumed that the foundation would be a bunch of editors responsible for collecting and organizing vast amounts of information. we didn't imagine the world wide web as a magnet that would motivate people collectively to store a remarkable range of human knowledge in a searchable database.
on the other hand we assumed that the encyclopedia of the future would be intelligent enough to enter into conversation with individual users, helping them through rough spots like a patient tutor. looks like we'll have to wait a while for that.
nicholas carr on "the amorality of web 2.0" 10.17.2005, 9:00 AM
Nicholas Carr, who writes about business and technology and was formerly an editor of the Harvard Business Review, has published an interesting though problematic piece on "the amorality of web 2.0". I was drawn to the piece because it seemed to be questioning the giddy optimism surrounding "web 2.0", specifically Kevin Kelly's rapturous late-summer retrospective on ten years of the world wide web, from Netscape IPO to now. While he does poke some much-needed holes in the carnival floats, Carr fails to adequately address the new media practices on their own terms and ends up bashing Wikipedia with some highly selective quotes.
Carr is skeptical that the collectivist paradigms of the web can lead to the creation of high-quality, authoritative work (encyclopedias, journalism etc.). Forced to choose, he'd take the professionals over the amateurs. But put this way it's a Hobson's choice. Flawed as it is, Wikipedia is in its infancy and is probably not going away. Whereas the future of Britannica is less sure. And it's not just amateurs that are participating in new forms of discourse (take as an example the new law faculty blog at U. Chicago). Anyway, here's Carr:
The Internet is changing the economics of creative work - or, to put it more broadly, the economics of culture - and it's doing it in a way that may well restrict rather than expand our choices. Wikipedia might be a pale shadow of the Britannica, but because it's created by amateurs rather than professionals, it's free. And free trumps quality all the time. So what happens to those poor saps who write encyclopedias for a living? They wither and die. The same thing happens when blogs and other free on-line content go up against old-fashioned newspapers and magazines. Of course the mainstream media sees the blogosphere as a competitor. It is a competitor. And, given the economics of the competition, it may well turn out to be a superior competitor. The layoffs we've recently seen at major newspapers may just be the beginning, and those layoffs should be cause not for self-satisfied snickering but for despair. Implicit in the ecstatic visions of Web 2.0 is the hegemony of the amateur. I for one can't imagine anything more frightening.
He then has a nice follow-up in which he republishes a letter from an administrator at Wikipedia, which responds to the above.
Encyclopedia Britannica is an amazing work. It's of consistent high quality, it's one of the great books in the English language and it's doomed. Brilliant but pricey has difficulty competing economically with free and apparently adequate....
...So if we want a good encyclopedia in ten years, it's going to have to be a good Wikipedia. So those who care about getting a good encyclopedia are going to have to work out how to make Wikipedia better, or there won't be anything.
Posted by ben vershbow at 09:00 AM
| Comments (5)
tags: Libraries, Search and the Web , OS , Online , Publishing, Broadcast, and the Press , Social Software , Web2.0 , amateur , blog , blogging , blogs , book , books , britannica , collective , encyclopedia , encyclopedia_britannica , internet , journalism , mainstream_media , media , msm , open_content , open_source , publishing , web , web_2.0 , wiki , wikipedia