Yes, I’m afraid it’s yet another over-the-top response to yesterday’s big announcement of Google Wave, the latest paradigm-shifting, gob-smackingly brilliant piece of technology (or PR, depending on your viewpoint) out of Google. My interest, however, is pretty specific: how can we leverage it to help us capture, communicate, and publish research? And my opinion is that this is absolutely game-changing. It makes a whole series of problems simply go away, and potentially provides a route to solving many of the problems I was struggling to see how to manage.
First, let’s look at the grab bag of generic issues I’ve been thinking about. Most recently I wrote about how I thought “real time” wasn’t the big deal; what matters is giving users back control over the timeframe in which streams reach them. I had some vague ideas about how this might look, but Wave has working code. When the people you are in conversation with are online and looking at the same wave, they see modifications in real time. If they are not in the same document they see the comments or changes later, but can also “replay” those changes. A lot of thought has clearly gone into the default views presented, based on when and how a person first comes into contact with a document.
Another issue that has frustrated me is the divide between wikis and blogs. Wikis generally have better editing functionality, but blogs have workable RSS feeds; wikis have more plugins, but blogs map better onto the diary style of a lab notebook. None of these were ever fundamental philosophical differences, just historical accidents of implementation and developer priorities. Wave makes most of these differences irrelevant by creating a collaborative document framework that easily incorporates much of the best of all these tools within a high-quality rich text and media authoring platform. Bringing in content looks relatively easy, and pushing content out in different forms also seems to be pretty straightforward. Streams, feeds, and other outputs, if not native, look to be easily generated either directly or by passing material to other services. The waves themselves are XML, which should enable straightforward parsing and tweaking with existing tools as well.
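To make that last point concrete, here is a minimal sketch of pulling the conversation out of a wave with a standard XML library. The element and attribute names below are invented for illustration; they are not the real Wave schema, which hadn’t been published in detail at the time of writing.

```python
import xml.etree.ElementTree as ET

# Hypothetical wave export -- element names are invented for illustration,
# not the real Wave XML schema.
wave_xml = """
<wave id="example!w+123">
  <blip author="alice@example.com">
    <body>Started the overnight incubation at 37C.</body>
  </blip>
  <blip author="bob@example.com">
    <body>Remember to take a sample at the two hour point.</body>
  </blip>
</wave>
"""

root = ET.fromstring(wave_xml)

# Pull out (author, message) pairs from each "blip" in the conversation.
entries = [(b.get("author"), b.findtext("body").strip())
           for b in root.findall("blip")]

for author, body in entries:
    print(author, "->", body)
```

The point is simply that once the container is XML, any existing toolchain that speaks XML can be pointed at it, no bespoke export step required.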
One thing I haven’t written much about, but have been thinking about, is the process of converting lab records into reports and on to papers. While there wasn’t much on display about complex documents, a lot of just plain nice functionality, drag-and-drop links, and options for incorporating and embedding content was at least touched on. Looking a little closer into the documentation, there seems to be quite a strong provenance model, built on a code-repository-style framework for handling document versioning and forking. These are all good steps in the right direction, and with open APIs and multitouch as standard on the horizon, there will no doubt be excellent visual organization and authoring tools along very soon now. For those worried about security and control, a 30-second mention in the keynote made it clear that they have it sorted: private messages (documents? mecuments?) need never leave your local server.
Finally, the big issue for me has for some time been bridging the gap between unstructured capture of streams of events and making it easy to convert those into structured descriptions of the interpretation of experiments. The audience was clearly wowed by the demonstration of inline, real-time, contextual spell checking and translation. My first thought was: I want to see that real-time engine attached to an ontology browser or DbPedia, automatically generating links back to the URIs for concepts and objects. What really struck me most was the use of waves, with a few additional tools, to provide authoring tools that help us build the semantic web, the web of data, and the web of things.
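A toy sketch of what that might look like, assuming a small hand-built term-to-URI dictionary standing in for a live DbPedia or ontology-service lookup (the terms and the wiring are mine, purely for illustration):

```python
import re

# Toy concept dictionary -- a real system would query DbPedia or an
# ontology service in real time; these entries are illustrative.
CONCEPTS = {
    "pcr": "http://dbpedia.org/resource/Polymerase_chain_reaction",
    "agarose gel": "http://dbpedia.org/resource/Agarose_gel_electrophoresis",
}

def annotate(text):
    """Wrap known terms in an HTML link to their concept URI,
    preserving the original capitalisation of the matched text."""
    for term, uri in CONCEPTS.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        text = pattern.sub(
            lambda m: '<a href="%s">%s</a>' % (uri, m.group(0)), text)
    return text

print(annotate("Ran the PCR and checked the product on an agarose gel."))
```

The interesting part is not the string matching, which is trivial, but that Wave’s live-editing engine would let this happen as you type, the same way the spell checker was demoed.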
For me, the central challenges for a laboratory recording system are capturing objects, whether digital or physical, as they are created, and then serving them back to the user as they need them to describe the connections between them. As we connect up these objects we will create the semantic web. As we build structured knowledge against those records we will build a machine-parseable record of what happened, one that will help us plan for the future. As I understand it, each wave, and indeed each part of a wave, can be a URL endpoint: an object on the semantic web. If they aren’t already, it will be easy to make them so. As much as anything, it is the web-native collaborative authoring tool, making embedding and pass-by-reference the default approach rather than cut and paste, that will make the difference. Google don’t necessarily do semantic web, but they do links and they do embedding, and they’ve provided a framework that should make it easy to add meaning to the links. Google just blew the door off the ELN market, and they probably didn’t even notice.
Those of us interested in web-based and electronic recording and communication of science have spent a lot of the last few years trying to describe how we need to glue the existing tools together: mailing lists, wikis, blogs, documents, databases, papers. The framework was never right, so a lot of attention was focused on moving things backwards and forwards, on how to connect one thing to another. That problem, as far as I can see, has now ceased to exist. The challenge now is in building the right plugins and making sure the architecture is compatible with existing tools. But fundamentally the framework seems to be there. It seems like it’s time to build.
A more sober reflection will probably follow in a few days ;-)
It sounds a bit like the moment when Mosaic launched and it suddenly became clear that a web browser was the only tool you needed – all those Gopher, WAIS, etc. services were subsumed into the web. Perhaps Wave will do that for all our communication and collaboration tools.
Frank, as I said to someone yesterday, when they write the history of the web, Thursday will be the day that email died. It may be too early to call it the day the word processor died, but I guess we’ll see. I think it will clearly subsume wikis and blogs, at least as separate services – there will still need to be some “public-facing” approaches that may differ from what was demoed, which was essentially person to person. There was no discussion of subscribing per se, but that is easily doable.
It’s interesting, actually – some people just get it and many others don’t seem to. All the translation and spell checking and clever embedding is very flash, but the key revolution at the core of it seems to me to be the combining of messaging and the collaborative document. Everything else is just nice added functionality.
They didn’t really talk about working with non-text data in the presentation, but I’m sure being able to manipulate it is going to be important for doing science online. I can see that front ends can be built to help, but would waves support a sufficiently rich database underneath them to allow querying and manipulation?
Hi Bob, if you look closely at some of the background information, the basic file format for a wave is a chunk of XML. I am guessing that, worst case, you could create display plugins or processing tools that would recognise any arbitrary file wrapped in XML and know what to do with it. I would imagine there are a number of people feverishly writing or adapting tools for display and manipulation of OOXML and MS-XML file formats as I type :-)
Databases could be more challenging, but if you create a robot that parses the XML into, e.g., SQL, then I imagine you could build documents for interaction with arbitrary databases as well, or with SOAP or SPARQL endpoints. I would guess that a lot of this can be adapted from existing web interfaces pretty easily. And the fact that forms are a native concept for a wave means that gathering information to populate databases should be very easy.
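A rough sketch of what such a robot might do, again assuming an invented XML layout for the wave (the real schema may differ) and using an in-memory SQLite database to stand in for whatever the robot actually talks to:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical wave export -- element and attribute names are invented
# for illustration, not the real Wave schema.
wave_xml = """
<wave id="lab!w+42">
  <blip author="alice@example.com"><body>Sample A: OD600 = 0.45</body></blip>
  <blip author="bob@example.com"><body>Sample B: OD600 = 0.52</body></blip>
</wave>
"""

# Stand-in database: the same pattern would apply to any SQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blips (wave_id TEXT, author TEXT, body TEXT)")

root = ET.fromstring(wave_xml)
wave_id = root.get("id")
for blip in root.findall("blip"):
    conn.execute(
        "INSERT INTO blips VALUES (?, ?, ?)",
        (wave_id, blip.get("author"), blip.findtext("body")),
    )
conn.commit()

rows = conn.execute(
    "SELECT author, body FROM blips ORDER BY author").fetchall()
for author, body in rows:
    print(author, "->", body)
```

A robot sitting on the wave server could run something like this every time the wave changes, so the database is always a live mirror of the document.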
Friendfeed conversation is here http://friendfeed.com/cameronneylon/9864a4e7/omg-this-changes-everything-or-yet-another-wave
Of course, it also starts to make Facebook and similar services redundant, so it will be interesting to see what happens in the wake of this (maybe it will be renamed Google Tsunami?).
My biggest worry so far is that there will now be even more information to cope with. But to be fair, I haven’t made it to the end of the presentation, so maybe there is some clever way of summarising highlights – or one will be developed.
Waiting for the public release and crossing my fingers … Google Wave will definitely bring web collaboration to the masses and change mentalities (the main challenge in the research community).
I attended Google I/O at the end of May. If you have a Wave account we can share waves; here’s my account: tigresse@wavesandbox.com