Reflecting on a Wave: The Demo at Science Online London 2009

Yesterday, along with Chris Thorpe and Ian Mulvany, I was involved in what I imagine might be the first of a series of demos of Wave as it could apply to scientists and researchers more generally. You can see the backup video I made, in case we had no network, on Viddler. I’ve not really done a live demo like that before, so it was a bit difficult to tell from the inside how it was going, but although much of the tweetage was apparently underwhelmed, the direct feedback afterwards was very positive and perceptive.

I think we struggled to get across an idea of what Wave is, which confused a significant proportion of the audience, particularly those who weren’t already aware of it or who didn’t have a pre-conceived idea of what it might do for them. My impression was that those in the audience who were technically oriented were excited by what they saw. If I were to do a demo again I would focus more on telling a story about writing a paper, really giving people a context for what is going on. One problem with Wave is that it is easy to end up with a document littered with chat blips, and I think this confused an audience more used to thinking about documents.

The other problem is perhaps that a bunch of things “just working” is underwhelming when people are used to the idea of powerful applications that they do their work in. Developers get the idea that this is all happening in a generic environment, not a special purpose-built one, and that is exciting. Users just expect things to work or they’re not interested. Especially scientists. And it would be fair to say that the robots we demonstrated, mostly the work of a few hours or a few days, aren’t incredibly impressive on the surface. In addition, when Wave is working at its best it makes everything look easy, if not yet polished. Because it looks easy, people assume it is, and so it’s not worth getting excited about. The point is not that it is possible to automatically mark up text, pull in data, and then process it. It is that you can do all this, effectively in your email inbox, with unrelated tools that are pretty easy to build, or at least adapt. But we also clearly need some flashier demos for scientists.
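To give a flavour of how small these things can be: a Robot is just a web service that gets called when something happens in a wave. Below is a minimal sketch of the kind of text-markup Robot I mean, written against the preview-era Python Robots API as I remember it, so treat the exact calls as indicative only; the DOI pattern, robot name, and URLs are illustrative, not one of the Robots we actually demoed.

```python
import re

from waveapi import document
from waveapi import events
from waveapi import robot

# Illustrative pattern: bare DOIs typed into a blip, e.g. "doi:10.1000/xyz".
DOI_PATTERN = re.compile(r'doi:(10\.\d{4,}/\S+)')

def OnBlipSubmitted(properties, context):
    """When a blip is submitted, find DOIs and annotate them as links."""
    blip = context.GetBlipById(properties['blipId'])
    doc = blip.GetDocument()
    for match in DOI_PATTERN.finditer(doc.GetText()):
        start, end = match.span()
        # A 'link/manual' annotation makes the client render the
        # matched range as a clickable link to the resolver.
        doc.SetAnnotation(document.Range(start, end), 'link/manual',
                          'http://dx.doi.org/' + match.group(1))

if __name__ == '__main__':
    bot = robot.Robot('doi-linker',  # hypothetical robot, for illustration
                      image_url='http://doi-linker.appspot.com/icon.png',
                      version='1',
                      profile_url='http://doi-linker.appspot.com/')
    bot.RegisterHandler(events.BLIP_SUBMITTED, OnBlipSubmitted)
    bot.Run()
```

Deployed on App Engine and added to a wave as a participant, something of this shape really is the work of a few hours, which is exactly the point.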

Ian pulled off a great coup, in my view, by linking the output of one Robot to a visualization provided by another. Ian has written a robot called Janey which talks to the Journal/Author Name Estimator (JANE) service. It can either suggest which journal to send a paper to based on its abstract, or suggest other articles of interest. Ian had extended the Robot the night before so that it could also fetch the co-authorship graph for a set of papers and write it into a new blip as a list of edges (co-authorships).
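The graph-building step itself is tiny. I won’t reproduce Janey’s actual code here, but assuming the service hands back a list of author names for each paper, the edge list is just the author pairs within each paper. A rough sketch:

```python
from itertools import combinations

def coauthorship_edges(papers):
    """papers: one list of author names per paper.
    Returns the distinct co-authorship edges across all of them."""
    edges = set()
    for authors in papers:
        # Every pair of authors on the same paper is an edge; sorting
        # makes (A, B) and (B, A) count as the same edge.
        edges.update(combinations(sorted(set(authors)), 2))
    return sorted(edges)

# Toy input; a real Robot would build this from the service response.
papers = [['Mulvany I', 'Neylon C'], ['Neylon C', 'Thorpe C']]
for a, b in coauthorship_edges(papers):
    print('%s -- %s' % (a, b))
```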

The clever bit was that Ian had found another Robot, written by someone entirely different, that visualizes connection graphs. Ian set the blip that Janey writes into to be one that the Graph robot was watching, and the data Janey pulled in was automatically visualized [see a screencast here]. Two Robots written by different people for different purposes can easily be hooked up together and just work. I’m not even sure whether Ian had had a chance to test it prior to the demo…but it looked easy, so why wouldn’t people expect two data processing tools to work seamlessly together? I mean, it should just work.
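I don’t know how the Graph robot was implemented, but the receiving end of a hookup like this only has to parse an edge-per-line blip and emit something a renderer understands, Graphviz dot for instance. A sketch under that assumption:

```python
def edges_to_dot(blip_text):
    """Turn an edge-per-line blip ('A -- B') into Graphviz dot,
    ignoring any lines that are not edges (chat blips, headers)."""
    edges = [line.split(' -- ', 1) for line in blip_text.splitlines()
             if ' -- ' in line]
    body = '\n'.join('  "%s" -- "%s";' % (a.strip(), b.strip())
                     for a, b in edges)
    return 'graph coauthors {\n%s\n}' % body

print(edges_to_dot('Mulvany I -- Neylon C\nNeylon C -- Thorpe C'))
```

The two Robots never agreed an interface with each other; they just happen to read and write the same blip, which is why the composition comes for free.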

The idea of a Wave as a data processing workflow was implicit in what I have written previously, but Ian’s demo, and a later conversation with Alan Cann, really sharpened it up in my mind. Alan was asking about different visual representations of a wave. The current client essentially uses the visual metaphor of an email system. One of the points that came out of the demo for me is that it will probably be necessary to write specific clients that make sense for specific tasks. Alan asked about the idea of a Yahoo Pipes type of interface. This suggests a different way of thinking about a Wave: instead of a set of text or media elements, it becomes a way to wire up Robots, automated connections to web services. Essentially, with a set of Robots and an appropriate visual client you could build a visual programming engine, a web service interface, or indeed a visual workflow editing environment.
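To make the pipes idea concrete: strip away the client, and a wave of chained Robots is just a sequence of stages, each watching one blip and writing the next. A toy model, reusing the shape of Ian’s demo (the lambda stages below are stand-ins, not real Robots):

```python
def run_pipeline(seed_text, stages):
    """Each stage plays the role of a Robot watching the previous
    blip and writing a new one; the wave is the wiring diagram."""
    blips = [seed_text]
    for stage in stages:
        blips.append(stage(blips[-1]))
    return blips

# Two toy 'Robots': one keeps only the edge lines, one renders them.
extract_edges = lambda t: '\n'.join(l for l in t.splitlines() if ' -- ' in l)
render_dot = lambda t: 'graph g {\n%s\n}' % '\n'.join(
    '  "%s" -- "%s";' % tuple(l.split(' -- ', 1)) for l in t.splitlines())

print(run_pipeline('some chat\nA -- B\nB -- C', [extract_edges, render_dot])[-1])
```

A visual client for this view would draw the stages and their connections rather than the text, which is more or less what a Pipes-style interface does.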

The Wave client has to walk a very fine line between presenting a view of the Wave that the target user can understand and work with, and constraining the user’s thinking about what can be done. The amazing thing about Wave as a framework is that these things are not only do-able but often very easy. The challenge is actually thinking laterally enough to ask the question in the first place. The great thing about a public demo is that the challenges you get from the audience make you look at things in different ways.

Allyson Lister blogged the session, there was a FriendFeed discussion, and there should be video available at some point.

6 Replies to “Reflecting on a Wave: The Demo at Science Online London 2009”

  1. In large part, my comments were self-interested, although inspired by the remarks made earlier about the authoring environment not being very good at present. I agree that the present battle is to get across what Wave actually is and the potential it has, but beyond that, democratization of the technology via a more accessible (for mere mortals) interface seems to me to be very important.

  2. I’m still a little puzzled as to why people thought the interface “difficult”. I wouldn’t have thought it looked that different to an email inbox or a document. There are issues with it, but they don’t seem to me to be fundamental to the look and feel or to what is going on. At one level all we were doing was typing and adding more people to a conversation.

    Real power users will need to write Robots, but I think many people will get by with ones built by other people; they’re really just easily hackable plugins in many ways. The idea of building a service that builds Robots for specific purposes is rather appealing though. A lot of them will be very generic tools: recognise a piece of text and replace it with a link, specific terms, an annotation, etc., based on an existing web service.

  3. It strikes me that it might also be instructive to show how one would go about doing something like extracting the co-authorship network using non-Wave tools. It would look something like this:

    1. open document editor and create text
    2. open browser and go to the JANE web site
    3. cut and paste text into submission form
    4. save returned xml to a new file
    5. parse xml with python to create .dot file
    6. run graphviz on .dot file to generate image file
    7. open image file in previewing program to make sure it looked OK
    8. finally import image file into document editor
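
    For the curious, steps 5 and 6 might look something like this in practice (the element names are a guess at the XML layout rather than JANE’s documented schema, and the file names are arbitrary):

    ```python
    import subprocess
    import xml.etree.ElementTree as ET
    from itertools import combinations

    # Step 5: parse the saved response into co-authorship edges.
    edges = set()
    for article in ET.parse('jane_response.xml').iter('article'):
        authors = sorted(a.text for a in article.iter('author'))
        edges.update(combinations(authors, 2))

    with open('coauthors.dot', 'w') as f:
        f.write('graph coauthors {\n')
        for a, b in sorted(edges):
            f.write('  "%s" -- "%s";\n' % (a, b))
        f.write('}\n')

    # Step 6: render with Graphviz (needs the dot binary on the PATH).
    subprocess.call(['dot', '-Tpng', 'coauthors.dot', '-o', 'coauthors.png'])
    ```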

  4. Maybe I am misinformed, but I thought it worked just fine offline. I don’t see what the problem would be.

    My guess is that it will first take off in blog posts. This is a place where people don’t have high interface standards (unlike collaborative editing), where not too many features are needed (the versioning feature is a major problem right now) and where it can easily spread without requiring commitments (unlike changing your IM client or email).

    The interface is a major problem, though. Developers look at this and say, wow, the features are really powerful. Users look at it and can’t see past the lack of polish. You might say that these users are short-sighted, but I think they might be right. Getting the interface perfect requires a *lot* of work, which few developers appreciate. There’s a reason iPhone apps look great, while Blackberry apps look, have always looked, and probably always will look terrible. Really, I would have been happier if the Wave demo had started with a highly polished application for a limited functionality, say blog comments. Then they could have turned to describe all the other cool features.

  5. Sean, yes, you can edit text offline, and I guess you might be able to drop images and media in as well, but I’ve had severe issues with choppy connections and edits getting lost. On the other hand, I’m trying it now with my wireless turned off and it does seem to work much better than it did, so clearly there is a lot of work going on behind the scenes. Obviously a lot of the other online functionality is lost, and I haven’t tested what happens with conflicting edits.

    I agree with what you’re saying about the interface to a certain extent, but I think getting that balance right is hard. As far as I know this is the earliest state Google have ever released anything in, and the reasons for that were clearly to get developers interested and biting. But I agree that the main inroads, particularly for research, may come where people write specialist polished clients for specific tasks: use the full functionality in the background to do specific things, but hide the complexities from the user behind a clean and simple interface.

  6. This has been much more informative, but I appreciate the problems of representing any new technology in an empathetic manner; and like any science, we don’t get it right on the first attempt. I think the suggestion that such a demonstration be put into the context of a particular role/project for which a jobbing scientist might use it, such as writing a paper, is a good one.

    I guess we all take things for granted and are too ready to believe that most things ‘have been done’ in new media (whether we’ve heard about it or not), thus, also as you suggest, it’s a good idea to state what hasn’t been possible previously and how this is remedied by this new technology – much in the same manner that you would sell any grant to a funding body ;-)

    Anyway, I enjoyed the demonstration, and whilst I doubt I’ll be writing any robots any time soon, I’ll certainly be scanning for them when Wave is released.
