Reflecting on a Wave: The Demo at Science Online London 2009

Yesterday, along with Chris Thorpe and Ian Mulvany, I was involved in what I imagine might be the first of a series of demos of Wave as it could apply to scientists and researchers more generally. You can see on Viddler the backup video I made in case we had no network. I've not really done a demo like that live before, so it was a bit difficult to tell from the inside how it was going, but although much of the tweetage was apparently underwhelmed, the direct feedback afterwards was very positive and perceptive.

I think we struggled to get across an idea of what Wave is, which confused a significant proportion of the audience, particularly those who weren't already aware of it or who didn't have a preconceived idea of what it might do for them. My impression was that those in the audience who were technically oriented were excited by what they saw. If I were to do a demo again I would focus more on telling a story about writing a paper, really giving people a context for what is going on. One problem with Wave is that it is easy to end up with a document littered with chat blips, and I think this confused an audience more used to thinking about documents.

The other problem is perhaps that a bunch of things "just working" is underwhelming when people are used to the idea of powerful applications that they do their work in. Developers get the idea that this is all happening and working in a generic environment, not a special purpose-built one, and that is exciting. Users just expect things to work or they're not interested. Especially scientists. And it would be fair to say that the robots we demonstrated, mostly the work of a few hours or a few days, aren't incredibly impressive on the surface. In addition, when it is working at its best, the success of Wave is that it can make things look easy, if not yet polished. Because it looks easy, people then assume it is, so it's not worth getting excited about. The point is not that it is possible to automatically mark up text, pull in data, and then process it. It is that you can do this effectively in your email inbox with unrelated tools that are pretty easy to build, or at least adapt. But we also clearly need some flashier demos for scientists.

Ian pulled off a great coup, in my view, by linking up the output of one Robot to a visualization provided by another. Ian has written a robot called Janey which talks to the Journal/Author Name Estimator service. It can either suggest what journal to send a paper to based on the abstract or suggest other articles of interest. Ian had configured the Robot the night before so that it could also get the co-authorship graph for a set of papers and put it into a new blip in the form of a list of edges (or co-authorships).
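To picture what Janey was putting into that blip: a co-authorship edge list is just every pair of authors who appear on the same paper. Here is a minimal sketch of that idea; the data and function name are my own made-up illustration, not Janey's actual code:

```python
from itertools import combinations

# Hypothetical input: each paper is a list of author names, as might
# come back from a literature service like JANE.
papers = [
    ["A. Smith", "B. Jones", "C. Lee"],
    ["B. Jones", "C. Lee"],
    ["A. Smith", "D. Patel"],
]

def coauthorship_edges(papers):
    """Return each co-authorship as a unique (author, author) edge."""
    edges = set()
    for authors in papers:
        # Every pair of authors on the same paper is one edge.
        for pair in combinations(sorted(authors), 2):
            edges.add(pair)
    return sorted(edges)

for a, b in coauthorship_edges(papers):
    print(f"{a} -- {b}")  # one edge per line, as a blip might hold
```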

The clever bit was that Ian had found another Robot, written by someone entirely different, that visualizes connection graphs. Ian set the blip that Janey was writing to as one the Graph robot was watching, and the automatically pulled data was automatically visualized [see a screencast here]. Two Robots written by different people for different purposes can easily be hooked up together and just work. I'm not even sure whether Ian had had a chance to test it prior to the demo… but because it looked easy, why wouldn't people expect two data processing tools to work together seamlessly? I mean, it should just work.
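For those wondering what a Robot looks like under the hood: in the 2009 Python Robots client library a robot was essentially an event handler deployed on App Engine. The sketch below is from my memory of that (since deprecated) API, so treat the names as approximate, and the "grapher" robot itself is hypothetical. The point is only that a robot watches blips and edits them, which is all the wiring the Janey-to-Graph hookup needed:

```python
from waveapi import events  # 2009-era Google Wave Robots library
from waveapi import robot

def OnBlipSubmitted(properties, context):
    """Runs whenever a blip the robot can see is submitted."""
    blip = context.GetBlipById(properties['blipId'])
    doc = blip.GetDocument()
    # Count the "A -- B" edge lines another robot left in the blip.
    edges = [l for l in doc.GetText().splitlines() if ' -- ' in l]
    doc.AppendText('\nGraph robot saw %d edges.' % len(edges))

if __name__ == '__main__':
    grapher = robot.Robot('grapher', version='1')
    grapher.RegisterHandler(events.BLIP_SUBMITTED, OnBlipSubmitted)
    grapher.Run()
```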

The idea of a Wave as a data processing workflow was implicit in what I have written previously, but Ian's demo, and a later conversation with Alan Cann, really sharpened it up in my mind. Alan was asking about different visual representations of a wave. The current client essentially uses the visual metaphor of an email system. One of the points that came out of the demo for me is that it will probably be necessary to write specific clients that make sense for specific tasks. Alan asked about the idea of a Yahoo Pipes type of interface. This suggests a different way of thinking about Wave: instead of a set of text or media elements, it becomes a way to wire up Robots, automated connections to web services. Essentially, with a set of Robots and an appropriate visual client, you could build a visual programming engine, a web service interface, or indeed a visual workflow editing environment.
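To make the Pipes analogy concrete, here is a toy model of that mental shift; this has nothing to do with the actual Wave APIs, it just treats a wave as a value that a chain of robot-like functions successively transforms:

```python
# Toy model: a "wave" is just a list of text blips, and each "robot"
# is a function that reads the blips and may append a new one.
def fetch_data(blips):
    blips.append("data: 1,2,3")  # stand-in for a web service call
    return blips

def summarise(blips):
    # Find the most recent data blip and append a summary blip.
    data = [b for b in blips if b.startswith("data:")][-1]
    numbers = [int(x) for x in data[len("data: "):].split(",")]
    blips.append("mean: %.1f" % (sum(numbers) / len(numbers)))
    return blips

def pipeline(blips, robots):
    """Wire robots together, Pipes-style: each watches the wave
    in turn and adds its contribution."""
    for bot in robots:
        blips = bot(blips)
    return blips

print(pipeline(["Draft paper text"], [fetch_data, summarise]))
```

A visual client of the kind Alan described would simply let you draw that `robots` list as boxes and wires.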

The Wave client has to walk a very fine line between presenting a view of the Wave that the target user can understand and work with, and the risk of constraining the user's thinking about what can be done. The amazing thing about Wave as a framework is that these things are not only doable but often very easy. The challenge is actually thinking laterally enough to even ask the question in the first place. The great thing about a public demo is that the challenges you get from the audience make you look at things in different ways.

Allyson Lister blogged the session, there was a FriendFeed discussion, and there should be video available at some point.

Watching the future…student demos at University of Toronto

On Wednesday morning I had the distinct pleasure of seeing a group of students in the Computer Science department at the University of Toronto giving demos of tools and software that they have been developing over the past few months. The demos themselves were of a consistently high standard throughout, in many ways more interesting and more real than some of the demos I saw the previous night at the "professional" DemoCamp 21. Some, and I emphasise only some, of the demos were less slick and polished, but in every case the students had a firm grasp of what they had done and why, and were ready to answer criticisms or explain design choices succinctly and credibly. The interfaces and presentation of the software were consistently not just good but beautiful to look at, and the projects generated real running code that solved real and immediate problems. Steve Easterbrook has given a rundown of all the demos on his blog, but here I wanted to pick out three that really spoke to problems I have experienced myself.

I mentioned Brent Mombourquette's work on Breadcrumbs yesterday (details of the development of all of these demos are available on the students' linked blogs). John Pipitone demonstrated this Firefox extension, which tracks your browsing history and then presents it as a graph. This appealed to me immensely for a wide range of reasons, first among them that I am very interested in trying to capture, visualise, and understand the relationships between online digital objects. The graphs displayed by Breadcrumbs immediately reminded me of visualisations of thought processes, with branches, starting points, and the return to central nodes all clearly visible. In the limited time for questions, the applications in improving and enabling search, in recording and sharing collections of information, and even in identifying when thinking has got into a rut and needs a swift kick were all covered. The graphs can be published from the browser, and the possibilities that sharing and analysing them present are still sparking new ideas in my head several days later. In common with the rest of the demos, my immediate response was, "I want to play with that now!"
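The underlying structure is nothing exotic, which is part of the appeal. A toy sketch of browsing history as a directed graph of page-to-page transitions (my own illustration, not Breadcrumbs' code):

```python
from collections import defaultdict

# Made-up history: each entry is (page navigated from, page arrived at).
history = [
    ("search", "paper_A"),
    ("paper_A", "paper_B"),
    ("paper_A", "dataset"),
    ("dataset", "paper_A"),  # returning to a central node
]

# Build a directed graph keyed by source page.
graph = defaultdict(set)
for src, dst in history:
    graph[src].add(dst)

for src, dsts in sorted(graph.items()):
    print(src, "->", ", ".join(sorted(dsts)))
```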

The second demo that really caught my attention was a MediaWiki extension called MyeLink, written by Maria Yancheva, that aims to find similar pages on a wiki. It is particularly aimed at researchers keeping a record of their work who want to understand how one page, perhaps describing an experiment that didn't work, differs from a similar page describing an experiment that did. The extension identifies similar pages in the wiki based on either their structure (primarily headings, I think) or the text used. Maria demonstrated comparing pages as well as faceted browsing of the structure of the pages, inline with the extension. The potential here for helping people manage their existing materials is huge. Perhaps more exciting, particularly in the context of yesterday's post about writing up stories, is the potential to assist people with preparing summaries of their work. It is possible to imagine the extension first recognising that you are writing a summary based on its structure, and then recognising that in previous summaries you've pulled text from a different specific class of pages, all the while helping you to maintain a consistent and clear structure.
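I don't know what similarity measure MyeLink actually uses, but the structural comparison Maria described can be sketched very simply: pull the headings out of each page's wiki markup and compare the resulting sets. A toy illustration under that assumption:

```python
import re

def headings(wikitext):
    """Extract MediaWiki-style headings, e.g. '== Method =='."""
    return {m.strip() for m in
            re.findall(r"^=+\s*(.+?)\s*=+\s*$", wikitext, re.MULTILINE)}

def structural_similarity(page_a, page_b):
    """Jaccard similarity of the two pages' heading sets."""
    a, b = headings(page_a), headings(page_b)
    return len(a & b) / len(a | b) if a | b else 0.0

worked = "== Aim ==\n...\n== Method ==\n...\n== Results ==\n..."
failed = "== Aim ==\n...\n== Method ==\n...\n== Problems ==\n..."
print(structural_similarity(worked, failed))  # 0.5: shares 2 of 4 headings
```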

The last demo I want to mention, from Samar Sabie, was a second MediaWiki extension called VizGraph. Anyone who has used MediaWiki or a similar framework for recording research knows the problem. Generating tables, let alone graphs, sucks big time. You have your data in a CSV or Excel file and you need to transcribe it, by hand, into a fairly incomprehensible and, more importantly, badly fault-intolerant syntax to generate any sort of sensible visualisation. What you want, and what VizGraph supplies, is a simple wizard that lets you upload your data file (CSV or Excel, naturally), steps you through a few simple questions familiar from the Excel chart wizards, and then drops the result back into the page as structured text data rendered via the Google Chart API. Once it is there you can, if you wish, edit the structured markup to tweak the graph.
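The Google Chart API of the time was just a URL you constructed, so the wizard's core job reduces to translating a CSV column into URL parameters. A rough sketch of that step, under my own assumptions about the data and with made-up numbers (not VizGraph's code):

```python
import csv
import io

# Hypothetical CSV as it might come out of a spreadsheet.
raw = "time,signal\n0,12\n1,45\n2,30\n3,78\n"

rows = list(csv.DictReader(io.StringIO(raw)))
values = [float(r["signal"]) for r in rows]

# The 2009-era Google Chart API took the chart type (cht), pixel
# size (chs), and text-encoded data scaled to 0-100 (chd=t:...).
top = max(values)
scaled = ",".join("%.0f" % (v * 100 / top) for v in values)
url = ("http://chart.apis.google.com/chart"
       "?cht=lc"            # line chart
       "&chs=400x200"       # image size
       "&chd=t:" + scaled)  # the data series
print(url)
```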

Again, this was a great example of just solving the problem for the average user, fitting within their existing workflow and making it happen. But that wasn't the best bit. The best bit was almost a throwaway comment as we were taken through the wizard: "and check this box if you want to enable people to download the data directly from a link on the chart…". I was sitting next to Jon Udell and we both spontaneously gave a big thumbs up and just grinned at each other. It was a wonderful example of "just getting it": understanding the flow, the need to enable data to be passed from place to place, while at the same time making the user experience comfortable and seamless.

I am sceptical about the rise of a mass "Google Generation" of tech-savvy and sophisticated users of web-based tools and computation. But what Wednesday's demos showed me in no uncertain terms is that when you provide a smart group of people, who grew up with the assumption that the web functions properly, with the tools and expertise to effectively manipulate and compute on the web, then amazing things happen. That these students make assumptions about how things should work, and most importantly that they should work, that editing and sharing should be enabled by default, and that a good user experience is a basic expectation, was brought home by a conversation we had later in the day at the Science 2.0 symposium.

The question was "what does Science 2.0 mean anyway?", a question that is usually answered by reference to Web 2.0 and collaborative web-based tools. Steve Easterbrook's opening gambit in response was "well, you know what Web 2.0 is, don't you?" and this was met with slightly glazed stares. We realised that, at least to a certain extent, for these students there is no Web 2.0. It's just the way that the web, and indeed the rest of the world, works. Give people with these assumptions the tools to make things and amazing stuff happens. Arguably, as Jon Udell suggested later in the day, we are failing a generation by not building this into a general education. On the other hand, I think it pretty clear that these students at least are going to have a big advantage in making their way in the world of the future.

Apparently screencasts for the demoed tools will be available over the next few weeks and I will try and post links here as they come up. Many thanks to Greg Wilson for inviting me to Toronto and giving me the opportunity to be at this session and the others this week.