Practical communications management in the laboratory – getting semantics from context
Rule number one: Never give your students your mobile number. They have a habit of ringing it.
Our laboratory is about a ten-minute walk from my office. Some of the other staff have offices five minutes away in the other direction, and soon we will have another lab which is another ten-minute walk away in a third direction. I am also offsite a lot of the time. Somehow we need to keep in contact between the labs and between the people. This is a question of passing queries around but also of managing the way these queries interrupt what I and others are doing.
Having broken rule #1, I am now trying to manage my attention as my phone keeps going off with updates, questions, and details. Much of it arrives at inconvenient times, and much of it is things that other people could answer. So what is the best way to spread the load and manage the inbox?
What I am going to propose is to set up a lab account on Twitter. If we get everyone to follow this account and set updates to be sent via SMS to everyone’s phones, we have a nice simple notification system. We just set up a Twitter client on each computer in the lab, log each one into that account, agree a partly standardised format for Tweets (primarily including the person’s name), and go from there. This will enable people to ask questions (and anyone to answer them), provide important updates or notices (equipment broken, or working again), and keep people updated with what is happening. It also means that we will have a log of everyone’s queries, answers, and notices that we can go back to and archive.
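To make the idea concrete, here is a minimal sketch of what a "partly standardised format" helper might look like. The exact fields (a bracketed name prefix and an optional one-letter tag for questions vs. notices) are my assumptions for illustration, not a format the lab has actually agreed:

```python
def format_lab_tweet(name: str, message: str, tag: str = "") -> str:
    """Prefix a message with the sender's name and an optional tag
    (e.g. 'Q' for question, 'N' for notice), truncated to Twitter's
    140-character limit. The tag scheme is hypothetical."""
    prefix = f"[{name}]" + (f" {tag}:" if tag else "")
    tweet = f"{prefix} {message}"
    return tweet[:140]

print(format_lab_tweet("Cameron", "don't use the autoclave!", tag="N"))
# [Cameron] N: don't use the autoclave!
```

Because the name is always in a predictable position, the archived stream stays searchable by person even though everything goes through one shared account.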
So a fair question at this point would be: why don’t we do this through the LaBLog? Surely it would be better to keep all these queries in one place? Well, one answer is that we are still struggling to deploy the LaBLog at RAL, but that’s a story for a separate post. There is also a fundamental difference in the way we interact with Twitter/SMS and with notifications from the LaBLog via RSS. Notification of new material on the LaBLog via RSS is slow, but more importantly it is fundamentally a ‘pull’ interaction: I choose when to check it. Twitter, and specifically its SMS notification, is a ‘push’ interaction, which is better when you need people to notice, such as when you’re asking an urgent question or posting an urgent notice (e.g. don’t use the autoclave!). However, both allow me to see the content before deciding whether to answer, a crucial difference from a mobile phone call, and they give me options over which medium to respond with. They return control over my time to me rather than to my phone.
The point is that these different streams have different information content, different levels of urgency, and different currency (how long they are important for). We need different types of action and different functionality for each. Twitter provides forwarding to our mobile devices, (almost) regardless of where in the world we are currently located, providing a mechanism for direct delivery. One of the fundamental problems with all streaming protocols and applications is that they have no internal notion of priority, urgency, or currency. We are rapidly approaching the point where simply skimming all of our incoming streams (currently often in many different places) is not an option. Aggregating things into one place where we can triage them will help, but we need some mechanism for encoding urgency, importance, and currency. The easiest way for us to achieve this at the moment is to use multiple services.
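The triage step above can be sketched very simply: since the streams themselves carry no priority, an aggregator has to assign one, and the obvious source is the service a message arrived on. The service names and their relative urgencies here are assumptions for illustration:

```python
import heapq

# Lower number = more urgent. Which services rank where is an
# assumption, not something the streams themselves tell us.
SERVICE_PRIORITY = {"sms": 0, "twitter": 1, "rss": 2}

def triage(messages):
    """Order aggregated (service, text) messages by the urgency
    implied by their source service, preserving arrival order
    within a service."""
    heap = [(SERVICE_PRIORITY[svc], i, text)
            for i, (svc, text) in enumerate(messages)]
    heapq.heapify(heap)
    while heap:
        _, _, text = heapq.heappop(heap)
        yield text
```

For example, `list(triage([("rss", "new post"), ("sms", "autoclave broken"), ("twitter", "gel done")]))` surfaces the SMS first, the tweet second, and leaves the RSS item for whenever I get to it.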
One approach to this problem would be a single portal/application that handled all these streams and understood how to deal with them. My guess is that Workstreamr is aiming to fit into this niche as an enterprise solution to handling all workstreams, from the level of corporate governance and strategic project management through to the office watercooler conversation. There is a challenging problem in implementing this. If all content is coming into one portal, and can be sent (from any appropriate device) through the same portal, how can the system know what to do with it? Does it pop up as an urgent message demanding the boss’s attention or does it just go into a file that can be searched at a later date? This requires that the system either infer, or have users provide, an understanding of what should be done with a specific message. Each message therefore requires rich semantic content indicating its importance, possibly its delivery mechanism, and whether this differs for different recipients. The alternative approach is to do exactly what I plan to do – use multiple services so that the semantic information about what should be done with each post is encoded in its context. It’s a bit crude, but the level of urgency or importance is encoded in the choice of messaging service.
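The contrast between the two approaches can be made explicit in code. Instead of every message carrying its own semantic markup, the handling rules live with the channel; a message inherits them simply by virtue of where it was posted. The channel names and the urgency/delivery/currency fields are illustrative assumptions:

```python
# "Semantics from context": handling metadata is a property of the
# channel, not of the individual message. All values are examples.
CHANNEL_SEMANTICS = {
    "twitter-sms": {"urgency": "high", "delivery": "push", "currency": "hours"},
    "lablog":      {"urgency": "low",  "delivery": "pull", "currency": "months"},
}

def handle(channel: str, message: str) -> dict:
    """Attach handling metadata to a message purely from the channel
    it arrived on -- no per-message semantic markup required."""
    return {"message": message, **CHANNEL_SEMANTICS[channel]}
```

So `handle("twitter-sms", "don't use the autoclave!")` comes out marked push/high-urgency, while the same text posted to the LaBLog would be pull/low-urgency, with no extra effort from the sender.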
This may seem like rather a lot of weight to give to the choice between tweeting and putting up a blog post, but this is part of a much larger emerging theme. When I wrote about data repositories I mentioned the implicit semantics that come from using repositories such as slideshare and Flickr (or the PDB) that specialise in a specific kind of content. We talk a lot about semantic publishing and complain that people ‘don’t want to put in the metadata’, but if we recorded data at source, when it is produced, then a lot of the metadata would be built in. This is fundamentally the publish@source concept that I was introduced to by Jeremy Frey’s group at Southampton University. If someone logs into an instrument, we know who generated the data file and when, and we know what that datafile is about and looks like. The datafile itself will contain the date and instrument settings. If the sample list refers back to URIs in a notebook then we have all the information on the samples and their preparation. If we know when and where the datafile was recorded, and we are monitoring room conditions, then we have all of that metadata built in as well.
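A publish@source capture record might look something like the sketch below, assembling everything that is known for free at the moment of capture. The field names, the sample-URI scheme, and the idea of passing room conditions in directly are all hypothetical, for illustration only:

```python
from datetime import datetime, timezone

def capture_record(user, instrument, settings, sample_uri, room_conditions):
    """Wrap a data capture with the metadata that is available for
    free at source. All field names are illustrative assumptions."""
    return {
        "user": user,               # known from the instrument login
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "instrument": instrument,   # identifies the kind of datafile
        "settings": settings,       # the datafile carries these anyway
        "sample": sample_uri,       # link back to the notebook entry
        "room": room_conditions,    # from environmental monitoring
    }
```

The point is that none of this requires the researcher to fill in a metadata form after the fact; every field is captured as a side effect of doing the experiment.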
The missing pieces are the tools that pull all this together and a more sophisticated understanding of how these streams can be combined and processed. But at the core, if we capture context, capture user focus, and capture the connections to previous work, then most of the hard work will be done. This will only become more true as we start to persuade instrument manufacturers to output data in standard formats. If we try to put the semantics back in after the fact, after we’ve lost those connections, then we are just creating more work for ourselves. If a suite of tools can be put together to capture and collate it at source, then we can make our lives easier – and that in turn might actually persuade people to adopt these tools.
The key question of course…which Twitter client should I use? :)