Finding the time…

[Image: “My melting time” by Aníbal Pées Labory, via Flickr]

Long-term readers of this blog will know that I occasionally write an incomprehensible post, which no-one understands, about the nature of time on the web. This is my latest attempt to sort out my thinking on the issue. I don’t hold out much hope, but it seemed appropriate for the New Year…

2010 was the year that real time came to the mainstream web. From the new Twitter interface to live updates and the flash crash, any number of developments and stories focussed on how everything is getting faster, better, more responsive. All of this is good, but I don’t think it’s the end game. Real time is fun, but it is merely a technical achievement. It is just faster. It demonstrates our technical ability to overcome observable latency, but beyond that not a lot.

Real time also seems to have narrowed the diversity of our communities, paradoxically by speeding up the conversation. As conversations have moved from relatively slow media (such as blog comments), through non-real-time services, to places like Twitter, I have noticed that the geographical spread of my conversations has narrowed. I am more limited because the timeframe of the conversation restricts it to people near enough to my own timezone. As I move to different timezones, the people, the subjects, and the tone of the conversation change. I become trapped by the native timeframe of the conversation, which on Twitter is just slightly slower than a spoken conversation.

A different perspective. Someone last year (I’m embarrassed to say I can’t remember who) talked to me about the idea of using the live Twitter stream generated during a talk to subtitle the video of that talk (thanks to @ambrouk for the link) and so enable searching. Essentially this means using the native timestamps of both Twitter and the video to synchronise a textual record of what the speaker was talking about. Now this is interesting from a search perspective, but I found it even more interesting from a conversational perspective. Imagine that you are watching a video of a talk and you see an embedded tweeted comment that you want to follow up on. Well, you can just reply, but the original commenter won’t have any context for your pithy remark. But what if it were possible to use the video to recreate the context? The context is at least partly shared, and if the original commenter was viewing the talk remotely then it is almost completely shared, so can we (partially) recreate enough of the setting, efficiently enough, to enable that conversation to continue?
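
Mechanically, the synchronisation is little more than subtracting timestamps. Here is a minimal sketch of the idea (my own illustration with invented data, not any existing service’s code): given the talk’s start time, each tweet’s creation time becomes an offset into the video, and the tweets can then be replayed as annotations at roughly the moments they were written.

```python
from datetime import datetime, timedelta

# Toy sketch: align hashtag tweets with a recorded talk by timestamp alone.
# The start time and tweets below are invented for illustration.

def align_tweets(video_start: datetime, tweets):
    """Convert each tweet's wall-clock timestamp into an offset into the video,
    so it can be shown as a subtitle/annotation at the moment it was written."""
    aligned = []
    for created_at, author, text in tweets:
        offset = created_at - video_start
        if offset >= timedelta(0):          # ignore tweets sent before the talk began
            aligned.append((offset, author, text))
    return sorted(aligned)                  # play back in video order

video_start = datetime(2010, 6, 15, 14, 0, 0)
tweets = [
    (datetime(2010, 6, 15, 14, 12, 30), "@someone", "Interesting point about context"),
    (datetime(2010, 6, 15, 14, 3, 5), "@other", "Speaker is starting with real time"),
]
for offset, author, text in align_tweets(video_start, tweets):
    print(f"{offset}  {author}: {text}")
```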

This is now a timeshifted conversation. Time shifting, or more precisely controlling the intrinsic timescale of a conversation, is for me the big challenge. I was partly prompted to write this post by the natural use of “timeshifting” in a blog post by Andrew Walkingshaw in reference to using Instapaper. Instapaper lets you delay a very simple “conversation” into a timeframe under your control, but it is very crude. The context is only recreated insofar as the content that you selected is saved for a later time. To really enable conversations to be timeshifted requires much more sophisticated recreation of context, as well as very sensitive notification. When is the right moment to re-engage?

One of the things I love about Friendfeed (and interestingly one of the things Robert Scoble hates, but that’s another blog post) is the way that, as a conversation proceeds, the whole thread is promoted back to the top of the stream whenever a new comment or interaction comes in. This not only provides notification that the conversation is continuing, but also, critically, recreates the context of the ongoing discussion. I think this is part of what originally tipped me into thinking about time and context.
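
The mechanics of that promotion are simple enough to sketch. This toy version (the data model is invented, and has nothing to do with Friendfeed’s actual implementation) just orders threads by their most recent activity, so a new comment drags the whole thread, earlier comments and all, back to the top:

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    title: str
    created: float                          # unix timestamps, for simplicity
    comments: list = field(default_factory=list)

    @property
    def last_activity(self) -> float:
        return max([self.created] + self.comments)

def stream(threads):
    # A new comment anywhere in a thread bumps the whole thread to the top,
    # carrying its earlier comments (the context) along with it.
    return sorted(threads, key=lambda t: t.last_activity, reverse=True)

threads = [
    Thread("old post", created=100.0, comments=[110.0, 500.0]),
    Thread("newer post", created=400.0),
]
print([t.title for t in stream(threads)])   # ['old post', 'newer post']
```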

The point is that technically we need to regain control of our time. Currently the value of our conversations is diminished by our inability to control their intrinsic timescale. For people like Scoble who actually live in the continuous flow, this is fine. But it is not a feasible mode of interaction for many of us. It isn’t a productive mode of interaction for many of us, much of the time, and we are losing potential value that is in the stream. We need mechanisms that re-surface the conversation at the right time and on the right timescale; we need tools that enable us to timeshift conversations, both with people and with technology; but above all we need effective and efficient ways to recover the context in which those conversations are taking place.

If these problems can be solved then we can move away from the current situation, where social media tools are built, used, and critiqued largely by the people who can spend the most time interacting with them. We don’t get a large proportion of the potential value out of these tools because they don’t support occasional and timeshifted modes of interaction, which in turn means that most people don’t get much value out of them, which in turn means that most people don’t use them. Facebook is so dominant precisely because the most common conversation is effectively saying “hello, I’m still here!”, something that requires very little context to make sense. That lack of a need for context makes it possible for everyone from the occasional user to the addict to get value from the service. It doesn’t matter how long it takes for someone to reply “hello, I’m still here as well”; the lack of required context means it still makes sense. Unless you’ve forgotten entirely who the person is…

To extract the larger potential value from social media, particularly in professional settings, we need to make this work on a much more sophisticated scale: notifications that come when they should, based on content and importance, and capture and recreation of context that make it possible to continue conversations over hours, days, years or even decades. If this can be made to work, then a much wider range of people will gain real value from their interactions, and if a larger proportion of people are interacting there is more value that can be realised. The real time web is an important step along the road in this direction, but it is really only first base. Time to move on.
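
I don’t know exactly what those mechanisms will look like, but as a crude strawman (the scoring and field names here are entirely made up, not a description of any existing tool) a resurfacing tool might weigh some notion of importance against how long a conversation has lain dormant, and only bring back the few that clear a threshold:

```python
import time

def resurface(conversations, now=None, threshold=1.0, limit=3):
    """Pick the dormant conversations worth bringing back right now.

    conversations: iterable of dicts with 'topic', 'importance' (0-1)
    and 'last_activity' (unix time). Purely illustrative scoring.
    """
    now = now or time.time()
    scored = []
    for c in conversations:
        hours_dormant = (now - c["last_activity"]) / 3600
        score = c["importance"] * hours_dormant    # importance weighted by staleness
        if score >= threshold:
            scored.append((score, c["topic"]))
    return [topic for _, topic in sorted(scored, reverse=True)[:limit]]
```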


7 Replies to “Finding the time…”

  1. Thanks for the thought-provoking post.
    I’m not at all convinced that “real-time” or synchronous systems are actually particularly beneficial. They can certainly be fun, and there are times when being able to quickly share ideas and get feedback is useful, but then any system which provides notifications can achieve the same effect. Semi-synchronous systems fulfill the same role, but without disadvantaging people who are less adept at keyboard use, or who speak a different language, or who have a life (!)

    But there is a deeper point which your post also hints at, I think. The increase in the speed of communications can alter the participants’ ability to engage with a broad set of resources. It can also mean that influential “posts” can quickly attract a “following”, even if possibly based on an underlying miscomprehension. Communication is a feedback system, and such systems can become less stable when the time delays are decreased, specifically when the system involves positive feedback (a rough numerical sketch of this is at the end of this comment). This is the sort of behaviour seen when an idea/video/meme spreads virally, and can also be seen when a celebrity or charismatic leader gets ‘the word’ out to a lot of people who may not be as critical as perhaps they might be. On the flip side, in those cases where the participants in the conversation query or critique a post, the extra speed is useful in that it is a negative feedback cycle – part of a system which is naturally self-limiting.

    Politics, journalism and product hype tend to ‘like’ positive feedback, whereas science and rational debate probably prefer, and use, negative feedback. Both can thrive in a real-time environment, but whether it is healthy for society for them to do so may be another matter.

    The negative feedback side of things definitely requires accurate contextual information, as you observe. In terms of tools, we may be better served with technologies like Wave or Belugapod to support the passing of context alongside the latest comments (possibly with some adaptation). It is probably also useful to have a level of delay, from the perspective of allowing ideas to develop before being released into a potentially harsh environment. I suspect that this has been an advantage of the tendency to use jargon – although it diminishes the chances of the majority of the population being able to quickly grasp what is being said, it means that the local community (of ‘experts’) can bounce ideas around and allow them to develop before they are mauled by others. Community size is also important in this; very small communities will tend to develop positive feedback loops (or they won’t tend to work together), and very large communities may have negative feedback loops predominating (natural conservatism, through a diverse range of personal experiences, leading to assumptions about whether something will work). Medium-sized groupings allow ideas to develop whilst still being open to scrutiny.

    Anyway, just some thoughts :-)
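
    A back-of-the-envelope sketch of the positive-feedback point above (the gain and loop times are invented, purely to show the shape of the effect): with pure positive feedback and no damping, shrinking the loop delay makes the same per-cycle gain compound enormously faster.

    ```python
    def reach_after(hours, gain, cycle_hours, seed=1.0):
        """Audience reached after `hours` if, every `cycle_hours`, the current
        audience recruits `gain` times itself again (pure positive feedback)."""
        cycles = hours / cycle_hours
        return seed * (1.0 + gain) ** cycles

    # Same gain, same elapsed time; only the loop delay changes.
    for cycle in (24.0, 1.0, 0.25):   # roughly: blog comments, Twitter, near real time
        print(f"loop of {cycle:>5} h -> reach after one day: {reach_after(24, 0.5, cycle):,.0f}")
    ```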

  2. Martin (Hawksey) originally implemented an ‘anytime comment’ utility with uTitle [http://www.rsc-ne-scotland.org.uk/mashe/2010/06/convergence-youtube-meets-twitter-in-timeline-commenting-of-youtube-videos-using-twitter-utitle/] – the service can be found here: http://www.rsc-ne-scotland.org.uk/mashe/utitle/connect.php

    One of the issues we found with adding tweets to a video was that it might take several seconds to type a tweet before sending it, which would mean that a tweeted annotation would be dislocated from the part of the video that prompted the tweet. uTitle partially addresses this issue by optionally generating a timestamp corresponding to the time a user starts typing, so tweets can be more accurately aligned on the timeline with the point of the video that prompted the user to make an annotation (sketched below).

    PS it’s worth checking Martin’s blogs for more recent posts, including this description of how Martin’s apps have been used to augment videos from several recent conferences: http://www.rsc-ne-scotland.org.uk/mashe/2010/12/making-ripples-in-a-big-pond-optimising-videos-with-an-ititle-twitter-track/
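
    A rough sketch of that offset fix (my own illustration, not uTitle’s code): capture a timestamp at the first keystroke and use that, rather than the send time, to place the annotation on the video timeline.

    ```python
    from datetime import datetime
    from typing import Optional

    class Annotation:
        """Toy model of a tweet-style annotation on a recorded talk."""

        def __init__(self, video_start: datetime):
            self.video_start = video_start
            self.started_typing: Optional[datetime] = None

        def on_first_keystroke(self) -> None:
            # Capture the moment the viewer reacts, before they finish composing.
            if self.started_typing is None:
                self.started_typing = datetime.now()

        def on_send(self, text: str):
            sent = datetime.now()
            anchor = self.started_typing or sent       # fall back to the send time
            offset = (anchor - self.video_start).total_seconds()
            return offset, text                        # seconds into the video, message
    ```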

  3. Thank you Cameron et al.

    I’ve just read about ostatus.org, which status.net implements.

    For a few months, status.net-based services like identi.ca have provided reasonable conversation context.

    A recent change to Twitter also allowed users to get some context, but I don’t find it easy to see entire conversations. Perhaps there is another service that does that for Twitter…

  4. Hadn’t come across OStatus before but it looks interesting. Will need to dig a little further. The point about conversations on Twitter is an interesting one and there are certainly disparate views. I think the point is that different people need different views. Twitter as a transport protocol (rather than a user service) doesn’t natively support the conversation. Or at least it doesn’t unless people use the @reply button in a compliant client. This is the most irritating thing for me, not being able to track conversations after the fact. Interestingly, where I link to Scoble’s answer on Quora above, it’s the one thing he absolutely thinks is the wrong thing to do. I think this is driven by the way he uses social media, and he doesn’t have the perspective that some of the rest of us have, but his point is worth taking seriously.

  5. I’ve been thinking about these ideas too, albeit in a slightly different way. Besides context, which other commenters have already covered, one missing element from FriendFeed, Twitter and Facebook is finding related information (similar posts, subjects, etc.), although there are some tools like tag clouds that do this in a rudimentary way. I think the next generation of these tools will not only control the time element of the conversation but have more “data mining” and subject-mining tools to help discovery of information.

  6. Elizabeth, yes the closest we seem to have to this at the moment is either search, or the related items in Friendfeed (which is only triggered by a common link), or shared tags and links in bookmarking services. One of the things I’ve always liked about the PubMed web site was the “related items” button, which I found to be a good way to be more confident that I’d caught everything relevant.

  7. “… is the way that as a conversation proceeds a whole thread is promoted back to the top of the stream as a new comment or interaction comes in.”

    This Quora thing kinda does that, but has the bad aspect that it keeps the earlier notification, and that it only shows the last addition, thus eliminating the context… but thanks for bringing it up! It’s indeed a good feature by which to judge whether to like a service or not!

    One thing any service should also have is a best-of-the-day summary… but I think that falls under your not-everyone-can-follow-the-live-flow paragraph…
