“Real Time”: The next big thing or a pointer to a much more interesting problem?

There has been a lot written and said recently about the “real time” web, most recently in an interview with Paul Buchheit on ReadWriteWeb. The premise is that if items and conversations are carried on in “real time” then they are more efficient and more engaging. The counter argument has been that they become more trivial: by dropping the barrier to involvement to near zero, the internal editorial process that forces each user to think a little about what they are saying is lost, generating a stream of drivel. I have to admit upfront that I really don’t get the excitement. It isn’t clear to me that the difference between a five- or ten-second refresh rate and a 30-second one is significant.

In one sense I am all for getting a more complete record onto the web, at least if there is some probability of it being archived. After all, this is what we are trying to do with the laboratory recording effort: create as complete a record on the web as possible. But at some point there is always going to be an editorial process. In a blog it takes some effort to write a post and publish it, creating a barrier which imposes some editorial filter. Even on Twitter the 140-character limit forces people to be succinct and often means a pithy statement gets refined before hitting return. In an IM or chat window you will think before hitting return (hopefully!). Would true “real time” mean watching as someone typed, or would it have to be a full brain dump as it happened? I’m not sure I want either of these; if I want real time conversation I will pick up the phone.

But while everyone is focussed on “real time” I think it is starting to reveal a more interesting problem, one I’ve been thinking about for quite a while but have been unable to get a grip on. All of these services have different intrinsic timeframes. One of the things I dislike about the new FriendFeed interface is the “real time” nature of it. What I liked previously was that it had a slower intrinsic time than, say, Twitter or instant messaging, but a faster intrinsic timescale than a blog or email. On Twitter/IM conversations are fast, seconds to minutes, occasionally hours. On FriendFeed they tend to run from minutes to hours, with some continuing on for days, all threaded and all kept together. Conversations in blog comments run over hours to days, email over days, newspapers over weeks, and the academic literature over months and years.

Different people are comfortable interacting with streams running at these different rates. Twitter is too much for some, as is FriendFeed, or indeed any online content at all. Many don’t have time to check blog comments, but are perhaps happy to read the posts once a day. But these people probably appreciate that the higher-rate data is there. Maybe they come across an interesting blog post referring to a comment and want to check the comment; maybe the comment refers to a conversation on Twitter and they can search to find that. Maybe they find a newspaper article that leads to a wiki page and on to a pithy quote from an IM service. This type of digging is enabled by good linking practice. And it is enabled by a type of social filtering where the user views the stream at a speed compatible with their own needs.

The tools and social structures are now well developed for this kind of social filtering, where a user outsources that function to other people, whether they are on FriendFeed, or are bloggers, or traditional dead-tree journalists. What I am less sure about is the tooling for controlling the rate of the stream that I am taking in. Deepak wrote an interesting post recently on social network filtering, with the premise that you need to build a network that you trust to bring important material to your attention. My response is that there is a fundamental problem: at the moment you can’t independently control both the spread of the net you set and the speed at which information comes in. If you want to cover a lot of areas you need to follow a lot of people, and that means the stream is faster.
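To make that coupling concrete, here is a toy model (entirely my own construction, not any real service’s API): if the only lever a client gives you is who you follow, then the rate of your stream is simply the sum of each source’s posting rate, so widening coverage necessarily speeds the stream up.

```python
# Toy model of spread vs. rate: the names and numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    topic: str
    items_per_day: float

def incoming_rate(following: list[Source]) -> float:
    """Total items per day arriving in the stream."""
    return sum(s.items_per_day for s in following)

def coverage(following: list[Source]) -> int:
    """Number of distinct topics the stream covers."""
    return len({s.topic for s in following})

sources = [
    Source("alice", "chemistry", 40.0),
    Source("bob", "open-data", 25.0),
    Source("carol", "publishing", 60.0),
]

# Widening the net (more topics) inevitably raises the rate, because there is
# no independent per-source throttle in this model.
print(coverage(sources), incoming_rate(sources))  # 3 topics, 125 items/day
```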

Fundamentally, as the conversation has got faster and faster, no-one seems to be developing tools that enable us to slow it down. Filtering tools such as those built into Twitter clients help. One of the things I do like about the new FriendFeed interface is the search facility that allows you to set filters displaying only those items with a certain number of “likes” or comments. But what I haven’t seen are tools that are really focussed on controlling the rate of a stream, that work to help you optimize your network to provide both spread and rate. And I haven’t seen much thought go into tools or social practices that enable you to bump an item from one stream to a slower stream to come back to later. Delicious is the obvious tool here, bookmarking objects for later attention, but how many people actually go back to their bookmarks on a regular basis and check over them?
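As a rough sketch of what rate-focused tooling might look like (the item shape, thresholds and function name below are my own assumptions, not FriendFeed’s actual filters), one could combine an engagement filter with a hard cap on items per hour:

```python
# Keep only items above an engagement threshold, then thin the survivors down
# to a target number per hour. Item format is assumed, not any real API.
import heapq

def filter_and_throttle(items, min_likes=2, min_comments=1, max_per_hour=10):
    """items: dicts with 'title', 'likes' and 'comments' keys (an assumed shape)."""
    engaging = [i for i in items
                if i["likes"] >= min_likes or i["comments"] >= min_comments]
    # If the filtered stream is still too fast, keep only the most-discussed
    # items from this hour's batch.
    return heapq.nlargest(max_per_hour, engaging,
                          key=lambda i: i["likes"] + i["comments"])

hour_of_items = [
    {"title": "new dataset", "likes": 5, "comments": 3},
    {"title": "lunch", "likes": 0, "comments": 0},
    {"title": "preprint out", "likes": 2, "comments": 0},
]
print(filter_and_throttle(hour_of_items, max_per_hour=2))
# -> the two most-discussed items; "lunch" never makes it through
```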

David Allen probably best described the concept of a “Tickler File”, a file where you place items into a date-marked slot based on when you think you need to be reminded about them. The way some people regularly review their recent bookmarks and then blog the most interesting ones is an example of a process that achieves the same thing. I think this is probably a good model to think about: a tool, or set of practices, that parks items for a specified, item- or class-specific, period of time and then pulls them back up and puts them in front of you. Or perhaps does it in a context-dependent fashion, or both, picking the right moment in a specific time period to have an item pop up. Ideally it will also put them, or allow you to put them, back in front of your network for further consideration. We still want just the one inbox for everything. It is a question of having control over the intrinsic timeframes of the different streams coming into it, including streams that we set up for ourselves.
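A minimal sketch of what a tickler file for stream items might look like (purely illustrative; the class, item strings and intervals below are my own invention, not an existing tool):

```python
# Park an item with a resurface date, then pull back anything whose date has arrived.
import heapq
from datetime import date, timedelta

class Tickler:
    def __init__(self):
        self._parked = []  # heap of (resurface_date, item)

    def park(self, item, days):
        """Push the item out of sight until `days` from now."""
        heapq.heappush(self._parked, (date.today() + timedelta(days=days), item))

    def due(self, today=None):
        """Pop everything whose resurface date has arrived."""
        today = today or date.today()
        out = []
        while self._parked and self._parked[0][0] <= today:
            out.append(heapq.heappop(self._parked)[1])
        return out

t = Tickler()
t.park("comment thread to revisit", days=1)   # fast stream: look again tomorrow
t.park("dataset announcement", days=7)        # slower stream: next week
print(t.due(date.today() + timedelta(days=2)))  # -> ['comment thread to revisit']
```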

As I said, I really haven’t got a good grip on this, but my main point is that I think Real Time is just a single instance of giving users access to one specific intrinsic timeframe. The much more interesting problem, and what I think will be one of the next big things, is the general issue of giving users temporal control within a service, particularly for enterprise applications.