Is Birmingham Happy?

I’ve been running a very rough scrape of the Birmingham (UK) based interweb for ‘emotional wellbeing’ since April 2008. Simply put, a script running twice a day read in Tweets, news headlines and (originally) blog posts, and compared the words within them to a table I’d drawn up of ‘emotion’ words with fairly arbitrary scores.

It was surprisingly interesting to watch: despite its roughness, the internal consistency let patterns emerge. It broadly followed weather and sports results, with some peaks and dips you could map to specific happenings or news stories.

graph of emotion scores

It led to a spin-off focussing on Tweets from MPs, which I think influenced some of the developments that Tweetminster produced over the next year or so.

It was the patterns that led me to keep putting off improving the algorithm, but recent Twitter API developments meant I had to do some work anyway, and that (together with another project, of which more soon) gave me the impetus to give the project an overhaul. Here’s how it works now…

Twitter’s geolocation services are now much improved, so I can specify a point (the centre of Victoria Square in Birmingham) and a radius (10 miles) and get a reasonably accurate dump of Tweet data back—the algorithm calls for the most recent 1000.
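For illustration, that fetch might be sketched like this in Python. The geocode format (“lat,long,10mi”) follows Twitter’s search API, but the coordinates are approximate and the function names are my own; the page-fetching function is injected so the paging logic can be shown without any network access:

```python
# Hypothetical sketch: collect the most recent tweets within a radius of a point.
VICTORIA_SQUARE = (52.4797, -1.9030)  # approximate centre of Victoria Square

def geocode_param(lat, lon, radius_miles):
    """Build the search API's geocode parameter: 'lat,long,Nmi'."""
    return f"{lat},{lon},{radius_miles}mi"

def recent_tweets(fetch_page, lat, lon, radius_miles=10, wanted=1000, page_size=100):
    """Page backwards through search results until `wanted` tweets are collected.

    fetch_page(geocode, count, max_id) should return a list of tweet dicts with
    an 'id' key, newest first; an empty list means no more results.
    """
    geocode = geocode_param(lat, lon, radius_miles)
    tweets, max_id = [], None
    while len(tweets) < wanted:
        page = fetch_page(geocode, min(page_size, wanted - len(tweets)), max_id)
        if not page:
            break
        tweets.extend(page)
        max_id = page[-1]["id"] - 1  # continue strictly below the oldest id seen
    return tweets[:wanted]
```

Because the search API caps each page at around a hundred results, getting to a thousand means paging backwards by tweet id like this rather than asking for them all at once.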

Twitter is now the sole source of data, in keeping with the ‘conversational psychogeography’ aims of the project (in essence, words used without too much premeditation are more interesting than those written purely for publication). It also provides much more data, and much more reactive data.

The words contained within these tweets are then compared to data from the University of Florida (the Affective Norms for English Words – PDF link). Within that data set each word covered (there are around a thousand in the set I’m using) is given a score for Valence (sad to happy, on a scale of 0-10), Arousal (asleep to awake, on a scale of 0-10) and Dominance (feeling a lack of control to feeling in control, on a scale of 0-10). The scores are then collated and a mean calculated for each dimension. The overall emotional wellbeing score is calculated as a mean of the three individual means, although the scores are revealed individually on the site.
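A minimal sketch of that calculation: match the words in each tweet against an ANEW-style table, take a mean per dimension, then a mean of the three means. The handful of words and scores below are made up for illustration; the real set is around a thousand words:

```python
from statistics import mean

# Illustrative scores only: (word: (valence, arousal, dominance)), each 0-10.
ANEW = {
    "happy":  (8.2, 6.5, 6.6),
    "sunny":  (7.5, 5.0, 5.3),
    "gloomy": (2.5, 3.8, 4.1),
    "win":    (8.4, 7.7, 7.3),
}

def wellbeing(texts):
    """Mean valence, arousal and dominance over all matched words,
    plus the overall score: the mean of the three means."""
    matched = [ANEW[w] for t in texts for w in t.lower().split() if w in ANEW]
    if not matched:
        return None
    valence, arousal, dominance = (mean(dim) for dim in zip(*matched))
    return {"valence": valence, "arousal": arousal, "dominance": dominance,
            "overall": mean((valence, arousal, dominance))}
```

I’d guess the 0-100 figure the Twitter feed works with is this 0-10 overall mean scaled by ten, though that scaling is an assumption on my part.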

I’m unsure if combining the results in this way is best, which is why the site reveals the working — the Twitter feed just goes with one value for ease of understanding, and adds a rating adjective too:

if ($brumemotion < 10)     { $rating = "awful"; }
elseif ($brumemotion < 20) { $rating = "dreadful"; }
elseif ($brumemotion < 30) { $rating = "low"; }
elseif ($brumemotion < 40) { $rating = "subdued"; }
elseif ($brumemotion < 50) { $rating = "quiet"; }
elseif ($brumemotion < 60) { $rating = "average"; }
elseif ($brumemotion < 70) { $rating = "okay"; }
elseif ($brumemotion < 80) { $rating = "good"; }
elseif ($brumemotion < 90) { $rating = "superb"; }
else                       { $rating = "fantastic"; } // covers 90 and above, including scores over 100

The Twitter feed produces results twice a day, and those scores are being saved to visualise more graphically later; the website itself updates every ten seconds (and will self-refresh if you stay on the site), and also displays a word cloud of the currently found ‘emotion words’:

is Brum happy right now?

Thoughts on further development

I’ve been experimenting with more local results (here is a version running on just one Birmingham postcode — B13) as well as live graphing. I also have a version that will analyse results for a hashtag—something we may use in conjunction with the Civico player to produce ‘wormals’ (graphs of sentiment) during conferences.

But for now, I’m happy to let the new algorithm bed in—wondering about the amount of data and frequency that will be required to see the most detail—and to see what patterns we can spot.

Feedback welcome. Go see for yourself or follow on Twitter.

Mayors, badges, and blue plaques?


What’s Foursquare for? I think people are still working it out. While the benefits for businesses of being able to target customers and record movement are interesting (and would be much improved with widespread adoption), it’s harder to see just why people are using it. That’s not because I’ve got privacy issues (I do, in fact, experiment with Foursquare); it’s just that it’s a clumsy way to do most location-based things.

Much has been said about the ‘game dynamic’, the badges and mayorships, but it’s a pretty simple and pointless game really. No rules to speak of, no way of easily judging performance, no winner–and it’s horrendously easy to cheat (log in to the site on your computer and you can check in anywhere). My jury is still out on it.

What does interest me is the idea of leaving ‘tips’ at locations, not so much in the “great coffee round the corner” or “Order the special sauce!”–with illiterate exclamation marks–way, but real information.

This is just something I’ve been experimenting with, together with Michael Grimes; there’s hardly any information in it yet, and no real way to get it out. It’s an attempt at a syntax, and an encouragement to people to try using these new locative tools to add information. There’s more here, but basically the idea is to add ‘blue plaque’-style information as ‘tips’ on Foursquare and the like.

There are plenty of sites or apps that do things such as overlaying Wikipedia information over maps, but this isn’t quite the same thing. The problem Wikipedia has with location-specific information is the same one blue plaques have: invisible gatekeepers of what’s “notable” or “historic”. Civic societies with arcane rules (e.g. the subject must have been dead for 50 years), or Wikipedia editors waiting to pounce on things without reliable, verifiable sources, make it hard for people to record history as it happens. And it happens in tiny, homely ways that committees can’t record.

I’ve tried, jokingly really, to liberate the blue plaque before, but this online way might actually take off.

Blue Plaques

As a demonstration, I’ve knocked up this: a #bp map. It pulls tips from Foursquare that are tagged #bp and lays them on a Google Map. You can change the point of reference of the map by entering latitude and longitude as part of the URL:
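Pulling a centre point out of the URL could look something like this sketch. The parameter names (‘lat’ and ‘lng’) are assumptions, since the post doesn’t show the exact URL format the #bp map uses, and the fallback coordinates are only approximately Victoria Square:

```python
from urllib.parse import urlparse, parse_qs

# Default centre if the URL gives no usable coordinates (approximate).
DEFAULT = (52.4797, -1.9030)

def map_centre(url, default=DEFAULT):
    """Pull a (lat, long) pair out of a map URL's query string.

    The 'lat'/'lng' parameter names are hypothetical; anything missing
    or unparseable falls back to the default centre.
    """
    qs = parse_qs(urlparse(url).query)
    try:
        return float(qs["lat"][0]), float(qs["lng"][0])
    except (KeyError, ValueError):
        return default
```

Falling back to a sensible default, rather than erroring, means a bare URL still shows the city-centre map.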

But there aren’t, as yet, many more ‘plaques’ to look at – why not add a few?

Chernobyl Fallout

“What would happen if the world were suddenly without people; if humans vanished off the face of the earth? How would nature react — and how swiftly?” That’s the question asked by the documentary ‘Chernobyl Reclaimed’, and the answer seems to be ‘it gets on just fine without us’.

I’ve been wondering if the social web doesn’t work in much the same way.

The social web, or to be more technically correct the social Internet, has been around for a long time. USENET is over a decade older than the World Wide Web, and though its appearance is something like a forum it’s a little more complicated than that.

It’s based on a hierarchy of groups, organised as something that we’d read like a domain name (sci.physics, for example). Users subscribe, and their client keeps track of what’s read and unread. It’s not quite synchronous: the news server you subscribe to may only carry a portion of the available groups, and once you post, your message has to propagate through to the other servers.

That said, if you’ve used a message board, forum or Facebook discussion you’d be right at home — and you can go and try it now: Google Groups in part acts as a newsserver, and you can subscribe to USENET groups and post via the web or email. You won’t find much action in most of them, and even those with fairly high post-counts probably aren’t as lively as they once were. I’ve just taken a look into a group I lurked in a little back in the ’90s and, while there are posts and the odd conversation thread, it’s about 60/40 with spam posts offering cheap watches, easy jobs and easy women.

One trusts that those still frequenting those groups have learned to live with the spam: they have good filters or a high tolerance. Or perhaps, despite the hassle, they still find the groups the best place for what they do, and the community survives.

Spam isn’t the only reason to move on, of course, and some of the more general groups have found discussion splintering, devolving and just going elsewhere.

I thought for a time that this might lead to a social Internet Gaia hypothesis: that the various systems on the Internet are “closely integrated to form a complex interacting system that maintains the [economic] and [conversational] conditions in a preferred homeorhesis”, to paraphrase the original. In short, the social aspect of the Internet routes around blockages much like the data packets do.

In shorter: people move on when a space no longer works best.

And they’ve certainly moved on from the group that is uk.local.birmingham (started, I learned the other day, by a friend of mine back in the early part of the 1990s). I still keep a subscription to it on the off chance, but from a 1999 high-point of nearly 3,000 messages a month it’s now dribbled down to about 20. Apart from spam, the only surges of activity are bile-filled back-and-forths somehow connected with sometime Birmingham ‘King of Clubs’ (with all that entails) Eddie Fewtrell. In any real terms this newsgroup has returned to nature.

And yet it’s still going.

uk.local.birmingham | Google Groups

The spam is automated, so it doesn’t know that it’s not reaching people. The website (in my case) and newsservers don’t know that the traffic isn’t human, so they continue to serve it. Those few real subscribers either no longer use the email addresses they signed up with, can’t be bothered to unsubscribe, or have long since filtered the responses away. Or they’re me—too scared to miss anything—or the blokes whose ’70s territorial spats are best conducted from the safety of a kitchen laptop. With Smooth FM on.

The Internet doesn’t care who or what is using it; it just bats content around. People set things up and then leave, and it still carries on. How many services have you set to autopost, or synced with newer or better spaces, and then sort of stopped using?

If every real person left Twitter tomorrow, like some dystopian novel (the film would be terrible), Twitter would carry on. As long as someone was still paying the server bills, pumped-in Facebook statuses would still be posted, Foursquare mayors would still be declared, and ‘news’ from thousands of company sites would be Twitterfeeded (or similar) to a gasping lack of public. And bots would generate new, Twitter-only, content: some silly, some aggregated, some spam.

The words ‘New Blog:’, ‘Breaking’, and ‘I am Jack’s colon.’ would still appear, and a lot of those posts would be shuffled off elsewhere, even to some Facebook statuses. Autopost is the weed that would grow over our cities; spam is the animals slowly taking over. Our social web Ghosts in the Hollow.

In a way this already happens: there are thousands of social web accounts that exist purely to exist. Automatic and unweeded, they either spam or have been set up and discarded. The number of companies ‘talking’ to each other on Twitter is amusing to behold: often set up on a whim and operated from another service (usually Facebook), the accounts tweet— but really that’s all that’s going on.

I was alerted to a local shopping mall being ‘on Twitter’ the other day; it’s been ‘tweeting’ for nearly a year, ‘following’ 114 accounts (despite never, it seems, logging into Twitter or dealing with @messages) and being ‘followed’ by 126. 90% of both of those numbers are other organisations tweeting nothing in much the same way, or people who work for those organisations. The account is a bot, talking to other bots and doing nothing except perhaps disappointing anyone who did want to engage with them.

The weeds are poking through even on fairly well ‘Liked’ Facebook pages too. Facebook page spam is on the increase: leave your page unattended for a day or so and, if it’s popular enough to have attracted the attention, there will be Russian brides and pyramid schemes posting. If it’s a page you created for fun, fair enough. It’s all about effort and no-one will think any less of you—but if it’s your work, I think you owe it to whoever you’re trying to talk to to care a little more.

Facebook | Birmingham: It's Not Shit

What can you do? Think carefully about what you automate, close or mothball old and unused profiles and pages. The usual stuff you’ll never get round to doing.

Twitter, reportedly, has about 3% of its servers at any one time full of Tweets about Justin Bieber. That’s some power of stardom, but think about it: how many of those accounts are autoposting to Facebook (or vice versa)? That’s 3% of Facebook’s servers too. And MySpace’s, perhaps. Maybe 1% of the Internet?

I’m guesstimating to the point of losing all thread of argument, but the ecological consequence of auto-posting to dead services is probably fairly significant. We could be sucking the planet dry with our automated laziness.

But the animals and plants will do just fine.

The future of publishing

If there’s one thing that fills the web more than cat pictures, it’s ruminations on the state, past or future of newspapers and magazines. The truth is that old models are failing and no-one really knows what will replace them. Rupert Murdoch is trying paywalls, which is a possibility for publications with existing audiences and strong brands, but what can a start-up publication do?

In my own small way I’m experimenting — this week sees the launch of a—yes—paper-based magazine that Danny Smith and I have been working on for the best part of six months. This is what it looks like:

Pile of Dirty Bristow magazines

Things we’ve worked out so far:

  • Print is really expensive at small scale, but it’s still much easier to get people excited to work for and to sell than web content.
  • Brand is all important: we’ve gone for wilfully obtuse and arty—we think that’s a sector we can sell to.
  • A clean break between web and print means that you need to create lots of reasons for, and a fair amount of, ‘related but not similar’ content. Content that reaches the same audience, but isn’t seen as either a free or a second-rate version of what you’re asking payment for in print.
  • A new thing needs its networks—we’ve tried to make sure that everyone that can feel ownership of the magazine finds it easy to talk about and share stuff about it with their networks.
  • If you’ve got a brand, related events can make a fair bit of money—but they’re an additional risk. We’re operating at small scale, but publishers have tried this: Wrox Press, when I worked for its web design offshoot, was trying to maximise return on brand through conferences, and it didn’t bring in enough money to save the company. It seems easier, however, to sell a specific happening via the social web than an ongoing concept.
  • No-one’s going to pay to get past the paywall on a Twitter account—well only about ten people in my experience.

As well as being exhausting and a great hobby, the magazine has given me a fair few opportunities to try out different promotional web-tricks that I’m going to use again. Issue two shouldn’t take so long.

Lost and Found

Meet Scabbycat. He pitched up in our front yard last week sometime.

Found in Billesley Lane/Springfield Road area. Border of B13 (Moseley) and B14 (Kings Heath)

He’s okay, a bit neglected-looking — but doesn’t have plans to leave, so we’re looking after him as best we can. He can’t come to live in our house as he might have something that our cats could catch, but we’ve got him some shelter and are feeding him. The vet has confirmed that he’s not microchipped (so we can’t find his owner) and is reasonably healthy, but there’s nowhere for him to go for a few weeks — none of the cat homes or sanctuaries we’ve contacted have a space.

We’ve tried tweeting and blogging about him, but the chance of connecting to his owners is only improved if they are good at searching the social web.

There is, if you Google, a ‘National Missing Pets Register’ website. It’s nothing official, rather an altruistic effort by a web designer called Steve Dawson — but it has some sort of traction, and visibility is all in this instance. There aren’t a great many lost/found notices on there, but it’s certainly the main site.

The site notes that developments are still ongoing — search would be an easy win in the sense of making the site better (no idea how easy technically), as would listing by location (the tighter the better), notifications, RSS feeds and better photo handling.

A few of these would help other people spread the information (location-based feeds especially), but there’s nothing here that harnesses the social power of the web — so I’m going to throw some ideas about; maybe there’s a service someone could build here.

For me the problem is that the only people on the site are within the transaction — they’re the people who have either found or lost a pet (and only a subset of those, at that). We need something that can use the connections and serendipity of the social web to increase the chances of a reunion.

With tighter location feeds the site could power lost/found pet widgets for local sites and blogs. That would increase the likelihood of someone who can make a connection spotting the pet — and also increase awareness of the site itself.

Is there something in game mechanics? Possibly attempting to match descriptions or photos of lost/found animals (currently there’s no real way to search for matches), or in some way improving the descriptions, tagging or locations of found animals.

What could be done with mobile or location? Is there a way that ‘spottings’ (helpful, but not as good as a find) could be registered easily? Could you add poster information, or information found on the streets but not added by the owners for whatever reason?

Direct feeds in from dog wardens, cat and dog homes, police and vets — could the site be useful for them too? Making it as easy as possible for the overworked needs to be given thought.

Is it ‘fixmypet’?

Can you add? Or more importantly can you build or fund?


When chaos runs slowly enough it can look like calm. But it’s still chaos.

There has been chaos within the methods of communication used between authority and those it claims authority over for hundreds of years — but until recently you could only see it with the scale of hindsight. There hasn’t been a transition from calm to chaos, merely a speeding up of that chaos.

Stop thinking that it’s about to calm down any time soon, or ever.

Is Flipboard that ‘just good enough’ RSS reader?

A few posts ago I postulated that for most people, and for most types of ‘news’, algorithms based around attention and the social graph may well be almost good enough to replace the idea of subscribing to RSS feeds of content directly.

And then Flipboard came along.

Flipboard is a news reader for the iPad with an exquisite interface that’s just right for the touchscreen — it’s all page turning and large buttons. So far, so Reeder (my favourite iPad RSS reader), but Flipboard does a couple of things that make it stand out, and doesn’t do a couple of things, which together put it on the way to making RSS feeds, as user technology, scarcer:

  • It presents curated bundles of feeds, ‘FlipNews’ or ‘FlipStyle’, editorially picking the major (i.e. mainstream, “just good enough”) feeds for subject areas.
  • It allows you to sign in with Twitter and Facebook, pulling in links and content linked to by people you’re in contact with.
  • It doesn’t mention (or, it seems, actually use) RSS; it just grabs the content and shows it to you.
  • It doesn’t let you add your own feeds.

There’s no attention profiling that I can see, but there’s absolutely no reason why it shouldn’t be an addition that the makers are working on. No, it won’t replace an RSS reader — but it’s not trying, and in many situations it’s better for that.

There’s some work to do to implement something similar that works on other sorts of devices (a conventional computer would need a very different interface), but for the iPad it’s good enough.

The public/private problem

People in difficult situations have always relied on dark humour to get them through; police, doctors and soldiers are well known for it. Private grief, or impotent horror at public events, produces jokes or thoughts that are not always palatable. It was ever thus: I’m sure you can remember school-yard jokes about major disasters, and I’m sure psychologists could point to research about why we do it and why it helps.

Last Friday night Twitter, the only social media form I use often enough to have been checking on a weekend evening, was alive with comment on the Raoul Moat case and the rolling TV news coverage of it. Rolling news, particularly the Sky version, is an easy and oft-used target amongst the (mostly liberal, mostly educated, mostly cynical) people that I come into contact with there. The repetitive nature of 24-hour news, the lack of actual happenings — it’s easy meat for the sort of “social satire” that Twitter does around major news events.

A difficult, horrific and scary, situation was made mundane by the coverage. That’s what rolling TV news does.

And then something really odd happened. Paul Gascoigne turned up.

It was sad; Gazza has had well-publicised mental health and addiction problems for some years – but there is no denying that the event provided all the essential ingredients for comedy: juxtaposition, recognition, shared nervousness, mundanity (in his shopping list of things brought, and in his use of unimaginative nicknames).

It would be, and I’m sure will and should remain, unthinkable for mainstream comedians to do Gazza/Moat material — but in private most people would have been comfortable to share in the darkly comic aspects of the story. And laugh, because there’s nothing else you can possibly do in that moment to change anything.

Here lies the collision we’re about to see (or are seeing) between what the media can show as an acceptable reaction and what we now know about the actual reaction of huge numbers of people. We may have in the past heard ‘sick’ jokes at work or in the pub, and in recent years my SMS inbox has filled with them from those a generation above me (as it has again this week), but it’s only now that the public sphere has communication tools that allow this to happen in ‘public’.

Cue media (and political, in politics’s role as a branch of media) outrage.

So we have a problem: it seems that there is no way that the media, or those courting it for political purposes, can take anything but the outraged position. If anyone in that sphere were to step out of line they would swiftly become the story, and they have power, influence and money to lose.

We saw this in the General Election campaign: potential candidates were hounded out after using the social web to express opinions that everyone would have expected them to hold in private. Maybe they should have known better (in fact they, of all people — in a game where leaping on signs of unconformity is to conform — should know most of all), but it’s a regimented and dull world we’re being forced to live in, one where no-one can make a mistake, however small.

Imagine if Princess Diana died again tomorrow: how far would the media’s reaction (which would no doubt be the same as it was then) be from the public reaction — or at least the reaction in public spaces online? If I’ve read one think piece, years later, about how the “public outpouring of grief” wasn’t shared by anywhere near all of the public, I’ve read hundreds. Now people might well be brave enough to say so.

What happens in online social interaction isn’t, for most, a truly public space — it may be open to all but it is intended to be read by those who are connected to the writer. Hence we get a false dichotomy: all utterances on the social web are public, but some are more public than others. We have to move to a place where all media, social or otherwise, can cope with that.

“Just good enough” and why RSS readers might be skip-tech

As I said, I don’t see QR codes ever being widely accepted in the UK; easier and lower-impact methods of getting to the right web content have already started to take over. They’ll be skipped, be skip-tech, because easier-to-understand methods are “just good enough”.

This isn’t an isolated incident. The rise of the Flip camera, the MP3, even the cassette tape — all are triumphs of “just good enough” (see Wired 17.09). Sure, audiophiles want lossless FLAC files, gold cables and valve speakers, but it’s a niche activity — better, yes, but a better that most people neither need nor care about enough to put in the extra time or money.

I’ve been thinking that RSS, or at least the direct use of RSS by people subscribing in readers, might just be about to go the same way.

Attention profiling and algorithmic suggestion aren’t great yet — but they’re getting better, and combined with social search results (Google’s “results from your social circle” and packages of social links like Twitter Times are good examples) they will soon produce a news feed that is good enough for most purposes.

I teach search and RSS skills a lot. I spent a good couple of hours with Multistory this morning helping the team get to grips with the technology — the ideas of search feeds, sharing RSS items with each other, using the technology to its best effect; we were looking at how to do a great job of being aware of everything they can be on the web to do with their work. But we professional connectors, the researchers, the obsessives in our fields, will be niche — for the things we care about most we’ll work hard, but for other stuff “just good enough” will be good enough.

The new BBC news site layout makes the actual RSS feeds for the sections (as opposed to the explanation of what the technology is) much harder to find — they’re pushed into the browser’s feed detection, removing them from being an obvious or mainstream piece of tech.

I no longer subscribe to any news feeds from the mainstream national press; already the combination of social links and search for the topics I know I care about is good enough for the things I’m interested in to find me “just enough” of the time. Those talking about “the death of RSS” for the last couple of years have really meant that people are getting their timely links through social means. That’s not as good, but for the people who’ve stopped using RSS readers it’s “just good enough”.

I’m not sure of the form this “just good enough” reader will take — design it this instant and perhaps it would look like one of the experimental layouts the New York Times has been working on, or perhaps the Guardian’s iPhone app. Make a personalised feed that measured your attention, factored in the news, added the best from your social graph (well enough) and pumped it direct to you via Twitter or wherever else you spend your online life, and it’d take off like a shot. It wouldn’t be enough for everyone, or provide enough on each of everyone’s interests — but it would be “just good enough” for most, most of the time.

I saw Eli Pariser talk recently and he wowed the crowd by showing the difference between “your Google” and other people’s — even if you’re not signed in, search results are personalised based on a ton of factors (location, cookies from previous searches, etc.). The worry that hits people is that this may mean searchers aren’t exposed to opinions from outside their social graphs. Our “just good enough” reader has the same problem — but there’s no reason the algorithm for it shouldn’t be open, so at least you would have the opportunity to be aware of the bargain you were making.

Most may not care; “just good enough” will be good enough, and the idea of RSS as a front-facing mainstream technology may well be gone.

Clay Shirky and the Cognitive Surplus

George Orwell said, in an essay of fulsome praise of the man and his work, that Charles Dickens “was not a revolutionary writer”. He didn’t mean that Dickens wasn’t capable of or responsible for revolutions in prose, but that despite the image as a champion of the downtrodden he didn’t wish for systemic revolution — everything would be better, Dickens thought, if people were nicer.

That almost sums up what I think of the work of Clay Shirky. In his first book, Here Comes Everybody, and now in the new Cognitive Surplus, he gives example after example of positive ways that the social web has altered the way people behave and organise — but while talking about revolution he is offering not much more than the idea that the rules can be as simple as “be nice”. Like the first book it’s a great read; it’s enthusing, and Shirky explains the ‘why’ better than almost anyone else. He even (surprisingly to me, as it’s the first time I’ve read or heard him touch upon it) has a belated go at the ‘how’.

The cognitive surplus of the title is the comeback to the question “how do people find the time?” often asked about people who are active on the social web — Shirky’s (rather glib, he admits himself) answer is “they stopped watching television”.  You can get the gist of this from some of his recent talks like this one in Bristol (thanks Pete), but to sum up and very much paraphrase: ‘economic circumstances since the 1940s have given people more free time and they now have tools to use that time on a wider collaborative scale’.

Where I was uncomfortable with Here Comes Everybody was with the examples where it seemed as if an educated, connected class could use these tools to exert pressure, even if that pressure was exerted on a lower class. I’m unsure whether Shirky doesn’t see these issues, doesn’t see them as a problem, or is merely pointing out facts without editorialising. It may be due to my own thoughts around class and digital inclusion, or it may be that the American perspective on class issues is different. Where Cognitive Surplus falls down for me is not just this (although the problems do now seem to be on the radar), but the way civic actions formed from this surplus are strictly divided from the merely communal.
