I’ve tried two experiments with the “is Birmingham happy” algorithm in the last few days. As neither was based on place, it makes more sense to use the popular term ‘sentiment analysis’ for what it’s doing in these instances. Both were reasonably short runs, so it was possible to update the reading often (and use a smaller number of tweets as the sample, giving more variation in the average scores), which gave the sentiment graphs a live ‘wormal’ feeling as the ratings changed over time.
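The post doesn’t show the tool’s code, but the idea above can be sketched roughly: score each tweet against a word list and average over the current sample, so a smaller sample reacts faster but swings more. The word lists and the 0–100 scale here are illustrative assumptions, not the actual lexicon the tool uses.

```python
# Toy positive/negative word lists -- illustrative only, not the
# actual lexicon behind the "is Birmingham happy" algorithm.
POSITIVE = {"great", "happy", "love", "excellent", "applause"}
NEGATIVE = {"bad", "boring", "awful", "hate", "failure"}

def tweet_score(text: str) -> float:
    """Return a 0-100 'happiness' score for a single tweet."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 50.0  # neutral when no sentiment words are found
    return 100.0 * pos / (pos + neg)

def sample_rating(tweets: list[str]) -> float:
    """Average score over the current sample of tweets; this is the
    single number each point on the graph represents."""
    return sum(tweet_score(t) for t in tweets) / len(tweets)
```

With a sample of only a handful of tweets, one strongly worded message moves the average noticeably, which is the extra variation mentioned above.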
The first was the Personal Democracy Forum EU conference in Barcelona; for the length of the two-day conference I monitored the hashtag #pdfeu every five minutes:
The highest rating was 64.4% (at 12:45pm on Tuesday), the lowest 49.6% (Monday at 12:14pm, during a short power failure). What interested me was that the “arousal” rating seemed to work well: it stayed pretty steady during the power failure (or even leaped up a little) even as the happiness of the hashtag users dived. Post-lunch conference lulls and periods of excitement (the big spikes on day two, at least, corresponded with much applause) were mapped quite accurately.
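Tracking valence (happiness) and arousal as separate readings explains the power-failure pattern: a word can be negative but exciting, so happiness falls while arousal holds or rises. A minimal sketch, with made-up (valence, arousal) values on a 0–100 scale rather than the tool’s real lexicon:

```python
# Hypothetical lexicon: each word maps to (valence, arousal), 0-100.
# Values are invented for illustration.
LEXICON = {
    "great":    (90, 60),
    "applause": (85, 90),
    "boring":   (20, 10),
    "failure":  (15, 75),  # negative but exciting: valence down, arousal up
    "wow":      (70, 95),
}

def rate(tweets):
    """Return (mean valence, mean arousal) across all lexicon hits,
    or a neutral (50, 50) when no known words appear."""
    hits = [LEXICON[w] for t in tweets
            for w in t.lower().split() if w in LEXICON]
    if not hits:
        return (50.0, 50.0)
    valence = sum(h[0] for h in hits) / len(hits)
    arousal = sum(h[1] for h in hits) / len(hits)
    return (valence, arousal)
```

Feeding it power-failure chatter like `rate(["total failure wow"])` gives a below-neutral valence with above-neutral arousal, matching the dip-with-a-spike seen on the graph.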
The overall average was 57.29%. If you would like to explore or graph the data yourself, you can see it all in a Google Spreadsheet here.
Second, I tried a much shorter and more mainstream application, David Cameron’s speech to the Conservative Party Conference:
The emotion tracking tool graphed here ran every 10 seconds during David Cameron’s speech to the CPC and analysed the last 100 tweets with the hashtag #cpc10 and the word “tories”. I chose two versions because I wasn’t sure that non-Conservative supporters would use the ‘official’ hashtag; I theorised that they would be more likely to use the word ‘tories’. As it turned out, there was a more even spread of pro and anti political types using the hashtag than I expected, but the ‘tories’ Tweeters were definitely more hostile. (See the data.) There was greater movement across the graph than on any other test I’ve run.
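The polling setup described above can be sketched as a loop that, every 10 seconds, fetches the latest 100 tweets for each query and records an average score per query. The scorer’s word lists are invented, and `fetch(query, count)` is a hypothetical stand-in for the real Twitter search call, not an actual API:

```python
SAMPLE_SIZE = 100   # tweets per query per reading
POLL_SECONDS = 10   # the caller sleeps this long between passes

def score(text: str) -> float:
    """Toy per-tweet score (illustrative word lists, not the tool's)."""
    words = text.lower().split()
    pos = sum(w in {"brilliant", "cheers", "love"} for w in words)
    neg = sum(w in {"smug", "awful", "boring"} for w in words)
    return 50.0 if pos + neg == 0 else 100.0 * pos / (pos + neg)

def poll_once(fetch, queries=("#cpc10", "tories")):
    """One polling pass: fetch the latest tweets for each query and
    return {query: average score}. `fetch` is injected so the real
    (hypothetical) search call can be swapped in."""
    readings = {}
    for q in queries:
        tweets = fetch(q, SAMPLE_SIZE)
        readings[q] = sum(score(t) for t in tweets) / len(tweets)
    return readings
```

Keeping the two queries as separate series is what exposes the difference noted above: the hashtag crowd and the ‘tories’ crowd produce distinct lines on the same graph.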
Conclusions? None so far, other than that I think this might be a very useful tool, and that the data gets more interesting the more tweets you have and the more often you can afford (server-wise) to poll for results. I’m itching to try it on another big live event with conflicting opinions, even if that means training it on a reality TV event. Roll on the X-Factor.