future web

Skip-Tech — be careful what you back

POSTED IN future web, good practice, social media | 5 July 2010

I’ve had three separate conversations about QR (Quick Response) codes, which led me to wonder if they were at last reaching some kind of mainstream use, or at least critical mass. If you’ve not come across them, they’re a two-dimensional barcode (conventional barcodes are only one-dimensional: the widths of the black and white stripes represent numbers, and the length is only so that they can be scanned) that is meant to send readers straight to a web page.
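
Generating one is trivial, which is part of their appeal — a minimal sketch, assuming the third-party Python “qrcode” package (the URL is invented):

```python
# A minimal sketch: turn a URL into a QR code image.
# Assumes the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

img = qrcode.make("http://example.com/far-too-long-to-type")
img.save("qr.png")  # any reader app pointed at this PNG gets the URL back
```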

QR codes were invented in Japan and have seen very heavy use there over the last ten or so years: a friend had been over and brought back some odd mushroom-chocolate sweets, and even they had a code prominently on the box:

[Twitpic photo: the chocolate box — #bigarigatou #mushroombenefactor #babelfishisn…]

I wouldn’t have been able to use it though, as I was in a pub with no wifi or mobile reception. Even with a connection I’d still need to: know what a QR code is; have a mobile device that can use them (common in Japan, not quite so common in the UK — for example, even iPhones don’t come with an app built in, and older ones don’t have a camera hi-res enough to take the shots); and be able to take a good enough photo (i.e. not viewing the code from too great a distance, in low light, on the move etc.). Try it, and as often as not all you get is an unreadable blur.
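
On the software side the decoding is the easy bit — a sketch of what a reader app does, assuming the “pyzbar” package and Pillow (the filename is invented):

```python
# A sketch of the reader side — assumes the "pyzbar" package (a zbar
# wrapper) and Pillow. A blurry or distant shot simply returns no results.
from PIL import Image
from pyzbar.pyzbar import decode

results = decode(Image.open("pub_photo.jpg"))
if results:
    print(results[0].data.decode("utf-8"))  # the embedded URL
else:
    print("no code found: too far, too dark, or too shaky")
```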

QR codes were invented to solve a problem: it wasn’t easy to display URLs in a usable way. In Japan the display of URLs was complicated by the use of western characters in web addresses (steps are being taken to even this out), but in countries using western characters the URL has to be pretty long before it isn’t quicker and easier to type the address into your browser. As someone interested in technology I was excited to see one on a pop bottle last year; it took a while to find an app and take a photo that worked — only to be directed to pepsi.com.

The codes are even now starting to fade out in Japan — it’s becoming more usual for adverts and packaging to say “search for A COUPLE OF KEYWORDS” rather than use a QR code or a URL. This has been the case in South Korea for a while, and is starting to catch on in the UK too (if you see a trailer for the new Leonardo DiCaprio film you’ll notice that, after tiny references to its website URL and its Facebook page, you get a full-screen caption asking you to ‘Search for Inception movie’). To my mind this is because we’re skipping QR codes in the UK — not because they were bad technology but because other pressures have combined to overtake them before they became mainstream. In this case it’s a combination of the lack of a direct problem to solve and the realisation that vast numbers of people will search even when given a URL (yes, they type the URL into Google). The ‘search us’ method is becoming even more common as it’s easier to own a search term than to buy a memorable and usable URL.

If you were trying mass communication you’d have wasted any time invested in QR codes (luckily they’re cheap or free to produce; educating people as to what they were might not have been) — they’re “skip-tech”: technology that is overtaken too quickly to gain a foothold.

The more you look, the more you can see it — skip-tech isn’t failed technology like Betamax or HD-DVD that had direct competition and lost out in a format war (although both HD-DVD and Blu-ray look like becoming skip-tech to cloud or hard-drive storage); it’s tech that, while it does its job, doesn’t fit in with the way people use technology.

MiniDiscs are the ultimate skip-tech: the first consumer digital recording format, they offered the sound quality and convenience of a CD with the recordability of tape. Despite working well and finding favour amongst radio people, the advent soon afterwards of recordable CDs (which could be played on the same systems even if recorded elsewhere) killed it as a format.

It’s not always something as direct as a format change; sometimes it’s a change in behaviour. DVD recorders to use with your TV were quickly overtaken by easier-to-understand (what was all that “finalizing” about?) PVRs and online catch-up services — not to mention the sheer number of ‘opportunities to see’ (er, repeats) most programmes get, and the trend towards watching whole series in box-set-size gulps.

Online there’s more chance for this to happen, and more quickly, as with most things. The proliferation of photo-sharing services backed by camera companies, processing companies and even printer firms was quickly overtaken by Facebook becoming the place to share photos. Facebook has millions more members than much better photo-specific services such as Flickr, as sharing photos has become something you do within your social graph much more than with the “whole web”. The sharing of photos has become both more communal (i.e. people mostly do it in the same space) and less open (generally) than people were expecting — the branded photo-upload site is skip-tech.

If the technology is quick and cheap enough to be treated as disposable, it needn’t matter if you back a piece of skip-tech — nor will it if your aim is to reach early adopters or to run for a short time — but it’s dangerous to base longer-term strategy on technology where the use or the community hasn’t started to form at least a little.

Be quick, be agile, look out for new things — but always back the social use over the flash new tech.

We know where we are, and is that about all?

POSTED IN future web, geodata | 25 May 2010

Mapping is about boundaries and scale. Watching the recent BBC series about the history of the map, you could see how — once the powerful were commissioning maps — the placing of boundaries and the scale of each territory (often exaggerated) became the focus.

And boundaries were one of the things that open data advocates were most pleased the Ordnance Survey released for use recently. At Andrew McKenzie’s Mapitude event in Birmingham the attendees worked on a Ward Comparison site that plotted the boundaries of Walsall Council wards automatically on Google Maps:

[Screenshot: Ward Mapper — St. Matthew’s Ward]
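
The plumbing for this sort of thing is now pleasingly small — a sketch, using folium (a Python wrapper around Leaflet) as a stand-in for the Google Maps plotting, and assuming the boundary data has been converted to a GeoJSON file:

```python
# A sketch of the boundary plotting, using folium (a Python wrapper around
# Leaflet) as a stand-in for Google Maps. "walsall_wards.geojson" is an
# assumed file holding the released ward boundaries as GeoJSON.
import folium

m = folium.Map(location=[52.586, -1.982], zoom_start=11)  # centred on Walsall
folium.GeoJson("walsall_wards.geojson", name="wards").add_to(m)
m.save("ward_map.html")  # open in a browser to pan around the wards
```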

Good work, and as Michael Grimes says: “surprising as it may sound, this hasn’t really been done before. Prior to 1 April 2010, UK ward boundary data were simply not available for public use; groups of fools like us could not have spent our free time building this tool for the public good.”

But he also has another really good point that anything like this would be improved by “users choosing boundaries based on their own understanding of geography (administrative boundaries for religious groups or sports organisations, for example) as well as the official civic ones”.

I’m quite obsessed with the idea of defining areas in a meaningful way, and have done a fair amount of work with organisations that use internal or official descriptions or designations and then expect the public to grasp what they mean. People don’t know, or usually care, which NHS PCT they live in, and people don’t know where ward or constituency boundaries lie — it just doesn’t fit in with how people’s lives work.

The problem is that even defined administrative boundaries are confusing — do we mean New York City or New York State? The ‘West Midlands’ is a classic case where the differing boundaries of different bodies cause confusion: governmentally the region runs from Stoke down to Worcester, the BBC’s West Midlands also has Gloucestershire in it, whereas everyone thinks the West Midlands is just Birmingham and the Black Country — the West Midlands county.

I think most mapping online uses either administrative boundaries or postcode-level data (where people know which bit they’re in, but not much about where one might end or begin).

I think that most people grasp:

  • County
  • Town/City (or ‘council level’) &
  • Postcode

as methods of locating where they live, but will have a hazy idea of where each starts and ends. For other areas it all may be even more hazy.

I had an idea ages ago for a sort of scraper of the social web that helped define “conversational” (here I mean natural-language) boundaries rather than administrative ones (for example, where I live the definition of Moseley stretches well into King’s Heath, Balsall Heath and even Billesley — all neighbouring areas — in people’s consciousness). You could sort of do it with mentions of places vs geolocation on Twitter — although there’s maybe not quite enough data there yet.
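
A toy version of the idea, comparing where a place is mentioned with the official boundary — the polygon and tweets below are invented for illustration (real data would come from a Twitter search for geotagged mentions), and shapely does the point-in-polygon test:

```python
# Toy sketch: compare where "Moseley" is *mentioned* with an official
# boundary. Polygon and tweets are invented for illustration.
from shapely.geometry import Point, Polygon

official_moseley = Polygon([  # shapely takes (lon, lat) pairs
    (-1.895, 52.440), (-1.875, 52.440), (-1.875, 52.455), (-1.895, 52.455),
])

tweets = [
    {"text": "lovely evening in Moseley", "lon": -1.889, "lat": 52.447},
    {"text": "Moseley-ish farmers market", "lon": -1.895, "lat": 52.433},
]

for t in tweets:
    inside = official_moseley.contains(Point(t["lon"], t["lat"]))
    print(t["text"], "->", "inside" if inside else "outside the official boundary")
```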

Flickr tried something similar with geolocated photos that also had location information added in the tags; it resulted in boundaries that were quite different (I can’t find the link right now, anyone?). I think we need to pay more attention to these ‘real’ boundaries.

The new Whuffocracy

POSTED IN future web, social media | 17 November 2009

The idea of rule by those with the most social capital is explored by Cory Doctorow in his novella ‘Down and Out in the Magic Kingdom‘. It doesn’t end well. It also imagines a future in which the amount of social capital — or ‘whuffie’ — each person has can be easily polled, as everyone is wired into the ‘net.

It’s not a meritocracy: we understand that those with the most social capital aren’t always the best to lead (nor do they wish to — as Stephen Fry often makes clear). It’s not democracy (although it has certain democratising features), as there isn’t a system — nor an equal voting footing (those with more social capital can push others into positions of power).

In societies where there is “rule” by those with the most social capital (those online are the only ones in which we have seen movements toward this), it can look from the outside almost like ‘mob rule’ (with those holding social capital directing the mob). The Jan Moir protests are seen as either ‘democracy’ or ‘mob rule’ — and, as Paul Bradshaw says (in his first comment), “nothing inbetween”.

Influence by social capital can be seen as ‘mafias’, cliques and the like — which baffles those ‘inside’ them, as they aren’t necessarily knowingly exerting influence.

There’s a real need for the study of this ‘system’ of rule — and it needs a term so we can distinguish it from others, agree on what’s happening and look at the effects. I’m going for whuffocracy — unless someone has already come up with a better one.

What does hyperlocal mean? And what does that mean for news?

POSTED IN blogging, future web, geodata | 2 November 2009

Local is local for different people for different reasons — some to do with legislature, economics, transport, facilities, people, even “community” (another word with little definite meaning). Local in newsgathering has been based upon technologies (eg transmitter placement) or economies of scale (how many towns can a newspaper serve with the same staff?) — but with those constraints no longer valid, how is news to be gathered and published?

Hyperlocal is the buzzword being used to describe those new models — “The term “hyperlocal” is sometimes used to refer to news coverage of community-level events.” (Wikipedia). Community in this sense has no real definition — except that it’s assumed to be smaller than the “local” of the traditional news gatherer.

It’s understood that online news sources (those that are extensions of the off-web operations) are struggling in part because people don’t read every page on their website — only the stories that interest them. The phrase in the US is “print dollars become online dimes” (or somesuch) — the same content (even with a potentially wider audience for each piece) doesn’t generate the same revenues. This is because it’s now possible to split, target and assess those eyeballs and click-throughs.

Extend that into the local arena and there’s less room for the niche — it’s not feasible to suggest that advertising can pay for the generation of truly niche content. A stark contrast with niche but non-geographic interests, which can find an audience online that outstrips any a conventionally distributed source could provide.

A quick and dirty example: it’s no good saying that an announcement of (say) a new ukulele group in a suburb is of interest to everyone in that suburb — it isn’t — and as filtering gets better that information will only reach those who care enough and are local enough. At that level even a specialist shop won’t pay enough to fund gathering that information.

It wouldn’t work in a current (read: historical) “local” news source either — except as a little bit of human-interest filler, and it wasn’t that item that attracted the advertisers: it was its place in the whole “package”, which soon will no longer exist.

But there are people and companies that still want to do the local newsgathering — why? There are a few competing (or complementary) models emerging — here’s one way I think we can divide them:

  • the very local, volunteer run (one or two people) blog (often no ads, or a trickle of Google ad money — but no real desire to make money)
  • the local blog that runs (sort of) like a small newspaper (ads are often sold direct to local business in the same way as a newspaper)
  • the network — where sub-sites for areas are created (the ads are sold centrally, the object is to keep the overall site running rather than the sub-sites)
  • the aggregator — where content is electronically pulled from various social (or news) spaces — (ads sold by the aggregator for the aggregator)

All but the volunteer-led source face exactly the same problem as the “traditional operators” — a fight between scale of operation and potential income. The local blog “newspaper” has more flexibility and lower costs than the traditional operator and can work on much tighter margins, but it still has to balance the area covered against the effort expended. What the two “ground-up” models have as an advantage is that they can feel the size of the area for themselves from experience; they are covering an area that makes sense to them. This might not be at a level that can attract enough advertisements (Philip John is encouraged by the take-up on The Lichfield Blog, but it can’t be paying for much above server costs — will it eventually? I have no idea). What the networks and the aggregators seem to be doing is picking a size they can sell advertisements for and making that the area on which they focus.

I did a very quick, small and unscientific survey on Twitter asking people what sort of area felt “local” to them — with these results. People had very different ideas of “local” — from a road with 35 people to a country with 4,500,000. Even those picking the same definition (eg ‘my suburb’) had widely differing ideas of what size that was. This isn’t surprising — and many people (rightly) said things to the effect of “my local airport is further away than my local shop” and “it depends”.

Without some system of “soviets” — a network of ultra-local sites, each feeding upwards and gaining new input at each level of scale — there’s no way that one news source (or type of news source) can cover all of the news needs of every person. What happened in the past is that the “distribution” scale featured its own level plus a pick’n’mix of those below — people understandably felt that wasn’t serving them well and have started to create outlets at different scales. These have so far worked in tandem with existing outlets — so aren’t really equipped to replace them.

The worrying thing (for some) is that the “distribution” scale corresponded roughly with that of the legislature (although not always a good fit) — so that’s a gap that is less obviously well filled. Pits’n’Pots, for example, does a great job at the level of Stoke’s council, but what independent outlet is operating at the scale of a regional development agency? Is there anyone who can hold AWM to account (although one might argue that few do anyway)?

It’s my contention that different types of sites will plug these gaps — I could see a “what are they up to” site run for most legislative bodies or quangos, or different sites sharing resources to hold bodies to account. Support networks and collaboration will be what’s needed. There are some businesses there (tech support, local ad sales perhaps), but not huge profits — and there certainly aren’t for content creators.

It’s lucky, therefore, that most content creators are doing it out of duty or love.

We’re in a period of transition — we know that no one source is enough, but we don’t yet have the methods to pick the bits from separate sources. Aggregation tools as they stand aren’t the answer, and it’s difficult for people to be brave enough to “trust the network” as we have to.

It’ll come, it’s coming — but we’re not there yet — exciting isn’t it?

C&binet on the future of local news

POSTED IN Conferences & Talks, future web | 29 October 2009

I spent a fascinating couple of days down in that London at a gathering of those interested in the future of local news — organised by Sion Simon (Minister for Creative Industries) at the Department for Culture, Media & Sport, who are hoping to get some useful ideas for future legislation (the Digital Economy Bill). It was brilliant to have a range of people from different backgrounds and interest groups to talk to and learn from — too many events are focused around one industry or interest group and end up being (to be clichéd) an echo chamber — and it was particularly good to hear about how things are shaping up in the States. Hannah over at Podnosh gives a good overview of the whats and the whos.

But first the “bad news”: see the decline in regional newspaper circulation from 1993 (as shown in a very impressive set of slides from Douglas McCabe of Enders Analysis).

What strikes me is that, while the decline starts to happen consistently with widespread internet adoption, there are huge drops in years before that, including 1993 — before the world wide web. Something was up with what the regional press was offering long before people started getting news online — and local news provision online lagged behind that of the nationals (certainly in Birmingham, where it’s only been about a year since the local papers started to publish properly on the web).


Geo Attention Mapping by Bookmarking

POSTED IN future web, geodata, my projects | 18 September 2009

It’s something I’ve been going on about for a long time, but as an exercise in explaining it quickly I entered it as a proposal in MySociety’s Call for Proposals 2009. Here’s what I wrote:

Describe your idea:

A new delicious tagging plugin that also harnesses the power of location services — so that it bookmarks where people find things interesting.

It could use triple tags to simply add a geo-attention “point” to each item bookmarked.

This tagging could then be used by anyone to see where bookmarks were interesting.

What problem does it solve?:

All attempts to collate and distribute local information are stymied by the fact that placing most information “on a map” is complex (council and government department decisions affect discrete, boundaried areas; news either has a “spot” with a fading area of influence, or something more esoteric). People don’t do it, or the tech isn’t there for them to.

Coming at it from the other angle would give a sort of heatmap of influence which could prove useful for all sorts of projects — particularly those interested in local news and democracy.
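
For the curious, the triple-tag part is small enough to sketch — the geo: machine-tag convention (as used on Flickr) applied to a bookmark; save_bookmark here is a hypothetical stand-in for whatever the plugin would actually call on delicious:

```python
# A sketch of the triple-tag idea: attach a geo-attention "point" to each
# bookmark via the geo: machine-tag convention. save_bookmark() is a
# hypothetical placeholder, not the real delicious API.
def geo_triple_tags(lat, lon):
    return ["geo:lat={:.5f}".format(lat), "geo:lon={:.5f}".format(lon)]

def save_bookmark(url, tags):
    print("bookmarking", url, "with tags:", " ".join(tags))  # placeholder

save_bookmark(
    "http://example.com/planning-decision",
    ["localnews"] + geo_triple_tags(52.44718, -1.88913),
)
```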

Does that sound worthwhile? Please comment on the MySociety page.

Are hyperlinks still hyper enough?

POSTED IN future web | 5 September 2009

It’s the link economy, it’s the foundation of the World Wide Web, it’s the reason for the funny “aitch tee tee pee” that still confuses people when reading out URLs, but is the hyperlink dying? Or at least falling into dull maturity?

Let me explain: I’ve been a bit fluey this week and haven’t been doing much. Which is okay, in that it gives me a bit of time to relax. I spent an hour watching this “fantasy documentary” by Douglas Adams from the 90s, where he’s in love with the “interactive”.

As the Watchification blog says:

“although much of the ‘browsing’ mechanism feels familiar and obvious, this documentary was created in 1990. That’s two years before the first web browser. The internet was a very different place then.”

And yes, Douglas does get a ton of things right, or finds time to listen to the right people — the experience of “browsing” does feel very much like, well, not so much the internet as those “interactive CD-ROMs” we experienced before browsing proper.

Watchification: Hyperland

There is a — for the time — hugely interesting project featured: a multimedia version of HyperCard, illustrating how stories can be woven around, and snake away from, a single event. In this case it’s Picasso’s Guernica, which gets placed in history, in location and in art, the different timelines weaving around each other. Which could lead to hundreds of different user journeys (you could even get a 90-second “here’s your best bits” clip afterwards to blipvert yourself back to knowledge when the memory started to fade).

But however you navigate through this multimedia (or interactive TV, as the documentary calls it), it’s not truly interactive — it’s multi-pathed at best. So when people start to talk about “interactive TV” they mean nothing so much as a very expensive version of a Choose Your Own Adventure book.

[Image: The Cave of Time book cover, via Wikipedia]

Lots of paths, lots of choice — it’s just, in the end, a series of derived and constructed narratives. Unlike a game, where it’s possible for you to lose quite quickly — with the promise of you getting better and the experience getting longer — interactive media has to provide a worthwhile experience for people no matter what choices they make.

It may be that a lot of the paths end in defeat, as in this analysed adventure:

[Diagram: every path through a Choose Your Own Adventure book mapped out]

At that point, in our documentary, as with the books and the Don Bluth laserdisc game, every path is human-created. The authors didn’t think of user-generated paths — how could they? Their content was hard-fought, researched and seamlessly interwoven. And “who wants to hear about what people had for breakfast”.

But moving around created paths wasn’t “surfing”. Surfing is dangerous and unpredictable — and wasn’t possible until the means of content production were democratised — and… hang on… isn’t “surfing” a really bad metaphor for how people use the internet?

Or perhaps not, because no matter what the path of a surf, you end up on the beach, slightly wetter than when you started; and when you “surf the web” these days you eventually end up on Wikipedia, slightly better informed than when you started.

When did you last “surf” the net in that early-00s aimless-wandering way? Today’s hyperlinks are often factual dead ends. The ones in this blog post are destinations, not waypoints on some greater “internet experience”. Two end in Wikipedia, a site that (while facilitating the in-wiki flâneur) offers the last (or as last as we normally need) word in most discursive journeys.

Do you surf off for related information? Do you click on those links to “find out more”? I’d say that almost 90% of links in user-generated content are these dead ends — official homepages, user accounts and Wikipedia articles. And those links overwhelm the search-engine mechanisms, becoming the top results for any search anyway. If I’m writing a blog post and want to link out for some facts, I’m either going to know the URL (the homepage) or search — which almost always will lead me to the Wikipedia article or official homepage anyway.

Where’s the huge rip curl, the danger of wipeout, or the shark? Are we likely to get a noughties Jan and Dean penning paeans to the joy of knowing exactly what you’re going to get when you click on a link? That, and that you’ll be back to continue the original narrative.

Is the personally curated link dead or dying? When was there last a human recommendation at the end of the blue underlined phrase for you? With the rise of the microblog, most links are in plain view anyway (or obscured only through the magician’s sleight of hand of the URL shortener) — misdirection is frowned upon.

My point: if all we’re linking to is the obvious fact, machines can do it for us — language processing, search recommendations, browser tech — and we’ll never have to link again. Watch Tom Baker as the “agent” — the on-screen guide in the video: he’s automatically making the connections. That’s where the programme is most prescient.
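
A crude sketch of what that agent might do — the term-to-URL table is invented for illustration, and a real system would use proper entity recognition rather than string matching:

```python
# A crude sketch of machine-made links: wrap known terms in links to their
# "obvious" destinations. The term-to-URL table is invented; a real agent
# would use proper entity recognition.
KNOWN = {
    "Guernica": "https://en.wikipedia.org/wiki/Guernica_(Picasso)",
    "Douglas Adams": "https://en.wikipedia.org/wiki/Douglas_Adams",
}

def auto_link(text):
    for term, url in KNOWN.items():
        text = text.replace(term, '<a href="{}">{}</a>'.format(url, term))
    return text

print(auto_link("Douglas Adams walks us around Guernica."))
```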

The danger we once felt, the intellectual wanderings we once took, are now routine; every destination is discrete and intended. We are awaiting attention profiling and automatic peer “likes” to generate our chanced-upon content — but won’t we just get sucked into more of the same?

Bring back the random hyperlink? Go see if you can catch a wave.

Open data may need to change the way data is collected and stored

POSTED IN future web, geodata, good practice | 9 July 2009

Like Nick Booth, I was pretty excited to hear of the results of this FOI request for data about parking fines from Birmingham City Council. Not so much because I care about parking fines, but because of the opportunity to see some of the huge chunks of data that we’re all pressing to have released automatically rather than only on request. This request (check out the wording for a great example of how to phrase these requests so you get everything you ask for) was put in by Heather Brooke as part of an investigation on the new Help Me Investigate site (disclaimer: I’m part of the community management team on this).

Plotting the data on a map (alongside other data) could show all manner of things — but more importantly raise questions that are worth investigating: are regulations enforced more in certain areas, does enforcement contribute to lowering the number of accidents on those roads, whatever anyone cares enough about.

But, not easily. Yet. The spreadsheets reveal that the location data in there is just shy of being able to be plotted on Google Maps (or similar) without altering:

[Screenshot: the released spreadsheet — road names with no town or postcode]

The locations aren’t detailed enough for plotting by the tools we might use quickly; for use internally at Birmingham City Council they’re fine, and as things readable by humans they’re fine. But to quickly pop something on a map there’s no tool I know of that will let you say “all these roads are in Birmingham, UK” — so the mapping software can’t plot them.

Of course you can write a script to add “, Birmingham, UK” to each (or do it by hand), but that’s not simple — it becomes the work of coders rather than “the public”. Will enough people be interested?
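
For what it’s worth, the coder’s version is only a few lines — a sketch assuming the spreadsheet has been saved as fines.csv with the road name in the first column:

```python
# The sort of script a coder would knock up: append ", Birmingham, UK" to
# each location so a geocoder can place it. Assumes the FOI spreadsheet has
# been saved as fines.csv with the road name in the first column.
import csv

with open("fines.csv", newline="") as src, \
     open("fines_geocodable.csv", "w", newline="") as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    writer.writerow(next(reader))  # pass the header row through unchanged
    for row in reader:
        row[0] += ", Birmingham, UK"
        writer.writerow(row)
```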

Yes, data in public is better than data kept private; yes, get any data out there so some people can use it; but to really unleash its power, data will need to be collected and stored with “free” use in mind. Are organisations ready for that?

Will Perrin on Digital Britain

POSTED IN Conferences & Talks, future web, social media | 18 June 2009

Someone I did get to talk to at the Digital Britain event was Will Perrin of Talk About Local — a new project to “give people in their communities a powerful online voice. We want to help people communicate and campaign more effectively and influence events in the places in which they live, work or play.”

Will is at the forefront of work in hyperlocal blogging, so I was interested in his view on how the report talked about local news as a priority worth paying for:

[Audio clip: interview with Will Perrin]

There are podcasts of all the speeches and panels, and more interviews, over at Rhubarb Radio.

BrumEmoMap

POSTED IN future web, geodata, social media, twitter | 1 June 2009

A wonderful and quick mash-up of Twitter and Flickr data, mapped with Yahoo, BrumEmoMap is a great example of the first thread of my Conversational Psychogeography idea. It allows people to tag places with emotion by using tweets or Flickr photos:

[Screenshot: “How are you feeling? #brumEmoMap” — BARG]

While it isn’t doing anything with the data — yet — the power comes from the way a folksonomy of location can evolve even on services (like Twitter) that don’t offer it directly. The deliberate placement and collaborative ethos may open the path to really usable data being collected. Very interesting.
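
The parsing step is easy to imagine — a guess at what it might look like, with a “#brumEmoMap <emotion> <place>” tweet format assumed purely for illustration (not necessarily what BrumEmoMap actually expects):

```python
# A guess at the parsing step: pull (emotion, place) pairs out of hashtag
# tweets. The "#brumEmoMap <emotion> <place>" format is assumed for
# illustration, not BrumEmoMap's documented behaviour.
import re
from collections import Counter

PATTERN = re.compile(r"#brumEmoMap\s+(\w+)\s+(\w+)", re.IGNORECASE)

tweets = [
    "#brumEmoMap happy Moseley",
    "#brumEmoMap gloomy Digbeth",
    "feeling great — #brumEmoMap happy Moseley",
]

matches = (PATTERN.search(t) for t in tweets)
counts = Counter(m.groups() for m in matches if m)
for (emotion, place), n in counts.most_common():
    print(place, emotion, n)
```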
