Yahoo started as a list of “cool sites” on the web. The founders categorised everything they found on a site called “Jerry and David’s Guide to the World Wide Web”; they just linked to things they liked. As web search was in its infancy (remember Lycos and AltaVista?), people liked the idea that someone was picking out the good stuff for them so they didn’t have to wade through page after page of barely relevant tosh to find something worthwhile.
It wasn’t scalable, though, and as the web grew Yahoo eventually became a search engine. Crucially, it left the job of assessing relevancy to a machine, which allowed far more content to be considered.
I’m watching the internet go through the same process again, both with people and with local social media content. There are directories of people to follow, some generated as lists, very much as Jerry and David did (“Cool people to follow on Twitter”), others more like submitting yourself to a “site directory”. We’d think this crass, inefficient and open to spam for websites now, but a lot of social media is still at this stage.
This is because “people search” isn’t quite there yet, although searching people’s tweets or Twitter profiles (the stuff they create and keep up to date, no human categorising required) is much more efficient. Check out TweepSearch and you’ll see it’s getting there; if it added the “recommendation” part of search (links to an account, Google-style, or @messages to it with relevant subjects in the tweet) it would be very useful.
We’re also seeing the idea of local aggregators gain a lot of traction, but aggregators lose out to search in almost all cases. The problem is that an aggregator relies on you having exactly the same needs as the person who built it. You can build your own “aggregator” by subscribing to the feeds and content you like in whatever reader you wish.
If an aggregator becomes useful by taking content out, then it’s a filter mechanism, not an aggregator, and a very different thing: you’re using a human as your filter, which is valid but again not scalable.
Aggregators are in themselves quite an old-fashioned way of thinking about content from different places. There’s more value in creating mechanisms (search, sorting, filtering, recommendation) that help people select only the relevant parts from different sources than there is in collecting content together. We’ve not yet seen the technological breakthrough that will make this simple and automatic (maybe attention profiling, and APML, will, but it’s not there yet, at least in terms of adoption), so we still need to rely on a human factor: sharing, “digging”, and so on.
For local sites we’re going to need more information about where sites and information are geographically interesting (fuzzy, because not everything has an epicentre), and this can only come from users. I’ve been dancing round the idea of Geo-attention data for a while, and think there are ways to capture it.
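To show what “fuzzy” could mean in practice, here’s a minimal sketch (my own illustration, not anything Geo-attention specifies): score an item by its distance from an approximate epicentre with a smooth decay, so relevance fades out rather than stopping dead at some boundary. The function name and the spread parameter are made up for the example.

```python
import math

def geo_relevance(item_lat, item_lon, user_lat, user_lon, spread_km=10.0):
    """Hypothetical fuzzy relevance score: 1.0 at the epicentre,
    decaying smoothly (Gaussian fall-off) with distance, so there is
    no hard edge around a piece of "local" content."""
    # Equirectangular approximation; good enough at city scale.
    km_per_deg = 111.0
    dx = (item_lon - user_lon) * km_per_deg * math.cos(math.radians(user_lat))
    dy = (item_lat - user_lat) * km_per_deg
    distance_km = math.hypot(dx, dy)
    return math.exp(-(distance_km / spread_km) ** 2)
```

The interesting bit is that `spread_km` shouldn’t be a constant at all: that’s the number you’d want to learn from users’ attention data, since a village pub and a regional newspaper have very different spreads.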
While it may work for some people, it’s unlikely that anyone’s selection of feeds is exactly right for many others. At most, pulling together an OPML file of feeds lets people conveniently subscribe to the lot and then add or remove as they wish.
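For anyone who hasn’t looked inside one, an OPML file is just XML, and “subscribe to the lot” means a reader pulling out every `xmlUrl` attribute. A minimal sketch (the feed names and URLs are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up OPML "reading list"; readers export and import files like this.
opml = """<?xml version="1.0"?>
<opml version="1.1">
  <head><title>Local feeds</title></head>
  <body>
    <outline text="Example Local Blog" type="rss"
             xmlUrl="http://example.com/feed.xml"/>
    <outline text="Example News" type="rss"
             xmlUrl="http://example.org/rss"/>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Return every feed URL in an OPML document, so a reader
    can subscribe to them all in one go."""
    root = ET.fromstring(opml_text)
    return [node.get("xmlUrl")
            for node in root.iter("outline")
            if node.get("xmlUrl")]
```

Which is exactly why the list is a starting point rather than the last word: importing it takes seconds, and pruning it afterwards is where your own needs come in.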
So, create your own lists by all means, but it’s unlikely that your list or aggregation will be the last word, so make sure it’s what you want rather than what you think others will need.