Thursday, 1 September 2011
It's a utopian vision of data gathering. But how many of us truly trust our own networks? Of course, they may be full of smart people with great connections of their own, people to whom we outsource the delivery of breaking news hours before the major news networks manage the same feat. But information must be broad as well as fast and smart. Does your network teach you things you never knew, or does it merely echo what you already know?
The great joy of the web is in serendipitous discovery; finding those nuggets of knowledge we never knew existed. Great projects to expand the sum of human knowledge are now, thanks to the web, well within our grasp, from efforts to digitise the previously unpublished words of Isaac Newton to attempts to document the Salem witch trials of the 17th century. Will your networks deliver information of this breadth? Unlikely. How successful can we ever be in finding experts on subjects in which we have no expertise? The filter is only effective if we can work out how to build it.
Filtering a firehose turns out to be just as challenging as drinking from one. What if the digital age's spikiest irony is that as information becomes less scarce, its very abundance becomes the biggest barrier to access? Consider Google. Its dominance of search has produced an unintended and damaging consequence for the organisation of the world's information. The highest-ranked articles achieve their position largely as a result of the number of web links to that content. The objective is relevance, but the system often promotes the mundane and popular over the rare and treasured. As Maria Popova writes at the Nieman Journalism Lab:
"An esoteric piece of content, however valuable and interesting, will remain confined to the niche community of scholars and hobbyists who have linked to it, ranking it low enough in Google's search results to prevent all but those actively seeking it out from accessing it and engaging with it. Instead, the trivial thrives and the remarkable remains rare."
Through Google's lens, all of the information we require is available, but how accessible is it? How many of us does it reach? It's one thing to organise the world's information and claim it as "universally accessible", but it's quite another to propagate a system of accessibility that rewards popularity over value. In Google's glass, the trivial floats right to the top, an oily slick of marginal data.
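The dynamic described above can be sketched as a toy model. This is not Google's actual algorithm (real PageRank weights links recursively rather than simply counting them); it is a deliberately crude simplification, and all the page names and link counts are hypothetical, chosen only to show how popularity-as-relevance buries niche content.

```python
def rank_by_links(link_counts):
    """Order pages by inbound-link count, most-linked first."""
    return sorted(link_counts, key=link_counts.get, reverse=True)

# Hypothetical pages and inbound-link counts.
pages = {
    "celebrity-gossip": 90_000,    # widely linked, ephemeral
    "viral-meme": 45_000,
    "newton-manuscripts": 120,     # esoteric, enduringly valuable
    "salem-trial-records": 85,
}

print(rank_by_links(pages))
# The esoteric pages land at the bottom regardless of their value.
```

However valuable the Newton manuscripts are, a ranking driven purely by link volume will always surface the gossip first.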
Shirky may have been right that our problem is filter failure rather than information overload, but that filter is in need of a good clean. Google's attempt to socialise the processing of content is a good start. Google+ divides relevance into circles, making it easier for the right content to be promoted to the right people. Another way is to take the filter straight to the search engine. Bing's ambitions to become a "decision engine" are underpinned by its links with Facebook, which allow it to deliver search results punctuated with the recommendations of friends.
This so-called "socialisation of search" returns us to our original question: how much faith can we really have in our own networks? It's an apposite question for those of us obsessed with the numbers. If SEO is largely a popularity contest, what does that make our quest for social capital? While the ultimate goal is more friends, more clicks and more visibility, social search will always struggle for relevance. How do you decide what's important with 5,000 friends clamouring for attention?
But social search does have its own built-in editor. Perhaps over time, the links between our search queries and our friends' recommendations may come to influence the very make-up of our networks. After all, bad advice is an effective filter, too.
Pic credit: catspyjamasnz
Friday, 6 May 2011
Nixon McInnes published an interesting graphic yesterday, comparing the number of tweets featuring the yes2av and no2av hashtags. It was interesting largely because it underscored one of the main reasons why the medium is still a poor conduit for market research.
In case you missed it, the No to AV camp triumphed. But using Twitter to try to predict that result would have proved disastrous. According to Nixon McInnes's data, there were over 28,000 tweets containing the yes2av hashtag but only 11,000 containing no2av: more than two and a half times as many yes tweets as no tweets.
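The arithmetic behind that skew is worth spelling out. Using the tweet counts reported above, a naive "count the hashtags" predictor would have called the referendum decisively for Yes, when the actual result went roughly two to one the other way.

```python
# Tweet counts from the Nixon McInnes data quoted above.
yes_tweets = 28_000   # tweets with the yes2av hashtag
no_tweets = 11_000    # tweets with the no2av hashtag

ratio = yes_tweets / no_tweets
yes_share = yes_tweets / (yes_tweets + no_tweets)

print(f"yes/no tweet ratio: {ratio:.2f}")            # ~2.55
print(f"naive predicted yes share: {yes_share:.0%}")  # ~72%
# The referendum itself went roughly the opposite way: about 68% voted No.
```

A 72 per cent predicted Yes share against a 68 per cent actual No vote is about as wrong as a prediction can get.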
Social media sentiment tools claim to offer a whole range of useful services, from stock market predictions to product-launch analysis. But the viability of these services rests on two assumptions: that sentiment can be analysed accurately on a large scale, and that the sample is an accurate representation of the whole population. Research by Daniel Gayo-Avello and by Panagiotis T. Metaxas and Eni Mustafaraj of the Department of Computer Science, Wellesley College, Massachusetts, suggests neither holds, showing that in the last US congressional elections "Twitter did no better than chance in predicting results".
Twitter’s UK population has long been associated with liberal tendencies. Telegraph writer and polemicist Milo Yiannopoulos worries that the site’s “hegemony of the Left” raises questions about “the contribution of social media to the national debate.”
“…doesn't the echo chamber on Twitter risk distorting discussion in the public square, giving a faulty impression of what most people actually think? And, with a media increasingly taking its cues from Twitter and Facebook, platforms colonised by spoilt, urban liberals, won't the so-called silent majority of Middle England become even more disenfranchised, and, consequently, suffer still further from under-representation and ridicule?”
Yiannopoulos oversimplifies the argument. To claim that Twitter's "spoilt, urban liberals" are able to bend Middle England's silent majority to their will ignores the openness of the medium. Twitter is a conversation open to everybody; it democratises social media like no other platform. But the numbers are indeed demographically skewed, and as a result Twitter's left-leaning politics is out of kilter with the society it purports to represent. Until Daily Mail readers join the debate in force, Twitter will remain an unreliable barometer of public mood.
Pic credit: inckognito
Friday, 25 March 2011
Ask any newspaper editor about reader appetites for long-running stories. Many intuitively feel that public interest begins to wane far sooner than coverage tails off. It’s a tough call. When does war stop being interesting?
The move from print to online offers journalists a far better idea of reader engagement, although an online front page is hardly the most democratic of content aggregators: important stories, as decided by editors, go at the top and are replaced hours, and in some cases minutes, later.
With certain stories, social media can provide a more accurate depiction of public mood. The magazine Fast Company employs data supplied by Crimson Hexagon, which uses a statistical human-assisted approach to monitoring tweets, to measure how conversations change once a new event happens. “On March 23, conversations about Elizabeth Taylor dominated some 500,000 tweets. Japan? Just 119,397. And Libya? Around 97,499.”
Tweets in general have an incredibly short shelf life. According to a report by Sysomos, only six per cent of tweets are retweeted, and nearly all of those retweets occur in the first hour; just 1.63 per cent of retweets happen in the second hour, and only 0.94 per cent in the third. Of course, the combined immediacy and brevity of Twitter make it an ideal medium for transient information. But it's interesting to note that stories of all sizes are prone to reader fatigue.
Editors are paid for their judgment. In the US, during the first 10 weeks of 2007, stories about the Iraq war accounted for 23 per cent of TV network news. In the first 10 weeks of 2008, that share had fallen to 3 per cent. On cable networks coverage fell from 24 per cent to one per cent, according to a study by the Project for Excellence in Journalism.
A daily tracking of 65 newspapers by the Associated Press confirms the trend. In September 2007, the AP found 457 Iraq-related stories on front pages. Over the following months, that number fell as low as 49. The cost of foreign correspondents, allied with falling ad revenues, is a key determinant, but editors intuitively understand the shelf-life of their biggest stories. Social media measurement provides the ability to back up those assertions.
Thursday, 10 March 2011
Advertising drives consideration, conversations drive sales. It’s the mantra of social commerce, and the central tenet of Facebook’s plan to monetise its enormous user base. While the social networking site’s current valuation is a juicy bet on the future value of its consumer data, the firm is likely to see a more immediate return from e-commerce: providing a platform for online retail brands to segment and target customers more effectively.
What Facebook wants to replicate, says head of international business development Christian Hernandez, is the mall effect, where groups of mostly young shoppers congregate, share information and offer mutual advice. Facebook allows brands to create virtual malls on its platform by allowing users to “like” particular products, automatically alerting friends via status updates. And offers can be delivered in real time, adds Hernandez. “Because you liked that red dress on Asos but didn’t buy it, Asos can send you a message through Facebook to say it’s now 30 per cent off.”
E-commerce has traditionally been a relatively lonely pursuit. More than one person can, in theory, hunch around a laptop, but with social media's help, large groups of buyers can combine to drive deals (Groupon), share content (Twitter) and influence buying decisions (Facebook) much more easily.
Social proof is an important influencer of purchasing decisions, hitherto missing from the online retail experience. Facebook's partnerships with brands such as Levi's represent a new opportunity for low-cost, highly effective marketing. The future of search is human.