Archive for the ‘Web 2.0’ Category
You tinker with national institutions at your peril! The redesign of the BBC News site has certainly got people talking, and the general reaction is not particularly good.
I didn’t realise quite how badly the new design was going down until I saw the latest BBC Editor’s Blog post which kicks off its response to readers’ comments with:
Reverting to the old design is not something we’re considering.
Ouch! Never good news when you’re having to directly rule that out at the start of your response. On top of that I found myself reading a blog post bookmarked by the BBC’s delicious feed that archly speculates “could the BBC News redesign be the saviour of newspapers” on the back of comments such as:
Day 2 of trying to use this new site has left me feeling ill and thoroughly grumpy and bad-tempered … If we don’t get old site back – perhaps as an alternative to the new one – then I’m seriously considering buying a daily newspaper again.
I’m finally cured of my addiction to the BBC News pages. I think it is called aversion therapy … Definitely time to start buying real newspapers again to read at my leisure in the sunlit garden.
Is this just a case of innovation unsettling people, and it will all settle down in a few weeks’ time like these storms in a teacup usually do? Remember the fuss about Facebook’s redesign last year, with reports that masses of users were about to quit the site in protest? Well, they never did – and Facebook has gone from strength to strength and just surpassed 500 million users. No one’s complaining about the design now – although you can bet that next time it’s changed, we’ll go through the whole cycle again.
So perhaps that’s just the same situation all over again with the BBC News site, exacerbated by the sense of ownership that many people have toward anything related to the BBC, due to the licence fee model of funding that makes us all “shareholders” to some degree.
As for myself, I can’t say I hate the new BBC site at all. It’s perfectly okay in many ways, certainly quite usable. And yet, as I tweeted last week:
Wondering why the new BBC news site isn’t sitting well w/me. It should do, but it’s just not; even tho’ it makes the old design look…old.
Certainly looking back at the old site that it replaced, the new design makes the previous one look cramped, dated and unambitious. But there’s something which even now just isn’t sitting well with me, and I have no sense of movement to indicate that I’m “adjusting” to it and getting close to reconciling my doubts.
That tweet generated more direct replies than almost any single thing I’ve put on Twitter before – again, an indication of the strength of feeling and sense of ownership involved. Among the things that people seemed to rail against were the strident, tabloid-esque red banner (there’s a Safari extension you can get to remove it, courtesy of Robert Brook) and the typography, which seems to attract particular ire.
Others say that it’s harder to find things on the page: I certainly find that, but it’s surely natural and inevitable with changes to any information-heavy site, and something I’m sure I’ll get used to. I’m finding I’m reading fewer stories on the site at the moment and that the editorial content seems far lighter and fluffier than usual, but I rather think that’s the summer slump in interesting and heavyweight material – we’ll check back on that again in the autumn.
So what is it about the site that isn’t working for me? Partly I think it’s because the redesign makes the BBC site look similar to so many others, such as CNN or Salon. It no longer looks distinctive or special, it’s just one of the herd.
This extends into the individual elements of the page – the right hand panels, the typography, the style of the footer – which all look familiar from other appearances: “you may also remember me from …”, like a web design Troy McClure. They’re all elements that within themselves have a great deal of style and panache, but brought together like this it feels more like a greatest hits rather than a coherent website.
Much has been made about the user testing done on the designs to get the BBC site to this point, and I can well believe it. I can see exactly where individual items have been steered and influenced by user experience best practice memes, almost as though I’m reading a text book on the subject. But I’m always distrustful of being completely reliant on usability tests and focus groups: you can so easily end up with something designed by committee that looks good in pieces but as a whole is so much less than the sum of these parts.
And that’s where I think the BBC News site is for me at the moment: it’s something of a Frankenstein’s monster of a design, where you can still see the individual joins and where the pieces don’t comfortably fit into an overall aesthetic. Scroll down just the left hand side of the home page, for example, and the initial run of headlines in a two-column measure is broken into by an extremely domineering blue panel using a tabbed metaphor for UK and world regionalisation; then, as soon as you’ve absorbed this, you’re suddenly back to news headlines but now in a cramped four-column measure, while over on the right hand side the content’s run out and we suddenly have an awkward block of white space. And then finally we get into three different sections of footer content, all with their own different column measures, before the page is finally allowed to come to rest.
Every part of the page seems to shout “Look at me, I’m most important, forget the rest!” and so it can feel a little like being visually mugged. I do find it most strange how loud and busy the site has become, how tabloid-brash: compare that against something like Facebook, which should be a lot less conservative than the BBC, but which is styled in a low-key, quiet way that makes it feel more like the Daily Telegraph than a hyperactive social network site that appeals even to teens.
It’s not just the home page – this design aesthetic continues into the news reports themselves, with the designers having fun pulling quotes and pictures out into a central column of white space that’s suddenly appeared; sidebars are pulled in, overlapping either into this column or into the article itself. It’s the equivalent of pulling out all the stops and using every trick in the book, and if it’s not done with huge discipline or an overarching sense of direction, purpose and vision then it can just leave the page looking frantically busy, and frankly exhausting. Its “shouty, shouty” style is the web-design equivalent of the “pointy pointy” use of 3D-for-the-sake-of-it in movies so loathed by film critic Mark Kermode.
There are a few other tangible changes I dislike, such as the removal of links to the RSS feeds for the various sections (they’re still around, but in perversely illogical places – most of all, it’s the fact that the BBC seems to be signalling that it’s giving up on RSS promotion as a whole that irks me). The switch to horizontal global navigation is also a problem, as the section headings simply don’t scan very well – and there are too many rows of links in that header now.
Of course, any site is a work in progress: there will be changes, improvements and enhancements to the site as it beds in. Already, unless memory deceives, the layout of the “above the fold” part of the left hand part of the home page has been considerably improved, with more stories cleanly laid out which gets rid of the odd patches of non-design white space that the page initially seemed to have lying around. That’s made a huge difference.
Still, design is a very personal thing. While I and others might be struggling to adjust to the new style, others already love the new site. And certainly, sceptical as I am of the new design, I’ve already conceded that it makes the previous design look tired and dated, so the BBC is right to say there’s no going back. You can’t go home again once you’ve been to the bustling big city, even if the big city is loud and messy and dangerous, rather than paved with gold.
Most of the rest of the “issues” with this site design will doubtless be addressed as time goes on, especially if the BBC listens to its users (which it’s very good at doing, sometimes to a fault.) In the meantime, those of us who are finding the transition difficult should take heart that change, while difficult, is also a fascinating experience – and an endless source of material for opinionated bloggers!
I’ve always been just a little bit dubious and sceptical of these “grass roots”, “from the ground up”-type mashups. Do they really, genuinely happen? Can they ever achieve anything of real value, or do they just create toys for the geeks? Well, this week I saw such a mash-up evolving close up, and I admit – I’m a convert.
It started on Sunday afternoon, when the forecasters were issuing warnings of a “severe weather event” (honestly, is no corner of our daily lives free from corporate speak?) All we were seeing was a few isolated flakes, or the briefest of flurries instantly clearing up to be replaced by some glorious sunshine, which we shared in a few isolated tweets on Twitter. “Is this it?” we wondered; “Have the forecasters just been crying wolf once too often?”
Then a few people in places like Cambridgeshire started reporting more sustained flurries; and the #uksnow hashtag started to appear, which I cheerfully adopted. Apparently I was one of the very early adopters, because I got a message from twitter trends aggregator twopular pronouncing me to be “among the top trend setter for trend ‘#uksnow’”. Oooh, the accolade.
And then up popped Paul Clarke who commented that it would be a good idea if these snow findings were recorded in some sort of usable format so that – if anyone fancies it – they could be mashed up with Google Maps to paint a real-time picture of the snow front’s progress.
I was sceptical that this would actually catch on or get picked up by anyone, but I was game and started adding #uksnow KT6 1/10 to record a very light trace of snow in KT6 (Surbiton, South-West London), if only because it seemed like a nice idea to format the posts that way. And then, whaddaya know, the #uksnow [Postcode] [scale out of 10] format was everywhere I looked.
The big test was whether anyone would actually take any of this spontaneously organising data and make something of it. Lo and behold, developer Ben Marsh did exactly that – creating the #uksnow Tweets mashup. And it was … very good. Really, very impressive for something thrown together on the spur of the moment. It even got a mention from the BBC on Rory Cellan-Jones’ dot.life blog. And what was really cool about this was that it delivered something that millions of pounds’ worth of satellite technology was struggling to achieve: a real-time picture of the progression of snow across the UK.
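What made a mashup feasible at all was how machine-readable the ad-hoc convention turned out to be. Here’s a minimal sketch of how such tweets might be parsed – the regular expression and function name are my own guesses at an approach, not anything taken from Ben Marsh’s actual code:

```python
import re

# Hypothetical parser for the ad-hoc "#uksnow [Postcode] [score]/10" format.
UKSNOW = re.compile(
    r"#uksnow\s+"
    r"(?P<postcode>[A-Z]{1,2}\d[A-Z\d]?)\s+"  # outward postcode, e.g. KT6 or SW1A
    r"(?P<score>(?:10|\d))\s*/\s*10",          # snow intensity out of 10
    re.IGNORECASE,
)

def parse_uksnow(tweet: str):
    """Return (postcode, score) if the tweet matches the format, else None."""
    m = UKSNOW.search(tweet)
    if not m:
        return None
    return m.group("postcode").upper(), int(m.group("score"))

print(parse_uksnow("Light dusting here! #uksnow KT6 1/10"))  # → ('KT6', 1)
```

Anything that matched could be geocoded by outward postcode and dropped straight onto a Google Map – which is presumably roughly what the mashup did.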
Since I procure web projects for a living, I couldn’t help but wonder how long a project like this would have taken to do commercially. Nothing fancy, just exactly this sort of project. I immediately envisaged hours of briefing and scoping meetings, discussions about functionality and design and interface. The procurement alone would easily have taken the better part of a month, and the project itself probably as long again – even assuming anyone had found a workable solution. And yet instead, the whole thing had manifested itself – without project management, without a brief, without a client – simply out of the time and contributions of tens of thousands of people. It really was the most perfect jewel of an example of crowd-sourced mash-up potential being realised, enough to convince even the most hardened sceptic (as I had been) of its true benefits.
As a follow-up to this, on the Monday there was a Twitter challenge from Tom Watson MP, the Minister for Transformational Government, to reconfigure the Directgov site to carry news about school closures. He even bought a URL for the purpose, www.schoolclosures.org.uk, telling the Directgov team that “if you did it for tomorrow am, you’d be heroes“. And could they do it? Well – yes they could: a feather in the cap for Directgov’s newly unveiled Innovate microsite.
Reactions to this have varied. Tom Watson himself was thrilled, Emma Mulqueeney put aside any quibbles to delight in the fact that it had simply been done, Simon Dickson gave a characteristically informative, interesting and even-handed account – and “Irish opportunist” Paul Walsh hated it.
The problem with it – compared with the #uksnow example – was that it just wasn’t organic. Instead of evolving from a group of people, it had been the (admirable) work of a couple of developers in response to a Minister’s suggestion. There was no crowd-sourced information to rely on – all that could be done was to link back to the various council websites for information, so it came across as a rather unnecessary extra search-engine layer which still resulted in frustration when the destination council site contained nothing more about what was going on than you already knew sitting at your keyboard 5 minutes ago.
The beauty of #uksnow was that it produced information where none was available before, and then used a Google Map mashup to deliver the results visually. But there was no new information for school closures, and it was the lack of on-the-ground information together with a centralised approach that meant School Closures lacked some of the “gee, wow” sense of the #uksnow adventure.
So why can’t we crowd-source information on school closures? Well, you can imagine how quickly that would be overrun and corrupted by hordes of school kids Twittering false reports that their school was closed so that mum and dad aborted the school run. There was no such self-interest involved in mapping #uksnow, and any variations in reports quickly evened out statistically. But it’s a shame to have started a project like School Closures and then not think it through further – how to analyse the incoming tweets for frequency, reliability, contradictory reports, “trusted” accounts and the like, to make it possible to gather this information from the ground rather than relying on local councils to get staff in to update their websites.
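That sort of filtering needn’t be complicated. As a purely hypothetical sketch – every account name and threshold here is invented – you could require several independent reports per school and weight vetted accounts more heavily, so that contradictory reports cancel out and a lone hoaxer can’t tip the balance:

```python
# Invented example: decide whether to believe a school is closed from
# weighted, possibly contradictory crowd reports.
TRUSTED = {"council_alerts", "headteacher_jane"}  # hypothetical vetted accounts

def school_closed(reports, min_score=3):
    """reports: list of (username, says_closed) pairs for one school.

    Trusted accounts count triple; 'open' reports subtract, so
    contradictions cancel out. Returns True only past a threshold.
    """
    score = 0
    for user, says_closed in reports:
        weight = 3 if user in TRUSTED else 1
        score += weight if says_closed else -weight
    return score >= min_score

reports = [
    ("pupil_123", True),       # could easily be a hoax...
    ("pupil_456", True),
    ("council_alerts", True),  # ...but a trusted source tips the balance
]
print(school_closed(reports))  # → True
```

A single unverified pupil claiming closure would score 1 and be ignored; the point is simply that the raw tweets aren’t the answer, the analysis of them is.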
SchoolClosures.org.uk is a decent first step for Directgov’s innovate team, make no mistake – especially since it was just hours after their official unveiling. But let’s hope for some follow-through and sustained innovation on this and many other fronts, and not just quick “stunt” bursts that are quickly shelved and forgotten.
Three times in the last 24 hours, I’ve learned about big stories first through Twitter. And when I’ve gone in search of confirmation of the story on a mainstream website, it hasn’t been there for another 10-15 minutes. It seems Twitter’s changing the pace of online news.
The first story was about Steve Jobs stepping aside as Apple CEO for a leave of absence. Such a story immediately cried out for a pinch of salt, as there have been plenty of false stories like this in the past six months – some from hoaxers looking to make a quick buck on the stock market, others accidentally triggered by reputable news services like Bloomberg publishing in error. The Steve Jobs story this week was ‘broken’ on the Twitter stream of Mashable within 6 minutes of the news going to Apple employees, and quickly went around everyone on Twitter. The problem being: how do you know such a story is true and not just the echo chamber of everyone repeating an incorrect rumour or hoax? Well, it turns out that Twitter – or at least the people I follow – is pretty good at distinguishing fact from fiction. While there was a healthy trace of scepticism from everyone until the official word got through, it was equally clear that something was happening, and on the whole people trusted it and dug out supporting evidence as it emerged. In short, Twitter behaved like the best top-skilled and hyper-fast professional newsroom you could hope for.
The second story was the early leak of the UK Government’s decision to green light a third runway at Heathrow. Here, there was little doubt about the credibility of the story – the BBC broke it on their website – and the role of Twitter was to spread the word. I wasn’t checking news sites at the time and the official news alerts didn’t go out for some time, but Twitter was abuzz with howls of dismay and protest at the decision within minutes, and so it was hard not to notice the story on my Twitterific app that runs in the background on my desktop as I work. Who needs alert emails and RSS feeds when it’s going to land on your desktop in seconds through your Twitter contacts?
And then today there was the story of the US Airways plane that crashed into the Hudson River in late afternoon EST. Twitter was blazingly fast on this one, not least because the jet couldn’t have picked a more public place to have to ditch. Again, I found instant Twitter messages cascading into my app window and there was no doubt about this one: too many unrelated people were coming up with too many different angles and accounts. It was the journalistic gold standard, not just story confirmation but confirmation times a hundred. Twitter even managed to get the first photo of the plane online, thanks to a user on a ferry in the Hudson that was co-opted to pick up survivors, who took the photo with his iPhone and uploaded it as he sent his Twitter post – and was subsequently interviewed on the major news channels including BBC News for his troubles. Robert Scoble calculated that there were initially “about two tweets every 10 seconds coming in. Within minutes that went up to 200 to 400 Tweets every few seconds.”
Once again I turned to the BBC and CNN websites for more coverage – only to find none for almost half an hour. The best information continued to be from Twitter, with a special mention for BNO News which has been consistently fast – and reliably accurate – in bringing breaking news stories to the Twitterverse for some months now. The story developed with incoming Tweets describing how the passengers escaped onto the wing and waited on the sinking plane until rescue craft came to take them off; you could even find a website that showed the aborted flightpath of the jet before it ditched in the Hudson. But fortunately it was a happy outcome: despite some injuries, it seems everyone has survived, a quite remarkable escape for all concerned – and proof that those stupid flotation devices on planes really can and do work when they need to. And also proof that airline pilots can be and are genuine heroes.
The Guardian and Silicon Alley Insider both have stories on the Twitter coverage, showing how Twitter seems like crowd sourcing at its best. Even BBC staff seem to be using Twitter for news, with their technology correspondent Rory Cellan-Jones describing Twitter as “like a very fast, but not entirely reliable news agency.” But we’d better enjoy these admiring reports on Twitter from the mainstream media while they last, because the traditional media playbook dictates that now they’ve set Twitter up on high, it’s surely about time to tear it down again. A big false alarm story will be next; maybe a flash meme about an assassination attempt on Barack Obama will spread like wildfire and be used by traditional outlets to show how unreliable and gullible Twitterers are. Or failing that, the old Daily Mail play – how Twitter is a breeding ground for paedophiles grooming kids, or maybe that it’s an evil den of sin where the terrorists are all meeting and plotting.
Oh, no, wait – the US Army already tried that last one, didn’t they?
An article that particularly annoyed me this weekend was Fortune Magazine’s piece on why “Web 2.0 is so over. Welcome to Web 3.0.” It’s wrong, misguided, ill-founded – and deeply damaging. In other words, exactly what we have come to expect from capitalist speculators and bankers.
Fortune’s argument seems to be that Web 2.0 is dead because financially speaking it’s been a total bust. Hence, time to move on to the next wave – Web 3.0, whatever that proves to be. That seems to radically miss the point of Web 2.0, and indeed underlines how bankrupt modern capitalism seems to be in thinking that the only point of anything is how much money they can make: they know “the price of everything and the value of nothing” as the cliché goes. It misses the biggest point of all: no one is making money in 2008 and 2009. With the advertising collapse we’re seeing media networks (such as ITV, Channel 4 and the New York Times) and retail outlets (such as Woolworths and Zavvi) unravel and collapse. Even banks have come perilously close to utter wipe out. So why pick on the nascent Web 2.0 industry because, like everyone else, it is struggling to make money in this environment?
Such blinkered thinking would never have created the Internet in the first place, and these people are very poor inheritors of the title ‘capitalist’ from the true visionaries of the past, who had to try, fail, and try again over the long term rather than expecting a revolution and big profits in a matter of months. It seems particularly perverse to declare Web 2.0 dead just when Facebook is reporting a massive rise in numbers, and all the experts are predicting that 2009 will be the year of Twitter’s explosive rise to prominence, as prefigured in the spate of press articles about this or that aspect of the microblogging service. In the UK even The Sun and the Daily Mail – hardly tech-savvy newspapers – are becoming obsessed, at least with the celebrity users of the system.
As Steve Wheeler puts it in his blog Learning with ‘e’s:
The whole point of Web 2.0, is that it’s not about making profit or screwing over the opposition. It is not about creating killer applications either. That’s because Web 2.0 is not and has never been about tools or services, many of which have been around almost as long as the Web itself. No, Web 2.0 is more about how people are connecting, sharing and communicating using the tools and services. There never was a revolution on the Web. It was always an evolution.
That sentiment is so anti-modern capitalist thinking that it’s no wonder that magazines like Fortune don’t “get it.” What is more troubling is how leading figures in the Internet/web/technology fields seem equally quick to want to declare Web 2.0 dead and buried. We already had Wired Magazine’s article on the death of blogging back in October, but now even experts like Simon Dickson over at Puffbox.com are taking similar “Web 2.0 is just so 2008” lines.
It’s not helpful; in fact it’s positively damaging – not just to current Web 2.0 hopes but to future Web 3.0, 4.0 et al as well. We’re just getting to the stage in industry and the public sector where key opinion formers are looking up, noticing Web 2.0 and considering dipping their toes in the water. There seems to have been a real shift in the last six months: finally, interest in Web 2.0 has been reaching out beyond the small beachhead of the natural innovators in organisations to rank-and-file managers and workers. But headlines declaring blogs and Web 2.0 dead will send these people scuttling back under cover once more, and give them a reason to ignore it the way they tried to ignore the web back in 1999 – and make no mistake, many of them still believe that the IT bubble bursting back then proved them right in thinking all this web stuff would just go away. They’re only just noticing that actually, under their noses, the Internet crept back in and changed literally everything. If we lose these key opinion formers again and give them a reason to go to ground, then it will be that much harder to get them interested in Web 3.0. And 4.0, and anything else, because they’ll always come back with the argument “we were right about Web 2.0, so you can’t convince us.”
I can completely sympathise with experts who have been working in Web 2.0 and are getting, frankly, a little bored and impatient. They want to move on and find shiny new exciting things: I get it, I really do, I’ve been there myself. But no matter how excited an architect is to get the dream penthouse built, they still have to ensure floors 1 through 20 are built first, otherwise the whole thing will come crashing down around their ears. So come on, people, let’s not rush to the graveside of Web 2.0 just yet: let’s do the job and finish it properly first.
[Oh, and by way of P.S., let me just add: I've never liked the term 'Web 2.0.' It's pretentious and empty and collects together a whole bunch of disconnected technologies, some of them 15 years old and some of them really new without any sort of sense of internal logic. I'm not wild about 'social media' either but right now it's the least worst common term available, so I guess it'll have to do. But the sooner the generic 'Web 2.0' term dies the happier I'll be.]
Until quite recently I’d never used the iTunes Store, and had very little exposure to ‘micropayments’. Now, I have to confess – I’m a huge fan and convert, and wondering how the micropayment system can be used in the future.
I hadn’t bought anything from iTunes until the middle of this year. For one thing, I had an antique iPod that was already stuffed full; for another, I didn’t even have broadband until the start of this year, so downloading MP3s just hadn’t been on the agenda.
With the arrival of broadband, I did finally make my first MP3 purchase, which for the record was “Eyes” by Rogue Wave for 79p, a track I’d really liked from numerous outings on the TV series “Heroes”. Even then, I probably wouldn’t have bought just a single track – it’s just that it wasn’t available on any album in the shops, so it was for lack of any alternative that I was manoeuvred into breaking my iTunes duck.
That got me through the hurdle of registering and entering my credit card details, but it didn’t exactly open the flood gates and I didn’t return to the iTunes store until well into the autumn. The trigger was undoubtedly getting a new iPod – or iPhone in fact – since that suddenly made getting new music that much more appealing as a way of playing with my new shiny toy.
And of course the other thing about the iPhone is … all those enticing little Apps to download. I originally decided I’d only ever try the free ones, a philosophy that I did indeed stick to for literally a day. Maybe two. But in the end, I found it hard to resist or argue against getting an app costing 59p. The most expensive app I’ve bought from iTunes is Bylines for £3, an RSS reader with the specific feature of working through my Google Reader account and keeping synchronised.
It seemed positively churlish to start wavering about a few odd pennies here or there, and the amounts were negligible compared to almost every other purchase I make during the day. For the price of a Starbucks coffee I could get three MP3s or a couple of pretty decent paid apps, after all – and how much thought do I give to a morning coffee?
So at some point in the past month, my subconscious mind had come to the conclusion that buying things on iTunes of this sort of cost is just not worth bothering my conscious mind with. Go ahead, do it, the time spent hesitating and thinking about it is worth more to you than the pennies we’re contemplating.
Which is of course the power of micropayments – and especially so in the deadly combination of one-click payments used by the iTunes Store.
But it certainly surprised me how I was turned around from being uninterested in the iTunes Store to being so casual that I’d buy a specific track just because I’d heard it on a TV soundtrack seconds earlier, or an app for my phone because someone had just shown it to me on theirs.
This kind of casualness with online payments is the Holy Grail of online commerce. At the start of internet commerce, online shops first had to overcome the basic hurdle of persuading people that any sort of online transaction was safe; now that battle has been won, and people will book hotels, hire cars, and buy goods from books, CDs and DVDs right up to washing machines, furniture and cars without too much concern.
But the small, casual purchase is still a problem – partly because of the economics of micropayments. The costs of credit and debit card transactions make charges of less than a pound uneconomical in general. (iTunes gets around this by aggregating a week’s worth of purchases and charging for them in one go, hoping that by that point you’ll have racked up enough purchases to make the transaction fee worthwhile. It’s not ‘real’ micropayments, but it’s the nearest, best equivalent around at the moment.)
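The arithmetic behind that aggregation trick is easy to sketch. The fee figure below is purely illustrative – my own assumed number, not Apple’s or any card network’s – but it shows why a flat per-transaction fee swamps sub-£1 charges unless a week’s purchases are batched into one charge:

```python
# Toy illustration of micropayment economics, working in pence to avoid
# floating-point money. The flat card fee is an invented example figure.
CARD_FEE_P = 20  # assumed flat fee per card transaction, in pence

def card_fees(purchases_p, batched):
    """Total card fees for a week of purchases (in pence),
    charged either per item or as one aggregated weekly charge."""
    return CARD_FEE_P if batched else CARD_FEE_P * len(purchases_p)

week = [59, 79, 79, 300]                   # a week of small iTunes-style buys
print(card_fees(week, batched=False))      # → 80: fees eat a third of a 59p sale
print(card_fees(week, batched=True))       # → 20: one charge covers the lot
```

On these assumed numbers, per-item charging turns a 59p app into a 39p sale; aggregation restores most of the margin, which is presumably why iTunes bills the way it does.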
The experience of the iTunes Store shows that – if a retailer can make the economics work for them – then micropayments are by far the best way of getting people to buy things online, because they’ll quickly demote the purchasing to their subconscious brain and not have the same conscious resistance to pressing the ‘buy’ button that comes with larger purchases.
In other words: think of how many things you can sell if you can persuade people to buy online as casually as they buy a coffee or a newspaper at the train station.
Ahh yes, newspapers. Here’s a very real and very specific case in point. The newspaper industry is in crisis worldwide: people don’t need to buy a newspaper any more (the news is out of date before the paper gets to the stand) and they’re not paying for the information online either. So how are newspapers going to keep afloat?
You can argue that newspapers are relics of the mid 20th century and really won’t be missed if they disappear for good, but I have to disagree. Even in these days of Web 2.0 user generated content, an awful lot of the most interesting stuff that gets passed around via social media is material produced by journalists for one publication or another. Without the journalists doing their investigative work we’d all be poorer and have a lot less to Twitter about – and at the end of the day someone has to pay for it.
But how, when experiments have repeatedly shown that people just won’t stump up for online content?
I think they would, if access to their day’s news content cost no more (and preferably rather less) than the kind of money they’re used to forking out for a daily paper. Trouble is, the economics against micropayments have meant that online news sites have only ever been able to try out long-term subscriptions – asking $99 for six months’ commitment. That’s a very big barrier to most people, and one that certainly engages the conscious brain, which is immediately hostile to the idea.
But it’s not quite as easy as each site charging 50p (or 50¢) for access to today’s online site. Because the deeper problem is that people don’t want to buy an entire site for a day; they may only be interested in one or two articles. And they’ll be interested in one or two articles from maybe a dozen different sites – and if you’re asking people to shell out £6 (or $6) for one day’s reading, then once again you’re starting to rouse the conscious mind, which is going to start getting deeply irked about this.
The iTunes system realised this early on, and it’s why Apple insisted on being able to sell individual tracks and not just whole albums. The difference is huge: when I hear a track on TV, I don’t think about buying the entire album – and more often than not I’d decide against it, because I don’t know if I’d like any of the other tracks and don’t want to waste the money just for the one I know I do like.
It’s the same with newspapers. I no longer want to pay for an entire paper just because there are one or two articles in it; I want to just read those specific articles. And if a whole paper is worth 50¢ or 50p then a couple of articles can only cost 1¢/1p tops, right? Well, newspapers have never tried that because there’s no way that they can make 1¢ or 1p charges work financially. But it’s about the only way that users are ever going to accept the need to pay for their online content.
So okay, here’s a suggestion: let’s get the newspapers to join up and create an iNews Store. All of them. Then you can either sell a day’s access to all the partner news sites for one 50¢/50p payment, triggered the first time the reader tries to access a pay-for article, or else you count up and aggregate all the 1¢/1p charges through the day and the week and bill at the end.
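The aggregated-billing idea above can be sketched in a few lines of code. This is purely illustrative – the class name, prices and data shapes are all my own assumptions, not anyone’s real payment system – but it shows the key property: per-article 1p charges accrue through the day, capped at the 50p day-pass price, and the period total is billed in one transaction at the end.

```python
from collections import defaultdict

# Hypothetical sketch of the aggregated micropayment model described above.
# All names and prices are illustrative assumptions.
ARTICLE_PRICE_PENCE = 1
DAY_PASS_PENCE = 50

class MicropaymentLedger:
    def __init__(self):
        # maps (reader, date) -> set of article ids read that day
        self.reads = defaultdict(set)

    def record_read(self, reader, date, article_id):
        self.reads[(reader, date)].add(article_id)

    def daily_charge_pence(self, reader, date):
        n = len(self.reads[(reader, date)])
        # aggregate the 1p charges, but never exceed the day-pass price,
        # so a heavy reader effectively buys the "all sites" day pass
        return min(n * ARTICLE_PRICE_PENCE, DAY_PASS_PENCE)

    def bill_for_period(self, reader, dates):
        # summed at the end of the week and billed as one transaction,
        # keeping each individual charge below the conscious-brain threshold
        return sum(self.daily_charge_pence(reader, d) for d in dates)

ledger = MicropaymentLedger()
for art in range(3):
    ledger.record_read("alice", "mon", f"guardian/{art}")
print(ledger.daily_charge_pence("alice", "mon"))        # 3 articles -> 3p
for art in range(200):
    ledger.record_read("alice", "tue", f"times/{art}")
print(ledger.daily_charge_pence("alice", "tue"))        # capped at 50p
print(ledger.bill_for_period("alice", ["mon", "tue"]))  # 53p for the week so far
```

The cap is the important design choice: it means no reader can ever do worse than the flat day-pass price, which removes the “meter anxiety” that makes people hostile to per-article charging.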
It won’t be popular at the start because people are used to getting something for nothing. And the longer this problem is left hanging in the air, the harder it will be to get people to come back into the spirit of paying for content of value at all – even subconsciously through micropayments and one-click.
This weekend’s biggest internet story is without doubt about an album cover deemed obscene leading to content on Wikipedia being banned by British ISPs. While a minor story in itself, it has some profound implications for the internet and for freedom of speech in the UK.
The album cover – from a 1970s German rock band called the Scorpions – apparently features a naked young girl on it. Despite having been around for over three decades, it came to the attention of the self-regulating Internet Watch Foundation on Friday, which placed it on the blacklist used by at least six British ISPs.
As a result, anyone trying to access the page on Wikipedia containing the album art gets a message “we have blocked this page because, according to the Internet Watch Foundation (IWF), it contains indecent images of children or pointers to them; you could be breaking UK law if you viewed the page.” Or they may just get 404 errors or blank pages.
It seems extraordinary that an album made 32 years ago could suddenly be found to fall under the auspices of the Protection of Children Act 1978 as amended in the Sexual Offences Act 2003, which:
makes it an offence to take, make, permit to be taken, distribute, show, possess with intent to distribute, and advertise indecent photographs or pseudo-photographs of children under the age of 18. The ‘making’ of such images includes downloading, that is, making a copy of a child sexual abuse image on a computer, so, in the UK, accessing such content online is a serious criminal offence
according to the IWF on their website in an article updated on November 28. The image is not illegal in the United States, and the album has never been banned in the UK and is still available in shops.
According to the Guardian: Sarah Robertson, director of communications for the IWF, said the decision to ban the page, taken after consulting the UK’s Child Exploitation and Online Protection (CEOP) agency, was being reviewed. “The assessment was done in partnership with law enforcement … the Scorpions image was deemed to be one on a scale of one to five, where one is the least offensive.”
“The question is how far this episode challenges current UK practice around censoring content online,” the Guardian article goes on, quoting Becky Hogge of the campaign organisation the Open Rights Group.
The BBC dot.life blog also has an interesting piece on the ban by Rory Cellan-Jones.
The placing of Wikipedia – one of the internet’s most visited sites – on the British ISPs’ blacklist has already had major impacts, because all traffic to Wikipedia from those ISPs is now being channelled through a very small number of filtering proxies. In essence, most UK traffic to Wikipedia now appears to come from just six or seven IP addresses – and the IP address is the form of identification Wikipedia uses to control anonymous access to the editing controls of the online encyclopaedia. Because of that, those IP addresses – and hence most UK users – are now shut out of editing (although presumably they can still create accounts and edit that way, assuming the IWF action doesn’t interfere with the login process).
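A toy model makes the mechanism clear. This is not the IWF’s actual infrastructure – the addresses and numbers below are invented for illustration – but it shows why funnelling a whole country through a handful of proxies breaks IP-based moderation: thousands of distinct users collapse onto the same few visible addresses, so blocking one vandal’s IP locks out everyone behind that proxy.

```python
# Illustrative only: six hypothetical filtering proxies standing in for
# the ISPs' blacklist infrastructure. Every user's traffic exits through
# one of them, so Wikipedia sees only six source addresses.
PROXY_IPS = [f"212.0.0.{i}" for i in range(1, 7)]

def visible_ip(user_id: int) -> str:
    # crude stand-in for whichever proxy a given user's traffic takes
    return PROXY_IPS[user_id % len(PROXY_IPS)]

# Wikipedia blocks the address it saw one vandal (user 42) editing from...
blocked = {visible_ip(42)}

# ...and every user who happens to share that proxy is locked out too.
locked_out = sum(1 for u in range(60_000) if visible_ip(u) in blocked)
print(locked_out)  # 10000 of 60000 users share that one address
```

The same collision works in reverse: Wikipedia cannot distinguish any of those 10,000 users from the vandal, which is why the practical outcome was to shut most UK users out of anonymous editing entirely.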
Ironically, some users working in the public sector are blocked from seeing the image through their domestic ISP … but can access it from work via the Government’s internet access.
I’ve seen the image (I didn’t go looking for it, but it was in one of the news stories about the fuss) and frankly it’s not worth a fraction of the fuss. I’m obviously not in any way in favour of child porn, but this is very far away from the type of ‘porn’ that the Act had in mind – or should have had in mind – when it was drafted. If this is child porn actionable under the Act, then parents everywhere face mass arrests for taking any sort of pictures of their children in any state of undress.
Leaving aside the argument about what is and what isn’t child porn, and whether or not people should be able to access it, it’s the implications of the IWF and ISPs’ weekend actions that really concern me. Think it’s not going to affect you? Well, it’s already reported this morning that Amazon now faces similar action and blacklisting because of the presence of the album for sale. If Amazon – and presumably other music retailers – is locked out at the peak time of year then we’ll see a mass of complaints from the public – and quite possibly the mother of all lawsuits for unlawful infringement of trade issued against all and sundry.
Rather naively, I’ve always thought that in the UK, freedom of speech was such that we could never have the type of draconian situation that we saw in the summer Olympics where China could decide which sites were and were not accessible to athletes, journalists, politicians and of course ordinary people. I thought the internet – designed for surviving a nuclear war, don’t forget – was robust enough that any attempts to block or control would be hopelessly ineffectual.
So the real shock of this to me is not about a questionable image or the technical problems of Wikipedia, but instead is the realisation that we have sleepwalked into a set-up where there is a system in place for controlling access to any or all information online at the flick of a switch. The controls have been progressively put in place as a result of slowly incremental legislation passed by a sleepwalking House of Commons with scant technical understanding of what they’re doing, under the misleading crusading banners of “decency” and “anti-terrorism”. We’re starting to see these measures link up and assert themselves and finally beginning to see what they are capable of when used in the real world. And it’s not a pretty sight.
You can argue that the situation is not the same as China, because it’s a charitable organisation like the IWF and not a government bureaucrat in charge; and no one is forcing the ISPs to comply. Well – why is it better that our national internet access is controlled by an unaccountable, unregulated bunch of moralistic do-gooders? And as for no one forcing the ISPs to comply – that’s set to change in the next year with the government already planning to enact new laws to put the IWF into exactly that mandatory gatekeeper position.
The Government probably thinks it can get away with it as long as it doesn’t look as though politicians’ fingerprints are anywhere too close, but the IWF will respond to government edicts about what’s right and proper with alacrity. We’ve already heard Hazel Blears attack political blogs as “a dangerous corrosion in our political culture” so how long before the IWF decrees those to be against the law or corrupting our morals and do a blanket ban of any such blogs? Sounds like a perfectly proper, moral argument being presented to do just that, after all. Which could be any blog disagreeing with the party of the day … Now is it starting to sound just a little bit like China?
Yes, this is all getting a little overcooked and alarmist. But then I’ve always been something of a zealot about freedom of speech and about loathing the surveillance society, to the point where I thought that my occasional rants were getting a little over the top even for me – don’t get me started about CCTV and identity cards, for example. And yet here I am today, caught out and aghast at how far it’s already gone: it’s far beyond my own over-the-top rants. Now I’m thinking that I’ve been too quiet, laid back and ambivalent about this, rather than the opposite.
So I ask you: think of the number one thing you would hate to lose online. And now realise, there’s a very good chance that it can and will be taken away because of the situation we’re sleepwalking into.
Want to wait till it happens? Or do something about it now?
UPDATE: IWF has reversed its decision and removed the Wikipedia page from its blacklist. The IWF states:
“in light of the length of time the image has existed and its wide availability, the decision has been taken to remove this webpage from our list.”
Just as well, because it was reportedly getting swamped with “helpful” tips about where the same image was available on a load of other retail and music sites. Amazon US had previously removed the image from the product page of the Scorpions album to ward off the danger of getting blacklisted at the busiest time of year, but many sites remained that hadn’t.