Eli Pariser, of MoveOn.org, gave a TED talk on why some people walk through a city looking for the same thing over and over again (the familiar comfort of a Starbucks) while others might seek out new and different tastes and ideas. In his presentation the former represents the opportunity for personalised service while the latter does not.
Ask yourself (especially if you are sipping on the Frappa-Mocha-Latte-Lowfat-Vente made just the way you like it) how a restaurant you have never seen before could give personalised service. Is it even possible? It might seem like a legitimate question today, but it used to be the other way around. Restaurants used to have trained staff to guide you to a meal selection, and people used to ask how a restaurant that serves customers over and over again (the same in name, at least: the franchise) from an industrialised menu could ever feel personal.
To be fair, he is speaking only about search engine results, not the food examples I give above, and the presentation is billed as a new issue with information technology. However, I do not see why this warning would not apply to any service industry.
Some people seek out a genuinely personal experience despite the risk, while others actually want to be identified and served in a consistent and predictable manner. If you want something that is not “highly tailored”, you still have options: you can choose not to use Facebook just as you can choose not to eat at KFC.
An example he gives of online personalisation is Netflix, so here is my view on that specific site: I struggled so much with its suggestion system that it made me quit. It not only guessed my preferences incorrectly but annoyed me to the point where I ended up researching why some discs arrived quickly while others took weeks.
What I figured out over a few months of opening different accounts was that if I sent movies back too quickly (more than four a month), an algorithm started to “throttle” my next selections. Suggestions were being pushed to me not only on an “interest” level but also on a demand-and-profit model: slow accounts (i.e. higher-profit ones) were given priority on popular titles, while fast accounts received suggestions of unpopular ones. Pariser does not bring up this profit element of personalisation at all, although you might guess it from his Egypt news example.
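To make that concrete, here is a minimal sketch of the kind of throttling rule my experiments seemed to reveal. Netflix never published its algorithm, so the names, the four-a-month threshold and the two-queue structure below are my own assumptions, not its actual code.

```python
from dataclasses import dataclass

@dataclass
class Account:
    discs_returned_this_month: int

def pick_next_title(account, popular_queue, longtail_queue):
    """Serve popular titles to low-volume (higher-margin) accounts first."""
    # The "fast account" threshold I observed was roughly four discs a month.
    if account.discs_returned_this_month > 4:
        # Heavy users cost more to serve, so steer them to less-demanded titles.
        return longtail_queue.pop(0) if longtail_queue else None
    return popular_queue.pop(0) if popular_queue else None

# A slow account gets the new release; a fast account gets the long tail.
print(pick_next_title(Account(2), ["New Release"], ["Obscure Documentary"]))
print(pick_next_title(Account(9), ["New Release"], ["Obscure Documentary"]))
```

The point of the sketch is that the same “suggestion” pipeline can serve two masters at once: your stated interests and the company’s margins.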
Also omitted from his presentation is the fact that preferences shift over time. Personalisation may be highly desired by the first generation to experience its latest ideas, yet the next generation could easily reject it as a boring, lame or lazy habit and seek out the opposite: non-personal is cool again. “Check me out, I’m anonymous.”
The adoption of personalisation therefore cannot be assumed to be a constant, linear risk. We are no more likely to approach a “web of one” than we are to see all grandparents and grandchildren singing the same tune.
Consistency, too, could fall out of fashion as people seek sources of information that push self-challenging or contradictory viewpoints.
So while I sympathise with Pariser’s lament and warning about search engines, I don’t think he places the risk in its historical or broader social context.
Finally, although his Google example mentions the 57 signals used for user identification, there is no mention of why the number grew so high. Google was obviously not satisfied with fewer data points as users became more mobile, more virtual, more multi-user, more multi-platform and harder to recognise. The figure is meant to be a shocking example of how a search engine really knows it is you, but it could also be read as a hopeful statistic: it is hard now, and there are many ways it could get even harder for a site to figure out who is really who. He could have suggested we build a reverse filter bubble.
Regardless of what else this may or may not be, it is censorship. Just because a piece of code is doing the blocking does not make it any less than that. I think the historic and social aspects of this were very well presented by Mr. Pariser. At the very least it is a lulling of the recipient requesting information without their knowledge. Information technology is ever evolving, and Mr. Pariser is bringing up an issue that may have only been vaguely warned about before. But here it is, hitting us square in the face from two of the biggest internet companies out there. Mr. Pariser is right. The internet has amazing promise for so many things. But not if the gatekeepers have the power to funnel information “for the good of the individual”. How many times have I heard that…
Hi Const., thanks for your thoughtful comment.
A newspaper will filter information. That can be editorial judgement rather than censorship, and there are many more examples of useful filters, so I would not jump straight to the same conclusion as you.
While we may take umbrage at the “lulling of the recipient” and at companies operating “without their knowledge”, we must also keep the source in perspective: these are low-cost, low-hassle services.
So the question is not whether something is done with our knowledge (we all like it when things are done for us and in our interest).
The question is whether a service can be trusted to do what we consider in our best interest.
Pariser does not like how Google and Facebook filter information. He does not trust their management. So he should quit using them and start a better service with better management, one more aligned with his values and ethics.
Or he should use a reverse filter to manipulate those services before they can manipulate him; that is the real future of the consumer. People have been gaming systems for as long as systems have existed. We don’t accept bubbles naturally, or at least not for very long. Virtual environments of our own make gaming the system easier than ever…
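To show what I mean by a reverse filter, here is a toy sketch in the spirit of decoy-query tools such as TrackMeNot: pad your real searches with random noise so the profile a service builds of you is diluted. The decoy list, the count of three decoys and the timing below are all invented for illustration, not taken from any real tool.

```python
import random
import time

# Invented decoy topics; a real tool would rotate through a much larger pool.
DECOY_QUERIES = ["weather in oslo", "how to fix a bicycle", "history of jazz",
                 "best hiking boots", "sourdough starter recipe"]

def search_with_noise(real_query, send):
    """Send the real query hidden among random decoys, in random order."""
    queries = random.sample(DECOY_QUERIES, 3) + [real_query]
    random.shuffle(queries)
    for q in queries:
        send(q)  # the profile now records noise, not just your real interest
        time.sleep(random.uniform(1, 5))  # irregular timing is harder to filter out

# Example with a stand-in sender that just prints each query.
search_with_noise("eli pariser filter bubble", print)
```

The service still sees your real query, but its picture of “you” gets blurrier with every decoy, which is exactly the opposite of what a filter bubble needs to form.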
Finally, I wouldn’t call Facebook or Google the gatekeepers of the Internet any more than I would call KFC and McDonald’s the gatekeepers of agriculture.