Posted by Bernhard on September 15th 2008 @ 3:20 am
A couple of days ago, Marissa Mayer, VP of “Search Products & User Experience” over at Google, posted a piece on “the future of search”, and her conclusion is this:
So what’s our straightforward definition of the ideal search engine? Your best friend with instant access to all the world’s facts and a photographic memory of everything you’ve seen and know. That search engine could tailor answers to you based on your preferences, your existing knowledge and the best available information; it could ask for clarification and present the answers in whatever setting or media worked best.
It’s from Google’s official blog, so everybody and the Denver Broncos (keyword used solely to scramble the document vector of this post) have already commented on it, but here’s my 50 centimes.
The first thing that strikes me about Mayer’s definition of the ideal search engine is the “your best friend” thing. Why would I want to be friends with a search engine? This goes very much in the direction of “don’t be evil”, Google’s famous corporate motto, which is, in my view, based on the (erroneous) belief that questions of power can be reduced to questions of morals. “Your best friend” could mean that the search engine will know a lot about you but will not tell your boss that you search for pr0n on a daily basis. If you live in China it might tell the authorities where you’re at, but then a friend would too, given the right incentive. The idea is that you can confide in your best friend, spill your dirty little secrets, without having to fear that they will pop up somewhere on the blogosphere. So there’s the privacy issue, and Mayer is suggesting that you can trust Google with the growing pool of data you leave in their (floating!) datacenters.
The second matter is more subtle and kind of revives all the critique that has been written concerning Nicholas Negroponte’s idea of the “daily me”, most notably the concept of the “echo chamber”, which holds that personalization results in people being exposed only to views they already agree with. I am not sure whether such a situation is imminent (in fact, I agree with much of what David Weinberger says in this article), but given how pervasive a practice search has become, one cannot easily dismiss it. My real problem, though, is that personalization has become the dominant direction of search engine evolution when there are so many different paths to go down. Mayer actually talks about one:
Yet our presentation is still very linear (the results are just a list) and even (no one result is more important or larger than the next). What if the results page began to transform radically to really harness these different types of results into something that felt much more like an answer rather than just 10 independent guesses?
I find the idea of making the results page smarter very intriguing, but not the conclusion of making it more “like an answer”. Why not add semantic clustering along the lines of Clusty? Why not make it possible to easily weight search terms or to interact more directly with the search results? I find the idea of making everything ever more convenient and ever less of an effort quite troubling indeed. Why is there no button to the really useful cheat sheet on the main page? Has the idea of educating users become so completely unthinkable? I’d rather have more control over ranking and better means to refine my search and organize my results than a new best friend. Google has all the ingredients for delivering potentially great semantic mapping that would not give definite answers but rather a better overview of the heterogeneity of search results. Unfortunately, the idea of personalization seems to completely overshadow the more enlightened concept of augmentation.
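To make the augmentation idea slightly more concrete, here is a minimal sketch of what clustering a results page could look like, written in Python with scikit-learn. The result snippets, the TF-IDF representation and the choice of k-means are my own illustrative assumptions; this is not how Google or Clusty actually do it.

# A toy illustration (my own assumption, not Google's or Clusty's method):
# cluster a handful of result snippets by their TF-IDF vectors so that
# thematically similar hits end up grouped together instead of appearing
# in one flat ranked list.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [
    "Jaguar unveils its new luxury sedan at the Paris motor show",
    "Used Jaguar cars for sale, certified dealers and price listings",
    "The jaguar is the largest cat species native to the Americas",
    "Habitat loss threatens jaguar populations in the Amazon basin",
]

# Represent each snippet as a weighted bag of words.
vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)

# Group the snippets into two clusters (cars vs. the animal, one would hope).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, snippet in zip(labels, snippets):
    print(label, snippet)

Even something this crude could be rendered as a set of labelled clusters next to the ranked list, giving an overview of the heterogeneity of the results rather than a single authoritative answer.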
Posted by Bernhard on September 14th 2008 @ 2:06 am
Continuing in the direction of exploring statistics as an instrument of power more characteristic of contemporary society than the means of surveillance centered on individuals, I found a quite beautiful quotation by the French sociologist Gabriel Tarde in his Les Lois de l’imitation (1890/2001, p. 192f):
Si la statistique continue à faire des progrès qu’elle a faits depuis plusieurs années, si les informations qu’elle nous fournit vont se perfectionnant, s’accélérant, se régularisant, se multipliant toujours, il pourra venir un moment où, de chaque fait social en train de s’accomplir, il s’échappera pour ainsi dire automatiquement un chiffre, lequel ira immédiatement prendre son rang sur les registres de la statistique continuellement communiquée au public et répandue en dessins par la presse quotidienne.
And here’s my translation (that’s service, folks):
If statistics continues to make the progress it has made for several years now, if the information it provides us with keeps becoming more perfect, faster, more regular, steadily multiplying, there might come a moment when, from every social fact as it takes place, a number springs forth, so to speak, automatically, and immediately takes its place in the registers of statistics continuously communicated to the public and distributed in graphic form by the daily press.
When Tarde wrote this in 1890, he saw the progress of statistics as a boon that would allow for more rational governance and give society the means to discuss itself in a more informed, empirical fashion. Nowadays, online, a number does indeed spring from every social fact, but the resulting statistics are rarely a public good that enters public debate. User data on social networks will probably prove to be the very foundation of any business to be made with these platforms and will therefore remain jealously guarded. The digital town squares are quite private after all…