Category Archives: metatechnologies

A couple of days ago, Marissa Mayer, VP of “Search Products & User Experience” over at Google, posted a piece on “the future of search”, and her conclusion is this:

So what’s our straightforward definition of the ideal search engine? Your best friend with instant access to all the world’s facts and a photographic memory of everything you’ve seen and know. That search engine could tailor answers to you based on your preferences, your existing knowledge and the best available information; it could ask for clarification and present the answers in whatever setting or media worked best.

It’s from Google’s official blog, so everybody and the Denver Broncos (keyword used solely to scramble the document vector of this post) have already commented on it, but here’s my 50 centimes.

The first thing that strikes me about Mayer’s definition of the ideal search engine is the “your best friend” part. Why would I want to be friends with a search engine? This goes very much in the direction of “don’t be evil”, Google’s famous corporate motto, which is, in my view, based on the (erroneous) belief that questions of power can be reduced to questions of morals. “Your best friend” could mean that the search engine will know a lot about you but won’t tell your boss that you search for pr0n on a daily basis. If you live in China it might tell the authorities where you are, but a friend would too, given the right incentive. The idea is that you can confide in your best friend and spill your dirty little secrets without having to fear that they will pop up somewhere on the blogosphere. So there’s the privacy issue, and Mayer is suggesting that you can trust Google with the growing pool of data you leave in their (floating!) datacenters.

The second matter is more subtle and revives much of the critique that has been written concerning Nicholas Negroponte’s idea of the “daily me”, most notably the concept of the “echo chamber”, which holds that personalization results in people being exposed only to views they already agree with. I am not sure whether such a situation is imminent; in fact, I agree with much of what David Weinberger says in this article. But given that search has become such a pervasive practice, one cannot easily dismiss the concern. My real problem, though, is that personalization has become the dominant direction of search engine evolution when there are so many different paths to go down. Mayer actually talks about one:

Yet our presentation is still very linear (the results are just a list) and even (no one result is more important or larger than the next). What if the results page began to transform radically to really harness these different types of results into something that felt much more like an answer rather than just 10 independent guesses?

I find the idea of making the results page smarter very intriguing, but not the conclusion of making it more “like an answer”. Why not add semantic clustering along the lines of Clusty? Why not add the possibility to easily weight search terms or to interact more directly with the search results? I find the drive to make everything ever more convenient and effortless quite troubling indeed. Why is there no button to the really useful cheat sheet on the main page? Has the idea of educating users become so completely unthinkable? I’d rather have more control over ranking and better means to refine my search and organize my results than a new best friend. Google has all the ingredients for delivering potentially great semantic mapping that would give not definite answers but a better overview of the heterogeneity of search results. Unfortunately, the idea of personalization seems to completely overshadow the more enlightened concept of augmentation.
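To make the clustering idea a bit more concrete, here is a minimal sketch of what result-side clustering could look like: snippets grouped by TF-IDF similarity and each group labeled with its top terms, so the heterogeneity of the results is visible at a glance. The snippets and cluster count are invented for illustration; a real engine would obviously cluster its live result set with far richer features.

```python
# Toy sketch: cluster search-result snippets instead of showing one flat list.
# All snippets are made up for illustration (the classic "jaguar" ambiguity).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [
    "jaguar speed top predator habitat rainforest",
    "jaguar car dealership price luxury sedan",
    "jaguar big cat conservation wildlife",
    "jaguar xk engine specs road test",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(snippets)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Label each cluster with its highest-weighted terms so the user can see
# at a glance which "sense" of the query each group of results covers.
terms = vectorizer.get_feature_names_out()
for c in range(2):
    top = [terms[i] for i in km.cluster_centers_[c].argsort()[::-1][:3]]
    members = [s for s, label in zip(snippets, km.labels_) if label == c]
    print(f"cluster {c} ({', '.join(top)}): {len(members)} results")
```

Even a crude grouping like this presents the user with a map of the result space rather than a single “answer”, which is exactly the augmentation-over-personalization trade-off at issue.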

Marriage is a form of social glue: it creates connections, often between groups (circles of friends, families, etc.) that were not connected before. In that sense, our amorous entanglements can play a role similar to Granovetter’s weak ties; that is, they may establish bridging connections between groups that are quite dissimilar in the habitual social parameters (class, ethnicity, religion, etc.), thereby effectively contributing to social cohesion. This article from the German weekly Die Zeit argues that marriage used to be an important means of social mobility:

“One in two men used to marry below their social status and one in two women above.”

The fact that secretaries married their bosses and nurses the doctors at their hospital contributed greatly to the mixing of milieus and even played a role in producing a more even distribution of wealth. What I found really interesting, though, is that this seems to be changing, and that online dating is one of the reasons.

Dating websites establish what could be described as a regime of data-based perception. Profiles contain a lot of information about a person, some of which allows for quick and easy categorization on the scale of social status: job and income, level of education, cultural preferences, choice of language, etc. make it easy to compare one’s own social position to that of the millions of profiles we have access to. According to the cited article, online dating significantly reduces interaction between social strata. Both women and men now strive to find mates with a similar educational and socioeconomic background, and the data offered by most dating sites allows them to filter out potential candidates – down to a very fine degree – while still retaining access to hundreds of profiles. The haphazard component of interaction in physical space, where attraction might emerge in unexpected ways, is greatly reduced when every single person in the pool of available mates is a profile competing in a handful of very large markets (= dating sites) built on specific data constructs and algorithms that project that data according to one’s own wishes.
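A deliberately crude sketch of the filtering logic described here may be useful. The field names, ranking, and thresholds are all invented; actual dating platforms implement this very differently, but the structural effect is the same: anyone below the cutoffs never even appears.

```python
# Hypothetical profiles reduced to a handful of status markers, then
# screened with hard cutoffs -- the "regime of data-based perception".
profiles = [
    {"name": "A", "education": "PhD", "income": 85000},
    {"name": "B", "education": "high school", "income": 32000},
    {"name": "C", "education": "Master", "income": 60000},
]

# An invented ordinal ranking of education levels.
EDUCATION_RANK = {"high school": 0, "Bachelor": 1, "Master": 2, "PhD": 3}

def matches(profile, min_education="Master", min_income=50000):
    """Return True only if the profile clears every status cutoff."""
    return (EDUCATION_RANK[profile["education"]] >= EDUCATION_RANK[min_education]
            and profile["income"] >= min_income)

candidates = [p for p in profiles if matches(p)]
print([p["name"] for p in candidates])  # ['A', 'C'] -- B is filtered out, unseen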

One could say that, in a sense, digital space implies a shifting of perceptual categories: the physical attributes of a person are compressed into pictures and a small number of descriptors (weight, height, body shape), while information that would traditionally have to be uncovered in conversation (job, education, etc.) is magnified and pushed to the front. With economic insecurity and status angst on the rise, this shift perfectly suits a public increasingly eager to include rational arguments when choosing a mate.

As always, the link in the chain that interests me most is the designer of such information systems who, in this case at least, becomes a full-fledged social engineer whose choices might have larger consequences for social cohesion than he or she would probably be willing to admit…

I have recently been thinking quite a lot about what it means to be “critical”. The term gets used constantly at the conferences I go to, but somehow it remains intuitively unintelligible to me. The dictionary says that a critical person is “inclined to judge severely and find fault” and a critical reading is “characterized by careful, exact evaluation and judgment”. I cannot shake the impression that a lot of the debate about the political and ethical dimensions of information systems is neither careful nor exact. Especially when it comes to analyzing the deeds of big commercial actors like Google, there has been a pointed shift from complete apathy to hysteria. People like Siva Vaidhyanathan, whose talk about the “googlization of everything” I heard at the New Network Theory Conference, are, in my view, riding a wave of “critical” outrage that seemingly tries to compensate for the long years of relative silence about issues of power and control in information search, filtering, and structuration. But instead of being careful and exact – apparently the basis of both critical thought and scholarly pursuit – many of the newly appointed Emile Zolas lump together all sorts of different arguments in order to make their case. In Vaidhyanathan’s case, for example, Google is bad because its search algorithms work too well and its book search not well enough.

Don’t get me wrong, I’m not saying that we should let the emerging giants of the Web era off the hook. I fully agree with many points Jeffrey Chester recently made in The Nation – despite the sensationalist title of that article. What I deplore is a critical reflex that is not concerned with being careful and exact. If we do not adhere, as scholars, to these basic principles, our discourse loses the basis of its justification, and we do a disservice to both the political cause of fighting for pluralism of opinion in the information landscape and the academic cause of furthering understanding. Our “being critical” should not lead to an obsession with whether Google (or other companies, for that matter) is “good” or “bad”, but to an obsession with the more fundamental issues that link these strange systems that serve us the Web as a digestible meal to matters of political and economic domination. I’ve been reading a lot recently about how Google is invading our privacy but very little about the actual social function of privacy, seen as a historical achievement, and how the very idea could and should be translated into an information age where every action leaves a footprint of data waiting to be mined. We still seem to be in a “1984” mindset that is, in my view, thoroughly misleading when it comes to understanding the matters at hand. If we phrase the challenges posed by Google in purely moral terms, we might miss the ethical dimension of the problem – ethics understood as the “art of conduct”, that is.

This might sound strange, but under the digital condition the protection of privacy faces many of the same problems as the enforcement of copyright, because both concern the problem of controlling flows of data. And whether we like it or not, technical and legal solutions for protecting privacy might end up looking quite similar to the DRM systems we rightfully criticize. It is in that sense that the malleability of digital technology throws us back to the fundamentals of ethics: how do we want to live? What do we want our societies to look like? What makes for a good life? And how do we update the answers to those questions for our current technological and legal situation? Simply put: I would like to read more about why privacy is fundamentally important to democracy and how the protection of that right could work when everything we do online is prone to be algorithmically analyzed. Chastising Google sometimes looks to me like arguing on the same level as the company’s corporate motto: “don’t be evil” – please?

We don’t need Google to repent its sins. We need well-argued laws that clearly define our rights to the data we produce, patch up the ways around such laws (EULAs come to mind), and think about technical means (encryption-based?) that translate them onto the system level. Less morals and more ethics, that is.

Most of the political potentialities of automated surveillance depend on two elements. The debate has generally concentrated on the first: data acquisition. But “digital wiretapping”, in whatever technical shape it may come, is only one part of the issue. Making sense of the data collected is the more complicated, albeit often neglected, half of the equation. Data mining technologies have certainly advanced a great deal over the last couple of years, but commercial applications are not necessarily adapted to the demands that government agencies might have. This article claims that scientists have developed software that can “help detect terrorists before they strike”. It reads:

Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.

Although I’m quite skeptical about the actual performance of such software in the field (the real world is pretty messy, after all), it shows the direction things are heading. This piece of software comes pretty close to what Virilio has described as a “machine de vision” (vision machine) – a device that not only records reality (camera) but also interprets it (the man on the subway platform walking around nervously is not just a heap of pixels but a potential terrorist). Virilio talks about the “delegation of perception”, and this is perhaps the most interesting aspect of the increasing technologization of control: part of the process of decision-making (and therefore part of the responsibility) is transferred to algorithms, and questions of professional ethics become matters of optimizing parameters and debugging.
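To make the delegation point concrete, here is a deliberately simplified sketch of what such an indicator-to-score pipeline might look like. The indicators, weights, and threshold are entirely invented, and the Buffalo system is certainly far more elaborate; the structural point, though, is where the judgment ends up living.

```python
# Hypothetical behavioral indicators, each read as a value in [0, 1],
# collapsed into a single "risk score". Weights and threshold are invented.
WEIGHTS = {
    "pacing": 0.3,         # walking back and forth on the platform
    "loitering": 0.2,      # lingering beyond some time window
    "gaze_aversion": 0.1,  # avoiding cameras or staff
}

def risk_score(observations):
    """Combine indicator readings into one number via a weighted sum."""
    return sum(WEIGHTS[k] * v for k, v in observations.items() if k in WEIGHTS)

def flag(observations, threshold=0.35):
    # The delegated decision: professional judgment becomes a parameter.
    return risk_score(observations) >= threshold

obs = {"pacing": 0.9, "loitering": 0.4, "gaze_aversion": 0.2}
print(risk_score(obs), flag(obs))  # roughly 0.37, True -- a small tweak
                                   # to the threshold flips the verdict
```

Notice that the entire ethical weight of the system sits in `WEIGHTS` and `threshold`: whoever tunes those numbers is, in effect, deciding who counts as a potential terrorist, which is precisely the shift from professional ethics to parameter optimization described above.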