A piece on Google’s expanding media empire makes one observation that is actually quite obvious but which I’ve never really thought about:

It becomes pretty clear how Google is going about launching new products or acquiring others: analyzing the most popular topics within its search engine.

People are searching a lot for Second Life? All right, let’s launch our own 3D virtual world then. Google Trends already exploits search statistics for simple trend and market analysis, but in a dynamic marketplace like the Web, the vast number of search queries Google registers can be a much more formidable tool for taking society’s pulse. There is no doubt that Google uses this data internally for some heavy market research, and I could imagine the company licensing these tools or data to third parties in the future. Nielsen would get some serious competition.
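To make the idea concrete, here is a minimal sketch of the kind of trend spotting a search log allows. The query data and the doubling heuristic are entirely made up for illustration; Google’s actual methods are of course not public:

```python
from collections import Counter

def rising_queries(past_window, recent_window, min_count=3):
    """Flag queries whose frequency grew markedly between two log windows.

    past_window / recent_window: lists of raw query strings (hypothetical
    log extracts); min_count filters out noise from very rare queries.
    A query is "rising" if it at least doubled its count.
    """
    past = Counter(past_window)
    recent = Counter(recent_window)
    rising = {}
    for query, n in recent.items():
        if n >= min_count and n > 2 * past.get(query, 0):
            rising[query] = (past.get(query, 0), n)
    return rising

# toy log windows: "second life" jumps from 2 to 7 occurrences
past = ["mp3 player"] * 5 + ["second life"] * 2
recent = ["mp3 player"] * 5 + ["second life"] * 7
print(rising_queries(past, recent))  # {'second life': (2, 7)}
```

The point is not the crude heuristic but the input: whoever owns the log gets the signal.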

The point I find really interesting about this matter is that Google is mostly criticized for commercially biased search results, its monopoly on online search, and the gathering of data that might be used to spy on citizens – I have yet to read something that treats data collection on users’ search behavior not only as potentially dangerous to individual rights but also as a unique tool for corporate strategy. Mining their all-knowing logfiles might give Google a competitive advantage that other companies simply cannot emulate. Spotting shifts in cultural trends early could give their business planning an asset that money (currently) cannot buy. It would be prudent to convert to Googlism while they still accept new members.

The concept of self-organization has recently made quite a comeback and I find myself making a habit of criticizing it. Quite generally I use this blog to sort things out in my head by writing about them and this is an itch that needs scratching. Fortunately, political scientist Steven Weber, in his really remarkable book The Success of Open Source, has already done all the work. On page 132 he writes:

Self-organization is used too often as a placeholder for an unspecified mechanism. The term becomes a euphemism for “I don’t really understand the mechanism that holds the system together.” That is the political equivalent of cosmological dark matter.

This seems right on target: self-organization is quite often just a means to negate organizing principles in the absence of an easily identifiable organizing institution. By speaking of self-organization we can skip closer examination and avoid the slow and difficult process of understanding complex phenomena. Weber’s second point is perhaps even more important in the current debate about Web 2.0:

Self-organization often evokes an optimistically tinged “state of nature” narrative, a story about the good way things would evolve if the “meddling” hands of corporations and lawyers and governments and bureaucracies would just stay away.

I would go even further and argue that the digerati philosophy pushed by Wired Magazine in particular equates self-organization with freedom and democracy. Much of the current thinking about Web 2.0 seems to be strongly infused with this mindset. But I believe that there is a double fallacy:

  1. Much of what is happening on the Social Web is not self-organization in the sense that governance is the result of pure micro-negotiations between agents. Technological platforms lay the ground for and shape social and cultural processes; these are certainly less evident than the organizational structures of the classic firm, but they are nonetheless mechanisms that can be described and explained.
  2. Democracy as a form of governance is quite dependent on strong organizational principles, and the more participative a system becomes, the more complicated it gets. Organizational principles do not need to be institutional in the sense of the different bodies of government; they can be embedded in procedures, protocols, or even tacit norms. A code repository is itself quite a complicated system, and much of the organizational labor in Open Source is delegated to it and other platforms – coordinating the work effort of that many people would be impossible without them.

My guess is that the concept of self-organization as a “state of nature” narrative (nature = good) is all too often used to justify modes of organization that would imply a shift of power from traditional institutions of governance to the technological elite (the readers and editors of Wired Magazine). Researchers should therefore be wary of the term and, whenever it comes up, take an even closer look at the actual mechanisms at work. Self-organization is an explanandum (something that needs to be explained), not an explanans (an explanation). This is why I find network science so interesting. Growth mechanisms like preferential attachment allow us to give analytical content to the placeholder that is “self-organization” and to examine, albeit on a very abstract level, the ways in which dynamic systems organize (and distribute power) without central control.
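A minimal sketch of preferential attachment illustrates the point: every rule in the model is explicit, so nothing is left to a “self-organization” placeholder, and yet hubs – a power structure – emerge without any central control. The network size and random seed below are arbitrary:

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network where each new node links to an existing node with
    probability proportional to that node's current degree (the
    Barabasi-Albert mechanism, in its simplest one-edge-per-node form)."""
    random.seed(seed)
    # Start with two connected nodes. The endpoint list contains every
    # node once per edge it touches, so uniform sampling from it is
    # exactly degree-proportional sampling.
    endpoints = [0, 1]
    degrees = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        target = random.choice(endpoints)
        endpoints += [new, target]
        degrees[new] = 1
        degrees[target] += 1
    return degrees

deg = preferential_attachment(1000)
# a few heavily connected hubs emerge while most nodes keep degree 1
print(max(deg.values()), sum(deg.values()) / len(deg))
```

No node decides to become a hub; the skewed distribution follows mechanically from the attachment rule – which is precisely the kind of describable mechanism that the word “self-organization” tends to hide.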

This is not a substantial post, just a pointer to this interview with Digg lead scientist Anton Kast on Digg’s upcoming recommendation engine (which is really just collaborative filtering, but as Kast says, the engineering challenge is to make it work in real time – quite fascinating given the volume of users and content on the site). Around 2:50 Kast explains why Digg will list the “compatibility coefficient” (algorithmic proximity, anyone?) with other users and give an indication of why stories are recommended to you (because these users dugg them): Digg users hate having stuff imposed on them, and just showing recommendations without a trail “looks editorial”. Wow, “editorial” is becoming a swearword. Who would have thought…
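One plausible reading of such a “compatibility coefficient” is a simple overlap measure between two users’ dugg stories. Digg’s actual formula is not public, so the Jaccard similarity below is purely an illustrative guess, with made-up user names and story ids:

```python
def compatibility(diggs_a, diggs_b):
    """Jaccard overlap of two users' dugg stories: shared stories
    divided by all distinct stories either user dugg. One conceivable
    way to compute a user-to-user 'compatibility coefficient'."""
    a, b = set(diggs_a), set(diggs_b)
    if not (a or b):
        return 0.0  # two users with no diggs at all
    return len(a & b) / len(a | b)

# hypothetical story ids
alice = ["s1", "s2", "s3", "s4"]
bob = ["s2", "s3", "s5"]
print(compatibility(alice, bob))  # 0.4: 2 shared out of 5 distinct stories
```

Whatever the real formula, surfacing the coefficient alongside the recommendation is what gives users the “trail” Kast describes – the score explains itself instead of looking editorial.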