The concept of self-organization has recently made quite a comeback, and I find myself making a habit of criticizing it. I generally use this blog to sort things out in my head by writing about them, and this is an itch that needs scratching. Fortunately, political scientist Steven Weber, in his really remarkable book The Success of Open Source, has already done all the work. On page 132 he writes:

Self-organization is used too often as a placeholder for an unspecified mechanism. The term becomes a euphemism for “I don’t really understand the mechanism that holds the system together.” That is the political equivalent of cosmological dark matter.

This seems right on target: self-organization is quite often just a way to deny that there are organizing principles at work whenever an easily identifiable organizing institution is absent. By speaking of self-organization we can skip closer examination and avoid the slow and difficult process of understanding complex phenomena. Weber's second point is perhaps even more important in the current debate about Web 2.0:

Self-organization often evokes an optimistically tinged “state of nature” narrative, a story about the good way things would evolve if the “meddling” hands of corporations and lawyers and governments and bureaucracies would just stay away.

I would go even further and argue that the digerati philosophy pushed by Wired Magazine, especially, equates self-organization with freedom and democracy. Much of the current thinking about Web 2.0 seems quite strongly infused with this mindset. But I believe there is a double fallacy here:

  1. Much of what is happening on the Social Web is not self-organization in the sense that governance results from pure micro-negotiations between agents; technological platforms lay the ground for and shape social and cultural processes. These mechanisms are certainly less evident than the organizational structures of the classic firm, but they can nonetheless be described and explained.
  2. Democracy as a form of governance is quite dependent on strong organizational principles, and the more participative a system becomes, the more complicated it gets. Organizational principles do not need to be institutional in the sense of the different bodies of government; they can be embedded in procedures, protocols or even tacit norms. A code repository like SourceForge.net is quite a complicated system, and much of the organizational labor in Open Source is delegated to this and other platforms – coordinating the work of that many people would be impossible without them.

My guess is that the concept of self-organization as a "state of nature" narrative (nature = good) is far too often used to justify modes of organization that would imply a shift of power from traditional institutions of governance to the technological elite (the readers and editors of Wired Magazine). Researchers should therefore be wary of the term and, whenever it comes up, take an even closer look at the actual mechanisms at work. Self-organization is an explanandum (something that needs to be explained), not an explanans (an explanation). This is why I find network science so interesting. Growth mechanisms like preferential attachment allow us to give analytical content to the placeholder that is "self-organization" and to examine, albeit on a very abstract level, the ways in which dynamic systems organize (and distribute power) without central control.
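To make this a little more concrete, here is a minimal sketch of a preferential attachment process – a toy version of the kind of growth model network science works with, written for illustration only:

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, links_per_node=2):
    """Grow a graph in which each new node attaches to existing nodes
    with probability proportional to their current degree."""
    # Small fully connected seed graph.
    edges = [(0, 1), (1, 2), (0, 2)]
    # Each node appears in this pool once per link it has; sampling
    # from it uniformly implements the 'rich get richer' mechanism.
    pool = [node for edge in edges for node in edge]

    for new_node in range(3, n_nodes):
        targets = set()
        while len(targets) < links_per_node:
            targets.add(random.choice(pool))
        for target in targets:
            edges.append((new_node, target))
            pool.extend((new_node, target))
    return edges

edges = preferential_attachment(2000)
degrees = Counter(node for edge in edges for node in edge)
# A few hubs accumulate most of the links, the rest stay peripheral.
print(sorted(degrees.values(), reverse=True)[:10])
```

Run it and the degree distribution comes out heavily skewed: a handful of hubs, a long tail of barely connected nodes. No central planner, and yet a very specific distribution of visibility and power – exactly the kind of mechanism that deserves to be named instead of being filed under "self-organization".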

This is not a substantial post, just a pointer to this interview with Digg lead scientist Anton Kast on Digg's upcoming recommendation engine (which is really just collaborative filtering, but as Kast says, the engineering challenge is to make it work in real time – quite fascinating given the volume of users and content on the site). Around 2:50 Kast explains why Digg will display the "compatibility coefficient" (algorithmic proximity, anyone?) with other users and give an indication of why stories are recommended to you (because these users dug them): Digg users hate having stuff imposed on them, and just showing recommendations without a trail "looks editorial". Wow, "editorial" is becoming a swearword. Who would have thought…
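Kast doesn't go into the mechanics, but the basic logic of such a "compatibility coefficient" can be sketched in a few lines – a toy version of collaborative filtering, with invented data, certainly nothing to do with Digg's actual code:

```python
def compatibility(diggs_a, diggs_b):
    """Toy 'compatibility coefficient': Jaccard overlap of two digg histories."""
    a, b = set(diggs_a), set(diggs_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def recommend(me, diggs):
    """diggs maps each user to the stories they dug. Returns candidate
    stories with the similarity score and the user who dug them, so the
    'trail' can be shown instead of an editorial-looking bare list."""
    mine = set(diggs[me])
    suggestions = []
    for other, theirs in diggs.items():
        if other == me:
            continue
        score = compatibility(mine, theirs)
        for story in set(theirs) - mine:
            suggestions.append((score, story, other))
    return sorted(suggestions, reverse=True)

diggs = {"me": ["s1", "s2"], "alice": ["s1", "s2", "s3"], "bob": ["s9"]}
print(recommend("me", diggs))  # s3 via alice ranks above s9 via bob
```

The real-time aspect that Kast describes as the hard engineering problem is precisely what this naive version ignores: recomputing these overlaps for millions of users on every digg is a different story.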

This morning Jonah Bossewitch pointed me to an article over at Wired, authored by Chris Anderson, which announces "The End of Theory". The article's main argument is, in itself, not very interesting for anybody with a knack for epistemology – Anderson has apparently never heard of the induction / deduction debate and seems to have only a limited idea of what statistics does – but there is a very interesting question lurking somewhere behind all the Californian Ideology, and the following citation points right to it:

We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

One could point to the fact that the natural sciences have had their experimental side for quite a while (Roger Bacon advocated his scientia experimentalis in the 13th century) and that a laboratory is in a sense a pattern-finding machine in which induction continuously plays an important role. What interests me more, though, is Anderson's insinuation that statistical algorithms are not models. Let's just look at one of the examples he uses:

Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required.

This is a very limited understanding of what constitutes a model. I would argue that PageRank does in fact rely very explicitly on a model which combines several layers of justification. In their seminal paper on Google, Brin and Page write the following:

PageRank can be thought of as a model of user behavior. We assume there is a “random surfer” who is given a web page at random and keeps clicking on links, never hitting “back” but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank.
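The "random surfer" is itself a small, perfectly explicit model; a minimal power-iteration sketch (toy graph, obviously not Google's implementation) makes that clear:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the pages it links to. Returns the
    stationary probability that the random surfer is on each page."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}

    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}  # the "bored" jump
        for page in pages:
            targets = links.get(page, [])
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling page: the surfer jumps anywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```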

The assumption behind this graph-oriented justification is that people do not place links randomly but do so with purpose. Linking implies an attribution of importance: we don't link to documents we're indifferent about. The statistical exploration of the huge graph that is the Web is indeed oriented by this basic assumption, and it adds the quite contestable rule that whatever is deemed important by the greatest number of linkers shall be most visible. I would, then, argue that there is no experimental method that is purely inductive, not even neural networks. Sure, on the mathematical side we can explore data without limitations concerning its dimensionality, i.e. the number of characteristics that can be taken into account; the method of gathering data, however, is always a process of selection influenced by some idea or intuition that at least implicitly has the character of a model. There is a deductive side to even the most inductive approach. Data is made, not given, and every projection of that data is oriented. To quote Fernando Pereira:

[W]ithout well-chosen constraints — from scientific theories — all that number crunching will just memorize the experimental data.

As Jonah points out, Anderson's article is probably a straw man argument whose sole purpose is to attract attention, but it points to something that is really important: too many people think that mathematical methods for knowledge discovery (data mining, that is) are neutral and objective tools that will find what's really there and show the world as it is, without the stain of human intentionality; these algorithms are therefore not seen as objects of political inquiry. In this view, statistics is all about counting facts, and only higher layers of abstraction (models, theories, …) can have a political dimension. But it matters what we count and how we count.

In the end, Anderson’s piece is little more than the habitual prostration before the altar of emergence and self-organization. Just exchange the invisible hand for the invisible brain and you’ll get pop epistemology for hive minds…

A couple of weeks ago, Google released App Engine, a Web hosting platform that makes the company's extensive knowledge in datacenter technology available to the general public. The service is free for the moment (including 500MB of data storage and a quite generous contingent of CPU cycles), but a commercial offering is in preparation. Apps use Google's account system for user identification and are currently limited to (lovely) Python as the programming language. I don't want to go into the usual Google über alles matters but rather restate an idea I proposed in a paper in 2005. When criticizing search engine companies, authors generally demand more inclusive search algorithms, fewer commercial results, transparent ranking algorithms, or non-commercial alternatives to the dominant service(s). This is all very important, but I fear that a) there cannot be search without bias, b) transparency would not reduce the commercial coloring of search results, and c) open source efforts would have difficulties mustering the support on the hardware and datacenter front needed to provide services to billions of users and effectively take on the big players. In 2005 I suggested the following:

Instead of trying to mechanize equality, we should obligate search engine companies to perform a much less ambiguous public service by demanding that they grant access to their indexes and server farms. If users have no choice but to place confidence in search engines, why not ask these corporations to return the trust by allowing users to create their own search mechanisms? This would give the public the possibility to develop search algorithms that do not focus on commercial interest: search techniques that build on criteria that render commercial hijacking very difficult. Lately we have seen some action to promote more user participation and control, but the measures undertaken are not going very far. From a technical point of view, it would be easy for the big players to propose programming frameworks that allow writing safe code for execution in their server environment; the conceptual layers already are modules and replacing one search (or representation) module with another should not be a problem. The open source movement, as part of civil society, has already proven its capabilities in various fields, and where control is impossible, choice might be the only answer. To counter complete fragmentation and provide orientation, we could imagine that respected civic organizations like the FSF endorse specific proposals from the chaotic field of search algorithms that would emerge. In France, television networks have to invest a percentage of their revenue in cinema; why not make search engine companies dedicate a percentage of their computing power to algorithms written by the public? This would provide the necessary processing capabilities to civil society without endangering the business model of those companies; they could still place advertising and even keep their own search algorithms a secret. But there would be alternatives – alternative (noncommercial) viewpoints and hierarchies – to choose from.

I believe that the Google App Engine could be the technical basis for what could be called the Google Search Sandbox: a hosting platform equipped with either an API to the company's vast indexes or even something as simple as a means to change the weights of parameters in the existing set of algorithms. A simple JSON input like {"shop": "-1", "checkout": "-1", "price": "-1", "cart": "-1", "bestseller": "-1"} could be enough to, for example, eliminate Amazon pages from the result list. SEOing against these scripts would be difficult because there would be many different varieties (one of the first would be bernosworld.google.com – we aim to displease! no useful results guaranteed!). It is of course not in Google's best interest to implement something like this, because many scripts might direct users away from commercial pages using AdSense, the foundation of the company's revenue stream. But this is why we have governments. Hoping for, or even legislating, more transparency and "inclusive" search might be less effective than people wish. I demand access to the index!
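To be clear about what I have in mind – none of this corresponds to any existing Google API, the function below is pure speculation – a sandbox script could take such a weight file and simply re-rank whatever the regular engine returns:

```python
import json

# Hypothetical user-supplied weights, as in the JSON snippet above.
raw = '{"shop": "-1", "checkout": "-1", "price": "-1", "cart": "-1", "bestseller": "-1"}'
weights = {term: float(value) for term, value in json.loads(raw).items()}

def rerank(results, weights):
    """results is a list of (score, text) pairs as returned by the regular
    engine; every term that appears in a result's text adds its (negative)
    weight to the score, pushing commercially flavored pages down the list."""
    adjusted = []
    for score, text in results:
        penalty = sum(w for term, w in weights.items() if term in text.lower())
        adjusted.append((score + penalty, text))
    return sorted(adjusted, reverse=True)

results = [(2.0, "Buy now: add to cart, checkout, best price"),
           (1.5, "A long essay on the history of search engines")]
print(rerank(results, weights))  # the essay now outranks the shop page
```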

There are many things to be said about Clay Shirky's recent book "Here Comes Everybody: The Power of Organizing without Organizations", and a lot has already been said. The book is part of an ever-growing pile of Web 2.0 literature that could be qualified as "popular science": easily digestible titles, generally written by scholars or science journalists, declaring the advent of a new age in which old concepts no longer apply and everything is profoundly transformed (knowledge, education, the economy, thinking, wisdom, organization, culture, journalism, etc.). The genre was pioneered by people like Alvin Toffler and Jeremy Rifkin, and it now dominates much of the debate on the social, cultural and political "effects" of recent developments in ICT. There is of course merit to a larger debate on technology, and the sensationalist baseline is perhaps needed to create the audience for such a debate. At the same time, I cannot help feeling a little unsettled by the scope the phenomenon has taken and the grip these books seem to have on academic discourse. Here are a couple of reasons why:

  1. There are actually very few thoughts and arguments in the whole "Web 2.0 literature" that have not already been phrased in Tim O'Reilly's original essay. Granted, the piece was quite seminal, but shouldn't academia be able to come up with a stronger conceptual viewpoint?
  2. The books in question are really lightweight when it comes to anchoring their thoughts in previous scholarly work. A lot of room is given to metaphorical coupling with the natural sciences (some keywords: swarms, ecologies, self-organization, percolation, critical thresholds, chaos, etc.), but although most of these books talk about the future of work (prosumers performing collective wisdom, in short), there is very little engagement with the sociology of labor or economic theory. Sure, a deeper examination of these topics would be difficult, but without some grounding in established work, the whole idea of scholarship as a collective endeavor becomes meaningless – which is especially ironic given the celebration of cooperation one finds in Web 2.0 literature.
  3. As I've already written in another post, I find the idea that "participation" and the "leveling of hierarchies" equate to democracy deeply troubling. Richard Sennett's argument that stable social organization and work relations are necessary prerequisites for true political discourse – politics that go beyond the flash mob activism often presented as proof of the new, more democratic age that is supposedly upon us – rings louder than ever.
  4. Much of the Web 2.0 literature is basically antithetical to the purpose of this blog. Shirky's idea that the new social tools allow for "organizing without organizations" largely ignores the political power that is transferred to the 21st-century tool maker and the companies that he or she works for. I'm not advocating paranoia here, but the fact that many of the tools that power mass sociability online are produced and controlled by firms accountable only to their shareholders (or to the people who got them venture capital) is at least worth mentioning. But the problem really goes beyond that: the tools we currently have incite people to gather around common interests, creating and activating issue publics that can indeed exert influence on political matters. But politics is much more than the totality of policy decisions. The rise of issue publics has coincided with the demise of popular parties, and while this may look like a good thing to many people, parties have historically been the laboratories for the development of politics beyond policy. Europe's social market economies are unthinkable without the various socialist parties that worked over decades to make their societies more just. One does not have to be a left-winger to recognize that the loss of the stable and accountable forum that is the political party would be at least ambiguous.
  5. While the Web 2.0 literature is light on politics and serious political theory, it is not stingy with morals. The identification of the "good" and "bad" effects that 2.0 ICT will have on society often seems to be at the very core of many of the texts published over the last few years. But as point 4 may have shown, the idea of "good" and "bad" is meaningless outside of a particular political (or religious) ontology. What actually happens is the tacit assumption of a vague political consensus that takes a position antithetical to the premises of critical sociology, i.e. that conflict is constitutive of society.
  6. An essay stretched over 250 pages does not make a book. (I know, that’s a little mean – but also a little true, no?)

Don't get me wrong, many of the books I'm referring to have actually been quite interesting to read. What worries me is the lack of more scholarly and conceptually demanding works – but perhaps I'm just impatient. In a sense, "Digital Formations" by Robert Latham and Saskia Sassen already shows how sophisticated Internet research could be if we switched off that prophet gene.

This is a very general question and there is no way to answer it rigorously. But after reading many of the books and articles on "participatory culture", I cannot shake the feeling that the idea of non-organized organization will very soon be confronted with a series of limits and problems inherent to self-organized social aggregation – inequality, intercultural strife, the visibility of minority opinion, etc. – that will be difficult to ignore.

But there is a more practical reason why I ask myself this very question. Pierre Lévy actually used to work in my department, and my laboratory has recently struck up a cooperation with his research unit in Ottawa. We've been organizing a little seminar here in Paris where Lévy will be giving a talk later this month. When Lévy wrote "L'intelligence collective" in 1994, many people dismissed his proposals rather quickly as naïve utopia. The American reading of that text has since become something like the bible of research on participatory culture, user-generated content movements, and so on. Interestingly, Lévy himself has been pretty silent on all of this, leaving the exegesis of his thoughts to Henry Jenkins and others. Why? Because Lévy probably never imagined collective intelligence as photo-sharing on Flickr or Harry Potter fan fiction. What he envisioned is in fact exemplified by his work over the last couple of years, which has centered on the development of IEML – the Information Economy Meta Language:

IEML (Information Economy Meta Language) is an artificial language designed to be simultaneously: a) optimally manipulable by computers; and b) capable of expressing the semantic and pragmatic nuances of natural languages. The design of IEML responds to three interdependent problems: the semantic addressing of cyberspace data; the coordination of research in the humanities and social sciences; and the distributed governance of collective intelligence in the service of human development.

IEML is not another syntax proposal for a semantic web like RDF or OWL. It is a philosopher's creation of a new language designed to do mainly two things: facilitate the processing of data tagged with IEML sentences and support cross-language and intercultural reasoning. This page gives a short overview. Against the usual understanding of collective intelligence, IEML is really a top-down endeavor: Lévy came up with the basic syntax and vocabulary, and the proposal explicitly states the need for experts to help with formalization and translation. I must admit that I have been very skeptical of the whole thing, but after reading Clay Shirky's "Here Comes Everybody" (which I found interesting but also seriously lacking – I'll get to that in another post), there is a feeling creeping up on me that Lévy might yet again be five years ahead of everybody else. In my view, large parts of the research on participation have adopted the ontology and ethics of American-brand Protestantism, which, among other things, identifies liberty and democracy with community rather than with the state, and which imagines social process as a matter of developing collective morals and practices much more than as the outcome of power struggles mediated by political institutions. This view idealizes the "common man" and shuns expert culture as "elitist". Equality is phrased less in socio-economic terms, as "equal opportunity" (the continental tradition), than in cultural terms, as "equal recognition". (Footnote: this is, in my view, why political struggle in the US has for many decades now been mostly about the recognition of minority groups, while on the European continent – especially in Catholic countries – "class struggle" is still a common political vector.) In this mindset, meritocracy is necessarily seen as ambiguous.

I believe that the most interesting projects in the whole "amateur" sector are the ones that organize around meritocratic principles and consequently build hierarchy; open source software is the best example, but Wikipedia works in a similar fashion. The trick is to keep meritocracy from turning into hegemony. But I digress.

Lévy's bet is that collective intelligence, if it wants to be more than pop culture, will need experts (and expert tools) for a series of semantic tasks ranging from cartography to translation. His vision is indeed much more ambitious than most of what we have seen to this day. The idea is that with the proper (semantic) tools, we could collectively tackle problems that are currently very much out of reach, and do so in a truly global fashion, without forcing everybody under the rather impoverished linguistic umbrella of Globish. Also, in order to make search more pluralistic and less "all visibility to the mainstream" than it currently is, we will need to get closer to the semantic level. I don't believe that IEML, in its current iteration at least, can really do all these things. But I believe that, yet again, Lévy has the right intuition: if collective forms of "problem solving" are to go beyond what they currently do, they will have to find modes of organization that are more sophisticated than the platforms we currently have. These modes will also have to negotiate a balance between "equal opportunity" and "equal representation" and make their peace with institutionalization.

The philosophical discipline of ethics is, in my view, the intellectually most daunting field in the humanities. The central problem was identified by David Hume in his "Treatise of Human Nature", published in 1739–40, and is summarized in this paragraph:

“In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.”

Known as the "is-ought problem", this change of register – the move from a descriptive mode to a prescriptive one – raises the question of what to found the latter on. There is a necessary recourse to something non-descriptive, a system of values that cannot be stabilized by the scientific method and is therefore necessarily a terrain of permanent struggle. Value systems are, however, by no means random; they are deeply embedded in historical process, and while the conflictual nature of the "ought" cannot be dissolved, the contents of ethical debate can be treated as just another "is", i.e. a field of discourse that can be described and analyzed. While the specific answers we give to Kant's question "What should we do?" may well be products of long and hard reasoning, they are nonetheless developed against the backdrop of long-standing "webs of significance" (Geertz), that is, culture.

Having grown up in a German-speaking country, living in France, but also following and participating in the globalized English-language sphere of discourse, I find it hard not to be amazed by the striking differences in how recent developments in technology and digital culture are framed and appreciated. I recently attended the "Web 2.0 Politics" conference near London, and in a sense the experience had the quality of an epiphany. From the perspective of a drifter like me, culture (defined in national or linguistic terms) can sometimes look like a vast assembly of automatisms and reflexes. Coming from the outside, we cannot help but see how little in culture is actually decided upon and how much seems to be simply received. This is especially true when it comes to intrinsically shifty areas like ethics and political reasoning. What struck me at this conference was how certain words seemed to pass through what one could call "automated moral preprocessing", which files very complicated and ambiguous concepts very quickly into neatly labeled boxes, largely divided into "good" and "bad". This is very effective because it speeds up the reasoning process and bridges the rift between "is" and "ought" without much effort. A concept like "participation", for example, gets preprocessed into the "good" box and can then be used as a general-purpose moral qualifier for all kinds of technological and cultural phenomena. Online services that allow people to participate can suddenly be called "democratic" because "participation" and "democracy" are commonly filed together. This is the moment when my Germanic "me" comes to spoil the party and points out that pogroms and lynch mobs are, in fact, quite participatory activities. The little Frenchman who has secretly taken up residence somewhere in my wetware adds that "populisme" is a permanent danger to true democracy and that only strong institutions can guarantee freedom; Catholicism's heritage is a profound mistrust of human nature. These are perhaps nothing more than worn clichés, but in my case the effect of multiculturalism is a permanent cacophony of competing automatisms that disables the "good" / "bad" preprocessing that so much of the current Web 2.0 discourse seems to fall victim to.

We seriously need to get back to understanding ethics – and, as a consequence, politics – as deeply troubling subjects. The usual suspects of French philosophy have become household names, but their principal lesson has been washed away like the famous face in the sand: that critical thinking must examine the ground it is built on. That doesn't mean that normative arguments should be excluded, quite on the contrary – a new Habermas is direly needed. It could mean, though, that Hume's bafflement at how the "ought" suddenly seems to spring out of nowhere should trouble us, too.

I have no idea whether it's going to be accepted, but here is my proposal for the Internet Research 9.0: Rethinking Community, Rethinking Place conference. The title is: Algorithmic Proximity – Association and the "Social Web".

How to observe, describe and conceptualize social structure has been a central question in the social sciences since their beginning in the 19th century. From Durkheim's opposition between organic and mechanical solidarity and Tönnies' distinction between Gemeinschaft and Gesellschaft to modern Social Network Analysis (Burt, Granovetter, Wellman, etc.), the problem of how individuals and groups relate to each other has been at the core of most attempts to conceive of the "social". The state of "community" – even in the loose understanding that has become prevalent when talking about sociability online – is already the end result of a permanent process of proto-social interaction, the plasma (Latour) from which association and aggregation may arise. In order to understand how the sites and services (blogs, social networking services, online dating, etc.) that make up what has become known as the "Social Web" allow for the emergence of higher-order social forms (communities, networks, crowds, etc.), we need to look at the lower levels of social interaction, where sociability is still a relatively open field.
One way of approaching this very basic level of analysis is through the notion of "probability of communication". In his famous work on the diffusion of innovations, Everett Rogers notes that the absence of social structure would mean that all communication between members of a population had the same probability of occurring. In any real setting, of course, this is never the case: people talk (interact, exchange, associate, etc.) with certain individuals more than with others. Beyond the limiting aspects of physical space, the social sciences have identified numerous parameters – such as age, class, ethnicity, gender, dress, modes of expression, etc. – that make communication and interaction between some people a lot more probable than between others. Higher-order social aggregates emerge from this background of attraction and repulsion; sociology has largely concluded that, for all practical purposes, opposites do not attract.
Digital technology largely obliterates the barriers of physical space: instead of being confined to his or her immediate surroundings, an individual can now potentially communicate and interact with all the millions of people registered on the different services of the Social Web. In order to reduce "social overload", many services allow their users to aggregate around physical or institutional landmarks (cities, universities, etc.) and encourage association through network proximity (the friend of a friend might become my friend too). Many of the social parameters mentioned above are also translated onto the Web in the sense that a person's informational representations (profile, blog, avatar, etc.) become markers of distinction (Bourdieu) that strongly influence the probability of communication with other members of the service. Especially in youth culture, opposing cultural interests effectively function as social barriers. These barriers are, in principle, not new; their (partial) digitization, however, is.
Most of the social services online see themselves as facilitators of association and constantly produce "contact trails" that lead to other people, through category browsing, search technology, or automated path-building via backlinking. Informational representations like member profiles are not only read and interpreted by people but also by algorithms that make use of this data whenever contact trails are being laid. The most obvious example can be found on dating sites: when searching for a potential partner, most services will rank the list of results based on compatibility calculations that take into account all the pieces of information members provide. The goal is to compensate for the very large population of potential candidates and to reduce the failure rate of social interaction. Without the randomness that, despite spatial segregation, still marks life offline, the principle of homophily is pushed to the extreme: confrontation with the other as other, i.e. as someone having different opinions, values, tastes, etc., is reduced to a minimum, and the technical nature of this process ensures that it passes unnoticed.
In this paper we attempt to conceptualize the notion of "algorithmic proximity", which we understand as the shaping of the probability of association by technological means. We do not, however, intend to argue that algorithms are direct producers of social structure. Rather, they intervene at the level of proto-social interaction and introduce biases whose subtlety makes them difficult to study and theorize. Their political and cultural significance must therefore be approached with the necessary caution.
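To illustrate the kind of mechanism the abstract has in mind – a toy model, not any particular site's matching algorithm – compatibility ranking over profile data can be written down in a few lines:

```python
def profile_similarity(profile_a, profile_b):
    """Share of profile fields (age bracket, tastes, location, ...) on which
    two members give the same answer."""
    keys = set(profile_a) & set(profile_b)
    if not keys:
        return 0.0
    return sum(profile_a[k] == profile_b[k] for k in keys) / len(keys)

def rank_candidates(me, candidates):
    """Order potential contacts by similarity to 'me': the more alike,
    the more visible -- the unlike quietly sink to the bottom of the list."""
    return sorted(candidates, key=lambda other: profile_similarity(me, other),
                  reverse=True)

me = {"age": "25-30", "music": "electro", "city": "Paris"}
others = [{"age": "25-30", "music": "metal", "city": "Paris"},
          {"age": "25-30", "music": "electro", "city": "Paris"}]
print(rank_candidates(me, others))  # the near-clone comes first
```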

When sites that involve any kind of ranking change their algorithm, there is usually a spectacle worth watching. When Google made some changes to its search algorithms in 2005, the company was sued by KinderStart.com (a search engine for kids, talk about irony), which went from PageRank riches to rags and lost 70% of its traffic in a day (the case was dismissed in 2007). When Digg finally gave in to widespread criticism of organized front-page hijacking and changed the way story promotion works to include a measure of "diversity", the regulars were vocally hurt and unhappy. What I find fascinating about the latter case is the technical problem-solving approach, which implied programming nothing less than diversity. It's not that hard to understand how such a thing works (think "anti-recommendation system" or "un-collaborative filtering"), but still, one has to sit back and appreciate the idea. We are talking about social engineering done by software engineers. Social problem = design problem.
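I can only speculate about Digg's actual implementation, but the general shape of such "un-collaborative filtering" is easy to sketch: discount a story's promotion score when the users digging it are too much alike (all names and weights below are invented):

```python
from itertools import combinations

def crowd_diversity(diggers, similarity):
    """Average pairwise dissimilarity among the users who dug a story;
    'similarity' is any function of two users returning a value in [0, 1]."""
    pairs = list(combinations(diggers, 2))
    if not pairs:
        return 0.0
    return sum(1 - similarity(a, b) for a, b in pairs) / len(pairs)

def promotion_score(n_diggs, diggers, similarity, diversity_weight=0.5):
    """Raw popularity discounted when the digging crowd is homogeneous:
    100 diggs from near-identical users count like roughly 50 diggs
    from a genuinely mixed crowd."""
    d = crowd_diversity(diggers, similarity)
    return n_diggs * ((1 - diversity_weight) + diversity_weight * d)

# Two friends who always digg together count for less than two strangers.
print(promotion_score(100, ["u1", "u2"], lambda a, b: 1.0))  # 50.0
print(promotion_score(100, ["u1", "u2"], lambda a, b: 0.0))  # 100.0
```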

The very real-world effects of algorithms are quite baffling, and since I started to read this book, I truly appreciate the ingenuity and complex simplicity that cannot be reduced to a pure "this is what I want to achieve and so I do it" narrative. There is a delta between the "want" and the "can", and the final system will be the result of a complex negotiation that will, in the end, have changed both sides of the story. Programming diversity means giving the elusive concept of diversity an analytical core, formalizing it and turning it into a machine. The "politics" of a ranking algorithm is not only about values and the project (make story promotion more diverse) but also – to put it bluntly – a matter of the state of knowledge in computer science. This means, in my opinion, that the politics of systems must be discussed in the larger context of an examination of computer science / engineering / design as an already oriented project in itself, based on yet another layer of "want" and "can".

Thanks to Joris for pointing out that my blog was hacked. Damn you spammers.

The term "determine" is often used rather lightly by those who write about the political dimension of technology. At the same time, the accusation of "technological determinism" – albeit sometimes right on target – is used as a means to exclude the discussion of technological parameters from the humanities and the social sciences altogether. But what is actually meant by "technological determinism"? In my view, there are three basic ways of thinking about determinism when it comes to technology:

The first is closely connected to French anthropologist André Leroi-Gourhan and holds that technological evolution is largely self-determined. His notion of "tendance technique" takes its inspiration from evolutionary theory in the sense that technology evolves blindly, but along the paths carved out by the "choices" made throughout its phylogenesis (what some have called "cumulative causation" or "path dependency"). Leroi-Gourhan's perspective has been developed further by Deleuze and Guattari in their concept of the "phylum" and, most notably, by the philosopher of technology Gilbert Simondon (whose work is finally going to be translated into English, hopefully still in 2008), who sees the process of technological evolution as "concretization", going from modular designs to ever more integrated forms. "Technological determinism" would mean, in this first sense, that technology is not the result of social, economic, or cultural processes but largely independent of them, forcing the other sectors to adapt. Technology is determined by its inner logic.

A more colloquial meaning of technological determinism is, of course, connected to the Toronto School, namely Harold A. Innis and Marshall McLuhan. This stuff is so well known and over-commented that I don't really want to get into it – let's just say that technology, here, determines social process either by installing a specific relationship to space and time (Innis) or by establishing a certain equilibrium of the senses (McLuhan). You can find dystopian versions of the same basic concept in Ellul or Postman: technology determines society, to state matters bluntly.

I would argue that there is a third version of technological determinism which, although not completely dissimilar, is far more subtle than the last one. Heidegger's framing of technology as Gestell (an outlook based on cold mathematical reasoning, the industrial destruction of more integrated ways of living, the exploitation of nature, etc.) opens up a question that has been taken up by a large number of people in design theory and practice: is technology determined to follow the logic of Gestell? In Heidegger's perspective, technology is doomed to exert a dehumanizing force on being itself: the determinism here does not so much concern the relationship between technology and society as the essence (Wesen) of technology itself. A lot of thinking about design over the last thirty years has been based on the assumption that a different form of technology is possible: technology that would escape its destiny as Gestell and be emancipating instead of alienating. Discourse about information technology is indeed full of such hopes.

Although “technological determinism” refers most often to the second perspective, a closer examination of “what determines what” opens up a series of quite interesting questions that go beyond the vulgar interpretations of McLuhan’s writings. For those who still adhere to the idea that tools determine their use, here is a list of possible remedies:

  1. Look at design studies where determinism has been replaced by the quite elegant notion of affordance.
  2. Read more Actor-Network Theory.
  3. Think about what Roland Barthes meant by “interpretation”.
  4. Dust off your copy of Hall’s “encoding/decoding”.
  5. Work as a software developer and marvel at the infinity of ways users find to use, appropriate, and break your applications.