Monthly Archives: October 2007
Since MySpace and Facebook became such a big hype, a lot of text has been dedicated to social networking. For people like myself, whose social drive is not very developed, the attraction of “hey, dude, I love you so much!!!” is pretty difficult to parse into a familiar frame of reference, but apparently there’s something to all that cuddling online. Being alone has to be learned, after all. I somehow can’t shake the feeling that people are going to get bored with all the poking eventually…
Independently from that, there is something really interesting about Facebook, and that is, of course, Facebook Platform, the API that allows third-party developers to write plug-in-like applications for the system. Some of them are really impressive (socialistics and the touchgraph app come to mind), others are not. What I find fascinating about the whole thing is that in a certain sense, the social network (the actual “connections” between people – yes, the quotes are not a mistake) becomes an infrastructure that specific applications can “run” on. For the moment, this idea has not yet been pushed all that far, but it is pretty easy to imagine where this could go (from filesharing to virtual yard sales, from identity management to marketing nirvana). In a sense, “special interest” social networks (like LinkedIn, which is currently scrambling to develop its own platform) could plug onto Facebook, and instead of having many accounts for different systems you’ve got your Facebook ID (FB Passport) and load the app for a specific function. If the computer is the Universal Machine and the Internet the Universal Network, Facebook Platform might just become what sociologists since Durkheim have been talking about: the universal incarnation of sociality. Very practical indeed – when Latour tells us that the social does not explain anything but is, in fact, that which has to be explained, we can simply say: Facebook. That’s the Social.
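To make the idea of applications “running” on the social graph a little more concrete, here is a minimal sketch in Python. Everything in it – the SocialGraphClient class, its methods, the toy connection data – is invented for illustration and does not mirror the actual Facebook Platform API; it only shows the division of labor in which the platform owns the connections and the app merely borrows them.

```python
# Hypothetical sketch: a third-party app that treats the social graph
# as infrastructure. The SocialGraphClient API below is invented for
# illustration; it does not mirror the real Facebook Platform.

class SocialGraphClient:
    """Stand-in for a platform-provided social graph service."""

    def __init__(self, session_token):
        # In a real platform app, the token would come from the
        # platform's single sign-on ("FB Passport" in the text above).
        self.session_token = session_token

    def friends_of(self, user_id):
        # The platform, not the app, owns the connection data.
        return DEMO_GRAPH.get(user_id, [])


# Toy connection data standing in for the platform's social graph.
DEMO_GRAPH = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice"],
    "dave": ["bob"],
}


def yard_sale_audience(client, seller_id):
    """A 'virtual yard sale' app: notify friends and friends-of-friends."""
    direct = set(client.friends_of(seller_id))
    extended = {fof for f in direct for fof in client.friends_of(f)}
    return (direct | extended) - {seller_id}


if __name__ == "__main__":
    client = SocialGraphClient(session_token="demo-token")
    print(yard_sale_audience(client, "alice"))  # {'bob', 'carol', 'dave'}
```

The app itself holds no social data at all; swap out the yard sale function for filesharing or marketing and the infrastructure underneath stays the same.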
All of that is, of course, still quite far off, and futurism is rarely time well spent – but still, actor-network theory is becoming more intelligible by the day. Heterogeneous Associations? Well, you just have to look at the Facebook interface and it’s all there, from relationship status to media preferences – just click on Le Fabuleux Destin d’Amélie Poulain on your profile page (come on, I know it’s there) and there’s the list of all the other people whose cool facade betrays a secret romantic. This is a case of mediation, and it’s both technical and symbolic, part Facebook, part Amélie, part postmodern emptiness and longing for simpler times. Heterogeneous, quoi.
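The Amélie example can even be written down as a data structure: a graph in which persons and a film are nodes of different kinds, linked by profile preferences. A minimal sketch with the networkx library, using invented names and tastes:

```python
# A minimal sketch of the "heterogeneous association" above: people and
# a film in one graph, so that clicking the film yields its admirers.
# Names and tastes are invented for illustration.
import networkx as nx

G = nx.Graph()
people = ["ana", "ben", "chloe"]
G.add_nodes_from(people, kind="person")
G.add_node("Le Fabuleux Destin d'Amélie Poulain", kind="film")

# Edges mix the technical and the symbolic: a profile preference
# becomes a link between heterogeneous actors.
for fan in ["ana", "chloe"]:
    G.add_edge(fan, "Le Fabuleux Destin d'Amélie Poulain")

# "Clicking" the film: who else shares the secret romantic streak?
print(list(G.neighbors("Le Fabuleux Destin d'Amélie Poulain")))
# ['ana', 'chloe']
```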
A Facebook Platform thought through to its end could mediate on many additional levels and take part in producing the social through many other types of attachment – once it is no longer a social network application but a social network infrastructure. At least actor-network theory will be a lot easier to teach then…
Oliver Ertzscheid’s blog recently had an interesting post (in French) pointing to a couple of articles and comments on Facebook, among which an article in the LA Times entitled “The Facebook Revolution”. One paragraph in there really stands out:
Boiled down, it goes like this: Humans get their information from two places — from mainstream media or some other centralized organization such as a church, and from their network of family, friends, neighbors and colleagues. We’ve already digitized the first. Almost every news organization has a website now. What Zuckerberg is trying to do with Facebook is digitize the second.
This quote very much reminds me of some of the issues discussed in the “Digital Formations” volume edited by Robert Latham and Saskia Sassen in 2005. In their introduction (available online) they coin the (unpronounceable and therefore probably doomed) term “sociodigitization” by distinguishing it from “digitization”:
The qualifier “socio” is added to distinguish from the process of content conversion, the broader process whereby activities and their histories in a social domain are drawn up into the digital codes, databases, images, and text that constitute the substance of a digital formation. As the various chapters below show, such drawing up can be a function of deliberate planning and reflexive ordering or of contingent and discrete interactions and activities. In this respect as well, sociodigitization differs from digitization: what is rendered in digital form is not only information and artifacts but also logics of social organization, interaction, and space as discussed above.
Facebook, then, is quite plainly an example of the explicit (socio-)digitization of social relations that were mediated quite differently in the past. The “network of family, friends, neighbors and colleagues” that is now recreated inside the system has of course been relying on technical (and digital) means of communication and interaction for quite a while, and these media did play a role in shaping the relations they helped sustain. There is no need to cite McLuhan to understand that relating to distant friends and family by mail or telephone will influence the way these relations are lived and how they evolve. Being rather stable dispositifs, the specific logics of individual media (their affordances) were largely covered up by habitualization (cf. Berger & Luckmann 1967, p. 53); it is the high speed of software development on the Web that makes the “rendering of logics of social organization, interaction, and space” so much more visible. In that sense, what started out as media theory is quickly becoming software theory, or the theory of ICT. There is, of course, a strong affiliation with Lawrence Lessig’s thoughts about computer code (now in v. 2.0) and its ability to function as both constraint and incentive, shaping human behavior in a fashion comparable to law, morals, and the market.
The important matter seems to be understanding how sociodigitization proceeds in the context of the current explosion of Web-based software applications that is set to (re)mediate a great number of everyday practices. While media theory in the tradition of McLuhan has striven to identify the invariant core, the ontological essence of individual media, such an endeavor seems futile when it comes to software, whose prime characteristic is malleability. This forces us to concentrate the analysis of “system properties” (i.e. the specific and local logic of sociodigitization) on individual platforms or, at best, on categories of applications. When looking at Facebook, this means analyzing the actual forms the process of digitization leads to, as well as the technical and cultural methods involved. How do I build and grow my network? What are the forms of interaction the system proposes? Who controls data structure, visibility, and perpetuity? What are the possibilities for building associations, and what types of publics do they give rise to?
In the context of my own work, I ask myself how we can formulate the cultural, ethical, and political dimensions of systems like Facebook as matters of design – not only on a descriptive level, but on the level of design methodology and guidelines. The critical analysis of social network sites and the cultural phenomena that emerge around them is, of course, essential, but shouldn’t there be more debate about how such systems should work? What would a social network look like that is explicitly built on the grounds of a political theory of democracy? Is such a thing even thinkable?
I have been working, for a couple of months now, on what has been called “network theory” – a rather strange amalgam of social theory, applied mathematics, and studies on ICT. What has interested me most in that area is the epistemological heterogeneity of the network concept and the difficulties that come with it. Quite obviously, a cable-based computer network, an empirically established social network, and the mathematical exploration of dendrite connections in worm brains are not one and the same thing. The buzz around a possible “new science of networks” (Duncan J. Watts) suggests, however, that there is enough common ground between a great number of very different phenomena to justify the use of similar (mathematical) tools and concepts to map them as networks. The question of whether these things (the Internet, the spreading of disease, ecosystems, etc.) “are” networks or not seems of less importance than the question of whether network models produce interesting new perspectives on the areas they are being applied to. And this is indeed the case.
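That common ground is easy to demonstrate in practice: the same measures apply no matter what the nodes “are”. A minimal sketch with the networkx library, using two invented toy graphs that stand in for a friendship network and a router topology:

```python
# Sketch: the same mathematical toolkit applied to very different
# phenomena, here two toy graphs standing in for a friendship network
# and a router topology. The data is invented for illustration.
import networkx as nx

friendships = nx.Graph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
routers = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])

for name, g in [("friendships", friendships), ("routers", routers)]:
    # Identical measures, regardless of what the nodes "are".
    print(name,
          "density:", round(nx.density(g), 2),
          "avg. path length:", round(nx.average_shortest_path_length(g), 2))
```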
One important, albeit often overlooked, aspect of any mathematical modeling is the question of formalization: the mapping of entities from the “real” world onto variables and back again, a process that necessarily implies selection and reduction of complexity. This is a first layer of ambiguity and methodological difficulty. A second one has been noted even more rarely, and it concerns software. Let me explain: the goal of network mapping, especially when applied to the humanities, is indeed to produce a map – a representation of numerical relations that is more intuitively readable than a matrix. Although graph (or network) theory does not need to produce a graphical representation as its result, such representations are highly powerful means to communicate complex relationships in a way that works well with the human capacity for visual understanding. These graphs, however, are not drawn by hand but generally modeled by computer software, e.g. programs like InFlow, Pajek, different tools for social network analysis, or a plethora of open source network visualization libraries. It may be trivial to visualize a network of five or ten nodes, but positioning 50 or more nodes and their connections is quite a daunting task, and there are different conceptual and algorithmic solutions to the problem. Some tools use automatic clustering methods that lump nodes together and allow users to explore a network structure as a hierarchical system whose lower levels only unfold by zooming in on them. Hyperbolic projection is another method for reducing the number of nodes to draw for a given viewport. Three-dimensional projections represent yet another way to handle large numbers of nodes and connections. Behind these basic questions lurk matters of spatial distribution, i.e. the algorithms that try to strike a compromise between accurate representation and visual coherence. Interface design adds yet another layer, in the sense that certain operations are made available to users while others are not: zooming, dragging, repositioning of nodes, manual clustering, etc.
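How much the layout algorithm alone channels the resulting map can be shown in a few lines. A minimal sketch with networkx and matplotlib, drawing networkx’s bundled karate club graph twice – once with a force-directed (spring) layout, whose physics simulation starts from a random state (hence the seed parameter, itself one of the blackboxed decisions), and once with a purely geometric circular layout:

```python
# Sketch: one network, two layout algorithms, two rather different maps.
# Uses networkx's bundled karate club graph as a stand-in dataset.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.karate_club_graph()

# Force-directed layout: positions emerge from a physics simulation
# that starts from a random state; the seed is a hidden decision.
spring_pos = nx.spring_layout(G, seed=42)

# Circular layout: positions follow a fixed geometric rule instead.
circular_pos = nx.circular_layout(G)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, pos, title in [(axes[0], spring_pos, "spring (force-directed)"),
                       (axes[1], circular_pos, "circular")]:
    nx.draw(G, pos, ax=ax, node_size=40)
    ax.set_title(title)
plt.show()
```

Same data, two quite different visual stories – and with a different seed, the spring layout would tell yet another one.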
The point I’m trying to make is the following: the way we map networks is in large part channeled by the software we use, and these tools are therefore not mere accessories to research but epistemological agents in their own right that participate in the production of knowledge and in the shaping of research results. For the humanities, this is, in a sense, a new situation: while research methods based on mathematics are nothing new (sociometrics, etc.), the new research tools that a network science brings with it (other examples come to mind, e.g. data-mining) might imply a conceptual rift where part of the methodology gets blackboxed into a piece of software. This is not necessarily a problem, but it is something that has to be discussed, examined, and understood.