Monthly Archives: October 2007
Posted by Bernhard on October 24th 2007 @ 10:28 am
Since MySpace and Facebook have become such big hype, a lot of text has been dedicated to social networking. For people like myself, whose social drive is not very developed, the attraction of “hey, dude, I love you so much!!!” is pretty difficult to parse into a familiar frame of reference, but apparently there’s something to all that cuddling online. Being alone has to be learned, after all. I somehow can’t shake the feeling that people are going to get bored with all the poking eventually…
Independently from that, there is something really interesting about Facebook and that is, of course, Facebook Platform, the API that allows third party developers to write plug-in like applications for the system. Some of them are really impressive (socialistics and the touchgraph app come to mind), others are not. What I find fascinating about the whole thing is that in a certain sense, the social network (the actual “connections” between people – yes, the quotes are not a mistake) becomes an infrastructure that specific applications can “run” on. For the moment, this idea has not yet been pushed all that far, but it is pretty easy to imagine where this could go (from filesharing to virtual yard sale, from identity management to marketing nirvana). In a sense, “special interest” social networks (like LinkedIn, which is currently scrambling to develop its own platform) could plug onto Facebook and instead of having many accounts for different systems you’ve got your Facebook ID (FB Passport) and load the app for a specific function. If the computer is a Universal Machine and the Internet the Universal Network, Facebook Platform might just become what sociologists since Durkheim have been talking about: the universal incarnation of sociality. Very practical indeed – when Latour tells us that the social is not explaining anything but is, in fact, that which has to be explained, we can simply say: Facebook. That’s the Social.
That’s of course still far around the corner, and futurism is rarely time well spent – but still, actor-network theory is becoming more intelligible by the day. Heterogeneous associations? Well, you just have to look at the Facebook interface and it’s all there, from relationship status to media preferences – just click on Le Fabuleux Destin d’Amélie Poulain on your profile page (come on, I know it’s there) and there’s the list of all the other people whose cool facade betrays a secret romantic. This is a case of mediation and it’s both technical and symbolic, part Facebook, part Amélie, part postmodern emptiness and longing for simpler times. Heterogeneous, quoi.
A Facebook Platform thought through to its end could mediate on many additional levels, taking part in producing the social through many other types of attachment, once it is no longer a social network application but a social network infrastructure. At least actor-network theory will be a lot easier to teach then…
Posted by Bernhard on October 18th 2007 @ 2:38 am
I have recently been thinking quite a lot about what it means to be “critical”. At many of the conferences I go to, the term is used constantly, but somehow it remains intuitively unintelligible to me. The dictionary says that a critical person would be “inclined to judge severely and find fault” and a critical reading “characterized by careful, exact evaluation and judgment”. I cannot shake the impression that a lot of the debate about the political and ethical dimension of information systems is neither careful nor exact. Especially when it comes to analyzing the deeds of big commercial actors like Google, there has been a pointed shift from complete apathy to hysteria. People like Siva Vaidhyanathan, whose talk about the “googlization of everything” I heard at the New Network Theory Conference, are, in my view, riding a wave of “critical” outrage that seemingly tries to compensate for the long years of relative silence about issues of power and control in information search, filtering, and structuration. But instead of being careful and exact – apparently the basis of both critical thought and scholarly pursuit – many of the newly appointed Émile Zolas are lumping together all sorts of different arguments in order to make their case. In Vaidhyanathan’s case, for example, Google is bad because its search algorithms work too well and the book search not well enough.
Don’t get me wrong, I’m not saying that we should let the emerging giants of the Web era off the hook. I fully agree with many points Jeffrey Chester recently made in The Nation – despite the sensationalist title of that article. What I deplore is a critical reflex that is not concerned with being careful and exact. If we do not adhere, as scholars, to these basic principles, our discourse loses the basis of its justification and we are doing a disservice to both the political cause of fighting for pluralism of opinion in the information landscape and the academic cause of furthering understanding. Our “being critical” should not lead to an obsession with the question of whether Google (or other companies, for that matter) is “good” or “bad” but to an obsession with the more fundamental issues that link these strange systems that serve us the Web as a digestible meal to matters of political and economic domination. I’ve been reading a lot recently about how Google is invading our privacy but very little about the actual social function of privacy, seen as a historical achievement, and how the very idea could and should be translated into the information age, where every action leaves a footprint of data waiting to be mined. We still seem to be in a “1984” mindset that, in my view, is thoroughly misleading when it comes to understanding the matters at hand. If we phrase the challenges posed by Google in purely moral terms we might miss the ethical dimension of the problem – ethics understood as the “art of conduct”, that is.
This might sound strange, but under the digital condition the protection of privacy faces many of the same problems as the enforcement of copyright, because both concern the problem of controlling flows of data. And whether we like it or not, both technical and legal solutions for protecting privacy might end up looking quite similar to the DRM systems we rightfully criticize. It is in that sense that the malleability of digital technology throws us back to the fundamentals of ethics: how do we want to live? What do we want our societies to look like? What makes for a good life? And how do we update the answers to those questions for our current technological and legal situation? Simply put: I would like to read more about why privacy is fundamentally important to democracy and how protection of that right could work when everything we do online is prone to be algorithmically analyzed. Chastising Google sometimes looks to me like actually arguing on the same level as the company’s corporate motto: “don’t be evil” – please?
We don’t need Google to repent their sins. We need well-argued laws that clearly define our rights to the data we produce, patch up the ways around such laws (EULAs come to mind), and think about technical means (encryption based?) that translate them onto the system level. Less morals and more ethics, that is.
Posted by Bernhard on October 15th 2007 @ 3:25 am
Oliver Ertzscheid’s blog recently had an interesting post (French) pointing to a couple of articles and comments on The Facebook, among which an article in the LA Times entitled “The Facebook Revolution”. One paragraph in there really stands out:
Boiled down, it goes like this: Humans get their information from two places — from mainstream media or some other centralized organization such as a church, and from their network of family, friends, neighbors and colleagues. We’ve already digitized the first. Almost every news organization has a website now. What Zuckerberg is trying to do with Facebook is digitize the second.
This quote very much reminds me of some of the issues discussed in the “Digital Formations” volume edited by Robert Latham and Saskia Sassen in 2005. In their introduction (available online) they coin the (unpronounceable and therefore probably doomed) term “sociodigitization” by distinguishing it from “digitization”:
The qualifier “socio” is added to distinguish from the process of content conversion, the broader process whereby activities and their histories in a social domain are drawn up into the digital codes, databases, images, and text that constitute the substance of a digital formation. As the various chapters below show, such drawing up can be a function of deliberate planning and reflexive ordering or of contingent and discrete interactions and activities. In this respect as well, sociodigitization differs from digitization: what is rendered in digital form is not only information and artifacts but also logics of social organization, interaction, and space as discussed above.
Facebook, then, is quite plainly an example of the explicit (socio-)digitization of social relations that were mediated quite differently in the past. The “network of family, friends, neighbors and colleagues” that is now recreated inside the system has of course been relying on technical (and digital) means of communication and interaction for quite a while, and these media did play a role in shaping the relations they helped sustain. There is no need to cite McLuhan to understand that relating to distant friends and family by mail or telephone will influence the way these relations are lived and how they evolve. Being rather stable dispositifs, the specific logics of individual media (their affordances) were largely covered up by habitualization (cf. Berger & Luckmann 1967, p. 53); it is the high speed of software development on the Web that makes the “rendering of logics of social organization, interaction, and space” so much more visible. In that sense, what started out as media theory is quickly becoming software theory or the theory of ICT. There is, of course, a strong affiliation with Lawrence Lessig’s thoughts about computer code (now in v. 2.0) and its ability to function as both constraint and incentive, shaping human behavior in a fashion comparable to law, morals, and the market.
The important matter seems to be the understanding of how sociodigitization proceeds in the context of the current explosion of Web-based software applications that is set to (re)mediate a great number of everyday practices. While media theory in the tradition of McLuhan has striven to identify the invariant core, the ontological essence of individual media, such an endeavor seems futile when it comes to software, whose prime characteristic is malleability. This forces us to concentrate the analysis of “system properties” (i.e. the specific and local logic of sociodigitization) on individual platforms or, at best, categories of applications. When looking at Facebook, this means analyzing the actual forms the process of digitization leads to as well as the technical and cultural methods involved. How do I build and grow my network? What are the forms of interaction the system proposes? Who controls data structure, visibility, and perpetuity? What are the possibilities for building associations and what types of public do they give rise to?
In the context of my own work, I ask myself how we can formulate the cultural, ethical, and political dimension of systems like Facebook as matters of design, and not only on a descriptive level, but on the level of design methodology and guidelines. The critical analysis of social network sites and the cultural phenomena that emerge around them is, of course, essential, but shouldn’t there be more debate about how such systems should work? What would a social network look like that is explicitly built on the grounds of a political theory of democracy? Is such a thing even thinkable?
Posted by Bernhard on October 11th 2007 @ 1:38 am
I have been working, for a couple of months now, on what has been called “network theory” – a rather strange amalgam of social theory, applied mathematics, and studies on ICT. What has interested me most in that area is the epistemological heterogeneity of the network concept and the difficulties that come with it. Quite obviously, a cable based computer network, an empirically established social network, and the mathematical exploration of dendrite connections in worm brains are not one and the same thing. The buzz around a possible “new science of networks” (Duncan J. Watts) suggests, however, that there is enough common ground between a great number of very different phenomena to justify the use of similar (mathematical) tools and concepts to map them as networks. The question of whether these things (the Internet, the spreading of disease, ecosystems, etc.) “are” networks or not seems of less importance than the question of whether network models produce interesting new perspectives on the areas they are being applied to. And this is indeed the case.
One important, albeit often overlooked, aspect of any mathematical modeling is the question of formalization: the mapping of entities from the “real” world onto variables and back again, a process that necessarily implies selection and reduction of complexity. This is a first layer of ambiguity and methodological difficulty. A second one has been noted even more rarely, and it concerns software. Let me explain: the goal of network mapping, especially when applied to the humanities, is indeed to produce a map: a representation of numerical relations that is more intuitively readable than a matrix. Although graph (or network) theory does not need to produce a graphical representation as its result, such representations are highly powerful means to communicate complex relationships in a way that works well with the human capacity for visual understanding. These graphs, however, are not drawn by hand but generally modeled by computer software, e.g. programs like InFlow, Pajek, different tools for social network analysis, or a plethora of open source network visualization libraries. It may be a trivial task to visualize a network of five or ten nodes, but the positioning of 50 or more nodes and their connections is quite a daunting task and there are different conceptual and algorithmic solutions to the problem. Some tools use automatic clustering methods that lump nodes together and allow users to explore a network structure as a hierarchical system where lower levels unfold only when zooming in on them. Parabolic projection is another method for reducing the number of nodes to draw for a given viewport. Three-dimensional projections represent yet another way to handle large numbers of nodes and connections. Behind these basic questions lurk matters of spatial distribution, i.e. the algorithms that try to make a compromise between accurate representation and visual coherence.
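To make the “spatial distribution” point tangible: the most common family of layout algorithms in such tools is force-directed placement, where edges act like springs pulling connected nodes together and all nodes repel each other. A minimal sketch in the style of Fruchterman-Reingold (the toy graph, constants, and cooling schedule are my own illustrative choices, not the algorithm of any particular tool):

```python
import math
import random

def force_directed_layout(nodes, edges, width=1.0, height=1.0,
                          iterations=50, seed=42):
    """Fruchterman-Reingold style layout: edges pull connected
    nodes together, every node pair pushes apart."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(0, width), rng.uniform(0, height)] for n in nodes}
    k = math.sqrt(width * height / len(nodes))  # ideal edge length
    t = width / 10                              # "temperature" caps movement

    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # repulsion between every pair of nodes
        for i, v in enumerate(nodes):
            for u in nodes[i + 1:]:
                dx = pos[v][0] - pos[u][0]
                dy = pos[v][1] - pos[u][1]
                dist = math.hypot(dx, dy) or 1e-9
                f = k * k / dist
                disp[v][0] += dx / dist * f; disp[v][1] += dy / dist * f
                disp[u][0] -= dx / dist * f; disp[u][1] -= dy / dist * f
        # attraction along edges
        for v, u in edges:
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            dist = math.hypot(dx, dy) or 1e-9
            f = dist * dist / k
            disp[v][0] -= dx / dist * f; disp[v][1] -= dy / dist * f
            disp[u][0] += dx / dist * f; disp[u][1] += dy / dist * f
        # move each node, capped by temperature, then cool down
        for v in nodes:
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            step = min(d, t)
            pos[v][0] = min(width, max(0.0, pos[v][0] + dx / d * step))
            pos[v][1] = min(height, max(0.0, pos[v][1] + dy / d * step))
        t *= 0.95
    return pos

# a small toy graph: two triangles joined by one bridge edge
nodes = ["a", "b", "c", "d", "e", "f"]
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
layout = force_directed_layout(nodes, edges)
```

Even this crude sketch makes the epistemological point visible: the random seed, the cooling schedule, and the force constants all shape where nodes end up, so two runs or two tools can draw the “same” network quite differently.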
Interface design adds yet another layer, in the sense that certain operations are made available to users, while others are not: zooming, dragging, repositioning of nodes, manual clustering, etc.
The point I’m trying to make is the following: the way we map networks is in large part channeled by the software we use, and these tools are therefore not mere accessories to research but indeed epistemological agents that participate in the production of knowledge and in shaping research results. For the humanities, this is, in a sense, a new situation: while research methods based on mathematics are nothing new (sociometrics, etc.), the new research tools that a network science brings with it (other examples come to mind, e.g. data-mining) might imply a conceptual rift where part of the methodology gets blackboxed into a piece of software. This is not necessarily a problem, but it is something that has to be discussed, examined, and understood.
Posted by Bernhard on October 10th 2007 @ 5:45 am
Most of the political potentialities of automated surveillance depend on two elements. The debate has generally concentrated on the first: data acquisition. But “digital wiretapping”, in whatever technical shape it may come, is only one part of the issue. Making sense of the data collected is the more complicated, albeit often neglected, half of the equation. Data mining technologies have certainly advanced a great deal over the last couple of years, but the commercial applications are not necessarily adapted to the demands that government agencies might have. This article makes the claim that scientists have developed software that can “help detect terrorists before they strike”. It reads:
Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.
Although I’m quite skeptical about the actual performance of such software in the field (the real world is pretty messy after all), it shows the direction things are heading. The actual piece of software comes pretty close to what Virilio has described as “machine de vision” (vision machine) – a device that not only records reality (camera) but also interprets it (the man on the subway platform walking around nervously is not just a heap of pixels but a potential terrorist). Virilio talks about the “delegation of perception” and this is perhaps the most interesting aspect of the increasing technologization of control: part of the process of decision-making (and therefore part of the responsibility) is transferred to algorithms, and questions of professional ethics become matters of optimizing parameters and debugging.
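To make concrete what “optimizing parameters” could mean here, consider a purely hypothetical sketch of such a scoring scheme: indicator readings combined into a single “likelihood” score via a weighted sum pushed through a logistic function. Every indicator name, weight, and threshold below is invented for illustration; this is not the Buffalo system’s actual model.

```python
import math

# Hypothetical behavioral indicators and weights -- invented for
# illustration, not the actual features of any deployed system.
WEIGHTS = {
    "gait_agitation": 1.2,
    "voice_stress": 0.8,
    "facial_tension": 0.9,
    "loitering_time": 0.5,
}
BIAS = -3.0  # baseline keeps scores low in the absence of strong signals

def suspicion_score(indicators):
    """Combine normalized indicator readings (0..1) into a single
    probability-like score via a logistic weighted sum."""
    z = BIAS + sum(WEIGHTS[name] * value
                   for name, value in indicators.items())
    return 1 / (1 + math.exp(-z))

calm = suspicion_score({"gait_agitation": 0.1, "voice_stress": 0.1,
                        "facial_tension": 0.1, "loitering_time": 0.1})
agitated = suspicion_score({"gait_agitation": 0.9, "voice_stress": 0.8,
                            "facial_tension": 0.9, "loitering_time": 0.7})
```

The point of the sketch is not the arithmetic but where the responsibility now sits: whoever tunes `WEIGHTS` and `BIAS` decides, in effect, how nervous one may look on a subway platform – and that decision is made at the level of parameters, not of professional judgment.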