Posted by Bernhard on October 18th 2007 @ 2:38 am
I have recently been thinking quite a lot about what it means to be “critical”. The term is used constantly at the conferences I attend, yet somehow it remains intuitively unintelligible to me. The dictionary says that a critical person is “inclined to judge severely and find fault” and a critical reading is “characterized by careful, exact evaluation and judgment”. I cannot shake the impression that much of the debate about the political and ethical dimension of information systems is neither careful nor exact. Especially when it comes to analyzing the deeds of big commercial actors like Google, there has been a pointed shift from complete apathy to hysteria. People like Siva Vaidhyanathan, whose talk about the “googlization of everything” I heard at the New Network Theory Conference, are, in my view, riding a wave of “critical” outrage that seemingly tries to compensate for the long years of relative silence about issues of power and control in information search, filtering, and structuration. But instead of being careful and exact – apparently the basis of both critical thought and scholarly pursuit – many of the newly appointed Émile Zolas lump together all sorts of different arguments in order to make their case. In Vaidhyanathan’s case, for example, Google is bad because its search algorithms work too well and its book search not well enough.
Don’t get me wrong, I’m not saying that we should let the emerging giants of the Web era off the hook. I fully agree with many points Jeffrey Chester recently made in The Nation – despite the sensationalist title of that article. What I deplore is a critical reflex that is not concerned with being careful and exact. If we do not adhere, as scholars, to these basic principles, our discourse loses the basis of its justification and we do a disservice to both the political cause of fighting for pluralism of opinion in the information landscape and the academic cause of furthering understanding. “Being critical” should not lead to an obsession with whether Google (or any other company, for that matter) is “good” or “bad”, but to an obsession with the more fundamental issues that link these strange systems that serve us the Web as a digestible meal to matters of political and economic domination. I have been reading a lot recently about how Google is invading our privacy, but very little about the actual social function of privacy, seen as a historical achievement, and about how that very idea could and should be translated into an information age where every action leaves a footprint of data waiting to be mined. We still seem to be stuck in a “1984” mindset that, in my view, is thoroughly misleading when it comes to understanding the matters at hand. If we phrase the challenges posed by Google in purely moral terms, we might miss the ethical dimension of the problem – ethics understood as the “art of conduct”, that is.
This might sound strange, but under the digital condition the protection of privacy faces many of the same problems as the enforcement of copyright, because both concern the problem of controlling flows of data. And whether we like it or not, technical and legal solutions for protecting privacy might end up looking quite similar to the DRM systems we rightfully criticize. It is in that sense that the malleability of digital technology throws us back to the fundamentals of ethics: how do we want to live? What do we want our societies to look like? What makes for a good life? And how do we update the answers to those questions for our current technological and legal situation? Simply put: I would like to read more about why privacy is fundamentally important to democracy and how the protection of that right could work when everything we do online is prone to be algorithmically analyzed. Chastising Google sometimes looks to me like arguing on the same level as the company’s corporate motto: “don’t be evil” – please?
We don’t need Google to repent its sins. We need well-argued laws that clearly define our rights to the data we produce, that close the loopholes around such laws (EULAs come to mind), and technical means (encryption-based?) that translate those rights onto the system level. Less morality and more ethics, that is.
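Purely as an illustration of what “translating onto the system level” might mean, here is a minimal sketch in Python using the cryptography library’s Fernet interface. The scenario and the data are assumptions of mine, not a proposal drawn from any existing system: if users encrypt their data before it leaves their machines, a legal right becomes a technical fact.

```python
from cryptography.fernet import Fernet

# Hypothetical scenario: the user, not the service, holds the key.
key = Fernet.generate_key()
box = Fernet(key)

# Data is encrypted before it ever reaches a server, so the stored
# bytes cannot be algorithmically mined by whoever hosts them.
token = box.encrypt(b"a search query, an email, a click trail")

# Only the key holder can read the data back.
print(box.decrypt(token).decode())
```

The interesting questions, of course, begin exactly where this toy example ends: who manages the keys, and what services become impossible when the data is opaque to the provider.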
Posted by Bernhard on October 11th 2007 @ 1:38 am
I have been working, for a couple of months now, on what has been called “network theory” – a rather strange amalgam of social theory, applied mathematics, and studies on ICT. What has interested me most in that area is the epistemological heterogeneity of the network concept and the difficulties that come with it. Quite obviously, a cable-based computer network, an empirically established social network, and the mathematical exploration of dendrite connections in worm brains are not one and the same thing. The buzz around a possible “new science of networks” (Duncan J. Watts) suggests, however, that there is enough common ground between a great number of very different phenomena to justify the use of similar (mathematical) tools and concepts to map them as networks. The question of whether these things (the Internet, the spreading of disease, ecosystems, etc.) “are” networks or not seems less important than the question of whether network models produce interesting new perspectives on the areas they are applied to. And this is indeed the case.
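To make that common ground concrete, here is a minimal sketch in Python using the networkx library (the node names and edges are invented for illustration): a friendship network and a hyperlink network, two very different phenomena, are mapped onto the same formal object and measured with the same tool.

```python
import networkx as nx

# Two very different phenomena mapped onto the same formal object: a graph.
friendships = nx.Graph([("Ann", "Ben"), ("Ben", "Carl"),
                        ("Ann", "Carl"), ("Carl", "Dora")])
hyperlinks = nx.DiGraph([("blog.example", "news.example"),
                         ("wiki.example", "news.example"),
                         ("news.example", "blog.example")])

# The same measure applies to both, regardless of what the
# nodes and edges "are" in the real world.
print(nx.degree_centrality(friendships))
print(nx.in_degree_centrality(hyperlinks))
```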
One important, albeit often overlooked, aspect of any mathematical modeling is the question of formalization: the mapping of entities from the “real” world onto variables and back again, a process that necessarily implies selection and a reduction of complexity. This is a first layer of ambiguity and methodological difficulty. A second one has been noted even more rarely, and it concerns software. Let me explain: the goal of network mapping, especially when applied to the humanities, is indeed to produce a map: a representation of numerical relations that is more intuitively readable than a matrix. Although graph (or network) theory does not need to produce a graphical representation as its result, such representations are highly powerful means of communicating complex relationships in a way that works well with the human capacity for visual understanding. These graphs, however, are not drawn by hand but generally modeled by computer software, e.g. programs like InFlow, Pajek, various tools for social network analysis, or a plethora of open source network visualization libraries. It may be trivial to visualize a network of five or ten nodes, but positioning 50 or more nodes and their connections is quite a daunting task, and there are different conceptual and algorithmic solutions to the problem. Some tools use automatic clustering methods that lump nodes together and allow users to explore a network structure as a hierarchical system whose lower levels unfold only by zooming in on them. Hyperbolic projection is another method for reducing the number of nodes to draw for a given viewport. Three-dimensional projections represent yet another way to handle large numbers of nodes and connections. Behind these basic questions lurk matters of spatial distribution, i.e. the layout algorithms that try to strike a compromise between accurate representation and visual coherence; a sketch of one such algorithm follows below. Interface design adds yet another layer, in the sense that certain operations are made available to users while others are not: zooming, dragging, repositioning of nodes, manual clustering, etc.
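To give a concrete sense of what such a layout algorithm does, here is a minimal sketch in Python using networkx and matplotlib. The random graph and the parameter values are assumptions for illustration, not the internals of any of the tools named above.

```python
import networkx as nx
import matplotlib.pyplot as plt

# A random graph standing in for an empirically collected network.
G = nx.gnm_random_graph(50, 120, seed=42)

# A force-directed ("spring") layout: connected nodes attract,
# all nodes repel, and the algorithm iterates toward a compromise
# between faithful distances and visual legibility.
pos = nx.spring_layout(G, k=0.3, iterations=100, seed=42)

nx.draw(G, pos, node_size=40, width=0.5)
plt.show()
```

Note that the spacing parameter, the number of iterations, and even the random seed all change the resulting picture: choices most users of such software never see.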
The point I’m trying to make is the following: the way we map networks is in large part channeled by the software we use. These tools are therefore not mere accessories to research but epistemological agents that participate in the production of knowledge and in shaping research results. For the humanities, this is, in a sense, a new situation: while research methods based on mathematics are nothing new (sociometrics, etc.), the research tools that a science of networks brings with it (other examples come to mind, e.g. data mining) might imply a conceptual rift where part of the methodology gets black-boxed into a piece of software. This is not necessarily a problem, but it is something that has to be discussed, examined, and understood.
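As a small demonstration of that channeling, consider the following sketch (again Python with networkx; the example graph is a standard test dataset, chosen purely for illustration): the same network, passed through two different layout algorithms, yields two rather different pictures, and with them two different intuitions about the structure.

```python
import networkx as nx
import matplotlib.pyplot as plt

# The same data...
G = nx.karate_club_graph()

# ...rendered by two different layout algorithms.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
nx.draw(G, nx.spring_layout(G, seed=1), ax=ax1, node_size=40)
ax1.set_title("force-directed layout")
nx.draw(G, nx.circular_layout(G), ax=ax2, node_size=40)
ax2.set_title("circular layout")
plt.show()
```

Neither picture is the “true” one; each is a product of the algorithm as much as of the data.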