Monthly Archives: October 2010

Some debates are just so much older than our short forgetful minds allow us to recognize. In 1965 Jacques Barzun (still alive today at a biblical 102!) made the following statement:

What have the humanities been doing for thirty-five years except to do exactly what a computer would do, only with their own unaided card indexes and fountain pens? They have taken apart poetry, they have taken apart novels, they have counted images, they have followed symbols that are sometimes non-existent, they have destroyed their own subject matter by a pseudo-computer-like approach, and now they have only themselves to blame if they have to learn the tricks and the jargon of computerizing. (Jacques Barzun at a conference at Yale University, cited in Taviss (ed.), The Computer Impact, 1970, p. 199)

While I have not been able to find the original text of Barzun’s talk, Bowler (ed.), Computers in Humanistic Research, 1967, p. 232 offers a summary of his three main points of critique:

First is the assumption of a false relation between the units defined and written and the reality they are supposed to represent. For example, 20 years ago, someone attempted to study genius by selecting names from Who’s Who in America, as being indicative of the quality of genius. Second is the fallacy of assessing importance by weight or numbers. The speaker mentioned a published census, again some 20 years ago, which indicated that the number of brownstone or frame houses in New York was much larger than the number of skyscrapers, giving the erroneous impression that the former represented the city’s characteristic architectural form. The third error is the attribution of meaning based upon only a partial study of the object in question. Two conspicuous examples of the faulty attribution of meaning to partial signs are the cases of machine translation and the objective tests given to school children and the people in business.

Would it be very hard to find contemporary examples that fit these three points?

Yesterday, Microsoft announced another step in their “long-term partnership” with Facebook. The two companies have had close ties since Microsoft invested a hefty sum in Facebook in 2007, and the former has managed advertising on the latter’s site for quite a while. The “next step” basically adds a “social layer” to Bing search results (go to Ars Technica for a writeup or All Things Digital for a liveblog of the PR event), and this is actually a pretty big thing. Google has certainly taken contextual information into account when deciding which results to show and how to rank them: physical location, search history, and Gmail contacts have been part of that process for a while, but the effects have been rather subtle.

Bing’s new features basically use the same technical layer as the Facebook boxes that popped up all over the Web about half a year ago (most modern browsers have plug-ins that allow you to block those, by the way). If Bing detects the Facebook cookie while you’re on their site, it adds a couple of features that allow you to interact with “friends” more easily. Most of these are basic convenience features, but it is the “liked results” that are the most remarkable: Bing will use your contacts’ “likes” to rank results. While we will have to wait to see how these features pan out, social search may look something like this:

Bing social search interface

In this example, the first result is the announcement of a news article on the release of the DVD version of Iron Man 2, which would hardly be a top-ranked result without the social layer. If Bing continues to make inroads on Google, the “like” button may take on additional importance for driving traffic, and marketers will most certainly devise new ways to get people to “like” stuff – e.g. “press the button and win a free t-shirt”.
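To make the mechanism concrete, here is a purely hypothetical sketch of what ranking by friends’ “likes” could amount to. Bing’s actual formula is of course not public, so the function, data shapes, and URLs below are all invented for illustration:

```python
# Hypothetical sketch of "liked results" re-ranking; not Bing's actual method.

def rerank(results, likes_by_url):
    """Stable sort: results liked by more friends move up, ties keep their order."""
    return sorted(results, key=lambda url: len(likes_by_url.get(url, ())), reverse=True)

results = [
    "blog.example.com/some-other-post",
    "news.example.com/iron-man-2-dvd-release",
]
likes = {"news.example.com/iron-man-2-dvd-release": ["friend_a", "friend_b"]}
print(rerank(results, likes))  # the "liked" announcement now ranks first
```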

Cass Sunstein’s arguments on the dangers of echo chambers – “incestuous amplification” in social groups – will certainly be taken up again, and perhaps rightfully so: while the Internet remains a beautifully heterogeneous mess, the algorithmically sustained support for the logic of homophily (“birds of a feather…”) that can be observed in more and more places on the Web merits critical examination. While Diana Mutz’s work makes the inconvenient argument that “hearing the other side” of political debate may actually lead to less political engagement, our representative systems of democratic governance require a certain willingness to accept different political viewpoints (which always float on less clearly delineated cultural sensibilities) as sincere and legitimate. Also, adding a “friend” dimension to yet another part of the Web could be seen as a further reduction of the “publicness” that, according to Michael Schudson, characterizes working democratic discourse. Being able to dissociate ourselves from our private entanglements and take into account the interests of those who do not resemble us is perhaps the central prerequisite for successfully navigating a smaller planet.

Bing’s new features are certainly not the end of life as we know it, but I believe that the privacy question – as important as it is – covers up a series of more difficult problems that sit at the heart of political life in the age of the Internet…

What is a link? From a methodology standpoint, there is no answer to that question but only the recognition that when using graph theory and associated software tools, we project certain aspects of a dataset as nodes and others as links. In my last post, I “projected” authors from the air-l list as nodes and mail-reply relationships as links. In the example below, I still use authors as nodes but links are derived from a similarity measure of a statistical analysis of each poster’s mails. Here are two gephi graphs:

If you are interested in the technique, it’s a simple similarity measure based on the vector-space model, and my amateur computer scientist’s PHP implementation can be found here. The fact that the two posters who changed their “from:” text have both of their accounts placed close together (can you find them?) is a good indication that the algorithm is not completely botched. The words floating on the links in the right graph are the ones that contribute the highest value to the similarity calculation, i.e. words that are used relatively often by both of the linked authors while being generally rare in the whole corpus. Elis Godard and danah boyd, for example, have both written on air-l about Ron Vietti, a pastor who (rightfully?) thinks the Internet is the devil, and because very few other people mentioned the holy warrior, the word “vietti” is the highest-value “binder” between the two.
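For readers who don’t want to wade through PHP, here is a minimal Python sketch of this kind of vector-space similarity. It is a reconstruction of the general technique, not a port of the linked implementation: each poster’s mails are merged into one token list, terms are weighted tf-idf style, and link strength is the cosine between two posters’ vectors.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: {author: list of tokens} -> {author: {term: tf-idf weight}}."""
    df = Counter()  # number of authors whose mails contain each term
    for tokens in docs.values():
        df.update(set(tokens))
    n = len(docs)
    return {
        author: {t: tf * math.log(n / df[t]) for t, tf in Counter(tokens).items()}
        for author, tokens in docs.items()
    }

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

The log(n / df) factor is what turns “vietti” into a strong binder: a word used by every poster gets weight zero, while a rare word shared by exactly two posters contributes heavily to their link.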

What is important with networks that are the result of heavily iterative processing is that the algorithms used to create them are full of parameters, and changing one of these parameters just a little bit may (!) have larger repercussions. In the example above I actually calculate a similarity measure between every pair of nodes (60^2 / 2 results), but in order to make the graph somewhat readable I inserted a threshold that boils it down to 637 links. The missing measures are not taken into account in the physics simulation that produces the layout – although they may (!) be significant. I changed the parameter a couple of times to get the graph “right”, i.e. to find a good compromise between link density for simulation and readability. But look at what happens when I raise the threshold so that only the 100 strongest similarity measures survive:

First, a couple of nodes disconnect, two binary stars form around the “from:” changers, and the large component becomes a lot looser. Second, Jeremy Hunsinger loses the highest PageRank to Chris Heidelberg. Hunsinger had more links when lower similarity scores were taken into account, but when things get rough in the network world, bonding is better than bridging. What is result and what is artifact?
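The threshold step and its side effects are easy to reproduce in miniature. In this sketch the similarity dict is a toy stand-in, so everything about the air-l data itself is assumed; the point is only that the PageRank leader and the component structure can both change with the cut-off:

```python
import networkx as nx

def graph_from_similarities(sim, keep):
    """Keep only the `keep` strongest pairwise scores as weighted edges."""
    strongest = sorted(sim.items(), key=lambda kv: kv[1], reverse=True)[:keep]
    g = nx.Graph()
    for (a, b), score in strongest:
        g.add_edge(a, b, weight=score)
    return g

# toy stand-in for the real (60^2 / 2) similarity scores
sim = {("a", "b"): 0.9, ("b", "c"): 0.6, ("a", "c"): 0.3, ("c", "d"): 0.2}

for keep in (4, 2):  # analogous to the 637- vs. 100-link versions
    g = graph_from_similarities(sim, keep)
    pr = nx.pagerank(g, weight="weight")
    print(keep, max(pr, key=pr.get), sorted(nx.connected_components(g), key=len))
```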

Most advanced algorithmic techniques are riddled with such parameters, and getting a “good” result not only implies a lot of fiddling around (how do I clean the text corpus, which algorithms to use to look for which kinds of structures or dynamics, with which parameters, which type of representation, there again with which parameters, and so on…) but also having implicit ideas about what kind of result would be “plausible”. The back and forth with the “algorithmic microscope” always happens against a backdrop of “domain knowledge”, and this is one of the reasons why the idea of a science based purely on data analysis is positively absurd. I believe that the key challenge is to steer clear of methodological monoculture and to articulate different approaches together whenever possible.

The Association of Internet Researchers (AOIR) is an important venue if you’re interested in, as the name indicates, Internet research. But it is also a good primary source if one wants to inquire into how and why people study the Internet, which aspects of it, and so on. Conveniently for the lazy empirical researcher that I am, the AOIR has an archive of its mailing list, which holds about 22K mails posted by 3K addresses – enough for a little playing around with the impatient person’s tool, the algorithm. I have downloaded the data and I hope I can motivate some of my students to build something interesting with it, but I just had to put it into gephi right away. Some of the tools we’ll hopefully build will concentrate more on text mining, but using an address as a node and a mail-reply relationship as a link, one can easily build a social graph.
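As a sketch of that graph-building step – assuming each mail has already been parsed into a sender plus, where applicable, the sender of the message it replies to; the field names below are invented – it could look like this:

```python
import networkx as nx

def reply_graph(mails):
    """mails: iterable of dicts with 'sender' and optional 'replies_to_sender'."""
    g = nx.DiGraph()
    for mail in mails:
        sender = mail["sender"]
        target = mail.get("replies_to_sender")
        g.add_node(sender)
        if target:
            if g.has_edge(sender, target):
                g[sender][target]["weight"] += 1  # link strength = number of replies
            else:
                g.add_edge(sender, target, weight=1)
    return g

g = reply_graph([
    {"sender": "a@example.org", "replies_to_sender": "b@example.org"},
    {"sender": "a@example.org", "replies_to_sender": "b@example.org"},
    {"sender": "c@example.org"},
])
nx.write_gexf(g, "air-l-sample.gexf")  # gephi reads .gexf directly
```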

I would like to take this example as an occasion to show how different algorithms can produce quite different views on the same data:

So, these are the air-l posters with more than 60 messages posted since 2001. Node size indicates the number of posts, a node’s color (from blue to red) shows its connectivity in the graph (click on the image to see a much larger version). Link strength, i.e. number of replies between two people, is taken into account. You can download the full .gdf here. The only difference between the four graphs is the layout algorithm used (Force Atlas, Force Atlas with attraction distribution, Yifan Hu, and Fruchterman Reingold). You can instantly notice that Yifan Hu pushes nodes with low link count much more strongly to the periphery than the others, while Fruchterman Reingold as always keeps its symmetrical sphere shape, suggesting a more harmonious picture than the rest. Force Atlas’ attraction distribution feature will try to differentiate between hubs and authorities, pushing the former to the periphery while keeping the latter in the center; just compare Barry Wellman’s position over the different graphs.
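The same observation can be reproduced outside gephi. networkx ships its own layout algorithms (Fruchterman-Reingold as spring_layout, plus Kamada-Kawai and spectral layouts; Force Atlas and Yifan Hu are gephi’s own implementations and are not included), and running several of them over one graph makes the differences plain. A sketch, using a built-in sample graph instead of the air-l data:

```python
import networkx as nx
import matplotlib.pyplot as plt

g = nx.les_miserables_graph()  # stand-in for the air-l reply graph

layouts = {
    "Fruchterman Reingold": nx.spring_layout(g, seed=42),
    "Kamada-Kawai": nx.kamada_kawai_layout(g),
    "spectral": nx.spectral_layout(g),
}

fig, axes = plt.subplots(1, len(layouts), figsize=(15, 5))
for ax, (name, pos) in zip(axes, layouts.items()):
    nx.draw(g, pos, ax=ax, node_size=25, width=0.3)
    ax.set_title(name)
plt.savefig("layout-comparison.png")
```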

I’ll probably repeat this experiment with a more segmented graph, but I think this already shows that layout algorithms are not just innocently rendering a graph readable. Every method brings some features of the graph to the forefront, and the capacity for critical reading is as important as a willingness for “critical use” that does not gloss over the differences between the tools used.