Yesterday, Google introduced a new feature that represents a substantial extension of how their search engine presents information and marks a significant departure from some of the principles that have underpinned their conceptual and technological approach since 1998. The “knowledge graph” basically adds a layer to the search engine that is based on formal knowledge modelling rather than word statistics (relevance measures) and link analysis (authority measures). As the title of the post on Google’s search blog aptly points out, the new features work by searching “things not strings”, because what they call the knowledge graph is simply a – very large – ontology, a formal description of objects in the world. Unfortunately, the roll-out is gradual and I have not yet been able to access the new features, but the descriptions, pictures, and video paint a rather clear picture of what product manager Johanna Wright calls the move “from an information engine to a knowledge engine”. In terms of the DIKW model (Data-Information-Knowledge-Wisdom), the new feature proposes to move up a layer by adding a box of factual information on a recognized object (the examples Google uses are the Taj Mahal, Marie Curie, Matt Groening, etc.) next to the search results. From the presentation, we can gather that the 500 million objects already referenced will include a large variety of things, such as movies, events, organizations, ideas, and so on.
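To make the “things not strings” idea a bit more tangible, here is a minimal sketch in Python. The entities and attributes are made up for illustration, so this is emphatically not Google’s actual data model; the point is simply the difference between matching a query string against documents and resolving it to an object that carries structured facts.

```python
# A toy illustration of "things not strings": the query is resolved to an
# entity in an ontology and returns structured facts, instead of being matched
# as a string against documents. Entities and attributes are invented for
# illustration; this is not Google's actual data model.

KNOWLEDGE_GRAPH = {
    "Marie Curie": {
        "type": "Person",
        "born": "7 November 1867",
        "field": ["Physics", "Chemistry"],
        "awards": ["Nobel Prize in Physics", "Nobel Prize in Chemistry"],
    },
    "Taj Mahal": {
        "type": "Monument",
        "location": "Agra, India",
        "completed": "1653",
    },
}

def fact_box(query):
    """Return a box of structured facts if the query resolves to a known thing."""
    return KNOWLEDGE_GRAPH.get(query)

print(fact_box("Marie Curie"))   # structured facts about the entity
print(fact_box("marie curie"))   # None: mapping strings to things is the hard part
```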
This is a very significant extension of the current logic, and although we’ll need more time to try things out and get a better understanding of what this actually means, there are a couple of things that we can already single out:
- On a feature level, the fact box brings Google closer to “knowledge engines” such as Wolfram Alpha, and as we learn from the explanatory video, this explicitly includes semantic or computational queries, such as “how many women won the Nobel Prize?” (see the sketch after this list).
- If we consider Wikipedia to be a similar “description layer”, the fact box can also be seen as a competitor to everybody’s favorite encyclopedia, and as a further step in the direction of bringing information directly to the surface of the results page instead of simply referring to a location. This means that users no longer have to leave the Google garden to find a quick answer. It will be interesting to see whether this actually shows up in Wikipedia’s traffic stats.
- The introduction of an ontology layer is a significant departure from the largely statistical and graph-theoretical methods favored by Google in the past. While features based on knowledge modelling have proliferated around the margins (e.g. in Google Maps and Local Search), the company is now bringing them to center stage. From what I understand, the selection of “facts” to display will be largely driven by user statistics, but the facts themselves come from sources like Freebase, which Google bought in 2010. While large-scale ontologies were prohibitively expensive in the past, a combination of crowd-sourced databases (Wikipedia, etc.), the open data movement, better knowledge extraction mechanisms, and simply the resources to hire people to do manual repairs has apparently made them a viable option for a company of Google’s size.
- Competing with the dominant search engine has just become a lot harder (again). If users like the new feature, the threshold for market entry moves up because this is not a trivial technical gimmick that can be easily replicated.
- The knowledge graph will most certainly spread out into many other services (it’s already implemented in the new Google Docs research bar), further boosting the company’s economies of scale and enhancing cross-navigation between them.
- If the fact box – and the features that may follow – becomes a pervasive and popular feature, Google’s participation in making information and knowledge accessible, in defining their shape, scope, and relevance, will be further extended. This is a reason to worry a bit more, not because the Google tools as such are a danger, but simply because of the levels of institutional and economic concentration the Internet has enabled. The company has become what Michel Callon calls an “obligatory passage point” in our relation to the Web and beyond; the knowledge graph has the potential to exacerbate this situation even further.
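To make the first point above a little more concrete, here is a rough sketch of what a “computational” query like “how many women won the Nobel Prize?” might look like once the underlying material is an ontology rather than a pile of documents. The triples and the tiny query mechanism are invented for illustration only; a real system would run something like SPARQL over a vastly larger knowledge base.

```python
# A toy "semantic" query over subject-predicate-object triples. The data and
# the query logic are illustrative inventions, not Google's implementation.

TRIPLES = [
    ("Marie Curie", "gender", "female"),
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "won", "Nobel Prize in Chemistry"),
    ("Dorothy Hodgkin", "gender", "female"),
    ("Dorothy Hodgkin", "won", "Nobel Prize in Chemistry"),
    ("Albert Einstein", "gender", "male"),
    ("Albert Einstein", "won", "Nobel Prize in Physics"),
]

def objects(subject, predicate):
    """All objects linked to a subject through a given predicate."""
    return {o for s, p, o in TRIPLES if s == subject and p == predicate}

# "How many women won the Nobel Prize?" becomes a set operation over the graph,
# not a text search: collect subjects that are female and won at least one prize.
women_laureates = {
    s for s, p, o in TRIPLES
    if p == "won" and "female" in objects(s, "gender")
}
print(len(women_laureates))  # -> 2 in this toy graph
```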
This development looks like another element in the war for dominance over the Web that is currently being fought at a frenetic pace. Since the introduction of actions into Facebook’s social graph, it has become clear that approaches based on ontologies and concept modelling will play an increasing role in this. In a world mediated by screens, the technological control of meaning – the one true metamedium – is the new battleground. I guess this is not what Berners-Lee had in mind for the Semantic Web…
Over the last couple of years, the social sciences have become increasingly interested in using computer-based tools to analyze the complexity of the social ant farm that is the Web. Issuecrawler was one of the first such tools, and today researchers are indeed using very sophisticated pieces of software to “see” the Web. Sciences-Po, one of those rather strange French institutions that were founded to educate the elite but now increasingly have to justify their existence by producing research, has recently hired Bruno Latour to head its new médialab, which will most probably move in that very direction. Given Latour’s background (and the fact that Paul Girard, a very competent former colleague from my lab, heads the R&D department), this should be really very interesting. I do hope there will be occasion to tackle the most compelling methodological question when it comes to the application of computers (or mathematics in general) to analyzing human life, a question beautifully framed in a rather reluctant statement from 1889 by Karl Pearson, a major figure in the history of statistics:
“Personally I ought to say that there is, in my own opinion, considerable danger in applying the methods of exact science to problems in descriptive science, whether they be problems of heredity or of political economy; the grace and logical accuracy of the mathematical processes are apt to so fascinate the descriptive scientist that he seeks for sociological hypotheses which fit his mathematical reasoning and this without first ascertaining whether the basis of his hypotheses is as broad as that human life to which the theory is to be applied.” Cited in Stigler, Stephen M.: The History of Statistics. Harvard University Press, 1990, p. 304.
Since MySpace and Facebook have become such a big hype, a lot of text has been dedicated to social networking. For people like myself, whose social drive is not very developed, the attraction of “hey, dude, I love you so much!!!” is pretty difficult to parse into a familiar frame of reference, but apparently there’s something to all that cuddling online. Being alone has to be learned, after all. I somehow can’t shake the feeling that people are going to get bored with all the poking eventually…
Independently from that, there is something really interesting about Facebook, and that is, of course, Facebook Platform, the API that allows third-party developers to write plug-in-like applications for the system. Some of them are really impressive (socialistics and the touchgraph app come to mind), others are not. What I find fascinating about the whole thing is that in a certain sense, the social network (the actual “connections” between people – yes, the quotes are not a mistake) becomes an infrastructure that specific applications can “run” on. For the moment, this idea has not yet been pushed all that far, but it is pretty easy to imagine where it could go (from filesharing to virtual yard sales, from identity management to marketing nirvana). In a sense, “special interest” social networks (like LinkedIn, which is currently scrambling to develop its own platform) could plug onto Facebook, and instead of having many accounts for different systems you would have your Facebook ID (FB Passport) and load the app for a specific function. If the computer is the Universal Machine and the Internet the Universal Network, Facebook Platform might just become what sociologists since Durkheim have been talking about: the universal incarnation of sociality. Very practical indeed – when Latour tells us that the social does not explain anything but is, in fact, that which has to be explained, we can simply say: Facebook. That’s the Social.
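To illustrate the “infrastructure that applications run on” idea, here is a purely hypothetical sketch (the Platform class, the relation names, and the little yard-sale app are my own inventions, not the actual Facebook Platform API): the platform holds the graph of people and their attachments, and a third-party app is just a function that queries it without owning any data of its own.

```python
# Hypothetical sketch of a social graph as infrastructure: the platform stores
# people and their attachments, and third-party "apps" are functions that run
# on top of it. Not the real Facebook Platform API.

from collections import defaultdict

class Platform:
    """Holds the social graph; apps query it but own no data themselves."""
    def __init__(self):
        self.attachments = defaultdict(set)   # user -> {(relation, target), ...}

    def attach(self, user, relation, target):
        self.attachments[user].add((relation, target))

    def who(self, relation, target):
        """All users attached to `target` through `relation`."""
        return {u for u, links in self.attachments.items() if (relation, target) in links}

# A third-party app: a "virtual yard sale" that matches offers to wants
# purely by reading the graph exposed by the platform.
def yard_sale_app(platform):
    matches = []
    for seller, links in platform.attachments.items():
        for relation, item in links:
            if relation == "offers":
                for buyer in platform.who("wants", item):
                    matches.append((seller, buyer, item))
    return matches

fb = Platform()
fb.attach("alice", "offers", "bicycle")
fb.attach("bob", "wants", "bicycle")
fb.attach("bob", "friend", "alice")
print(yard_sale_app(fb))  # -> [('alice', 'bob', 'bicycle')]
```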
That is of course still a long way off, and futurism is rarely time well spent – but still, actor-network theory is becoming more intelligible by the day. Heterogeneous associations? Well, you just have to look at the Facebook interface and it’s all there, from relationship status to media preferences – just click on Le Fabuleux Destin d’Amélie Poulain on your profile page (come on, I know it’s there) and there’s the list of all the other people whose cool facade betrays a secret romantic. This is a case of mediation, and it’s both technical and symbolic: part Facebook, part Amélie, part postmodern emptiness and longing for simpler times. Heterogeneous, quoi.
A Facebook Platform thought through to its end could mediate on many additional levels and take part in producing the social through many other types of attachment, once it is no longer a social network application but a social network infrastructure. At least actor-network theory will be a lot easier to teach then…