Category Archives: social networks
What is a link? From a methodology standpoint, there is no answer to that question but only the recognition that when using graph theory and associated software tools, we project certain aspects of a dataset as nodes and others as links. In my last post, I “projected” authors from the air-l list as nodes and mail-reply relationships as links. In the example below, I still use authors as nodes but links are derived from a similarity measure of a statistical analysis of each poster’s mails. Here are two gephi graphs:
If you are interested in the technique, it’s a simple similarity measure based on the vector-space model and my amateur computer scientist’s PHP implementation can be found here. The fact that the two posters who changed their “from:” text each have their two accounts placed close together (can you find them?) is a good indication that the algorithm is not completely botched. The words floating on the links in the right graph are those that contribute the highest value to the similarity calculation, i.e. words that are used relatively often by both of the linked authors while being generally rare in the whole corpus. Elis Godard and Dana Boyd for example have both written on air-l about Ron Vietti, a pastor who (rightfully?) thinks the Internet is the devil, and because very few other people mentioned the holy warrior, the word “vietti” is the highest-value “binder” between the two.
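For those who would rather read code than follow the link, the core of the vector-space approach fits in a few lines. This is a simplified Python sketch, not my actual PHP implementation; the function names and toy data are invented for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a tf-idf vector for each author from a dict of
    author -> list of words (the tokenized mails)."""
    n = len(docs)
    # document frequency: in how many authors' corpora a word appears
    df = Counter()
    for words in docs.values():
        df.update(set(words))
    vectors = {}
    for author, words in docs.items():
        tf = Counter(words)
        # frequent-for-this-author but rare-overall words get high weight
        vectors[author] = {w: tf[w] * math.log(n / df[w]) for w in tf}
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def top_binder(u, v):
    """The word contributing the most to the similarity of two authors."""
    shared = set(u) & set(v)
    return max(shared, key=lambda w: u[w] * v[w]) if shared else None
```

The “binder” words shown on the links are exactly what `top_binder` returns: the single term whose product of weights dominates the dot product.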
What is important in networks that are the result of heavily iterative processing is that the algorithms used to create them are full of parameters, and changing one of these parameters just a little bit may (!) have large repercussions. In the example above I actually calculate a similarity measure between each pair of nodes (60^2 / 2 results) but in order to make the graph somewhat readable I inserted a threshold that boils it down to 637 links. The missing measures are not taken into account in the physics simulation that produces the layout – although they may (!) be significant. I changed the parameter a couple of times to get the graph “right”, i.e. to find a good compromise between link density for simulation and readability. But look at what happens when I raise the threshold so that only the 100 strongest similarity measures survive:
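The thresholding step itself is trivial, which is precisely what makes its consequences easy to overlook. A minimal sketch (with made-up scores, not my actual workflow) of keeping only the k strongest measures:

```python
def strongest_links(similarities, k):
    """Keep only the k highest-scoring pairs from a dict mapping
    (author_a, author_b) -> similarity score. Everything below the
    cut-off is dropped entirely and will therefore exert no force
    at all on the layout simulation."""
    ranked = sorted(similarities.items(), key=lambda item: item[1], reverse=True)
    return dict(ranked[:k])
```

One integer parameter, and a weak-but-real tie either shapes the layout or vanishes without a trace.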
First, a couple of nodes disconnect, two binary stars form around the “from:” changers and the large component becomes a lot looser. Second, Jeremy Hunsinger loses the highest PageRank to Chris Heidelberg. Hunsinger had more links when lower similarity scores were taken into account, but when things get rough in the network world, bonding is better than bridging. What is result and what is artifact?
Most advanced algorithmic techniques are riddled with such parameters, and getting a “good” result not only implies a lot of fiddling around (how do I clean the text corpus, which algorithms to use to look for which kinds of structures or dynamics, with what parameters, what type of representation, here again with what parameters, and so on…) but also having implicit ideas about what kind of result would be “plausible”. The back and forth with the “algorithmic microscope” always floats against a backdrop of “domain knowledge”, and this is one of the reasons why the idea of a science based purely on data analysis is positively absurd. I believe that the key challenge is to steer clear of methodological monoculture and to articulate different approaches together whenever possible.
The Association of Internet Researchers (AOIR) is an important venue if you’re interested in, like the name indicates, Internet research. But it is also a good primary source if one wants to inquire into how and why people study the Internet, which aspects of it, etc. Conveniently for the lazy empirical researcher that I am, the AOIR has an archive of its mailing-list, which has about 22K mails posted by 3K addresses, enough for a little playing around with the impatient person’s tool, the algorithm. I have downloaded the data and I hope I can motivate some of my students to build something interesting with it, but I just had to put it into gephi right away. Some of the tools we’ll hopefully build will concentrate more on text mining but using an address as a node and a mail-reply relationship as a link, one can easily build a social graph.
I would like to take this example as an occasion to show how different algorithms can produce quite different views on the same data:
So, these are the air-l posters with more than 60 messages posted since 2001. Node size indicates the number of posts, a node’s color (from blue to red) shows its connectivity in the graph (click on the image to see a much larger version). Link strength, i.e. number of replies between two people, is taken into account. You can download the full .gdf here. The only difference between the four graphs is the layout algorithm used (Force Atlas, Force Atlas with attraction distribution, Yifan Hu, and Fruchterman Reingold). You can instantly notice that Yifan Hu pushes nodes with low link count much more strongly to the periphery than the others, while Fruchterman Reingold as always keeps its symmetrical sphere shape, suggesting a more harmonious picture than the rest. Force Atlas’ attraction distribution feature will try to differentiate between hubs and authorities, pushing the former to the periphery while keeping the latter in the center; just compare Barry Wellman’s position over the different graphs.
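The .gdf format mentioned above is simple enough that writing an exporter takes only a few lines. Here is a minimal Python sketch (not the script actually used for these graphs; names and counts are invented) that dumps authors and reply counts into a file gephi can open:

```python
def write_gdf(path, nodes, edges):
    """Write a minimal GDF file for gephi.

    nodes: dict mapping author name -> number of posts (node size)
    edges: dict mapping (sender, replied_to) -> number of replies (weight)
    """
    with open(path, "w") as f:
        # node table: one declaration line, then one row per node
        f.write("nodedef>name VARCHAR,posts INTEGER\n")
        for name, posts in nodes.items():
            f.write(f"{name},{posts}\n")
        # edge table: reply counts become link weights
        f.write("edgedef>node1 VARCHAR,node2 VARCHAR,weight DOUBLE\n")
        for (a, b), count in edges.items():
            f.write(f"{a},{b},{count}\n")
```

That is the whole trick behind turning a mailing-list archive into something a layout algorithm can chew on.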
I’ll probably repeat this experiment with a more segmented graph, but I think this already shows that layout algorithms are not just innocently rendering a graph readable. Every method puts some features of the graph to the forefront and the capacity for critical reading is as important as the willingness for “critical use” that does not gloss over the differences in tools used.
Gabriel Tarde is a springwell of interesting – and sometimes positively weird – ideas. In his 1899 article L’opinion et la conversation (reprinted in his 1901 book L’opinion et la foule), the French judge/sociologist makes the following comment:
Il n’y [dans un Etat féodal, BR] avait pas “l’opinion”, mais des milliers d’opinions séparées, sans nul lien continuel entre elles. Ce lien, le livre d’abord, le journal ensuite et avec bien plus d’efficacité, l’ont seuls fourni. La presse périodique a permis de former un agrégat secondaire et très supérieur dont les unités s’associent étroitement sans s’être jamais vues ni connues. De là, des différences importantes, et, entre autre, celles-ci : dans les groupes primaires [des groupes locales basés sur la conversation, BR], les voix ponderantur plutôt que numerantur, tandis que, dans le groupe secondaire et beaucoup plus vaste, où l’on se tient sans se voir, à l’aveugle, les voix ne peuvent être que comptées et non pesées. La presse, à son insu, a donc travaillé à créer la puissance du nombre et à amoindrir celle du caractère, sinon de l’intelligence.
After a quick survey, I haven’t found an English translation anywhere – there might be one in here – so here’s my own (taking some liberties to make it easier to read):
[In a feudal state, BR] there was no “opinion” but thousands of separate opinions, without any steady connection between them. This connection was only delivered by first the book, then, and with greater efficiency, the newspaper. The periodical press allowed for the formation of a secondary and higher-order aggregate whose units associate closely without ever having seen or known each other. Several important differences follow from this, amongst others, this one: in primary groups [local groups based on conversation, BR], voices ponderantur rather than numerantur, while in the secondary and much larger group, where people connect without seeing each other – blind – voices can only be counted and cannot be weighed. The press has thus unknowingly labored towards giving rise to the power of the number and reducing the power of character, if not of intelligence.
Two things are interesting here: first, Lazarsfeld, Berelson, and Gaudet’s classic study from 1945, The People’s Choice, and even more so Lazarsfeld’s canonical Personal Influence (with Elihu Katz, 1955) are seen as a rehabilitation of the significance (for the formation of opinion) of interpersonal communication at a time when media were considered all-powerful brainwashing machines by theorists such as Adorno and Horkheimer (Adorno actually worked with/for Lazarsfeld in the 1930s, where Lazarsfeld tried to force poor Adorno into “measuring culture”, which may have soured the latter on any empirical inquiry, but that’s a story for another time). Tarde’s work on conversation (the first-order medium) is theoretically quite sophisticated – floating against the backdrop of Tarde’s theory of imitation as the basic mechanism of cultural production – and actually succeeds in thinking together everyday conversation and mass media without creating any kind of onerous dichotomy. L’opinion et la conversation would merit inclusion in any history of communication science and it should come as no surprise that Elihu Katz actually published a paper on Tarde in 1999.
Second, the difference between ponderantur (weighing) and numerantur (counting) is at the same time rather self-evident – an object’s weight and its number are logically quite different things – and somewhat puzzling: it reminds us that while measurement does indeed create a universe of number where every variable can be compared to any other, the aspects of reality we choose to measure remain connected to a conceptual backdrop that is by itself neither numerical nor mathematical. What Tarde calls “character” is a person’s capacity to influence, to entice imitation, not the size of her social network.
I’m currently working on a software tool that helps studying Twitter and while sifting through the literature I came across this citation from a 2010 paper by Cha et al.:
We describe how we collected the Twitter data and present the characteristics of the top users based on three influence measures: indegree, retweets, and mentions.
Besides the immense problem of defining influence in non-trivial terms, I wonder whether many of the studies on (social) networks that pop up all over the place are hoping to weigh but end up counting again. What would it mean, then, to weigh a person’s influence? What kind of concepts would we have to develop and what could be indicators? In our project we use the bit.ly API to look at clickstream referrers – if several people post the same link, who succeeds in getting the most people to click it? – but this may be yet another count that says little or nothing about how a link will be used/read/received by a person. Perhaps this is as far as the “hard” data can take us. But is that really a problem? The one thing I love about Tarde is how he can jump from a quantitative worldview to beautiful theoretical speculation and back with a smile on his face…
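To make the “counting” point concrete, here is a toy sketch of how two of the three measures from Cha et al. could be tallied. The record format is invented for illustration, and indegree proper would additionally require the follower graph:

```python
from collections import Counter

def influence_counts(tweets):
    """Tally retweets and mentions received per user from a list of
    simplified tweet records: dicts with 'author', an optional
    'retweet_of' (the retweeted user), and a list of 'mentions'."""
    retweets, mentions = Counter(), Counter()
    for t in tweets:
        if t.get("retweet_of"):
            retweets[t["retweet_of"]] += 1
        for m in t.get("mentions", []):
            mentions[m] += 1
    return retweets, mentions
```

Whatever the measure, the output is a number per person: voices counted, not weighed.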
Since I have started to play around with the latest (and really great, easy to use) version of the gephi graph visualization and analysis platform, I have developed an obsession with building .gdf output (.gdf is a graph description format that you can open with gephi) into everything I come across. The latest addition is a Facebook application called netvizz that creates a .gdf file describing either your personal network or the groups you are a member of.
There are of course many applications that let you visualize your network directly in Facebook but by being able to download a file, you can choose your own visualization tool, play around with it, select and parameter layout algorithms, change colors and sizes, rearrange by hand, and so forth. Toolkits like gephi are just so much more powerful than Flash toys…
What’s rather striking about these Facebook networks is how much the shape is connected to physical and social mobility. If you look at my network, you can easily see the Klagenfurt (my hometown) cluster to the very right, my studies in Vienna in the middle, and my French universe on the left. The small grape on the top left documents two semesters of teaching at the American University of Paris…
Update: v0.2 of netvizz is out, allowing you to add some data for each profile. Next up is GraphML and Mondrian file support, more data for profiles, etc…
Update 2: netvizz currently only works with http and not https. I will try to move the app to a different server ASAP.
Continuing in the direction of exploring statistics as an instrument of power more characteristic of contemporary society than surveillance centered on individuals, I found a quite beautiful passage by French sociologist Gabriel Tarde in his Les Lois de l’imitation (1890/2001, p.192f):
Si la statistique continue à faire des progrès qu’elle a faits depuis plusieurs années, si les informations qu’elle nous fournit vont se perfectionnant, s’accélérant, se régularisant, se multipliant toujours, il pourra venir un moment où, de chaque fait social en train de s’accomplir, il s’échappera pour ainsi dire automatiquement un chiffre, lequel ira immédiatement prendre son rang sur les registres de la statistique continuellement communiquée au public et répandue en dessins par la presse quotidienne.
And here’s my translation (that’s service, folks):
If statistics continues to make the progress it has made for several years now, if the information it provides us with continues to become more perfect, faster, more regular, steadily multiplying, there might come the moment where from every social fact taking place springs – so to speak – automatically a number that would immediately take its place in the registers of the statistics continuously communicated to the public and distributed in graphic form by the daily press.
When Tarde wrote this in 1890, he saw the progress of statistics as a boon that would allow a more rational governance and give society the means to discuss itself in a more informed, empirical fashion. Nowadays, online, a number springs from every social fact indeed but the resulting statistics are rarely a public good that enters public debate. User data on social networks will probably prove to be the very foundation of any business that is to be made with these platforms and will therefore stay jealously guarded. The digital town squares are quite private after all…
When talking about the politics of the social Web and particularly online networking, the first issue coming up is invariably the question of privacy and its counterpart, surveillance – big brother, corporations bent on world dominance, and so on. My gut reaction has always been “yeah, but there’s a lot more to it than that” and on this blog (and hopefully a book in a not so far future) I’ve been trying to sort out some of the political issues that do not pertain to surveillance. For me, social networking platforms are politically relevant as instruments of marketing rather than of surveillance. Not that these tools cannot function quite formidably to spy on people, but it is my impression that contemporary governance relies on other principles more than the gathering of intelligence about individual citizens (although it does that, too). But I’ve never been very pleased with most of the conceptualizations of “post-disciplinarian” mechanisms of power; even Deleuze’s Post-scriptum sur les sociétés de contrôle, although full of remarkable leads, does not provide a fleshed-out theoretical tool – and it does not fit well with recent developments in the Internet domain.
But then, a couple of days ago I finally started to read the lectures Foucault gave at the Collège de France between 1971 and 1984. In the 1977-1978 term the topic of that class was “Sécurité, Territoire, Population” (STP, Gallimard, 2004) and it holds, in my view, the key to a quite different perspective on how social networking platforms can be thought of as tools of governance involved in specific mechanisms of power.
STP can be seen as both an extension and a reevaluation of Foucault’s earlier work on the transition from punishment to discipline as the central form in the exercise of power, around the end of the 18th century. The establishment of “good practice” is central to the notion of discipline, and disciplinary settings such as schools, prisons or hospitals serve most of all as means for instilling these “good practices” into their subjects. Jeremy Bentham’s Panopticon – a prison architecture that allows a single guard to observe a large population of inmates from a central control point – has in a sense become the metaphor for a technology of power that, in Foucault’s view, is part of a much more complex arrangement of how sovereignty can be performed. Many a blogpost has been dedicated to applying the concept to social networking online.
Curiously though, in STP, Foucault calls the Panopticon both modern and archaic, and he goes as far as dismissing it as the defining element of the modern mechanics of power; in fact, the whole course is organized around the introduction of a third logic of governance besides (and historically following) “punishment” and “discipline”, which he calls “security”. This third regime is no longer focused on the individual as a subject that has to be punished or disciplined but on a new entity, a statistical representation of all individuals, namely the population. The logic of security, in a sense, gives up on the idea of producing a perfect status quo by reforming individuals and begins to focus on the management of averages, acceptable margins, and homeostasis. With the development of the social sciences, society is perceived as a “natural” phenomenon in the sense that it has its own rules and mechanisms that cannot be so easily bent into shape by disciplinary reform of the individual. Contemporary mechanisms of power are, then, not so much based on the formatting of individuals according to good practices but rather on the management of the many subsystems (economy, technology, public health, etc.) that affect a population so that this population will refrain from starting a revolution. Foucault actually comes pretty close to what Ulrich Beck will call, eight years later, the Risk Society. The sovereign (Foucault speaks increasingly of “government”) assures its political survival no longer primarily through punishment and discipline but by managing risk by means of scientific arrangements of security. This means not only external risk, but also risk produced by imbalances in the corps social itself.
I would argue that this opens another way of thinking about social networking platforms in political terms. First, we would look at something like Facebook in terms of population not in terms of the individual. I would argue that governmental structures and commercial companies are only in rare cases interested in the doings of individuals – their business is with statistical representations of populations because this is the level contemporary mechanisms of power (governance as opinion management, market intelligence, cultural industries, etc.) preferably operate on. And second – and this really is a very nasty challenge indeed – we would probably have to give up on locating power in specific subsystems (say, information and communication systems) and trace the interplay between the many different layers that compose contemporary society.
I have no idea whether it’s going to be accepted, but here is my proposal for the Internet Research 9.0: Rethinking Community, Rethinking Place conference. The title is: Algorithmic Proximity – Association and the “Social Web”
How to observe, describe and conceptualize social structure has been a central question in the social sciences since their beginning in the 19th century. From Durkheim’s opposition between organic and mechanical solidarity and Tönnies’ distinction of Gemeinschaft and Gesellschaft to modern Social Network Analysis (Burt, Granovetter, Wellman, etc.), the problem of how individuals and groups relate to each other has been at the core of most attempts to conceive the “social”. The state of “community” – even in the loose understanding that has become prevalent when talking about sociability online – already is an end result of a permanent process of proto-social interaction, the plasma (Latour) from which association and aggregation may arise. In order to understand how the sites and services (blogs, social networking services, online dating, etc.) that make up what has become known as the “Social Web” allow for the emergence of higher-order social forms (communities, networks, crowds, etc.), we need to look at the lower levels of social interaction where sociability is still a relatively open field.
One way of approaching this very basic level of analysis is through the notion of “probability of communication”. In his famous work on the diffusion of innovation, Everett Rogers notes that the absence of social structure would mean that all communication between members of a population would have the same probability of occurring. In any real setting of course this is never the case: people talk (interact, exchange, associate, etc.) with certain individuals more than others. Beyond the limiting aspects of physical space the social sciences have identified numerous parameters – such as age, class, ethnicity, gender, dress, modes of expression, etc. – that make communication and interaction between some people a lot more probable than between others. Higher order social aggregates emerge from this background of attraction and repulsion; sociology has largely concluded that for all practical purposes opposites do not attract.
Digital technology largely obliterates the barriers of physical space: instead of being confined to his or her immediate surroundings, an individual can now potentially communicate and interact with all the millions of people registered on the different services of the Social Web. In order to reduce “social overload”, many services allow their users to aggregate around physical or institutional landmarks (cities, universities, etc.) and encourage association through network proximity (the friend of a friend might become my friend too). Many of the social parameters mentioned above are also translated onto the Web in the sense that a person’s informational representations (profile, blog, avatar, etc.) become markers of distinction (Bourdieu) that strongly influence the probability of communication with other members of the service. Especially in youth culture, opposite cultural interests effectively function as social barriers. These are, in principle, not new; their (partial) digitization however is.
Most of the social services online see themselves as facilitators for association and constantly produce “contact trails” that lead to other people, through category browsing, search technology, or automated path-building via backlinking. Informational representations like member profiles are not only read and interpreted by people but also by algorithms that will make use of this data whenever contact trails are being laid. The most obvious example can be found on dating sites: when searching for a potential partner, most services will rank the list of results based on compatibility calculations that take into account all of the pieces of information members provide. The goal is to compensate for the very large population of potential candidates and to reduce the failure rate of social interaction. Without the randomness that, despite spatial segregation, still marks life offline, the principle of homophily is pushed to the extreme: confrontation with the other as other, i.e. as having different opinions, values, tastes, etc. is reduced to a minimum and the technical nature of this process ensures that it passes without being noticed.
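A caricature of such a compatibility ranking, just to make the mechanism explicit (the scoring is invented and far cruder than what any real dating site does, but the homophilic bias is the same in kind):

```python
def rank_candidates(profile, candidates):
    """Rank candidate profiles by a naive compatibility score: the
    share of shared profile fields on which the two members give the
    same answer. Similarity in, similarity out -- the most 'other'
    candidates sink to the bottom of the result list."""
    def score(other):
        shared = set(profile) & set(other)
        if not shared:
            return 0.0
        return sum(profile[k] == other[k] for k in shared) / len(shared)
    return sorted(candidates, key=score, reverse=True)
```

The member never sees the score, only the ordering it produces, which is exactly why the filtering passes without being noticed.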
In this paper we will attempt to conceptualize the notion of “algorithmic proximity”, which we understand as the shaping of the probability of association by technological means. We do not, however, intend to argue that algorithms are direct producers of social structure. Rather, they intervene on the level of proto-social interaction and introduce biases whose subtlety makes them difficult to study and theorize conceptually. Their political and cultural significance must therefore be approached with the necessary caution.
Two things currently stand out in my life: a) I’m working on an article on the relationship between mathematical network analysis and the humanities, and b) continental
Part of the research that I’m looking into is what has been called “The New Science of Networks” (NSN), a field founded mostly by physicists and mathematicians who started to quantitatively analyze very big networks belonging to very different domains (networks of acquaintance, the Internet, food networks, brain connectivity, movie actor networks, disease spread, etc.). Sociologists have worked with mathematical analysis and network concepts from at least the 1930s, but because of the limits of available data, the networks studied rarely went beyond hundreds of nodes. NSN however studies networks with millions of nodes and tries to come up with representations of structure, dynamics and growth that are not just used to make sense of empirical data but also to build simulations and come up with models that are independent of specific domains of application.
Very large data sets have only become available in recent history: social network data used to be based on either observation or surveys and thus inherently limited. Since the arrival of digital networking, a lot more data has been produced because many forms of communication or interaction leave analyzable traces. From newsgroups to trackback networks on blogs, very simple crawler programs suffice to produce matrices that include millions of nodes and can be played around with indefinitely, from all kinds of angles. Social network sites like Facebook or MySpace are probably the best example for data pools just waiting to be analyzed by network scientists (and marketers, but that’s a different story). This brings me to a naive question: what is a social network?
The problem of creating data sets for quantitative analysis in the social sciences is always twofold: a) what do I formalize, i.e. what are the variables I want to measure? b) how do I produce my data? The question is that of building a representation. Do my categories represent the defining traits of the system I wish to study? Do my measuring instruments truly capture the categories I decided on? In short: what to measure and how to measure it, categories and machinery. The results of mathematical analysis (which is not necessarily statistical in nature) will only begin to make sense if formalization and data collection were done with sufficient care. So, again, what is a social network?
Facebook (pars pro toto for the whole category qua currently most annoying of the bunch) allows me to add “friends” to my “network”. By doing so, I am “digitally mapping out the relationships I already have”, as Mark Zuckerberg recently explained. So I am, indeed, creating a data model of my social network. Fifty million people are doing the same, so the result is a digital representation of the social connectivity of an important part of the Internet-connected world. From a social science research perspective, we could now ask whether Facebook’s social network (as database) is a good model of the social network (as social structure) it supposedly maps. This does, of course, depend on what somebody would want to study, but if you ask yourself whether Facebook is an accurate map of your social connections, you’ll probably say no. For the moment, the formalization and data collection that apply when people use a social networking site do not capture the whole gamut of our daily social interactions (work, institutions, groceries, etc.) and do not include many of the people that play important roles in our lives. This does not mean that Facebook would not be an interesting data set to explore quantitatively; but it means that there still is an important distinction between the formal model (data and algorithm, what? and how?) of “social network” produced by this type of information system and the reality of daily social experience.
So what’s my point? Facebook is not a research tool for the social sciences and nobody cares whether the digital maps of our social networks are accurate or not. Facebook’s data model was not created to represent a social system but to produce a social system. Unlike the descriptive models of science, computer models are performative in a very materialist sense. As Baudrillard argues, the question is no longer whether the map adequately represents the territory, but in which way the map is becoming the new territory. The data model in Facebook is a model in the sense that it orients rather than represents. The “machinery” is not there to measure but to produce a set of possibilities for action. The social network (as database) is set to change the way our social network (as social structure) works – to produce reality rather than map it. But much as we can criticize data models in research for not being adequate to the phenomena they try to describe, we can examine data models, algorithms and interfaces of information systems and decide whether they are adequate for the task at hand. In science, “adequate” can only be defined in connection to the research question. In design and engineering there needs to be a defined goal in order to make such a judgment. Does the system achieve what I set out to achieve? And what is the goal, really?
When looking at Facebook and what the people around me do with it, the question of what “the politics of systems” could mean becomes a little clearer: how does the system affect people’s social network (as social structure) by allowing them to build a social network (as database)? What’s the (implicit?) goal that informs the system’s design?
Social networking systems are in their infancy and both technology and uses will probably evolve rapidly. For the moment, at least, what Facebook seems to be doing is quite simply to sociodigitize as many forms of interaction as possible; to render the implicit explicit by formalizing it into data and algorithms. But beware merry people of The Book of Faces! For in a database “identity” and “consumer profile” are one and the same thing. And that might just be the design goal…
Since MySpace and Facebook have become such a big hype, a lot of text has been dedicated to social networking. For people like myself whose social drive is not very developed, the attraction of “hey, dude, I love you so much!!!” is pretty difficult to parse into a familiar frame of reference, but apparently there’s something to all that cuddling online. Being alone has to be learned, after all. I somehow can’t shake the feeling that people are going to get bored with all the poking eventually…
Independently of that, there is something really interesting about Facebook and that is, of course, Facebook Platform, the API that allows third-party developers to write plug-in-like applications for the system. Some of them are really impressive (socialistics and the touchgraph app come to mind), others are not. What I find fascinating about the whole thing is that in a certain sense, the social network (the actual “connections” between people – yes, the quotes are not a mistake) becomes an infrastructure that specific applications can “run” on. For the moment, this idea has not yet been pushed all that far, but it is pretty easy to imagine where this could go (from filesharing to virtual yard sales, from identity management to marketing nirvana). In a sense, “special interest” social networks (like LinkedIn, which is currently scrambling to develop its own platform) could plug into Facebook and instead of having many accounts for different systems you’d have your Facebook ID (FB Passport) and load the app for a specific function. If the computer is a Universal Machine and the Internet the Universal Network, Facebook Platform might just become what sociologists since Durkheim have been talking about: the universal incarnation of sociality. Very practical indeed – when Latour tells us that the social is not explaining anything but is, in fact, that which has to be explained, we can simply say: Facebook. That’s the Social.
That’s of course still far off and futurism is rarely time well spent – but still, actor-network theory is becoming more intelligible by the day. Heterogeneous associations? Well, you just have to look at the Facebook interface and it’s all there, from relationship status to media preferences – just click on Le Fabuleux Destin d’Amélie Poulain on your profile page (come on, I know it’s there) and there’s the list of all the other people whose cool facade betrays a secret romantic. This is a case of mediation and it’s both technical and symbolic, part Facebook, part Amélie, part postmodern emptiness and longing for simpler times. Heterogeneous, quoi.
A Facebook Platform thought through to its end could mediate on many additional levels and take part in producing the social through many other types of attachment – no longer a social network application but a social network infrastructure. At least actor-network theory will be a lot easier to teach then…
Oliver Ertzscheid’s blog recently had an interesting post (French) pointing to a couple of articles and comments on The Facebook, among them an article in the LA Times entitled “The Facebook Revolution”. One paragraph in there really stands out:
Boiled down, it goes like this: Humans get their information from two places — from mainstream media or some other centralized organization such as a church, and from their network of family, friends, neighbors and colleagues. We’ve already digitized the first. Almost every news organization has a website now. What Zuckerberg is trying to do with Facebook is digitize the second.
This quote very much reminds me of some of the issues discussed in the “Digital Formations” volume edited by Robert Latham and Saskia Sassen in 2005. In their introduction (available online) they coin the (unpronounceable and therefore probably doomed) term “sociodigitization” by distinguishing it from “digitization”:
The qualifier “socio” is added to distinguish from the process of content conversion, the broader process whereby activities and their histories in a social domain are drawn up into the digital codes, databases, images, and text that constitute the substance of a digital formation. As the various chapters below show, such drawing up can be a function of deliberate planning and reflexive ordering or of contingent and discrete interactions and activities. In this respect as well, sociodigitization differs from digitization: what is rendered in digital form is not only information and artifacts but also logics of social organization, interaction, and space as discussed above.
Facebook, then, is quite plainly an example for the explicit (socio-)digitization of social relations that were mediated quite differently in the past. The “network of family, friends, neighbors and colleagues” that is now recreated inside of the system has of course been relying on technical (and digital) means of communication and interaction for quite a while, and these media did play a role in shaping the relations they helped sustain. There is no need to cite McLuhan to understand that relating to distant friends and family by mail or telephone will influence the way these relations are lived and how they evolve. Being rather stable dispositifs, the specific logics of individual media (their affordances) were largely covered up by habitualization (cf. Berger & Luckmann 1967, p. 53); it is the high speed of software development on the Web that makes the “rendering of logics of social organization, interaction, and space” so much more visible. In that sense, what started out as media theory is quickly becoming software theory or the theory of ICT. There is, of course, a strong affiliation with Lawrence Lessig’s thoughts about computer code (now in v. 2.0) and its ability to function as both constraint and incentive, shaping human behavior in a fashion comparable to law, morals, and the market.
The important matter seems to be the understanding of how sociodigitization proceeds in the context of the current explosion of Web-based software applications that is set to (re)mediate a great number of everyday practices. While media theory in the tradition of McLuhan has strived to identify the invariant core, the ontological essence of individual media, such an endeavor seems futile when it comes to software, whose prime characteristic is malleability. This forces us to concentrate the analysis of “system properties” (i.e. the specific and local logic of sociodigitization) on individual platforms or, at best, categories of applications. When looking at Facebook, this means analyzing the actual forms the process of digitization leads to as well as the technical and cultural methods involved. How do I build and grow my network? What are the forms of interaction the system proposes? Who controls data structure, visibility, and perpetuity? What are the possibilities for building associations and what types of public do they give rise to?
In the context of my own work, I ask myself how we can formulate the cultural, ethical, and political dimension of systems like Facebook as matters of design, and not only on a descriptive level, but on the level of design methodology and guidelines. The critical analysis of social network sites and the cultural phenomena that emerge around them is, of course, essential, but shouldn’t there be more debate about how such systems should work? What would a social network look like that is explicitly built on the grounds of a political theory of democracy? Is such a thing even thinkable?