This spring I worked on an R&D project that was really quite interesting but that – as happens with projects – took up nearly all of my spare time. La montre verte is based on the idea that pollution measurement can be brought down to street level if sensors can be made small enough to be carried around by citizens. Together with a series of partners from the private sector, the CiTu group of my laboratory came up with the idea of putting an ozone sensor and a microphone (to measure noise levels) into a watch. That way, the device is not very intrusive yet still in direct contact with the surrounding air. We built about 15 prototypes, based on the fact that Paris’ air quality is currently measured by only a handful of (really high quality) sensors; even the low-resolution devices in our watches should therefore be able to complement that data with a geographically more fine-grained analysis of noise and pollution levels. The watch produces a georeferenced measurement every second (a GPS is built into the watch) and transmits the data via Bluetooth to a Java application on a mobile phone, which then sends every data packet via GPRS to a database server.
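To make the pipeline a little more concrete, here is a minimal sketch of the kind of record the watch could emit once per second. The field names, units, and the CSV wire format are my own assumptions for illustration, not the project’s actual packet layout:

```java
// A hypothetical measurement record, serialized for the GPRS uplink.
// Field names and the CSV format are assumptions for illustration only.
import java.util.Locale;

public class Measurement {
    final long timestamp;   // epoch millis, taken from the GPS fix
    final double lat, lon;  // WGS84 coordinates from the built-in GPS
    final double ozonePpb;  // ozone reading, parts per billion (assumed unit)
    final double noiseDb;   // noise level from the microphone, in dB (assumed unit)

    Measurement(long timestamp, double lat, double lon,
                double ozonePpb, double noiseDb) {
        this.timestamp = timestamp;
        this.lat = lat;
        this.lon = lon;
        this.ozonePpb = ozonePpb;
        this.noiseDb = noiseDb;
    }

    // Serialize to one compact line for transmission to the database server.
    String toPacket() {
        return String.format(Locale.US, "%d,%.6f,%.6f,%.1f,%.1f",
                timestamp, lat, lon, ozonePpb, noiseDb);
    }

    public static void main(String[] args) {
        Measurement m = new Measurement(System.currentTimeMillis(),
                48.8566, 2.3522, 31.5, 67.2);
        System.out.println(m.toPacket());
    }
}
```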
My job in the project was to build a Web application that allows people to interact with and make sense of the data produced by the watches. Despite the help of several brilliant students from our professional Masters program, this proved to be a daunting task and I spent *a lot* of time programming. The result is quite OK, I believe; the application allows users to explore the data (which is organized in localized “experiments”) in different ways, either in real time or afterward. With a little more time (we had only about three months for the whole project and we got the hardware only days before the first public showcase) we could have done more, but I’m still quite content with the result. The heatmap algorithm (see image) in particular was fun to program; I’ve never done a lot of visual stuff, so this was new territory and a steep learning curve.
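For readers curious about the visual side, here is a minimal sketch of the general family of algorithm in question: each georeferenced reading “splats” a radial kernel onto a grid, and the accumulated intensities are then mapped to a colour gradient. The grid dimensions, kernel radius, and quadratic falloff are illustrative assumptions, not the application’s actual code:

```java
// A simple accumulation heatmap: readings are splatted onto a grid with a
// radial falloff kernel, then normalized for colour mapping. All parameters
// here are illustrative assumptions.
public class Heatmap {
    final int width, height;
    final double[][] grid;

    Heatmap(int width, int height) {
        this.width = width;
        this.height = height;
        this.grid = new double[height][width];
    }

    // Splat one measurement at pixel (cx, cy), spreading its value over
    // `radius` pixels with a smooth quadratic falloff.
    void addPoint(int cx, int cy, double value, int radius) {
        for (int y = Math.max(0, cy - radius); y <= Math.min(height - 1, cy + radius); y++) {
            for (int x = Math.max(0, cx - radius); x <= Math.min(width - 1, cx + radius); x++) {
                double d = Math.hypot(x - cx, y - cy) / radius;
                if (d <= 1.0) {
                    double falloff = (1.0 - d) * (1.0 - d); // 1 at centre, 0 at the edge
                    grid[y][x] += value * falloff;
                }
            }
        }
    }

    // Normalize to [0, 1] so the result can be fed to a colour ramp.
    double[][] normalized() {
        double max = 0;
        for (double[] row : grid)
            for (double v : row) max = Math.max(max, v);
        double[][] out = new double[height][width];
        if (max == 0) return out;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                out[y][x] = grid[y][x] / max;
        return out;
    }

    public static void main(String[] args) {
        Heatmap h = new Heatmap(100, 100);
        h.addPoint(50, 50, 67.2, 20); // e.g. a noise reading near the centre
        h.addPoint(60, 55, 58.0, 20);
        System.out.println(h.normalized()[50][50]); // 1.0: the hottest cell
    }
}
```

The quadratic falloff is just the easiest choice; a Gaussian kernel would give smoother transitions between overlapping readings at slightly higher cost.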
Unfortunately, the strong emphasis on the technological side and the various problems we had (the agile methods that experimental projects require are still not understood by many companies) cut the time for reflection down to a minimum and did not allow us to come up with a deeper analysis of the social and political dimensions of what could be called “distributed urban intelligence”. The whole project is embedded in a somewhat naive rhetoric of citizen participation and the idea that technological innovation can solve social problems, in this case matters of urban planning and local governance. A lesson I have learned from this is that the current funding emphasis on short-term projects that bring together universities and industry makes it very difficult to carve out an actual space for scientific practice between all the deadlines and the heavy technical demands. And by scientific practice, I mean a *critical* practice that does not only try to base specifications and prototyping on “scientifically valid” approaches to building tools and objects but also includes a reflection on social utility that takes a wider view than just immediate usefulness. In the context of this project, this would have implied a close look at how urban development is currently configured with respect to environmental concerns, in order to identify structures of governance and chains of decision-making. That way, the whole project could have targeted issues more clearly and consciously, fine-tuning both the tools and the accompanying discourse to the social dimension it aimed at.
I think my point is that we (or at least I) have to learn how to better integrate a humanities-based research agenda into very high-tech projects. We have known for a long time now that every technical project is in fact a socio-technical enterprise, but research funding and the project proposals it generates still pretend that the “socio-” part is some fluffy coating that decorates the manly material core where cogs and wire produce tangible effects. As a programmer, I know how difficult and time-consuming technical work can be, but if there is to be a conscious socio-technical perspective in R&D, we have to accept that the fluffy stuff takes even more time – if it is done right. And to do it right means not only reading every book and paper relevant to a subject matter but also taking the time to reflect on methodology, to evaluate every step critically, to go back to the drawing board, and to include and produce theory every step of the way. There is a cost to the scientific method, and if that cost is not figured in, the result may still be useful, interesting, thought-provoking, etc., but it will not be truly scientific. I believe that we should defend these costs and show why they are necessary; if we cannot do so, we risk confining the humanities to liberal armchair commentary and the social sciences to ex-post usage analysis.
But there is a more practical reason why I ask myself this very question. Pierre Lévy actually used to work at my department, and my laboratory has recently struck up a cooperation with his research unit in Ottawa.
IEML (Information Economy Meta Language) is an artificial language designed to be simultaneously: a) optimally manipulable by computers; and b) capable of expressing the semantic and pragmatic nuances of natural languages. The design of IEML responds to three interdependent problems: the semantic addressing of cyberspace data; the coordination of research in the humanities and social sciences; and the distributed governance of collective intelligence in the service of human development.
IEML is not another syntax proposal for a semantic web like RDF or OWL. It is a philosopher’s creation of a new language that mainly allows two things: facilitating the processing of data tagged with IEML sentences and helping cross-language and intercultural reasoning. This page gives a short overview. Against the usual understanding of collective intelligence, IEML is really a top-down endeavor: Lévy came up with the basic syntax and vocabulary, and the proposal explicitly states the need for experts to help with formalization and translation. I must admit that I have been very skeptical of the whole thing, but after reading Clay Shirky’s “Here Comes Everybody” (which I found interesting but also seriously lacking – I’ll get to that in another post though), there is a feeling creeping up on me that Lévy might yet again be five years ahead of everybody else. In my view, the mindset of large parts of research on participation has adopted the ontology and ethics of American-brand Protestantism, which, among other things, identifies liberty and democracy with community rather than with the state and which imagines social process as a matter of developing collective morals and practices much more than as the outcome of power struggles mediated by political institutions. This view idealizes the “common man” and shuns expert culture as “elitist”. Equality is phrased less in socio-economic terms, as “equal opportunity” (the continental tradition), and more in cultural terms, as “equal recognition”. (Footnote: this is, in my view, why political struggle in the …)
I believe that the most interesting projects in the whole “amateur” sector are the ones that organize around meritocratic principles and consequently build hierarchies; open source software is the best example, but Wikipedia works in a similar fashion. The trick is to keep meritocracy from turning into hegemony. But I digress.
Lévy’s bet is that collective intelligence, if it wants to be more than pop culture, will need experts (and expert tools) for a series of semantic tasks ranging from cartography to translation. His vision is indeed much more ambitious than most of the things we have seen to this day. The idea is that with the proper (semantic) tools, we could collectively tackle problems that are currently very much out of reach – and this in a truly global fashion, without forcing everybody under the rather impoverished linguistic umbrella of Globish. Also, in order to make search more pluralistic and less “all visibility to the mainstream” than it currently is, we will need to get closer to the semantic level. I don’t believe that IEML, in its current iteration at least, can really do all these things. But I believe that, yet again, Lévy has the right intuition: if collective forms of “problem solving” are to go beyond what they currently do, they will have to find modes of organization that are more sophisticated than the platforms we have today. These modes will also have to negotiate a balance between “equal opportunity” and “equal recognition” and make their peace with institutionalization.