Lecture date: 22 Feb 2005
In an esoteric – yet fascinating – presentation, Luc Steels described a new paradigm for communication between disparate entities.
According to Steels, the world is moving rapidly towards a peer-to-peer information model. Music files, scientific research papers, travel information and bookmarks are just a few examples of the information that’s now being shared. However, P2P creates a mess. Everyone has their own metadata and their own way of presenting information.
Steels analyzed three approaches to communication in a P2P world and then explained why user-driven semiotic dynamics may be the best.
The first approach is to index information, which is what Google does. But the problem with Google and the other search engines is that they know nothing of semantics. Moreover, search engines have a creeping bias: some websites rise in ranking at the expense of others, partly because of the search engines’ commercial focus.
The second approach is to use ontologies – conventions for describing and categorizing information. But who is going to create the ontology? And who will enforce its use? Is it even possible to foresee and formalize everything that interests humans? Steels concludes that this ‘semantic web’ only works for restricted domains.
A third approach is ‘social software’. No, it’s not about dating or mating. The idea is that a community shares annotations (tags) that are used to index objects. Steels likes this approach better because it is user-driven and has no central control. Unfortunately, however, the decentralized nature of social software leads to a proliferation of tags. “In such a chaotic world, schema-mapping is inherently unsolvable,” says Steels.
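Tag-based indexing, and the fragmentation problem that worries Steels, can be sketched in a few lines of Python. This is a hypothetical illustration (the users, photos and tags are invented for the example, not taken from the talk):

```python
from collections import defaultdict

# free-form social tagging: each user attaches whatever tags they like
index = defaultdict(set)  # tag -> set of tagged objects


def tag(user, obj, *tags):
    # the tagger's identity is recorded nowhere in this sketch
    for t in tags:
        index[t].add(obj)


def search(t):
    # retrieval is a literal tag lookup, with no notion of synonyms
    return index.get(t, set())


# two users describe similar San Francisco photos with uncoordinated vocabularies
tag("ann", "photo-17", "sf", "goldengate")
tag("bob", "photo-18", "sanfrancisco", "bridge")
```

A search for "sf" now returns only Ann’s photo; Bob’s equivalent photo stays invisible under that tag, because nothing maps one vocabulary onto the other.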
Enter ‘emergent semiotics’. Despite its fancy name and arcane terminology, the idea is quite simple: entities learn to understand each other through repeated interactions. Steels calls this process of coordination ‘grounded communication’. Its power is that it is grounded in perception of the real world and based on negotiation between actual ‘users’. Other approaches, in contrast, are often based on artificial constructs or on the lopsided views of a central authority, be it a particular individual, group or organisation. In the ‘Talking Heads Experiment’ back in 1999, two robots invented their own language, which enabled one robot to describe coloured geometric shapes (a blue circle, for instance) and the other robot to identify them correctly. Moreover, these two robots started out like newborn babies: they could perceive sensory dimensions but had no innate ability to distinguish between colours.
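The dynamic behind such grounded communication can be sketched as a minimal ‘naming game’, loosely in the spirit of the Talking Heads experiment. The Python toy below is an illustration under simplifying assumptions (random agents, invented names), not Steels’s actual system:

```python
import random


class Agent:
    """A minimal naming-game agent holding candidate names per object."""

    def __init__(self):
        self.lexicon = {}  # object -> set of candidate names

    def name_for(self, obj):
        if not self.lexicon.get(obj):
            # no name yet for this object: invent a random one
            self.lexicon[obj] = {"w%05d" % random.randrange(100000)}
        return random.choice(sorted(self.lexicon[obj]))


def play(agents, objects, rounds):
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        obj = random.choice(objects)
        word = speaker.name_for(obj)
        if word in hearer.lexicon.get(obj, set()):
            # success: both drop competing names ('lateral inhibition')
            speaker.lexicon[obj] = {word}
            hearer.lexicon[obj] = {word}
        else:
            # failure: the hearer adopts the word as a new candidate
            hearer.lexicon.setdefault(obj, set()).add(word)


random.seed(1)
agents = [Agent() for _ in range(5)]
objects = ["red circle", "blue square", "green triangle"]
play(agents, objects, rounds=3000)

# consensus: does the whole population now share one name per object?
consensus = all(
    len(set.union(*(a.lexicon.get(o, set()) for a in agents))) == 1
    for o in objects
)
```

No agent dictates the vocabulary; after enough pairwise games the population nevertheless converges on a shared name for each object, which is the point of the paradigm.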
Emergent semiotics is most effective, Steels believes, because perception will always differ across cultures, and even from individual to individual. A study comparing two island tribes (the English and the Berinmo of Papua New Guinea) found that their colour categorization was vastly different. Steels claims that emergent semiotics will finally allow us to accept such differences and simply coordinate. It doesn’t matter if your ‘red’ looks like ‘pink’ to me, as long as I eventually understand that “Oh, that’s what you mean by red.”
This communication process may sound like a lot of work, and it is. But fortunately, software agents do the work for us. By using genetic algorithms, for example, the best solution literally ‘evolves’. An explicit ‘man-made mapping’ between information sources is no longer necessary.
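To illustrate the evolutionary idea, here is a minimal genetic-algorithm sketch in Python. The bit-string ‘mapping’, the fitness function and all parameters are invented for the example; Steels’s agents are of course far more sophisticated:

```python
import random


def evolve(fitness, length, pop_size=50, generations=200, p_mut=0.02):
    """Evolve a bit-string: truncation selection, one-point crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)  # one-point crossover
            pop.append(
                [bit ^ (random.random() < p_mut)  # occasional bit-flip
                 for bit in a[:cut] + b[cut:]]
            )
    return max(pop, key=fitness)


random.seed(0)
# toy 'mapping' to recover: bitwise agreement with a fixed target schema
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
score = lambda g: sum(x == t for x, t in zip(g, target))
best = evolve(score, length=len(target))
```

Nobody hand-codes the mapping: only the fitness measure is given, and candidate solutions that score better leave more offspring until a near-perfect match ‘evolves’.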
In this brave new world, no human intervention is required. “We will have to accept that information systems are like living organisms. We don’t fully control them and they simply survive,” says Steels.
Still, humans can’t always leave things alone. In one of the experiments, hackers taught the robots dirty words. Researchers were embarrassed by these profane robots when groups of school children visited the exhibition.