In this section, we show how to use MultiContext Systems (MCS) as a formal framework in which knowledge representation and integration in a distributed intelligence system can be specified. MCS were introduced in [9]. Their semantic counterpart, Local Models Semantics, was presented in [10]. A foundational account of MCS can be found in [1]. MCS have also been used as a specification language for multi-agent systems [20].
MCS can be described as a logic of relationships between local, independent representations. The logic is based on two very general principles [10]. The first, named the principle of locality, says that reasoning is intrinsically local, namely, it happens within a given context. The second, named the principle of compatibility, says that reasoning in a context is partly constrained by its relationship with reasoning processes that happen in other contexts. These principles are given both a proof-theoretical and a model-theoretical formalization.
Locality. Proof theoretically, each context $c_i$ is associated with a logical theory, finitely presented as an axiomatic formal system $\langle L_i, \Omega_i, \Delta_i \rangle$ (where $L_i$ is a formal language, $\Omega_i$ is a set of axioms, and $\Delta_i$ is a correct and complete set of inference rules defined over $L_i$). Notationally, we write $i : \phi$ to express the (metalinguistic) fact that $\phi$ is a formula of the context $c_i$ (i.e. $\phi \in L_i$). Model theoretically, each context $c_i$ is characterized as a set of models (called local models) of the language $L_i$ (for the moment, we require that these models satisfy at least the theorems of the theory associated with $c_i$).
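The principle of locality can be illustrated with a small sketch. The following Python fragment models a context as a finitely presented theory and closes its axioms under its local inference rules; the class and method names (`Context`, `theorems`) are illustrative choices, not part of the MCS literature.

```python
# A minimal sketch of the proof-theoretical side of a context:
# a finitely presented theory <L_i, Omega_i, Delta_i>.
from dataclasses import dataclass, field

@dataclass
class Context:
    name: str                                     # e.g. "c1"
    language: set = field(default_factory=set)    # L_i: here, a set of atoms
    axioms: set = field(default_factory=set)      # Omega_i: a subset of L_i
    rules: list = field(default_factory=list)     # Delta_i: (premises, conclusion) pairs

    def theorems(self):
        """Close the axioms under the local inference rules (forward chaining)."""
        derived = set(self.axioms)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if set(premises) <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

# Locality: reasoning happens entirely inside one context.
c1 = Context("c1",
             language={"p", "q", "r"},
             axioms={"p"},
             rules=[({"p"}, "q")])
print(sorted(c1.theorems()))   # ['p', 'q'] -- r is in the language but not derivable
```

Note how the theory in isolation derives nothing about `r`: without compatibility constraints, each context is a closed reasoning space.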
Compatibility. Proof theoretically, compatibility is formalized as a collection of bridge rules, namely inference rules whose premisses and conclusion belong to different contexts, for example:
\[
\frac{c_1 : \phi}{c_2 : \psi}
\]
The proof theoretical effect of the principle of compatibility is that it increases the set of theorems which are locally derivable in a context (with respect to the theorems that can be derived in the associated theory taken in isolation). The model-theoretical effect of the principle of compatibility is that it cuts off the set of local models that satisfy a context (again, with respect to the models that satisfy the associated theory taken in isolation), as it eliminates the (sets of) local models that are not compatible with (sets of) local models of other contexts.
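The proof-theoretical effect described above (bridge rules enlarging the sets of locally derivable formulas) can be sketched as a fixpoint computation. This is a toy illustration, not the formal MCS machinery; the function name and the example formulas are invented for the purpose.

```python
# Toy sketch: bridge rules enlarge the theorems locally derivable in each context.

def close_under_bridges(theorems, bridge_rules):
    """theorems: dict mapping each context to its theorems derived in isolation.
    bridge_rules: list of ((src_ctx, premiss), (dst_ctx, conclusion)) pairs.
    Fires bridge rules repeatedly until a fixpoint is reached."""
    theorems = {c: set(ts) for c, ts in theorems.items()}  # don't mutate the input
    changed = True
    while changed:
        changed = False
        for (src, phi), (dst, psi) in bridge_rules:
            if phi in theorems[src] and psi not in theorems[dst]:
                theorems[dst].add(psi)
                changed = True
    return theorems

# Theorems derivable in each context taken in isolation:
local = {"c1": {"paper_jammed"}, "c2": {"roller_worn"}}
# A bridge rule: a premiss in c1 licenses a conclusion in c2.
bridges = [(("c1", "paper_jammed"), ("c2", "check_feed_assembly"))]

closed = close_under_bridges(local, bridges)
print(sorted(closed["c2"]))   # ['check_feed_assembly', 'roller_worn']
```

The result in `c2` strictly extends what its theory derives in isolation, which is exactly the proof-theoretical effect of compatibility; the dual model-theoretical effect (discarding incompatible local models) is not shown here.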
Leaving aside the technicalities, MCS are a highly flexible and modular way to formalize a collection of local representations and to model the process of knowledge integration (via compatibility relations) without resorting to the GEP. As an example, consider the interaction between a user and a technician who has been called to repair a photocopier. Each of them has a representation of the machine which is partial (e.g. the user knows very little of what is inside the machine, whereas the technician has no information about the ``history'' of the machine and the conditions in which it has been used), approximate (the user and the technician have knowledge about the machine at very different levels of detail), and perspectival (the user's perspective on the machine is quite different from the technician's: the first has to use it, the second to repair it). In short, the user and the technician have local representations of the photocopier. In MCS, this means that we represent what the user and the technician know as two different contexts, each with its own representation language (they do not completely share the lexicon), its own set of axioms (they have different information), and its own inference rules (here, for simplicity, we assume that they have the same reasoning abilities). Notice that the languages of the two contexts are interpreted over their respective sets of local models.
The crucial question now is: how do the user and the technician integrate what they know in order to communicate and cooperate in solving the machine's problem? The intuition is that knowledge cannot be shared across contexts, but that the fact that they are talking about the same machine imposes a compatibility relation between their representations. Some aspects of this relation can be known in advance (e.g. the technician may know how to map part of what the user says into a more technical language); some need to be learned during communication through a process of meaning negotiation. Whichever the case, the relation can be represented as a collection of bridge rules, namely rules that allow the technician to map onto his/her language what the user says (and vice versa). Notice that, in general, knowledge about this relationship can be incomplete (the user and the technician may have only partial knowledge of it) and, even worse, incorrect. Incompleteness means that the user or the technician (or both) lack some bridge rule; incorrectness means that they are not using the right bridge rules. Bridge rules (or, correspondingly, compatibility relations) are the way MCS formalize knowledge integration.
Notice that, as a consequence of accepting the DIP, both incompleteness and incorrectness cannot be eliminated a priori from the system, but must be detected in the communication process. In a conversation, there are some typical situations that allow us to realize that something is going wrong. For example, we can imagine that the technician uses a term that the user has never heard before; or the user describes a problem of the machine in such a way that the technician cannot make sense of it. In these situations, the two speakers start a process of meaning negotiation, whose goal is to establish new links between local representations (e.g. learning a new word, learning that a word has a different meaning, learning how to map a functional problem into a technical description, and so on).
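The negotiation process described above can be sketched as bridge-rule learning: a failed mapping signals a missing bridge rule, and negotiation establishes a new one. All terms, dictionaries, and function names below are invented for illustration, assuming the photocopier scenario from the text.

```python
# Sketch: meaning negotiation as bridge-rule learning between two contexts.

technician_bridges = {           # bridge rules: user's term -> technician's term
    "paper stuck": "feed jam",
}
technician_language = {"feed jam", "worn roller", "toner low"}

def interpret(user_term):
    """Map a user's term into the technician's language; None signals
    a missing bridge rule (incompleteness), i.e. negotiation is needed."""
    return technician_bridges.get(user_term)

def negotiate(user_term, proposed_technical_term):
    """Meaning negotiation: establish a new link between local representations."""
    assert proposed_technical_term in technician_language
    technician_bridges[user_term] = proposed_technical_term

print(interpret("paper stuck"))    # 'feed jam'
print(interpret("smudgy pages"))   # None -- no bridge rule: something is going wrong
negotiate("smudgy pages", "toner low")
print(interpret("smudgy pages"))   # 'toner low' -- a new link has been learned
```

Incorrectness would correspond to a wrong entry in `technician_bridges`; detecting it would likewise require a communication failure followed by renegotiation.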
These ideas, which in many respects recall those discussed from an organizational perspective in works such as [21,2], have a direct application to KM. The case of AA described in section 2 is a paradigmatic example of how eliminating the contextual aspects of knowledge (the KB approach), or the possibility of meaning negotiation across context-dependent representations (the AAOnLine approach), may lead to failures.