Contextual and contextualized knowledge: an application in subway control

Int. J. Human—Computer Studies (1998) 48, 357—373

Contextual and contextualized knowledge: an application in subway control

P. BRÉZILLON, J.-CH. POMEROL AND I. SAKER

LIP6, Case 169, University Paris 6, 4 place Jussieu, 75252 Paris Cedex 05, France. email: {brezil, pomerol, saker}

The control of subway line traffic is a domain where operators must deal with huge quantities of knowledge that are more or less implicit in the control itself. When an incident occurs on a subway line, the operator must choose the best strategy for moving from the incidental context to the operational one. An incident on the subway line may cause traffic delay or service interruption and may last for a long or short time, depending on the nature of the incident and many other elements. Operators mainly focus on contextual information for incident solving. As one operator said, ‘‘When an incident occurs, I look first at what the incident context is’’. We propose to support subway line traffic operators in incident solving with an incident-manager system, which is part of the SART project (French acronym for support system in traffic control). The incident manager is a decision-support system based on the contextual analysis of events that arise at the time of the incident. It uses a context-based representation of incidents and applies context-based reasoning. In this paper we discuss a context-based representation of incidents on the basis of the onion metaphor. The SART project is now entering the second year of system design and development and involves two universities and two subway companies, in France and Brazil. © 1998 Academic Press Limited

1. Introduction

Durkin (1993) lists about 2500 expert systems, but their exact operational status is often not precisely stated. Majchrzak and Gasser (1991) point out that more than 50% of the systems installed in companies are not used. Brézillon and Pomerol (1996) consider the lack of consideration of context in knowledge-based systems (KBSs) to be one of the main reasons behind some KBS failures. The hypothesis of our work is that making context explicit, acquiring knowledge incrementally and explaining in context must be considered as intrinsic aspects of problem solving in which the user has a crucial role to play. According to Abu-Hakima and Brézillon (1994), efficient cooperation between a human and a machine implies the consideration of three related aspects: explanation, incremental knowledge acquisition and context. This work falls within the realm of intelligent assistant systems (IASs) (Boy, 1991; Brézillon & Cases, 1995). Knowledge in current KBSs is acquired in a monolithic manner before the KBS is used. If a problem with the knowledge is detected, the developer is forced to modify the knowledge, re-load (re-compile or re-interpret) it and then test the reasoning again. Capturing and using knowledge in its context of use greatly simplifies knowledge




acquisition because knowledge provided by experts is always given in a specific context and is essentially a justification of the expert’s judgment in that context. However, there is no common agreement on a definition of context, although this notion seems rather important when developing IASs. In the available literature about context (Brézillon, 1996), many questions still arise, either specific to the domain or general ones such as: What is context? What is its role? How can it be modelled? Why make context explicit? This leads us to try to give a general idea of what has been done in relation to context so far. We discuss in Section 2 the notion of context and the relationship between contextual and contextualized knowledge. In Section 3, we present the relationships between context, knowledge and reasoning, together with the nature of knowledge and reasoning and its consequences for representing context. Section 4 describes our application for subway control, and Section 5 the Incident Manager, the system that we are implementing. A general discussion of the role of context is given in Section 6.

2. The notion of context

2.1. INTRODUCTION

Context has long played an important role in a number of domains. This is especially true for activities such as predicting context changes, explaining unanticipated events, helping to handle them and helping to focus attention. In Artificial Intelligence, context was first introduced in a logicist framework by McCarthy (in his 1971 Turing Award talk), and his recent ideas on context are published in McCarthy (1993). However, there is no clear definition of this concept [all the corresponding references are given in Brézillon (1996)]: a set of preferences and/or beliefs; a window; an infinite and only partially known collection of assumptions; a list of attributes; the product of an interpretation, and a collection of context schemas; paths in information retrieval; slots in object-oriented languages; buttons which are functional, customisable and shareable; possible worlds; assumptions under which a statement is true or false; a special, buffer-like data structure; an interpreter which controls the system’s activity; the characteristics of the situation and the goals of the knowledge in use; or entities (things or events) related in a certain way that permits listening to what is said and what is not said. In this section, we present a reason for so many definitions and diverging positions on context. Then, we give our own position on context.


The notion of context depends on whether it is interpreted from a cognitive science viewpoint or from an engineering (or system-building) viewpoint. The cognitive science view is that context is used to model interactions and situations in a world of infinite breadth, where human behaviour is key in extracting a model. The engineering view is that context is useful for representing and reasoning about a restricted state space within which a problem can be solved. On closer examination, Brézillon and Abu-Hakima (1995)



conclude that the engineering view is subsumed by the cognitive science view. In this paper, we focus on the engineering view. A context is a structure, a frame of reference, that permits the dropping of certain statements in a story. For example, ‘‘At his birthday party, Paul blew out the candles’’. Here, it is not said that there was a birthday cake, because this is clear to everybody. Contextual knowledge is supposed to be a part of our social inheritance. However, if a candle falls down while Paul is blowing on the cake, the speaker will naturally add ‘‘and a candle fell down on the cake’’. Then, the cake no longer stays in the context, but intervenes directly in the utterance. There is also a consensus on the fact that context is inseparable from its use. Context is considered as a shared knowledge space that is explored and exploited by participants in the interaction. Such shared knowledge includes the history of all that ensued over an interval of time; the overall state of knowledge of the participating agents at a given moment; and the small set of things they are attending to at that particular moment. Other elements intervening in a context come from the domain (e.g. problem solving, the task at hand, events, instantiated objects and constraints, the knowledge inferred by the system), the users (e.g. goals, expertise, beliefs, learner’s profile, value assignments), their environment (e.g. organizational knowledge, corporate memory) and their interaction with a system (e.g. transaction history, plans for the future, attention-focusing information). However, context lacks a recognizable unifying characteristic and is often the generalization of an infinite and only partially known collection of assumptions, as claimed by McCarthy (1993). In our application, we compare an incident with known incidents and compare their contexts to help the operators select the best strategy.
A crucial problem is to find a way to represent the contextual component of knowledge explicitly. The literature is not very helpful at this level because context is rarely implemented explicitly. Rather, one finds applications where context is coded implicitly, often obscuring the knowledge representation. A well-known example is the tetracycline rule in MYCIN presented by Clancey (1983), where a screening clause limits the consideration of the rule to persons older than 17 years. The screening clause does not participate in the domain knowledge, but acts as a contextual clue that compiles the information that tetracycline gives a violet colour to teeth.


In line with the opinions expressed in the literature, our position is that context is what constrains something without intervening in it explicitly. For us, that something is a problem-solving process carried out by a user and a system, together with the successive steps of that problem solving. Thus, we discuss mainly the context of a problem-solving step and then give some elements about the context of the problem solving itself. The main lesson of the logical approach is that context is infinite and thus cannot be described completely. For a machine, however, there is a finite number of knowledge pieces. Thus, context considered as knowledge pieces will necessarily have a finite dimension. We

- At least, in Western countries.



consider three types of knowledge in relationship to context with respect to a given step of problem solving (we illustrate this aspect in Section 5). (1) Contextualized knowledge is knowledge directly used at the step. (2) Contextual knowledge is knowledge not directly used at the step but constraining it. (3) External knowledge is knowledge having nothing to do with the problem-solving step. Such a distinction is made according to what the problem-solving step is focused on. For example, the operators who monitor the distribution of water in Paris noted some years ago that there was a peak in water consumption (the contextualized knowledge) late each evening. The peak was reproducible every day but not predictable because it did not occur at exactly the same time. After an inquiry, they discovered that people use water for domestic needs (drinking a glass of water, washing dishes, watering flowers, going to the toilet, etc.) during the advertisement breaks inserted into the films shown on TV. The placement of advertisements in a film depends on the film scenario. Such knowledge (the time at which advertisements appear on TV) has a contextual nature for water distribution: it constrains the monitoring task without intervening in it explicitly (water distribution does not depend on advertisements). Moreover, the nature of the knowledge (contextualized or contextual) changes as the problem solving progresses from one step to the next: a piece of contextual knowledge may become either contextualized knowledge or external knowledge and, conversely, a piece of contextualized knowledge may become either contextual knowledge or external knowledge. For example, the French power company encountered a problem similar to the problem of water consumption.
However, the time at which the peak of electricity consumption occurred (people coming back home) had been clearly identified and integrated into the company’s distribution planning. Thus, contextual knowledge had been contextualized. The finite nature of a knowledge base implies that the contexts associated with steps are discrete contexts. The intersection of these contexts is not empty because some pieces of contextual knowledge may belong to the contexts of different steps. The context of the problem solving as a whole contains all these discrete contexts, and evolves continuously during the problem solving. The evolution of the problem-solving context also depends on external events, such as information coming from the user with whom a system interacts. As a consequence, the nature of a context depends on the level of generality at which we consider it. Karsenty and Brézillon (1995) discuss other aspects of the context of problem solving.
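The three knowledge types, and their reclassification from one step to the next, can be sketched in a few lines of Python. This is a minimal illustration using the water-distribution example above; all names and data structures are our own assumptions, not part of SART.

```python
# A minimal sketch (hypothetical names) of the three knowledge types
# relative to a problem-solving step: contextualized (directly used at
# the step), contextual (constrains the step), external (irrelevant).

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    used: set = field(default_factory=set)         # contextualized knowledge
    constraints: set = field(default_factory=set)  # contextual knowledge

def classify(piece: str, step: Step) -> str:
    """Classify one knowledge piece with respect to a given step."""
    if piece in step.used:
        return "contextualized"
    if piece in step.constraints:
        return "contextual"
    return "external"

# Water-distribution example from the text: at the monitoring step, the
# evening consumption peak is contextualized, TV advertisement times are
# contextual, and unrelated facts are external.
monitoring = Step("monitor water distribution",
                  used={"evening consumption peak"},
                  constraints={"TV advertisement times"})

print(classify("evening consumption peak", monitoring))  # contextualized
print(classify("TV advertisement times", monitoring))    # contextual
print(classify("metro timetable", monitoring))           # external
```

Because classification is relative to a `Step`, re-running `classify` against the next step’s `Step` object captures the reclassification described above: the same piece may move between the three categories as the problem solving progresses.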

3. Context, knowledge and reasoning

3.1. DIFFERENT TYPES OF KNOWLEDGE

Different authors have attempted to distinguish different types of knowledge. Worden, Foote, Knight and Andersen (1987) distinguish declarative domain knowledge (stored as rules and facts), dynamic knowledge (stored in datasets), procedural knowledge (stored in tasks and subtasks), task division knowledge (stored as rules), system self-knowledge



about the structure and quality of its own knowledge (dependency rules, timestamps, sources of updates and measures of reliability) and ‘‘how to be an assistant’’ knowledge. Stothert and McLeod (1997) introduce another distinction, between a priori and operational knowledge. A priori knowledge defines what is known about the plant. It is used to build a framework that facilitates a solution for the control problem being addressed. Operational knowledge is the knowledge available during plant operation to determine future control actions. Reatgui, Campbell and Leao (1997) consider specific and general knowledge in reasoning. Specific knowledge is represented in the form of cases, while general knowledge is represented in the form of category descriptors. Globally, one must distinguish static knowledge from dynamic knowledge. Static knowledge concerns the definition of the domain and the task at hand. Dynamic knowledge is about the control of the domain and task. It depends on the environment in which the problem is solved. As a consequence of the changing environment in real-world applications, the same task can be treated differently in different contexts. This point meets Suchman’s thesis, for whom human actions are situated and are not controlled by prior plans in the way that a program controls a computer (Suchman, 1987).


Part of the domain knowledge is reached through sensors that provide data, and information is drawn from these data. Aamodt and Nygard (1995) underline the unclear distinction between data, information and knowledge and its consequences for the development of integrated systems. For these authors, one acquires information from data on the basis of one’s knowledge. Moreover, a specific item can be viewed as data, information or knowledge, depending on the role the item plays in decision-making and in learning from experience. The use of domain knowledge is constrained not only by technology changes but also by specific problems. For instance, Degani and Wiener (1997) distinguish procedures, practices and techniques. Procedures are specified beforehand by developers to save time during critical situations. Practices encompass what users actually do with procedures. Ideally, procedures and practices should be the same, but users either conform to a procedure or deviate from it, even if the procedure is mandatory. Techniques are defined as personal methods for carrying out specific tasks without violating procedural constraints. Techniques are developed by users over years of experience. Knowledge acquisition focuses on procedures and, possibly, practices, but rarely on techniques because techniques are developed case by case and cannot be generalized. Beyond these technical aspects, there is often little attention to spontaneous practices of knowledge elaboration. This type of meta-functional activity, parasitic to the work itself, results in the elaboration of new knowledge and new mental or external tools. Such activities are triggered by functional difficulties met during task fulfillment. In workplaces, clerks develop working ways (strategies, relationships among themselves, etc.) to reach the efficiency that decision-makers expect when they design the work. Part of their practical problem solving is never codified. Such know-how is generally elaborated case by case in non-written rules. A non-written rule takes into account the real context of the problem at a given moment. Such ‘‘makeshift repairs’’ permit the executive actors to



reach the required efficiency. For example, Falzon, Sauvagnac and Chatigny (1996) observed that operators in the control room always had a notebook with them in which they occasionally wrote different types of information: telephone numbers of specialists in specific domains, references to relevant documentation, tests to be performed, etc. Such notebooks are unofficial documentation, generally forbidden by the organization. The validation of such non-written rules is linked more to the result than to the procedure used to reach it. Moreover, in most real-world applications, a decision-maker faces ill-defined situations where the form of the argumentation, rather than the explicit decision proposal, is crucial (Forslund, 1995). This implies that it would be better to store advantages and disadvantages rather than complete decisions. This is what we call contextual knowledge.


Identifying problems is often more valuable for practitioners than providing solutions. From this arises the difficulty of planning everything beforehand and the need to make context explicit in any application. Xiao, Milgram and Doyle (1997) study the work of anesthesiologists, for whom each patient can represent a drastically different ‘‘plant’’ and for whom there are therefore relatively few well-defined procedures stipulated either by inside professional communities or by outside regulatory agencies. For these authors, the planning process appears to be fragmentary and non-exhaustive. The same conclusion has been reached in other domains too (see e.g. Hoc, 1996; Debenham, 1997; Bainbridge, 1997). The main reason is the need to make explicit the relationships between a system and its changing environment. In an aviation application, Degani and Wiener (1997) show that the descent phase is highly context-dependent due to the uncertainty of the environment (e.g. air traffic control, weather), making it quite resistant to procedurization. Hollnagel (1993) proposes a contextual control model that distinguishes between a competence model (of actions, heuristics, procedures, plans) and a control model (mechanisms for constructing a sequence of actions in context), which are both heavily influenced by context (skills, knowledge, environmental cues, etc.). If a priori planning is impossible, a solution is the acquisition of skills for anticipating, or look-ahead (see Pomerol, 1997). For Hoc (1996), the anticipative mode is the usual functioning mode of humans: the human operator always checks, more or less explicitly, a hypothesis instead of staying in a situation of discovery or surprise. An anticipatory system is a system that uses knowledge of future states to decide what action to take in the present. An anticipatory system has a model of itself and of the relevant part of its environment and uses this model to predict the future (Ekdahl, Astor & Davidsson, 1995).
The system then uses the predictions to determine its behaviour, i.e. it lets future states affect its present states. An intelligent system should, as far as possible, be able to anticipate what will possibly happen and pre-adapt itself to the occurrence of a crucial or time-critical event. This implies that an intelligent system must be equipped with a simulation component. One must also underline that if it is difficult to fix the status of the knowledge, the means used to represent it also intervene: the representation determines how the knowledge is manipulated and, as a result, affects the ‘‘intelligence’’ of the system.
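The anticipatory loop just described can be sketched very simply: the system simulates each candidate action on a model of its environment and lets the predicted future state determine the present choice. The model, cost function and action set below are toy assumptions for illustration only.

```python
# A minimal sketch (toy model) of an anticipatory system: simulate
# candidate actions on a model of the environment, then choose the
# action whose predicted future state is best.

def simulate(state: int, action: int) -> int:
    """Toy environment model: the next state after applying an action."""
    return state + action

def anticipate(state: int, actions, cost) -> int:
    """Choose the action whose predicted future state minimizes cost."""
    return min(actions, key=lambda a: cost(simulate(state, a)))

# Assumed toy cost: distance of the predicted state from a target of 0.
best = anticipate(state=5, actions=[-3, -1, 2], cost=abs)
print(best)  # -3: it brings the predicted state closest to the target
```

The point of the sketch is the structure, not the arithmetic: the decision at the present step is driven entirely by simulated future states, which is what equips the system for look-ahead.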




We have pointed out that two incidents are never identical because the environment is never exactly the same, so it is necessary to account for their respective contexts. This is not a problem for small domains, but for large repositories storing different kinds of information (e.g. multifunctional information bases or federated databases), request specification may become a bottleneck. This problem is beginning to be addressed in different communities, such as machine learning and case-based reasoning. In case-based reasoning, Jurisca (1994) proposes context-based similarity as a basis for flexible retrieval. The proposed similarity-assessment theory represents constraints on similarity matching explicitly, and a context is the set of attribute values with associated constraints. Context allows one to specify how the matching should be performed, giving locally based matching criteria as proposed by Kolodner (1993). It also specifies what parts of the information representation to compare and what kind of matching criteria to use. Items are then considered relevant if they are similar with respect to the current context. The motivation for considering context is its ability to bring additional knowledge to the reasoning process and thus focus attention on relevant details (Light & Butterworth, 1993; Giunchiglia, 1993). When retrieving items, we might want to consider all the attributes used to describe them or only certain subsets of them (in different situations, different subsets may be relevant). The latter point is important because it may be impossible to represent all the attributes of a real-world object: the repository item is constrained by the limited amount of information it carries. We can see this constraint as a context applied to a real-world object, an implicit context. An implicit context allows us to map a real-world object into a particular representation (an item), and an explicit context allows the definition of a specific view of the item in an information base.
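A minimal sketch of this kind of context-based matching, loosely in the spirit of Jurisca’s proposal: the context selects which attributes to compare and supplies the matching constraint for each. The attribute names and predicates below are our own illustrative assumptions, not taken from SART.

```python
# A hedged sketch of context-based similarity: the context is a mapping
# from attribute name to a matching predicate (the constraint), so it
# decides both WHAT to compare and HOW to compare it.

def similarity(item_a: dict, item_b: dict, context: dict) -> float:
    """Fraction of context-relevant attributes on which both items
    satisfy the context's matching predicate."""
    matches = 0
    for attr, match in context.items():
        if match(item_a.get(attr), item_b.get(attr)):
            matches += 1
    return matches / len(context)

# Two incidents compared only on what the current context cares about.
incident_a = {"type": "ill traveller", "period": "peak", "delay_min": 12}
incident_b = {"type": "ill traveller", "period": "off-peak", "delay_min": 10}

context = {
    "type": lambda x, y: x == y,                # exact match required
    "delay_min": lambda x, y: abs(x - y) <= 5,  # tolerance constraint
}
print(similarity(incident_a, incident_b, context))  # 1.0: period is ignored
```

Changing the context (e.g. adding `"period"` with an equality predicate) changes the similarity judgement without touching the items themselves, which is exactly the flexibility that context-based retrieval is meant to provide.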
Explicit context also allows the definition of a mapping between items defined in different schemata (Jurisca, 1994). Reatgui et al. (1997) point out that context may also play a role in another way. Some findings can imply the presence of other findings. Thus, the system is able to realize that a finding may be present in one case and not in a second one, but that its existence implies the presence of other findings that can be observed in the second case. This type of inference makes it possible for two cases that do not share a large number of common findings to have a good degree of similarity. Thus, similarity judgements are made with respect to representations of entities, not with respect to the entities themselves.

3.5. APPRAISAL

Our approach relies on a decomposition of knowledge as discussed in this section. We distinguish the domain knowledge, the knowledge needed to use it and the knowledge needed to communicate it. Each of the three categories presents its own problems. The knowledge to communicate concerns, on the one hand, ‘‘how to be an assistant’’ and, on the other, the interactions among the software agents. Focusing first on three agents inside SART, we will tackle this last type of knowledge with the design and development of the communication agent. This brief overview of the nature of knowledge leads to the following conclusions concerning our SART project. A part of the domain knowledge can be represented by



static knowledge, e.g. to build a line model. However, parts of the domain knowledge change because the environment evolves dynamically. As a consequence, an intelligent assistant system must have (1) a context-based representation of the changing knowledge (technology changes and descriptions of incidents) and a special type of case-based reasoning; (2) the ability to dynamically assemble fragments of plans to provide efficient decision support; and (3) simulation means for look-ahead.

4. The subway control

4.1. THE SUBWAY CONTROL AND INCIDENTS

The main features of a subway line are its great passenger transport capacity (about 60,000 travellers per hour in the Parisian subway) combined with a regular transport supply. Regularity is particularly important at times of great user influx (peak hours) as well as at times of lower influx (off-peak hours). Thus, the transport supply must always be kept compatible with users’ demand by keeping the time interval between trains close to that of a given timetable. The regularity of the service is obtained by a complex association of automatic and communication systems that allow the on-line operators to be aware of which events are occurring at any time. The control of train timing aims at checking whether each train follows the theoretical timetable. The timing regulation is associated with an interval regulation when an incident makes it impossible to follow the timetable. A model of a subway line contains three types of information: a model of the line sections, theoretical and practical timetables, and regulation algorithms. The timetables give the interval between trains and account for the hour of the day (peak or off-peak period), the day of the week (working day or weekend), the season, holidays, etc. Most of the operators’ work is to ensure that trains follow these timetables, which have been established on the basis of their experience (in Paris, the subway has been in existence for the past 100 years). The timetable thus appears as a compiled expression of a number of pieces of contextual information concerning train regulation (number of trains, moment of the day, driver availability, etc.). An incident may concern any of the three types of information, notably static elements of the line (e.g. a station), trains (e.g. a blocked door), travellers (e.g. a suicide), train drivers (e.g. starting too late from the terminal of the line) and others (e.g. a dog in a tunnel).
When an incident occurs, the operator must take predictive or corrective actions to avoid, if possible, a more critical situation that may disrupt the normal supply of transport on the line (especially at peak hours), and to restore a normal situation rapidly after the incident (at least partially). The consequences of an incident are highly dependent on the context in which the incident occurs (e.g. peak hours or not). The operators’ decisions rely heavily on the context of the incident, and operators may reach different decisions for the same incident in different contexts. For example, the operator will consider the fact that there will soon be a rush of people into the subway because a football match will end in a few minutes, even if it is the evening (i.e. off-peak hours). Thus, for identifying an incident, one needs to know its context, its origin and its consequences on the train traffic.



With such heavy traffic, there is a high number of incidents in the subway in Paris. For example, there is an average of one suicide every 2 days. Clearly, incidents occur more frequently during peak hours than during off-peak hours. It is when the number of trains in circulation (and thus the number of users) is at its highest level that any incident may rapidly lead to a situation very far from the normal one. For example, on the French regional express subway (RER), it has been shown that a delay of 10 s at a station is propagated all along the RER line and may imply a delay of 30 min several stations beyond the initial station in less than 1 h. An incident occurring during off-peak hours may also lead to a peak-hour situation if (1) the incident cannot be solved before the following peak-hour interval; or (2) an external event suddenly transforms the off-peak-hour situation into a peak-hour one. An example of the latter situation is an incident occurring at off-peak hours when it suddenly starts raining outside: a number of people change their minds and take the subway to continue their journey. Thus, the elements intervening in an incident situation are: the state of the line, the state of the line exploitation, the incident identification and the strategy applied, the action sequence of the operator and the context in which the incident occurred. Context includes information and knowledge about the situation that do not intervene directly in the incident solving, but constrain the way in which the operator will choose a strategy at each step of incident solving. The operator intervenes at a local level and at a global one. At the local level, the operator helps employees at the place of the incident. This may be a phone call to the police if a traveller is injured, the isolation of a line section for any problem concerning power supply, etc.
At the global level, the operator must minimize the consequences of a train stopping on the control of the other trains in order to restore rapidly the normal interval between trains (i.e. the timetable). This may involve ordering drivers to wait a few minutes longer in a station, planning the withdrawal of a damaged train or organizing a temporary service on each side of the line section where the incident is. This is the reason why an operator told us that when an incident is announced, he first looks at what the current context is. It is particularly crucial that operators make their decision rapidly when choosing the right strategy. Computer systems may support operators in several ways by rapidly managing a huge quantity of more or less connected knowledge pieces. Since operators rely heavily on context-based reasoning to solve incidents, a computer system can review past incidents to (1) retrieve the same (or similar) incidents in the past; (2) compare their contexts; (3) propose an ordered list of possible strategies; and (4) argue about any possible alternative.
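Support functions (1)-(3) can be sketched together: past incidents are ranked by the overlap between their recorded context and the current one, yielding an ordered list of candidate strategies. The data structures and example incidents below are our own assumptions for illustration, not SART’s actual representation.

```python
# A minimal sketch (hypothetical data) of retrieving similar past
# incidents, comparing their contexts and proposing an ordered list
# of strategies.

def context_overlap(ctx_a: set, ctx_b: set) -> float:
    """Jaccard overlap between two incident contexts."""
    return len(ctx_a & ctx_b) / len(ctx_a | ctx_b)

def propose_strategies(current_ctx: set, past_incidents: list) -> list:
    """Rank past strategies by how well each incident's recorded
    context matches the current context."""
    ranked = sorted(past_incidents,
                    key=lambda inc: context_overlap(current_ctx, inc["ctx"]),
                    reverse=True)
    return [inc["strategy"] for inc in ranked]

past = [
    {"ctx": {"peak", "ill traveller"}, "strategy": "hold trains upstream"},
    {"ctx": {"off-peak", "door blocked"}, "strategy": "withdraw the train"},
]
current = {"peak", "ill traveller", "rain"}
print(propose_strategies(current, past))
# 'hold trains upstream' ranks first: its context overlaps most
```

Function (4), arguing about alternatives, would additionally require storing the advantages and disadvantages attached to each strategy, in line with the discussion of Forslund (1995) above.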


Contextual considerations are normally drawn up on the basis of the experience of operators, drivers and the maintenance staff. They analyse the on-line situation at the time the incident occurred (time of occurrence, track loads, seriousness of the incident, etc.) to define the strategies to be adopted to restore normal service. Representing knowledge with its context requires rethinking knowledge representation. There are two aspects of context that play an important role. One aspect is that context is



composed of knowledge pieces that are related to a given piece of knowledge. This gives a static view of knowledge and of the way in which knowledge pieces are related within their context of use. The dynamic view, proposed by Edmondson and Meech (1993), is to consider context as a mechanism of contextualization for retrieving and storing knowledge, and for linking it to the reasoning mechanism that associates the considered incident with incidents known by the system. We face these two aspects of context in SART. The incident-solving context contains a number of items (e.g. it is almost the end of the football match and a number of people will come back by subway) among which the operator will account for the most relevant ones to choose a strategy for the incident solving. Operators retrieve these items from their personal experience or from that of other operators. Incident solving is a sequence of steps, and thus we distinguish two types of context: (i) the context of a step and (ii) the overall context of the incident solving. Moving from one step of the incident solving to the next implies a move from one context to another, generally after the occurrence of a new event or new information. After this move, some pieces of contextual knowledge are without interest at the new step of the incident solving, and other contextual pieces appear (knowledge pieces that are new or were previously contextualized). However, if one may speak of a discrete set of (static) contexts at the step level, as McCarthy (1993) does, at the level of the incident itself there is a global continuous context (i.e. the incident-solving context) that evolves dynamically along the sequence of steps.

4.3. AN EXAMPLE OF AN INCIDENT

Hereafter, we present an example of incident-solving usually analysed as a sequence of steps. For example, consider the incident "Ill-traveller in a train". The sequence of steps for the driver is the following.

— Inform the operator by phone.
— Stop the train at the next station.
— Go and identify the problem.
— Alert the operator.
— Await the police or firemen's authorization.
— Inform the operator.
— Wait for the operator's authorization to resume the operations.

For the operator, the sequence of steps after the driver's announcement of an incident is the following.

— Evaluate the type of incident and its duration.
— Give a phone call to the police or the firemen.
— Identify the zone where the incident is.
— Choose a strategy to minimize the consequences of the incident on the line control.
— Inform all the drivers on the line.
— Wait for the announcement of the end of the problem-solving.
— Resume the normal service.
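The two procedures above can be sketched as ordered step lists, for instance in Python. The names and the helper function below are ours, for illustration only, and are not part of SART:

```python
# Hypothetical sketch (our naming, not SART's): the driver's and the operator's
# procedures as ordered step sequences.
DRIVER_STEPS = [
    "Inform the operator by phone",
    "Stop the train at the next station",
    "Go and identify the problem",
    "Alert the operator",
    "Await the police or firemen's authorization",
    "Inform the operator",
    "Wait for the operator's authorization to resume the operations",
]

OPERATOR_STEPS = [
    "Evaluate the type of incident and its duration",
    "Give a phone call to the police or the firemen",
    "Identify the zone where the incident is",
    "Choose a strategy to minimize the consequences on the line control",
    "Inform all the drivers on the line",
    "Wait for the announcement of the end of the problem-solving",
    "Resume the normal service",
]

def next_step(steps, completed):
    """Return the next step given how many steps are already completed."""
    return steps[completed] if completed < len(steps) else None
```

Each call to `next_step` models the move from one step (and thus one step context) to the next, as discussed in Section 4.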



Such step sequences for the incident-solving (from the operator's or the driver's viewpoint) are what a knowledge engineer would elicit from the operator. Except for explanation purposes, the knowledge engineer will not acquire information on the underlying reasons for each step. For instance, at a first level of detail, the explanation of the step "Stop the train at the next station" is that the driver is not authorized to stop immediately when the alarm signal is triggered (e.g. in a tunnel) but must stop at the next station. At a second level of detail, the reason is that the driver must follow procedures imposed by the control centre. At a third level, the reason for the procedure is that experience proves that stopping a train in a tunnel, and not at a station, may have the following drawbacks.

— It is more difficult to provide help in a tunnel than in a station.
— Some travellers may be sensitive to claustrophobia and trigger panic among other travellers.
— Travellers may leave the train in the tunnel.
— Travellers in the tunnel may be electrocuted by power in the rails or knocked down by a train coming in the opposite direction.

Knowledge at this third level is compiled in the contextual knowledge "Drivers must follow procedures". However, there are other pieces of contextual knowledge that belong to the domain. For instance, in a tunnel there is no platform, and travellers cannot easily leave the train if the problem is, say, a fire somewhere in the train. Moreover, the distance between a station and the next is less than 500 m. Thus, the delay between the triggering of the alarm signal and the train stop does not exceed 1 min.
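The last claim can be checked with a back-of-the-envelope computation. The travelling speed of roughly 30 km/h below is our assumption; the paper states only the distance bound and the resulting delay:

```python
# Back-of-the-envelope check: with stations less than 500 m apart and a train
# travelling at roughly 30 km/h (an assumed speed, not stated in the paper),
# the worst-case delay between the alarm signal and the stop at the next
# station is about one minute.
distance_m = 500                          # maximum inter-station distance
speed_kmh = 30                            # assumed average train speed
speed_m_per_min = speed_kmh * 1000 / 60   # 500.0 metres per minute
delay_min = distance_m / speed_m_per_min
print(delay_min)  # 1.0
```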

5. The incident manager

5.1. THE SART PROJECT

The incident manager is an agent of SART. The SART project aims at the development of an intelligent decision-support system, called SART (French acronym for support system for traffic control), for the traffic control of subway lines when an incident occurs. The goal of SART is to help the operator responsible for a line to make decisions for solving an incident that arises on the line. In order to be a real intelligent assistant system, SART must accomplish several functions, such as acquiring and learning knowledge from the operators; simulating train behaviour on the line for a given initial situation, possibly with an incident; changing the model of the line on the operator's request to help the operator test alternative options; proposing alternatives for incident-solving by itself; training a new operator not familiar with a given line; etc. We intend to realize this ambitious project through a multi-agent approach, in which each agent realizes a function of SART. The multi-agent approach gives SART an evolvable architecture in which other agents (such as a communication agent, a training agent and an incident-analyser agent) will be added later to extend the functions of SART. For now, we limit our objective to the line operation, but the structure of SART could easily be re-used, say, for line maintenance. Thus, on the one hand, we progressively develop a complex system that will accomplish different tasks and, on the other, the architecture can be re-used for other subways in the world because all subways are built on similar lines.

The SART project started with three agents: a line configurator, a traffic simulator and an incident manager. The role of the line configurator is to support the operator either in building the model of a specific line or in changing an existing model. The role of the traffic simulator is to support the operator in the model validation, generating answers about an incident or analysing past incidents. The role of the incident manager is to record sessions during which an incident occurs and to retrieve past incidents similar to a given one, in a case-based reasoning way. The fourth agent will be the communication agent. The main role of this last agent is to limit the cognitive overload of a user facing several agents communicating with different languages and interfaces, and to answer complex questions in which several agents must intervene.


Hereafter, we focus on the context-based representation of incidents handled by the incident manager. The role of the incident manager is to manage a base of incidents with their contexts. A record contains an incident, the context in which the incident has been solved, and the strategy chosen by the operator to solve the incident. A problem is the development of the incident manager on the basis of "elementary" incidents (e.g. an ill-traveller or a door-closing failure on a train). Indeed, an incident is often a combination of elementary incidents. Thus, the incident manager may provide the operator with partial solutions only. However, the incident manager will incrementally acquire new incident-solvings by interacting with the operator to solve incident combinations.

The incident manager participates in the operator's problem-solving as follows. First, the operator provides an initial situation and defines an incident. Second, SART simulates (through the simulation agent) the train traffic on the line until the incident time and identifies the context. Third, SART retrieves (through the incident manager) recorded incidents that are similar to the considered incident (whatever their contexts) or whose contexts are similar to the current one (whatever the incidents). Fourth, the incident manager classifies the retrieved incidents and proposes to the operator the strategies used with them. Fifth, the operator evaluates the differences between the considered incident and the retrieved ones. At this step, the operator and the incident manager refine the retrieved strategy for the considered incident. During this strategy refinement, the incident manager may acquire new knowledge that permits it to refine its description of the known incidents, their contexts and the chosen strategies. The determining factor here is the context-based representation of incidents and its use in an incident-based reasoning.
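The retrieval part of this interaction can be sketched as a simple case-based loop. The record structure follows the paper (incident, context, strategy), but the feature sets, the Jaccard similarity measure and the threshold below are our assumptions; SART's actual similarity mechanism is not specified here:

```python
# Hedged sketch of the incident manager's retrieval step. The similarity
# measure (Jaccard over feature sets) and the 0.5 threshold are illustrative
# assumptions, not SART's actual implementation.
from dataclasses import dataclass

@dataclass
class Record:
    incident: set   # features describing the incident itself
    context: set    # contextual-knowledge pieces at solving time
    strategy: str   # strategy the operator chose

def jaccard(a, b):
    """Crude set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(base, incident, context, threshold=0.5):
    """Return records similar by incident OR by context, best match first."""
    hits = []
    for r in base:
        score = max(jaccard(r.incident, incident), jaccard(r.context, context))
        if score >= threshold:
            hits.append((score, r))
    return [r for _, r in sorted(hits, key=lambda sr: -sr[0])]
```

A record matches either when its incident resembles the considered one (whatever the context) or when its context resembles the current one (whatever the incident), mirroring the third step above; classification and strategy refinement would then proceed with the operator.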
The incident manager may fail to provide the operator with a good strategy for different reasons: (1) there are no similar incidents known by the incident manager; (2) there are similar known incidents but their respective contexts are missing or quite different; and (3) information on known incidents is wrong or partial. Facing these limitations, the operator has the possibility to provide the system with the needed information. Thus, the incident manager will incrementally acquire knowledge on the present incident, its context and the strategy used, and exploit this knowledge for solving the following incidents.


Figure 1 gives a partial view of the solving of the elementary incident "Ill-traveller in a train". Such an incident is represented by an oval. Some of the steps of the incident-solving are represented as rectangular boxes, e.g. "Alarm signal", "Stop at the next station", "Incident identification" and "Call operator".

FIGURE 1. Context-based representation of the incident "Ill-traveller in a subway".

Consider the step "Stop at the next station". This step (contextualized knowledge) is imposed on the driver because, for example, it corresponds to procedures. Procedures arise from experience with similar incidents: the travellers' security is better ensured in a station than in a tunnel, employees of the subway are not allowed to take care of injured persons, etc. At a deeper level, the driver has to avoid stopping the train for a long time in a tunnel. One reason for this is that some travellers may have behavioural trouble such as claustrophobia and leave the train to walk in the tunnel, where another train may pass. All these pieces of contextual knowledge are not at the same distance from the step "Stop at the next station": some pieces are nearer than others. For instance, "Procedures" is a piece of contextual knowledge that is close to the incident-solving step, whereas "Avoid stopping in a tunnel" is another piece of contextual knowledge that is far from the same step. However, both of them make this step necessary. Such a kind of distance permits us to order pieces of contextual knowledge in layers around the step, like the skins of an onion. We call this representation the onion metaphor. Layers of contextual knowledge are represented by stippled circles in Figure 1.

Our context modelling along the onion metaphor reveals several interesting results.

(1) A step takes its meaning in a given context. Contextual knowledge does not intervene directly at this step but constrains it. For the step "Stop at the next station", the contextual knowledge "Easy help" is not the main reason for stopping the train at the station; however, it intervenes in its realization.

(2) Pieces of contextual knowledge may be partially ordered. If we consider the step "Stop at the next station", we observe that some knowledge pieces of its context (e.g. "Easy help") are closer to the step than others (e.g. "Do not touch an injured traveller") because the constraints they apply on the step are more direct.

(3) A piece of contextual knowledge itself takes its meaning in a context. The piece of contextual knowledge "Procedures" of the step "Stop at the next station" has its own context, with elements such as "Past experience" and "Do not touch an injured traveller". Recursively, one can link knowledge pieces together by layers. This result is a concrete example of McCarthy's claims (1993) about the definition of a context in an outer context and the infinite dimension of context.

(4) Pieces of contextual knowledge relate incidents together. Through the association of a number of contextual-knowledge pieces, it appears that there is a relationship between a given incident and others. Figure 1 shows how such a relationship is established between "Stop at the next station" and other incidents such as "Signal light problem" through contextual knowledge.
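The onion metaphor can be sketched as a data structure: contextual-knowledge pieces arranged in numbered layers around a step, ordered by their distance from it. The layer assignments below are our reading of Figure 1, not an exact reproduction:

```python
# Hedged sketch of the onion metaphor: contextual-knowledge pieces in layers
# around a step, from the nearest layer (1) outward. Layer numbers and the
# exact placement of pieces are our illustrative assumptions.
ONION = {
    "Stop at the next station": {
        1: ["Procedures", "Easy help"],
        2: ["Avoid stopping in a tunnel"],
        3: ["Past experience", "Do not touch an injured traveller"],
    },
}

def layers_outward(step, onion=ONION):
    """Yield (distance, piece) pairs from the nearest layer to the farthest."""
    for depth in sorted(onion[step]):
        for piece in onion[step][depth]:
            yield depth, piece
```

Because a piece such as "Procedures" has its own context (result 3 above), the same layered structure could be nested recursively; and a piece shared by the onions of two different steps realizes the inter-incident links of result (4).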
Establishing such an incident net accounts for the occurrence, in real-life conditions, of combinations of incidents that are apparently not related (e.g. "Ill-traveller in a train" and "Signal light problem"). Common contextual components (or pieces of knowledge) of two contexts are highlighted by dotted, bold arrows in the figure.

This paper addresses only one type of context: the context at the level of the knowledge representation. This type of context will have to be related to others, such as the context at the level of the reasoning mechanism and of the user-system interaction. The representation of contextual knowledge in SART may be implemented regardless of formal considerations (logic, programming languages, expert systems, etc.). However, for the sake of convenience in the choice of implementation tools on the current market, incidents and their contexts are represented in Smalltalk.

6. Discussion

In our application, we consider a particular context (the incident-solving context) that evolves continuously during the problem-solving, and a set of discrete contexts at the level of the problem-solving steps. For representing, managing and using contextual knowledge in problem-solving, we use the onion metaphor, which gives an integrated view of contextual and contextualized knowledge. The onion model concerns the context at the level of the knowledge representation in the repository and permits a context-based representation of incidents.



Such a study must be conducted on the other types of context (e.g. at the level of the human-machine interaction) to find other models of context. A complete view of what context is can only be obtained at this price. Having established a context-based representation of knowledge, we now study the mechanisms to exploit contextual knowledge in incident-solving and develop a real system based on contextual knowledge.

At another level, it appears that such a representation of contextual knowledge also gives an integrated view of the relationships between context and explanation. In Figure 1, one sees how a system may answer questions such as "Why must the driver stop at the next station?" One also notes in Figure 1 that it is possible for a system to provide explanations at different levels of detail, from a very specific explanation (contextual knowledge close to the incident step) to a general explanation (contextual knowledge far from the step of the incident-solving).

Recent research has stressed the necessity of developing knowledge at work. Situated cognition insists on the need to account for the environment (through our interaction with it) and the ongoing context in organizing behaviour. Environment and context thus appear to be two views of the same problem. Context plays an important role in human-machine interaction, and making it explicit permits the generation of relevant explanations, the incremental acquisition of knowledge in the context of its use, and context-sensitive learning. This effectively establishes a kind of "cognitive coupling" between the human and the machine. Context is always relative to another, outer context in a recursive way. As a consequence, context cannot be fully modelled and represented, although one often assimilates the context of something to a set of restrictions (e.g. pre-conditions) that limit access to parts of this something.
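The link between context layers and explanation depth can be sketched in a few lines: an explanation at depth n concatenates the n nearest reasons around a step, so deeper queries yield more general answers. The reason texts and the function below are our toy illustration, not SART's explanation component:

```python
# Hedged sketch: explanations at increasing levels of detail, obtained by
# walking outward through layers of contextual knowledge around a step.
# The reason texts are our paraphrase of the paper's example.
REASONS = [
    "the driver must follow procedures imposed by the control centre",
    "help is easier to provide in a station than in a tunnel",
    "experience shows that stopping in a tunnel risks panic or electrocution",
]

def explain(question, depth):
    """Answer with the `depth` nearest reasons, from specific to general."""
    return question + " Because " + "; ".join(REASONS[:depth]) + "."
```

At depth 1 the system gives the specific, procedure-level answer; at depth 3 it unfolds the compiled experience behind the procedure, matching the three levels of detail discussed in Section 4.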
We also need to position context at the level of the knowledge and its representation, at the level of the reasoning mechanism, and at the level of the human-machine interaction. All these types of context are interdependent. For example, the way in which a user formulates a request (concerning the interaction context) depends on the conceptual schema of the interrogated database (context at the level of the knowledge representation). From our study, context is considered as a structure, a frame of reference, a shared knowledge space, a process of contextualization, etc. Whatever the nature of context, one must speak about context with respect to its use with something (interaction context, task context, etc.).

The first thing to do is to explore the relationships between context and knowledge. A piece of knowledge may be contextual or contextualized according to the step of the problem-solving at which we are. Contextualized knowledge (i.e. operational knowledge) is knowledge that is explicitly considered in the problem-solving. Context appears as a dynamic extension of knowledge and must be considered at the level of knowledge. Aamodt (1993) proposes an alternative to generalization-based knowledge acquisition: situation-specific knowledge acquisition, which captures a collection of previously solved cases combined with generalized domain knowledge in the form of a densely connected semantic network. Another alternative, proposed by Menzies (1996), is to modify the knowledge-modelling approaches of knowledge acquisition by adding any technique supporting specification change. In both cases, the key idea is a rapid, incremental knowledge acquisition in the context of use. This supposes that a system may learn continuously by updating its knowledge base after each new problem has been solved. However, the acquisition of knowledge in context is still a challenge.

Knowledge elaboration is also made complex because contextual knowledge often comes in a highly compiled form. As quoted by Clancey (1991), even what we consider a highly stable behaviour, such as reciting a phone number, is highly contextual: you establish this context by sitting in front of a phone. Such situated knowledge is not acquired with the classical tools prior to its use. In Clancey's example, one generally develops automatisms that allow one to dial the phone number without formulating it. For instance, one only "sees" the sequence of physical positions that the finger must take on the phone keypad. It is rather difficult to acquire such knowledge, which may be expressed in a representation formalism not known by the machine. This situation is close to the one studied by researchers in psychology who observe that we naturally organize our memories of past events into episodes; the location of the episode, who was there, what was going on and what happened before or after are all strong cues for recall.

Representing context remains a difficult problem. However, this problem seems to be easier in the framework of human-machine interaction than in human-human interaction. Representing context is possible, and has indeed been done for a long time. The main problem is a real modelling of context.

We thank A. Chevrier and J.-M. Sieur at RATP, and Professors M. Cavalcanti and R. Naveiro at the FURJ and C. Gentile and M. Secron at Metrô in Brazil for their support. This work is realized thanks to grants obtained from RATP and CAPES/COFECUB.

References

AAMODT, A. (1993). A case-based answer to some problems of knowledge-based systems. In E. SANDEWALL & C. G. JANSSON, Eds. Scandinavian Conference on Artificial Intelligence, pp. 168-182. Amsterdam: IOS Press.
AAMODT, A. & NYGARD, M. (1995). Different roles and mutual dependencies of data, information and knowledge: an AI perspective on their integration. Data and Knowledge Engineering, 16, 191-222. North-Holland: Elsevier.
ABU-HAKIMA, S. & BRÉZILLON, P. (1994). Context in incremental knowledge acquisition and explanation for diagnosis. Research Report ERB-1042, NRC-CNRC, Canada, September.
BAINBRIDGE, L. (1997). The change in concepts needed to account for human behavior in complex dynamic tasks. IEEE Transactions on Systems, Man and Cybernetics A, 27, 351-359.
BOY, G. (1991). Intelligent Assistant Systems, Series Knowledge-Based Systems, Vol. 6. London: Academic Press.
BRÉZILLON, P. (1996). Context in human-machine problem solving: a survey. Technical Report 96/29, LAFORIA, October, 37 pp. (The paper can be retrieved at ftp: // laforia.96/)
BRÉZILLON, P. & ABU-HAKIMA, S. (1995). Using knowledge in its context: report on the IJCAI-93 workshop. AI Magazine, 16, 87-91.
BRÉZILLON, P. & CASES, E. (1995). Cooperating for assisting intelligently operators. Proceedings of COOP-95, pp. 370-384. INRIA.
BRÉZILLON, P. & POMEROL, J.-CH. (1996). Misuse and nonuse of knowledge-based systems: the past experiences revisited. In P. HUMPHREYS, L. BANNON, A. MCCOSH, P. MIGLIARESE & J.-CH. POMEROL, Eds. Implementing Systems for Supporting Management Decisions, pp. 44-60. New York: Chapman & Hall, ISBN 0-412-75540-8.
CLANCEY, W. J. (1983). The epistemology of a rule-based expert system: a framework for explanation. Artificial Intelligence Journal, 20, 197-204.
CLANCEY, W. J. (1991). Situated cognition: stepping out of representational flatland. AI Communications, 4, 109-112.
DEBENHAM, J. (1997). Strategic workflow management: an experiment. Proceedings of the International Workshop on Distributed Artificial Intelligence and Multi-Agent Systems (DAIMAS'97), pp. 103-112. St. Petersburg, Russia: The Russian Academy of Sciences.
DEGANI, A. & WIENER, E. L. (1997). Procedures in complex systems: the airline cockpit. IEEE Transactions on Systems, Man and Cybernetics A, 27, 302-312.
DURKIN, J. (1993). Expert Systems: Catalog of Applications. Intelligent Computer Systems Inc., PO Box 4117, Akron, OH 44321-117, USA.
EDMONDSON, W. H. & MEECH, J. F. (1993). A model of context for human-computer interaction. Proceedings of the IJCAI-93 Workshop on Using Knowledge in its Context, pp. 31-38. Technical Report 93/13, LAFORIA, University Paris 6.
EKDAHL, B., ASTOR, E. & DAVIDSSON, P. (1995). Toward anticipatory agents. In M. WOOLDRIDGE & N. JENNINGS, Eds. Intelligent Agents, Lecture Notes in AI, Vol. 890, pp. 191-202. Berlin: Springer.
FALZON, P., SAUVAGNAC, C. & CHATIGNY, C. (1996). Collective knowledge elaboration. Proceedings of COOP'96, pp. 171-186. France: INRIA.
FORSLUND, G. (1995). Toward cooperative advice-giving systems: the expert systems experience. Ph.D. Thesis 518, Linköping University, Sweden.
GIUNCHIGLIA, F. (1993). Contextual reasoning. Proceedings of the IJCAI-93 Workshop on Using Knowledge in its Context, pp. 39-48. Research Report 93/13, LAFORIA.
HOC, J. M. (1996). Supervision et Contrôle de Processus: La Cognition en Situation Dynamique. Presses Universitaires de Grenoble, Série Science et Technologies de la Connaissance.
JURISCA, I. (1994). Context-based similarity applied to retrieval of relevant cases. Proceedings of the AAAI Fall Symposium Series on Relevance, New Orleans, Louisiana, USA.
KOLODNER, J. L. (1993). Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.
LIGHT, P. & BUTTERWORTH, G. (1993). Context and Cognition: Ways of Learning and Knowing. Hillsdale, NJ: L. Erlbaum Associates.
MCCARTHY, J. (1993). Notes on formalizing context. Proceedings of the 13th IJCAI, Vol. 1, pp. 555-560.
MAJCHRZAK, A. & GASSER, L. (1991). On using Artificial Intelligence to integrate the design of organizational and process change in US manufacturing. Artificial Intelligence and Society Journal, 5, 321-338.
MENZIES, T. (1996). Assessing responses to situated cognition. Proceedings of the Banff Knowledge Acquisition Workshop. http: //\timm/pub/docs/papersonly.html.
REATGUI, E. B., CAMPBELL, J. A. & LEAO, B. F. (1997). A case-based model that integrates specific and general knowledge in reasoning. Applied Intelligence, 7, 79-90.
STOTHERT, A. & MCLEOD, M. (1997). Distributed intelligent control system for a continuous-state plant. IEEE Transactions on Systems, Man and Cybernetics B, 27, 395-401.
SUCHMAN, L. A. (1987). Plans and Situated Actions. Cambridge, UK: Cambridge University Press.
WORDEN, R. P., FOOTE, M. H., KNIGHT, J. A. & ANDERSEN, S. K. (1987). Co-operative expert systems. In B. DU BOULAY, D. HOGG & L. STEELS, Eds. Advances in Artificial Intelligence, Vol. II. Amsterdam: Elsevier.
XIAO, Y., MILGRAM, P. & DOYLE, D. J. (1997). Planning behavior and its functional role in interactions with complex systems. IEEE Transactions on Systems, Man and Cybernetics A, 27, 313-324.