Correcting Object-Related Misconceptions: How Should The System Respond?


This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class, sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user.

1. Introduction

A major area of AI research has been the development of "expert systems": systems which are able to answer users' questions concerning a particular domain. Studies identifying desirable interactive capabilities for such systems [Pollack et al. 82] have found that it is not sufficient simply to allow the user to ask a question and have the system answer it. Users often want to question the system's reasoning, to make sure certain constraints have been taken into consideration, and so on. Thus we must strive to provide expert systems with the ability to interact with the user in the kind of cooperative dialogues that we see between two human conversational partners.

Allowing such interactions between the system and a user raises difficulties for a Natural-Language system. Since the user is interacting with a system as s/he would with a human expert, s/he will most likely expect the system to behave as a human expert. Among other things, the user will expect the system to be adhering to the cooperative principles of conversation [Grice 75, Joshi 82]. If these principles are not followed by the system, the user is likely to become confused.
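The classification step described above, sorting a detected misconception by the knowledge-base feature it involves, can be sketched as follows. This is a minimal illustration under my own assumptions: the class names, the dictionary-based KB shape, and the function signature are not taken from the paper.

```python
# Hypothetical misconception classes; the paper classifies by KB feature,
# but these particular labels and the KB layout are illustrative assumptions.
MISCLASSIFICATION = "misclassification"   # object placed wrongly in the taxonomy
MISATTRIBUTION = "misattribution"         # wrong attribute or attribute value

def classify_misconception(kb, obj, claimed_superordinate=None,
                           attribute=None, claimed_value=None):
    """Return the misconception class suggested by a user's statement,
    or None if the statement is consistent with the system's KB."""
    # Taxonomy check: the user placed the object under the wrong superordinate.
    if claimed_superordinate is not None and kb["taxonomy"].get(obj) != claimed_superordinate:
        return MISCLASSIFICATION
    attrs = kb["attributes"].get(obj, {})
    if attribute is not None:
        if attribute not in attrs:
            return MISATTRIBUTION        # object has no such attribute at all
        lo, hi = attrs[attribute]
        if claimed_value is not None and not (lo <= claimed_value <= hi):
            return MISATTRIBUTION        # value outside the known range
    return None
```

A classifier of this shape gives the domain independence the paper argues for: it inspects only the structure of the KB (taxonomy, attributes, value ranges), never a hand-listed catalogue of domain-specific errors.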
In this paper I focus on one aspect of the cooperative behavior found between two conversational partners: responding to recognized differences in the beliefs of the two participants. Often when two people interact, one reveals a belief or assumption that is incompatible with the beliefs held by the other. Failure to correct this disparity may not only implicitly confirm the disparate belief, but may even make it impossible to complete the ongoing task. Imagine the following exchange:

U. Give me the HULL_NO of all Destroyers whose MAST_HEIGHT is above 190.
E. All Destroyers that I know about have a MAST_HEIGHT between 85 and 90. Were you thinking of the Aircraft-Carriers?

In this example, the user (U) has apparently confused a Destroyer with an Aircraft-Carrier. This confusion has caused her to attribute a property value to Destroyers that they do not have. In this case a correct answer by the expert (E) of "none" is likely to confuse U. In order to continue the conversation with a minimal amount of confusion, the user's incorrect belief must first be addressed.

My primary interest is in what an expert system, aspiring to human expert performance, should include in such responses. In particular, I am concerned with system responses to recognized disparate beliefs/assumptions about objects. In the past this problem has been left to tutoring or CAI systems [Stevens et al. 79, Stevens & Collins 80, Brown & Burton 78, Sleeman 82], which attempt to correct students' misconceptions concerning a particular domain. For the most part, their approach has been to list a priori all misconceptions in a given domain. The futility of this approach is emphasized in [Sleeman 82]. In contrast, the approach taken here is to classify, in a domain-independent way, object-related disparities according to the knowledge-base (KB) feature involved.
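The Destroyer/Aircraft-Carrier exchange can be reproduced by a small sketch that, on seeing an attribute value outside the known range, searches the KB for another object whose range does cover the value and offers it as the likely source of the confusion. The KB layout and function name here are my own illustrative assumptions, not the paper's implementation.

```python
def corrective_response(kb, obj, attribute, value):
    """If `value` is outside the known range of obj's attribute, build a
    corrective response; suggest a possibly-confused object if one fits."""
    lo, hi = kb["attributes"][obj][attribute]
    if lo <= value <= hi:
        return None  # no misconception: the queried value is plausible
    correction = (f"All {obj}s that I know about have a {attribute} "
                  f"between {lo} and {hi}.")
    # Look for another object whose range for this attribute covers the value;
    # the user may have confused it with obj.
    for other, attrs in kb["attributes"].items():
        if other == obj or attribute not in attrs:
            continue
        other_lo, other_hi = attrs[attribute]
        if other_lo <= value <= other_hi:
            return correction + f" Were you thinking of the {other}s?"
    return correction
```

Given a KB in which Destroyers have a MAST_HEIGHT of 85-90 and Aircraft-Carriers one that covers 190, this sketch produces exactly the two-part response E gives above: the correct range, followed by the suggested confusion.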
A number of response strategies are associated with each resulting class. Deciding which strategy to use for a given misconception will be determined by analyzing a user model and the discourse situation.

2. What Goes Into a Correction?

In this work I am making the following assumptions:

- For the purposes of the initial correction attempt, the system is assumed to have complete and correct knowledge of the domain. That is, the system will initially perceive a disparity as a misconception on the part of the user. It will thus attempt to bring the user's beliefs into line with its own.

- The system's KB includes the following features: an object taxonomy, knowledge of object attributes and their possible values, and information about possible relationships between objects.

- The user's KB contains similar features. However, much of the information (content) in the system's KB may be missing from the user's KB (e.g., the user's KB may be sparser or coarser than the system's KB, or various attributes of concepts may be missing from the user's KB). In addition, information in the user's KB may be wrong. In this work, to say that the user's KB is wrong means that it is inconsistent with the system's KB (e.g., things may be classified differently, properties attributed differently, and so on).

¹This work is partially supported by the NSF grant #MCS81-07200.
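The assumptions above distinguish a user KB that is merely sparser than the system's from one that is inconsistent with it; only the latter signals a misconception. A minimal sketch of that comparison, under the assumed dictionary KB layout (the function name and tuple format are mine, not the paper's):

```python
def find_disparities(system_kb, user_kb):
    """Return (object, feature, system_value, user_value) tuples where the
    user's KB contradicts the system's. Entries simply absent from the
    user's KB are NOT disparities: the user's KB may be sparser."""
    disparities = []
    # Classification disparities: same object, different superordinate.
    for obj, superord in system_kb["taxonomy"].items():
        user_superord = user_kb["taxonomy"].get(obj)
        if user_superord is not None and user_superord != superord:
            disparities.append((obj, "superordinate", superord, user_superord))
    # Attribution disparities: same attribute, different value range.
    for obj, attrs in system_kb["attributes"].items():
        for attr, val in attrs.items():
            user_val = user_kb["attributes"].get(obj, {}).get(attr)
            if user_val is not None and user_val != val:
                disparities.append((obj, attr, val, user_val))
    return disparities
```

Each returned tuple identifies the KB feature involved (taxonomy position or attribute), which is the information the classification scheme needs in order to choose a response strategy.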

