A Model for Determining the Degree of Contradictions in Information
Malaysian Journal of Computer Science (2011)
Abstract
Conversational systems are rapidly gaining popularity. Consequently, the believability of conversational systems, or chatterbots, is becoming increasingly important. Recent research has shown that learning chatterbots tend to be rated as more believable by users. Based on Raj's Model for Chatterbot Trust, we present a model that allows chatterbots to determine the degree of contradiction between conflicting statements when learning, thereby potentially enabling them to learn more accurately via a form of discourse. Information learnt by a chatterbot may be contradicted by information presented subsequently, and choosing correctly which information to use is critical to chatterbot believability. Our model uses sentence structures and patterns to compute contradiction degrees, overcoming a limitation of Raj's Trust Model, which treats all contradictory information as equally contradictory; in reality, some contradictions are greater than others and should therefore have a greater impact on the actions the chatterbot takes. This paper also presents the relevant proofs and tests of the contradiction degree model, as well as a potential implementation method for integrating our model with Raj's Trust Model.
Keywords: vector space model
Citation Information
Ram Gopal Raj. "A Model for Determining the Degree of Contradictions in Information." Malaysian Journal of Computer Science, Vol. 24, Iss. 3 (2011).
Available at: http://works.bepress.com/ramgopal_raj/2/