Verification, Validation and Evaluation of Expert Systems in Order to Develop a Safe Support in the Process of Decision Making
Expert systems use computational techniques to make decisions much as human experts do, and it is well known that human experts are sometimes wrong. One might expect that designing and using an expert system would solve this problem, but unfortunately expert systems can be wrong as well: even when the system itself contains no errors, the knowledge on which it is based, even if it is the best available, may offer no answer, or not the right answer, for every situation. A wrong answer can cause serious harm when the system in question is one for medical use or management decision support. If the person using the system has no experience or solid knowledge of the area the system was designed for, he or she will not be able to judge the accuracy of the advice given. Because real-world knowledge bases may contain a large number of rules, there will be a very large number of computational paths through an expert system, and each one of them will have to pass a test of correctness. More than ever, risk management will be an important part of the entire process of the system’s project planning and management. For these reasons it is important for the software engineering expert to ensure that the validation, verification and evaluation of the system are carried out as thoroughly as possible.
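The idea of testing every computational path through a rule base for correctness can be sketched as follows. This is a minimal illustration, not the paper's method: the rule base, its findings (fever, cough) and conclusions are all hypothetical, and the verifier simply enumerates every input combination and checks two classic correctness properties, completeness (some rule always fires) and consistency (no two rules fire at once).

```python
from itertools import product

# Hypothetical toy rule base: each rule pairs a condition over two
# boolean findings (fever, cough) with a conclusion. Not drawn from
# any real medical knowledge base.
RULES = [
    (lambda fever, cough: fever and cough, "suspect flu"),
    (lambda fever, cough: fever and not cough, "suspect other infection"),
    (lambda fever, cough: not fever, "no action"),
]

def consult(fever, cough):
    """Return the conclusion of every rule whose condition fires."""
    return [advice for cond, advice in RULES if cond(fever, cough)]

def verify_rule_base():
    """Walk every input combination (every computational path) and
    record violations of completeness (no rule fires) and
    consistency (more than one rule fires)."""
    problems = []
    for fever, cough in product([False, True], repeat=2):
        fired = consult(fever, cough)
        if len(fired) == 0:
            problems.append((fever, cough, "no rule fires"))
        elif len(fired) > 1:
            problems.append((fever, cough, "conflicting rules"))
    return problems
```

With two boolean findings there are only four paths to check; real knowledge bases have exponentially many, which is exactly why the abstract stresses that exhaustive path testing makes verification costly.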