Evaluation of Uncertain Inference Models I: PROSPECTOR

27 Mar 2013  ·  Robert M. Yadrick, Bruce M. Perrin, David S. Vaughan, Peter D. Holden, Karl G. Kempf

This paper examines the accuracy of the PROSPECTOR model for uncertain reasoning. PROSPECTOR's solutions for a large number of computer-generated inference networks were compared to those obtained from probability theory and minimum cross-entropy calculations. PROSPECTOR's answers were generally accurate for a restricted subset of problems that are consistent with its assumptions. However, even within this subset, we identified conditions under which PROSPECTOR's performance deteriorates.
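For context, PROSPECTOR propagates belief through an inference network using subjective Bayesian updating: each rule carries sufficiency and necessity likelihood ratios, and uncertain evidence is handled by piecewise-linear interpolation between the prior and the fully updated posteriors. The sketch below illustrates this standard Duda-Hart-Nilsson style update for a single rule E → H; the function and variable names are illustrative assumptions, and this is not the evaluation code used in the paper.

```python
# Minimal sketch of PROSPECTOR-style subjective Bayesian updating for one rule
# E -> H, assuming the standard odds/likelihood-ratio formulation with
# piecewise-linear interpolation for uncertain evidence. Names are illustrative.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prob(o: float) -> float:
    """Convert odds back to a probability."""
    return o / (1.0 + o)

def posterior_given_evidence(p_h: float, ls: float, ln: float, e_true: bool) -> float:
    """P(H|E) if e_true, else P(H|~E), via odds-likelihood updating.

    ls = P(E|H)/P(E|~H)  (sufficiency);  ln = P(~E|H)/P(~E|~H)  (necessity).
    """
    lam = ls if e_true else ln
    return prob(lam * odds(p_h))

def updated_belief(p_h: float, p_e: float, ls: float, ln: float,
                   p_e_given_obs: float) -> float:
    """Interpolate P(H | observations) between P(H|~E), P(H), and P(H|E)
    as P(E | observations) moves from 0 through the prior P(E) up to 1."""
    p_h_given_e = posterior_given_evidence(p_h, ls, ln, True)
    p_h_given_not_e = posterior_given_evidence(p_h, ls, ln, False)
    if p_e_given_obs <= p_e:
        # Evidence less likely than its prior: move toward P(H|~E).
        return p_h_given_not_e + (p_h - p_h_given_not_e) * (p_e_given_obs / p_e)
    # Evidence more likely than its prior: move toward P(H|E).
    return p_h + (p_h_given_e - p_h) * (p_e_given_obs - p_e) / (1.0 - p_e)

if __name__ == "__main__":
    # Hypothetical example: P(H)=0.1, P(E)=0.3, LS=5, LN=0.2, P(E|obs)=0.8
    print(updated_belief(p_h=0.1, p_e=0.3, ls=5.0, ln=0.2, p_e_given_obs=0.8))
```

The paper's evaluation compares the outputs of this kind of update scheme against answers derived from probability theory and minimum cross-entropy over generated networks.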
