Testing Multiple Forecasters
We consider a cross-calibration test of predictions by multiple potential experts in a stochastic environment. This test checks whether each expert is calibrated conditional on the predictions made by the other experts. We show that this test is good in the sense that a true expert, one informed of the true distribution of the process, is guaranteed to pass it no matter what the other potential experts do, while false experts will fail the test on all but a small (category one) set of true distributions. Furthermore, even when no true expert is present, a test similar to cross-calibration cannot be simultaneously manipulated by multiple false experts, though at the cost of failing some true experts. In contrast, tests that allow false experts to make precise predictions can be jointly manipulated.
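The cross-calibration idea can be sketched in code. The snippet below is a minimal illustration, not the paper's formal test: the discretization into bins, the cell scoring, and the simulated environment are all assumptions made for the example. It groups periods by the joint (binned) prediction profile of two forecasters and, within each cell, compares a forecaster's average prediction with the empirical frequency of the outcome — an informed expert should match closely in every well-populated cell.

```python
import random
from collections import defaultdict

def cross_calibration_scores(preds_a, preds_b, outcomes, bins=10):
    """Group periods by the joint (binned) prediction profile and, in each
    cell, compare forecaster A's average prediction with the empirical
    frequency of the outcome. Returns {cell: (count, avg_pred, freq)}."""
    cells = defaultdict(list)
    for pa, pb, y in zip(preds_a, preds_b, outcomes):
        cell = (int(pa * bins), int(pb * bins))  # joint prediction profile
        cells[cell].append((pa, y))
    scores = {}
    for cell, obs in cells.items():
        avg_pred = sum(p for p, _ in obs) / len(obs)
        freq = sum(y for _, y in obs) / len(obs)
        scores[cell] = (len(obs), avg_pred, freq)
    return scores

# Simulated environment (an assumption): outcomes are i.i.d. Bernoulli(0.7).
random.seed(0)
T = 20000
outcomes = [1 if random.random() < 0.7 else 0 for _ in range(T)]
informed = [0.7] * T                          # expert who knows the process
naive = [random.random() for _ in range(T)]   # uninformed expert

for n, avg, freq in cross_calibration_scores(informed, naive, outcomes).values():
    print(n, round(avg, 2), round(freq, 2))   # avg and freq agree in large cells
```

Running the simulation, the informed expert's average prediction tracks the empirical frequency in every large cell regardless of what the naive expert predicts, which is the sense in which a true expert passes conditional on others' predictions.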
Date of creation: Jan 2007
Provider: Stanford University, Stanford, CA 94305-5015
Phone: (650) 723-2146
Web page: http://gsbapps.stanford.edu/researchpapers/