Good randomized sequential probability forecasting is always possible
Abstract
Building on the game-theoretic framework for probability, we show that it is possible, using randomization, to make sequential probability forecasts that will pass any given battery of statistical tests. This result, an easy consequence of von Neumann's minimax theorem, simplifies and generalizes work by earlier researchers. Copyright 2005 Royal Statistical Society.
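The core of the argument is that forecasting against a battery of tests can be viewed as a zero-sum game, where von Neumann's minimax theorem guarantees the forecaster a randomized (mixed) strategy achieving the value of the game. The following toy sketch — not the paper's construction, just an illustration of the minimax principle it invokes — uses a matching-pennies payoff matrix to show how randomization strictly improves the forecaster's worst-case guarantee over any deterministic strategy:

```python
# Toy illustration of the minimax idea (not the paper's construction):
# in a finite zero-sum game, the row player ("forecaster") has a mixed
# strategy whose worst-case payoff equals the game's value. In this
# matching-pennies game every pure (deterministic) strategy guarantees
# only -1, while the uniform random strategy guarantees 0.

A = [[1, -1],
     [-1, 1]]  # forecaster's payoff: rows = forecaster, cols = tester

def guarantee(p):
    """Worst-case expected payoff of the mixed row strategy p."""
    return min(sum(p[i] * A[i][j] for i in range(2)) for j in range(2))

pure = max(guarantee([1, 0]), guarantee([0, 1]))   # best deterministic
mixed = guarantee([0.5, 0.5])                      # uniform randomization
print(pure, mixed)
```

Here the uniform mixture is optimal because the game is symmetric; in general the forecaster's optimal randomization would be found by solving the corresponding linear program.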
Bibliographic Info
Article provided by the Royal Statistical Society in its journal Journal of the Royal Statistical Society Series B.
Volume (Year): 67 (2005)
Issue: 5
Contact details of provider:
Postal: 12 Errol Street, London EC1Y 8LX, United Kingdom
Web page: http://wileyonlinelibrary.com/journal/rssb
Citations (recorded by the CitEc Project):
- Wojciech Olszewski & Alvaro Sandroni, 2008. "Manipulability of Future-Independent Tests," Econometrica, Econometric Society, vol. 76(6), pages 1437-1466, November.
- Wojciech Olszewski & Alvaro Sandroni, 2006. "Strategic Manipulation of Empirical Tests," Discussion Papers 1425, Northwestern University, Center for Mathematical Studies in Economics and Management Science.
- Wojciech Olszewski & Marcin Pęski, 2011. "The Principal-Agent Approach to Testing Experts," American Economic Journal: Microeconomics, American Economic Association, vol. 3(2), pages 89-113, May.
- Wojciech Olszewski & Alvaro Sandroni, 2011. "Falsifiability," American Economic Review, American Economic Association, vol. 101(2), pages 788-818, April.
- Yossi Feinberg & Colin Stewart, 2007. "Testing Multiple Forecasters," Research Papers 1957, Stanford University, Graduate School of Business.
- Dean Foster & Rakesh Vohra, 2011. "Calibration: Respice, Adspice, Prospice," Discussion Papers 1537, Northwestern University, Center for Mathematical Studies in Economics and Management Science.
- Colin Stewart, 2009. "Nonmanipulable Bayesian Testing," Working Papers tecipa-360, University of Toronto, Department of Economics.
- Alvaro Sandroni & Wojciech Olszewski, 2008. "Falsifiability," PIER Working Paper Archive 08-016, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania.