
Analysis of PISA 2006 Preferred Items Ranking Using the Percent-Correct Method


  • Ray Adams

    (Australian Council for Educational Research)

  • Alla Berezner

    (Australian Council for Educational Research)

  • Maciej Jakubowski



This paper uses an approximate average percent-correct methodology to compare the ranks that PISA 2006 countries would have obtained if the rankings had been derived from the items each country judged to be of highest priority for inclusion. The results show remarkable consistency in country rank orderings across the different sets of countries' preferred items when compared with the ranking reported in the PISA 2006 initial report (OECD, 2007). On average, only a few countries systematically move up or down in the ranking. As these countries belong to a group of moderate performers with very similar outcomes, the shifts would probably correspond to only minor changes in mean performance on the final PISA scale. The analysis suggests that PISA rankings are remarkably stable because the pool of test items is large enough to accommodate diverse preferences. The paper shows how important it is to base the choice of test items on a properly structured process that allows different experts and countries to contribute equally. The evidence presented demonstrates that in PISA, countries' average rank positions across different sets of preferred items are stable, and that experts cannot predict which items would elevate their countries' performance on the final test.
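The core of the percent-correct approach described in the abstract is simple: average each country's percent-correct over a chosen item subset and rank countries by that average, then compare the ranking induced by a "preferred" subset with the ranking on the full pool. The sketch below illustrates the idea only; the function name, country labels, and scores are invented for illustration and are not PISA data or the authors' actual code.

```python
# Minimal sketch of an average percent-correct ranking over item subsets.
# All names and numbers are hypothetical, not taken from PISA 2006.

def percent_correct_ranking(scores, item_subset):
    """Rank countries by mean percent-correct over a subset of items.

    scores: dict mapping country -> {item: percent correct (0-100)}.
    item_subset: list of item identifiers to average over.
    Returns countries ordered from highest to lowest mean.
    """
    items = list(item_subset)
    means = {
        country: sum(item_scores[i] for i in items) / len(items)
        for country, item_scores in scores.items()
    }
    return sorted(means, key=means.get, reverse=True)

# Illustrative data: three countries, four items.
scores = {
    "A": {"q1": 80, "q2": 60, "q3": 70, "q4": 50},
    "B": {"q1": 75, "q2": 65, "q3": 72, "q4": 55},
    "C": {"q1": 60, "q2": 55, "q3": 58, "q4": 52},
}

full_ranking = percent_correct_ranking(scores, ["q1", "q2", "q3", "q4"])
preferred_ranking = percent_correct_ranking(scores, ["q2", "q4"])
```

With these toy numbers both subsets yield the same ordering, mirroring the paper's finding that rankings computed on different preferred-item sets stay close to the full-pool ranking.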

Suggested Citation

  • Ray Adams & Alla Berezner & Maciej Jakubowski, 2010. "Analysis of PISA 2006 Preferred Items Ranking Using the Percent-Correct Method," OECD Education Working Papers 46, OECD Publishing.
  • Handle: RePEc:oec:eduaab:46-en




    Cited by:

    1. Hanushek, Eric A. & Woessmann, Ludger, 2011. "Sample selectivity and the validity of international student achievement tests in economic research," Economics Letters, Elsevier, vol. 110(2), pages 79-82, February.
    2. Hanushek, Eric A. & Woessmann, Ludger, 2011. "The Economics of International Differences in Educational Achievement," Handbook of the Economics of Education, Elsevier.
    3. Baranov, Igor N., 2012. "Quality of Secondary Education in Russia: Between Soviet Legacy and Challenges of Global Competitiveness," Working Papers 538, Graduate School of Management, St. Petersburg State University.
    4. Svend Kreiner & Karl Christensen, 2014. "Analyses of Model Fit and Robustness. A New Look at the PISA Scaling Model Underlying Ranking of Countries According to Reading Literacy," Psychometrika, Springer; The Psychometric Society, vol. 79(2), pages 210-231, April.
    5. Baranov, Igor N., 2012. "Quality of Secondary Education in Russia: Between Soviet Legacy and Challenges of Global Competitiveness," Working Papers 797, Graduate School of Management, St. Petersburg State University.



