How to calculate the practical significance of citation impact differences? An empirical example from evaluative institutional bibliometrics using adjusted predictions and marginal effects

Author

Listed:
  • Bornmann, Lutz
  • Williams, Richard

Abstract

Evaluative bibliometrics is concerned with comparing research units by using statistical procedures. According to Williams (2012), an empirical study should be concerned with the substantive and practical significance of the findings as well as the sign and statistical significance of effects. In this study we will explain what adjusted predictions and marginal effects are and how useful they are for institutional evaluative bibliometrics. As an illustration, we will calculate a regression model using publications (and citation data) produced by four universities in German-speaking countries from 1980 to 2010. We will show how these predictions and effects can be estimated and plotted, and how this makes it far easier to get a practical feel for the substantive meaning of results in evaluative bibliometric studies. An added benefit of this approach is that it makes it far easier to explain results obtained via sophisticated statistical techniques to a broader and sometimes non-technical audience. We will focus particularly on Average Adjusted Predictions (AAPs), Average Marginal Effects (AMEs), Adjusted Predictions at Representative Values (APRVs) and Marginal Effects at Representative Values (MERVs).
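
The abstract describes the approach in prose; the sketch below illustrates, under stated assumptions, what average adjusted predictions (AAPs) and average marginal effects (AMEs) look like in code. It is not the authors' analysis: the paper works with Stata's margins command (Williams, 2012), whereas this sketch uses Python's statsmodels with simulated data and invented variable names (citations, university, pub_year), and a negative binomial regression, a common choice for citation counts, stands in for whatever model the authors actually fit.

    # Minimal illustrative sketch (not the authors' code): average adjusted
    # predictions (AAPs) and average marginal effects (AMEs) from a count-data
    # regression of citations. Data and variable names are invented; the paper
    # itself computes these quantities with Stata's margins command.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated example data: citation counts for papers from two universities.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "university": rng.choice(["A", "B"], size=n),
        "pub_year": rng.integers(1980, 2011, size=n),
    })
    mu = np.exp(0.5 + 0.4 * (df["university"] == "B") + 0.01 * (2010 - df["pub_year"]))
    df["citations"] = rng.negative_binomial(2, 2 / (2 + mu))  # overdispersed counts with mean mu

    # Negative binomial regression of citation counts on university and publication year.
    model = smf.negativebinomial("citations ~ C(university) + pub_year", data=df).fit(disp=0)

    # Average marginal effects (AMEs): the change in the expected citation count
    # per unit change in each covariate, averaged over the observed sample.
    print(model.get_margeff(at="overall", method="dydx").summary())

    # Average adjusted predictions (AAPs): the mean predicted citation count when
    # every paper is treated as if it came from one university, with the other
    # covariates kept at their observed values.
    for uni in ["A", "B"]:
        print(uni, model.predict(df.assign(university=uni)).mean())

Adjusted predictions and marginal effects at representative values (APRVs and MERVs) would follow the same pattern, except that covariates are fixed at chosen values (for example, at="mean" in get_margeff) instead of being averaged over the sample.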

Suggested Citation

  • Bornmann, Lutz & Williams, Richard, 2013. "How to calculate the practical significance of citation impact differences? An empirical example from evaluative institutional bibliometrics using adjusted predictions and marginal effects," Journal of Informetrics, Elsevier, vol. 7(2), pages 562-574.
  • Handle: RePEc:eee:infome:v:7:y:2013:i:2:p:562-574
    DOI: 10.1016/j.joi.2013.02.005

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1751157713000205
    Download Restriction: Full text for ScienceDirect subscribers only

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Schneider, Jesper W., 2013. "Caveats for using statistical significance tests in research assessments," Journal of Informetrics, Elsevier, vol. 7(1), pages 50-62.
    2. James W. Hardin & Joseph W. Hilbe, 2012. "Generalized Linear Models and Extensions, 3rd Edition," Stata Press books, StataCorp LP, edition 3, number glmext, December.
    3. Bornmann, Lutz & Leydesdorff, Loet & Mutz, Rüdiger, 2013. "The use of percentiles and percentile rank classes in the analysis of bibliometric data: Opportunities and limits," Journal of Informetrics, Elsevier, vol. 7(1), pages 158-165.
    4. Michael N. Mitchell, 2012. "Interpreting and Visualizing Regression Models Using Stata," Stata Press books, StataCorp LP, number ivrm, December.
    5. J. Scott Long & Jeremy Freese, 2006. "Regression Models for Categorical Dependent Variables using Stata, 2nd Edition," Stata Press books, StataCorp LP, edition 2, number long2, December.
    6. Lutz Bornmann & Rüdiger Mutz & Werner Marx & Hermann Schier & Hans‐Dieter Daniel, 2011. "A multilevel modelling approach to investigating the predictive validity of editorial decisions: do the editors of a high profile journal select manuscripts that are highly cited after publication?," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 174(4), pages 857-879, October.
    7. Richard Williams, 2012. "Using the margins command to estimate and interpret adjusted predictions and marginal effects," Stata Journal, StataCorp LP, vol. 12(2), pages 308-331, June.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Albarrán, Pedro & Herrero, Carmen & Ruiz-Castillo, Javier & Villar, Antonio, 2017. "The Herrero-Villar approach to citation impact," Journal of Informetrics, Elsevier, vol. 11(2), pages 625-640.
    2. Williams, Richard & Bornmann, Lutz, 2016. "Sampling issues in bibliometric analysis," Journal of Informetrics, Elsevier, vol. 10(4), pages 1225-1232.
    3. Bornmann, Lutz, 2014. "Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime," Journal of Informetrics, Elsevier, vol. 8(4), pages 935-950.
    4. Iman Tahamtan & Askar Safipour Afshar & Khadijeh Ahamdzadeh, 2016. "Factors affecting number of citations: a comprehensive review of the literature," Scientometrics, Springer;Akadémiai Kiadó, vol. 107(3), pages 1195-1225, June.
    5. Bornmann, Lutz & Leydesdorff, Loet, 2015. "Does quality and content matter for citedness? A comparison with para-textual factors and over time," Journal of Informetrics, Elsevier, vol. 9(3), pages 419-429.
    6. Bornmann, Lutz & Leydesdorff, Loet & Wang, Jian, 2014. "How to improve the prediction based on citation impact percentiles for years shortly after the publication date?," Journal of Informetrics, Elsevier, vol. 8(1), pages 175-180.
    7. repec:spr:scient:v:113:y:2017:i:2:d:10.1007_s11192-017-2501-0 is not listed on IDEAS
    8. Bornmann, Lutz & Haunschild, Robin, 2015. "Which people use which scientific papers? An evaluation of data from F1000 and Mendeley," Journal of Informetrics, Elsevier, vol. 9(3), pages 477-487.
    9. Bornmann, Lutz, 2014. "Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics," Journal of Informetrics, Elsevier, vol. 8(4), pages 895-903.
    10. Lutz Bornmann, 2015. "Interrater reliability and convergent validity of F1000Prime peer review," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 66(12), pages 2415-2426, December.
    11. Bornmann, Lutz & Marx, Werner, 2015. "Methods for the generation of normalized citation impact scores in bibliometrics: Which method best reflects the judgements of experts?," Journal of Informetrics, Elsevier, vol. 9(2), pages 408-418.
    12. Bornmann, Lutz, 2013. "The problem of citation impact assessments for recent publication years in institutional evaluations," Journal of Informetrics, Elsevier, vol. 7(3), pages 722-729.
    13. Bornmann, Lutz & Leydesdorff, Loet & Wang, Jian, 2013. "Which percentile-based approach should be preferred for calculating normalized citation impact values? An empirical comparison of five approaches including a newly developed citation-rank approach (P100)," Journal of Informetrics, Elsevier, vol. 7(4), pages 933-944.
    14. Thelwall, Mike & Wilson, Paul, 2014. "Regression for citation data: An evaluation of different methods," Journal of Informetrics, Elsevier, vol. 8(4), pages 963-971.
