Assessing agreement with intraclass correlation coefficient and concordance correlation coefficient for data with repeated measures
The intraclass correlation coefficient and the concordance correlation coefficient are two popular scaled indices for assessing the closeness between observers who make measurements of quantitative responses. These two indices are usually based on subject and observer effects only, and therefore they cannot be used when an observer produces repeated measurements rather than replicated readings. In this paper, we consider not only subject and observer effects but also time effects for data with repeated measurements, since true replications are difficult to obtain in practice. We compare these two agreement indices under different combinations of random or fixed effects of observer and time. Finally, we use image data of 2D-echocardiograms to illustrate the proposed methodology and the comparison of these two indices. If there is a need to choose between these two indices for repeated measurements, we recommend using the new concordance correlation coefficient, since it does not require ANOVA assumptions.
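For readers unfamiliar with the two indices compared in the abstract, the following is a minimal numerical sketch of the classical versions (Lin's concordance correlation coefficient between two observers, and the one-way ANOVA intraclass correlation for replicated readings), not the authors' repeated-measures extensions:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two observers.

    rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]   # population covariance
    sx2, sy2 = x.var(), y.var()           # population variances
    return 2 * sxy / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)

def icc1(ratings):
    """One-way ANOVA ICC; rows = subjects, columns = replicated readings."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # between-subject and within-subject mean squares
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Unlike the Pearson correlation, the CCC penalizes a systematic shift between observers: `ccc([1, 2, 3], [2, 3, 4])` is below 1 even though the readings are perfectly linearly related.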
Volume: 60 (2013)
Issue: C
Pages: 132-145
Publisher web page: http://www.elsevier.com/locate/csda