Printed from https://ideas.repec.org/a/oup/biomet/v107y2020i2p311-330..html

Classification with imperfect training labels

Author

Listed:
  • Timothy I Cannings
  • Yingying Fan
  • Richard J Samworth

Abstract

We study the effect of imperfect training data labels on the performance of classification methods. In a general setting, where the probability that an observation in the training dataset is mislabelled may depend on both the feature vector and the true label, we bound the excess risk of an arbitrary classifier trained with imperfect labels in terms of its excess risk for predicting a noisy label. This reveals conditions under which a classifier trained with imperfect labels remains consistent for classifying uncorrupted test data points. Furthermore, under stronger conditions, we derive detailed asymptotic properties for the popular $k$-nearest neighbour, support vector machine and linear discriminant analysis classifiers. One consequence of these results is that the $k$-nearest neighbour and support vector machine classifiers are robust to imperfect training labels, in the sense that the rate of convergence of the excess risk of these classifiers remains unchanged; in fact, our theoretical and empirical results even show that in some cases, imperfect labels may improve the performance of these methods. The linear discriminant analysis classifier is shown to be typically inconsistent in the presence of label noise unless the prior probabilities of the classes are equal. Our theoretical results are supported by a simulation study.
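The robustness result for $k$-nearest neighbours can be illustrated with a small simulation in the spirit of the paper's study. The sketch below is not the authors' code; it uses a hypothetical one-dimensional two-class Gaussian example with a homogeneous label-flip probability `rho` (a special case of the paper's general, feature-dependent noise model), trains a majority-vote kNN on noisy labels, and evaluates on clean test labels.

```python
import random

def make_data(n, rho, rng):
    """Two classes with equal priors: X | Y=0 ~ N(-2,1), X | Y=1 ~ N(2,1).
    Each label is flipped independently with probability rho."""
    xs, ys = [], []
    for _ in range(n):
        y = rng.randint(0, 1)
        x = rng.gauss(-2.0 if y == 0 else 2.0, 1.0)
        if rng.random() < rho:  # imperfect training label
            y = 1 - y
        xs.append(x)
        ys.append(y)
    return xs, ys

def knn_predict(x, xs, ys, k):
    """Majority vote among the k training points nearest to x."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
    return 1 if 2 * sum(ys[i] for i in nearest) > k else 0

def accuracy(rho, k=15, n_train=500, n_test=500, seed=1):
    rng = random.Random(seed)
    xs, ys = make_data(n_train, rho, rng)   # noisy training labels
    xt, yt = make_data(n_test, 0.0, rng)    # clean test labels
    hits = sum(knn_predict(x, xs, ys, k) == y for x, y in zip(xt, yt))
    return hits / n_test

print(f"clean labels : {accuracy(rho=0.0):.3f}")
print(f"20% flipped  : {accuracy(rho=0.2):.3f}")
```

With equal priors and homogeneous noise, flipping 20% of labels leaves the noisy regression function on the same side of 1/2 everywhere, so the kNN decision rule is essentially unchanged and test accuracy should remain close to the clean-label figure; this is the mechanism behind the paper's rate-preservation result for kNN, and also hints at why LDA fails when the priors are unequal.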

Suggested Citation

  • Timothy I Cannings & Yingying Fan & Richard J Samworth, 2020. "Classification with imperfect training labels," Biometrika, Biometrika Trust, vol. 107(2), pages 311-330.
  • Handle: RePEc:oup:biomet:v:107:y:2020:i:2:p:311-330.

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1093/biomet/asaa011
    Download Restriction: Access to full text is restricted to subscribers.

As access to this document is restricted, you may want to search for a different version of it.

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Ahfock, Daniel & McLachlan, Geoffrey J., 2021. "Harmless label noise and informative soft-labels in supervised classification," Computational Statistics & Data Analysis, Elsevier, vol. 161(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:oup:biomet:v:107:y:2020:i:2:p:311-330.. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item, and to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Oxford University Press (email available below). General contact details of provider: https://academic.oup.com/biomet .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.