Author
Listed:
- Shun Zhang
(School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China)
- Yaohui Xu
(School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China)
- Xuebin Zhang
(School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China)
- Boyang Cheng
(School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China)
- Ke Wang
(China Railway First Survey and Design Institute Group Co., Ltd., Xi’an 710043, China)
Abstract
Driven by growing public security demands and the advancement of intelligent surveillance systems, person re-identification (ReID) has emerged as a prominent research focus in computer vision. The task remains challenging, however, because of its high sensitivity to variations in visual appearance caused by factors such as body pose and camera parameters. Although deep learning-based methods have achieved marked progress in ReID, the high cost of annotation remains a challenge that cannot be overlooked. To address this, we propose an unsupervised attribute learning framework that eliminates the need for costly manual annotations while maintaining high accuracy. The framework learns mid-level human attributes (such as clothing type and gender) that are robust to substantial variations in visual appearance and hence boost attribute recognition accuracy even with a small amount of labeled data. To implement the framework, we present a part-based convolutional neural network (CNN) architecture consisting of two components: one for whole-body image and attribute learning at a global level, and another for upper- and lower-body image and attribute learning at a local level. The proposed architecture is trained to learn attribute-semantic and identity-discriminative feature representations simultaneously. For model learning, we first train our part-based network in a supervised manner on a labeled attribute dataset. We then apply an unsupervised clustering method to assign pseudo-labels to unlabeled images in a target dataset using the trained network. To improve feature compatibility, we introduce an attribute consistency scheme for unsupervised domain adaptation on this unlabeled target data. During training on the target dataset, we alternately perform three steps: extracting features with the updated model, assigning pseudo-labels to unlabeled images, and fine-tuning the model.
Through a unified framework that fuses complementary attribute-label and identity-label information, our approach achieves considerable mAP improvements of 10.6% and 3.91% on the Market-1501→DukeMTMC-ReID and DukeMTMC-ReID→Market-1501 unsupervised domain adaptation tasks, respectively.
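The alternating adaptation procedure described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration only: the linear "model", the feature extractor, the plain k-means clustering, and the placeholder fine-tuning step all stand in for the paper's part-based CNN and its actual clustering and training method.

```python
import numpy as np

def extract_features(model, images):
    """Stand-in for the part-based CNN forward pass (hypothetical):
    a linear map plays the role of the trained network."""
    return images @ model

def assign_pseudo_labels(features, k, iters=10):
    """Cluster target-domain features; cluster indices become pseudo-labels.
    Plain Lloyd's k-means with farthest-point initialization, used here
    purely for illustration."""
    centers = [features[0]]
    while len(centers) < k:
        # Distance from each point to its nearest chosen center.
        dists = np.min(
            [np.linalg.norm(features - c, axis=1) for c in centers], axis=0
        )
        centers.append(features[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each feature to its nearest center, then update centers.
        d = np.linalg.norm(features[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            members = features[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels

def fine_tune(model, images, pseudo_labels):
    """Placeholder for supervised fine-tuning on pseudo-labeled images;
    a real implementation would minimize a classification loss."""
    return model

def adapt(model, target_images, k, rounds=3):
    """Alternate the three steps from the abstract: extract features with
    the current model, assign pseudo-labels by clustering, fine-tune."""
    for _ in range(rounds):
        feats = extract_features(model, target_images)
        pseudo = assign_pseudo_labels(feats, k)
        model = fine_tune(model, target_images, pseudo)
    return model
```

On well-separated feature clusters, the pseudo-labels recover the underlying grouping, which is what makes the subsequent fine-tuning step meaningful in the real pipeline.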
Suggested Citation
Shun Zhang & Yaohui Xu & Xuebin Zhang & Boyang Cheng & Ke Wang, 2025.
"Unsupervised Person Re-Identification via Deep Attribute Learning,"
Future Internet, MDPI, vol. 17(8), pages 1-24, August.
Handle:
RePEc:gam:jftint:v:17:y:2025:i:8:p:371-:d:1725511