Author
Listed:
- Yihao Zhan
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Yun Zhu
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Ji-Cheng Jang
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Wenwei Yang
(Cloud & Information (Guangdong) Eco-Environment Science and Technology Co., Ltd., Foshan 528000, China)
- Kunjie Li
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Haowen He
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Zeyu Li
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Qianer Chen
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Shicheng Long
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
- Jinying Li
(School of Environment and Energy, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China)
Abstract
Identifying noise sources in exceedance-triggered audio is essential for targeted source tracing and sustainable urban social noise governance. While accurate models require massive labeled data, the acoustic complexity, high redundancy, and imbalanced class distributions of real-world recordings incur prohibitive manual annotation costs, hindering their widespread application in IoT networks. To tackle this bottleneck, we present a label-efficient active learning framework designed to minimize annotation costs by dynamically selecting the most valuable audio samples. Specifically, rather than treating uncertainty, class balance, and diversity as separate query criteria, it encodes uncertainty and dynamic class-aware learning needs into a weighted acoustic feature space, so that diversity-based selection can be performed in a unified manner. Experiments on the UrbanSound8K benchmark and a realistic exceedance-triggered monitoring dataset demonstrate consistent label-efficiency advantages over mainstream methods. Notably, our approach reaches 98% of the fully supervised upper bound on the real-world dataset while reducing the training annotation workload by 85.0% compared to random sampling. On the real-world dataset, the proposed framework yields higher F1-scores for several challenging under-represented categories and reduces the misclassification of dominant sound events relevant to social noise source tracing. Furthermore, cross-site generalization experiments reveal rapid localized adaptation to new monitoring environments, reaching the fully supervised upper bound with only 13% of the target-domain training data. Overall, this study provides a scalable and cost-effective classification framework for urban noise monitoring, offering practical support for noise regulatory authorities and city managers in more targeted noise source tracing and governance.
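The abstract's core idea, folding uncertainty and class-aware weights into the feature space so that a single diversity-based query rule selects samples, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: it uses predictive entropy as the uncertainty score, a hypothetical `class_weights` vector for class-aware need, and greedy farthest-point sampling as the diversity criterion.

```python
import numpy as np

def select_batch(features, probs, class_weights, k):
    """Illustrative sketch: scale each feature vector by its predictive
    uncertainty (entropy) and a class-aware weight, then pick a diverse
    batch via greedy farthest-point sampling in the weighted space."""
    # Predictive entropy as the per-sample uncertainty score.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Class-aware need: look up the weight of the predicted (pseudo-)label.
    need = class_weights[np.argmax(probs, axis=1)]
    # Uncertain samples from under-represented classes spread farther out,
    # so the diversity criterion naturally favors them.
    weighted = features * (entropy * need)[:, None]
    # Greedy farthest-point sampling: start from the most extreme point,
    # then repeatedly add the point farthest from the chosen set.
    chosen = [int(np.argmax(np.linalg.norm(weighted, axis=1)))]
    dists = np.linalg.norm(weighted - weighted[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists,
                           np.linalg.norm(weighted - weighted[nxt], axis=1))
    return chosen

# Toy usage with random acoustic features and softmax probabilities.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
logits = rng.normal(size=(100, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
w = np.array([1.0, 2.0, 2.0, 1.0])  # up-weight two rare classes
batch = select_batch(feats, probs, w, k=10)
print(len(batch), len(set(batch)))  # 10 distinct sample indices
```

The key design point the abstract hints at is that uncertainty and class balance are not separate ranking criteria here; they reshape the geometry in which a single diversity-based selection runs.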
Suggested Citation
Yihao Zhan & Yun Zhu & Ji-Cheng Jang & Wenwei Yang & Kunjie Li & Haowen He & Zeyu Li & Qianer Chen & Shicheng Long & Jinying Li, 2026.
"Label-Efficient Social Noise Classification in Exceedance-Triggered Audio for Cost-Effective Source Tracing,"
Sustainability, MDPI, vol. 18(8), pages 1-22, April.
Handle:
RePEc:gam:jsusta:v:18:y:2026:i:8:p:3936-:d:1921053
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:18:y:2026:i:8:p:3936-:d:1921053. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.