Author
Listed:
- Lanlan Jiang
(School of Business, Guilin University of Electronic Technology, Guilin 541004, China; these authors contributed equally to this work.)
- Cheng Zhang
(School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; these authors contributed equally to this work.)
- Xingguo Qin
(School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; these authors contributed equally to this work.)
- Ya Zhou
(School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China)
- Guanglun Huang
(School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China)
- Hui Li
(School of Informatics, Xiamen University, Xiamen 361005, China)
- Jun Li
(School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China)
Abstract
In natural language processing (NLP), text classification is a core task for large language models (LLMs). Existing methods, however, rely mainly on the output of the final layer, overlooking the rich information encoded in the neurons of intermediate layers. To address this, we introduce LENS (Linear Exploration and Neuron Selection), a technique that identifies salient neurons in intermediate layers through linear exploration and sparsely combines them before passing them to downstream text-classification modules. This filters out noise from irrelevant neurons, improving both accuracy and computational efficiency. Detecting telecommunication fraud text is a difficult NLP problem, owing to the increasingly covert nature of such fraud and the limitations of current detection algorithms. To tackle data scarcity and low classification accuracy, we extend LENS into the LENS-RMHR model (Linear Exploration and Neuron Selection with RoBERTa, Multi-head Mechanism, and Residual Connections). By incorporating RoBERTa, a multi-head attention mechanism, and residual connections, LENS-RMHR strengthens feature representation and improves training efficiency. Building on the CCL2023 telecommunications fraud dataset, we construct an expanded dataset covering eight distinct fraud categories, and we employ a dual-loss function to improve performance in multi-class classification.
Experimental results show that LENS-RMHR outperforms baselines across multiple benchmark datasets, underscoring its potential for text classification and telecommunications fraud detection.
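The core idea of LENS as described in the abstract — probe intermediate-layer activations linearly, score neurons, and keep only a sparse subset for the downstream classifier — can be illustrated with a toy sketch. This is a hypothetical reconstruction for intuition only, not the authors' implementation: the activation tensor, the least-squares probe, and the top-k rule are all illustrative assumptions standing in for an LLM's hidden states and the paper's actual exploration procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for intermediate-layer activations of an LLM:
# n_layers x n_samples x hidden_dim (a real model would supply these).
n_layers, n_samples, hidden = 4, 32, 64
acts = rng.normal(size=(n_layers, n_samples, hidden))
labels = rng.integers(0, 2, size=n_samples).astype(float)  # toy binary labels

def neuron_scores(layer_acts, labels):
    """Score each neuron in one layer by the weight magnitude of a
    linear probe (least-squares fit of the labels on the activations)."""
    X = layer_acts - layer_acts.mean(axis=0)
    y = labels - labels.mean()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(w)

# "Linear exploration": probe every intermediate layer and pool the scores.
scores = np.concatenate([neuron_scores(acts[l], labels)
                         for l in range(n_layers)])

# Sparse selection: keep only the top-k neurons across all layers.
k = 16
top = np.argsort(scores)[-k:]

# The sparse feature matrix that would feed the downstream classifier.
flat = acts.transpose(1, 0, 2).reshape(n_samples, n_layers * hidden)
selected = flat[:, top]
print(selected.shape)  # (32, 16)
```

The selected features would then enter the classification head, which in LENS-RMHR additionally applies RoBERTa encoding, multi-head attention, and residual connections before the dual-loss objective.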
Suggested Citation
Lanlan Jiang & Cheng Zhang & Xingguo Qin & Ya Zhou & Guanglun Huang & Hui Li & Jun Li, 2025.
"Telecom Fraud Recognition Based on Large Language Model Neuron Selection,"
Mathematics, MDPI, vol. 13(11), pages 1-17, May.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:11:p:1784-:d:1665628