Author
Listed:
- Jie Niu
(Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming 650500, China
Current address: School of Mathematics and Statistics, Yunnan University, Kunming 650106, China.)
- Runqi He
(Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming 650500, China)
- Qiyao Zhou
(Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming 650500, China)
- Wenjing Li
(Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming 650500, China)
- Ruxian Jiang
(Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming 650500, China)
- Huimin Li
(Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming 650500, China
School of Mathematics and Computer Science, Yunnan Minzu University, Kunming 650504, China)
- Dan Chen
(Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming 650500, China)
Abstract
In the data-driven healthcare sector, balancing privacy protection and model performance is critical. This paper enhances the accuracy and reliability of survival analysis by integrating differential privacy, deep learning, and the Cox proportional hazards model within a federated learning framework. The noise injected by conventional differential privacy often degrades model performance. To address this, we propose two adaptive privacy budget allocation strategies that account for weight changes across neural network layers. The first, LS-ADP, uses layer sensitivity to quantify how the weights of each layer influence model performance and builds an adaptive differential privacy algorithm on that measure. The second, ROW-DP, jointly considers weight variations and absolute weight magnitudes in a random one-layer weighted differential privacy algorithm. Both algorithms apply differentiated privacy protection to different weights, mitigating privacy leakage while preserving model performance. Experimental results on simulated and clinical datasets demonstrate improved predictive performance and robust privacy protection.
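The abstract does not spell out the LS-ADP allocation rule, so the following minimal Python/NumPy sketch only illustrates the general idea of sensitivity-proportional, per-layer privacy budget allocation under the Laplace mechanism. The function names (layer_sensitivity, allocate_layer_budgets, add_laplace_noise), the mean-absolute-weight-change proxy for layer sensitivity, and the unit L1 sensitivity are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def layer_sensitivity(prev_weights, curr_weights):
    """Hypothetical proxy for a layer's influence: the mean absolute weight
    change between two training rounds (the paper's measure may differ)."""
    return np.mean(np.abs(curr_weights - prev_weights))

def allocate_layer_budgets(prev_layers, curr_layers, total_epsilon):
    """Split a total privacy budget across layers in proportion to sensitivity,
    so more influential layers get a larger budget and hence less noise."""
    sens = np.array([layer_sensitivity(p, c) for p, c in zip(prev_layers, curr_layers)])
    if sens.sum() > 0:
        shares = sens / sens.sum()
    else:
        shares = np.full(len(sens), 1.0 / len(sens))
    return shares * total_epsilon

def add_laplace_noise(layers, epsilons, l1_sensitivity=1.0, rng=None):
    """Perturb each layer with Laplace noise scaled by its per-layer budget
    (Laplace mechanism: scale = sensitivity / epsilon)."""
    rng = np.random.default_rng(0) if rng is None else rng
    return [w + rng.laplace(0.0, l1_sensitivity / eps, size=w.shape)
            for w, eps in zip(layers, epsilons)]

# Toy usage: three layers of a small MLP observed over two training rounds.
prev = [np.zeros((4, 8)), np.zeros((8, 4)), np.zeros((4, 1))]
curr = [np.full((4, 8), 0.05), np.full((8, 4), 0.20), np.full((4, 1), 0.10)]
eps = allocate_layer_budgets(prev, curr, total_epsilon=1.0)
noisy = add_laplace_noise(curr, eps)
print(eps)  # the layer that changed most receives the largest budget (least noise)
```

Under this reading, ROW-DP would differ mainly in perturbing a single randomly chosen layer per round, with the noise weighting driven by both weight changes and absolute weight values; that variant is not sketched here.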
Suggested Citation
Jie Niu & Runqi He & Qiyao Zhou & Wenjing Li & Ruxian Jiang & Huimin Li & Dan Chen, 2025.
"Adaptive Differential Privacy Cox-MLP Model Based on Federated Learning,"
Mathematics, MDPI, vol. 13(7), pages 1-18, March.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:7:p:1096-:d:1621735
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:7:p:1096-:d:1621735. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.