
A Federated Fine-Tuning Framework for Large Language Models via Graph Representation Learning and Structural Segmentation

Authors

Listed:
  • Yuxin Dong

    (School of Business, Wake Forest University, Winston-Salem, NC 27109, USA; these authors contributed equally to this work.)

  • Ruotong Wang

    (Department of Computer Science, Rutgers University, Piscataway, NJ 08901, USA; these authors contributed equally to this work.)

  • Guiran Liu

    (College of Science & Engineering (CoSE), San Francisco State University, San Francisco, CA 94132, USA)

  • Binrong Zhu

    (College of Science & Engineering (CoSE), San Francisco State University, San Francisco, CA 94132, USA)

  • Xiaohan Cheng

    (D’Amore-McKim School of Business, Northeastern University, Boston, MA 02115, USA)

  • Zijun Gao

    (Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA)

  • Pengbin Feng

    (Department of Mathematics, University of Southern California, Los Angeles, CA 90007, USA)

Abstract

This paper focuses on the efficient fine-tuning of large language models within the federated learning framework. To address the performance bottlenecks caused by multi-source heterogeneity and structural inconsistency, a structure-aware federated fine-tuning method is proposed. The method incorporates a graph representation module (GRM) to model internal structural relationships within text and employs a segmentation mechanism (SM) to reconstruct and align semantic structures across inputs, thereby enhancing structural robustness and generalization under non-IID (non-independent and identically distributed) settings. During training, the method keeps all data local and integrates structural pruning with gradient encryption (SPGE) to balance privacy preservation against communication efficiency. To evaluate its effectiveness, extensive comparative experiments are conducted on text classification, named entity recognition, and question answering, using multiple datasets with diverse structures and heterogeneity levels. Compared with representative federated fine-tuning baselines such as FedNLP and FedPrompt, the proposed method achieves consistent accuracy and F1-score improvements across tasks. The results show that the approach significantly outperforms existing federated fine-tuning strategies on most tasks, achieving higher performance while preserving privacy and demonstrating strong practical applicability and generalization potential.
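The abstract does not include implementation details, so as a rough illustration of the general setting it describes (clients fine-tune on local data and upload pruned updates that a server aggregates), the following minimal Python sketch runs a FedAvg-style loop with top-k gradient sparsification standing in for the structural-pruning step. Everything here (the toy linear model, the synthetic non-IID client data, the function names) is an assumption for illustration and does not reproduce the paper's GRM, SM, or SPGE components.

```python
# Illustrative sketch only: a generic federated-averaging round with
# top-k gradient sparsification as a stand-in for structural pruning.
# Not the authors' method; all names and data here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    """Least-squares gradient for a toy linear model on one client's data."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sparsify_topk(g, k):
    """Keep only the k largest-magnitude entries to shrink the upload."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

# Synthetic non-IID data: each client draws targets from a shifted
# ground-truth weight vector, mimicking multi-source heterogeneity.
dim, n_clients, k = 20, 5, 5
w_global = np.zeros(dim)
clients = []
for c in range(n_clients):
    X = rng.normal(size=(50, dim))
    w_true = rng.normal(size=dim) + c  # client-specific shift
    clients.append((X, X @ w_true))

for rnd in range(30):
    updates = []
    for X, y in clients:
        g = local_gradient(w_global, X, y)   # raw data never leaves the client
        updates.append(sparsify_topk(g, k))  # prune before upload
    w_global -= 0.01 * np.mean(updates, axis=0)  # server-side averaging

print("final weight norm:", np.linalg.norm(w_global))
```

In the system the abstract describes, the pruned update would additionally be encrypted before upload, and the local model would be a structure-aware language-model fine-tuning module rather than a linear regressor.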

Suggested Citation

  • Yuxin Dong & Ruotong Wang & Guiran Liu & Binrong Zhu & Xiaohan Cheng & Zijun Gao & Pengbin Feng, 2025. "A Federated Fine-Tuning Framework for Large Language Models via Graph Representation Learning and Structural Segmentation," Mathematics, MDPI, vol. 13(19), pages 1-29, October.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:19:p:3201-:d:1765591

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/19/3201/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/19/3201/
    Download Restriction: no

