
Federated Adversarial Training Strategies for Achieving Privacy and Security in Sustainable Smart City Applications

Author

Listed:
  • Sapdo Utomo

    (Graduate Institute of Ambient Intelligence and Smart Systems, National Chung Cheng University, Chiayi 621301, Taiwan
    Research Center for Smart Mechatronics, National Research and Innovation Agency (BRIN), Bandung 40135, Indonesia
    These authors contributed equally to this work.)

  • Adarsh Rouniyar

    (Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621301, Taiwan)

  • Hsiu-Chun Hsu

    (Department of Information Management, National Chung Cheng University, Chiayi 621301, Taiwan)

  • Pao-Ann Hsiung

    (Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621301, Taiwan
    These authors contributed equally to this work.)

Abstract

Smart city applications that request sensitive user information necessitate a comprehensive data privacy solution. Federated learning (FL), a machine learning (ML) paradigm often described as privacy by design, addresses this need by training models without centralizing raw user data. However, like other AI models, FL models are susceptible to adversarial attacks. In this paper, we propose federated adversarial training (FAT) strategies to generate robust global models that resist adversarial attacks. We apply two adversarial attack methods, projected gradient descent (PGD) and the fast gradient sign method (FGSM), to our air pollution dataset to generate adversarial samples, and we evaluate how effectively our FAT strategies defend against them. Our experiments show that FGSM-based adversarial attacks have a negligible impact on the accuracy of global models, whereas PGD-based attacks are considerably more effective. We also show that our FAT strategies can make global models robust enough to withstand even PGD-based attacks: the accuracy of our FAT-PGD and FL-mixed-PGD models is 81.13% and 82.60%, respectively, compared with 91.34% for the baseline FL model. This drop of roughly 10 percentage points could potentially be mitigated by using a larger, more complex model. Our results demonstrate that FAT can enhance the security and privacy of sustainable smart city applications. We also show that robust global models can be trained from modest per-client datasets, which challenges the conventional wisdom that adversarial training requires massive datasets.
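
The abstract names two attack methods (FGSM and PGD), client-side adversarial training, and federated aggregation of the resulting models. The sketch below illustrates those pieces only; it is not the authors' implementation, and the use of PyTorch, the epsilon and step-size values, and the helper names (fgsm_attack, pgd_attack, local_adversarial_step, fedavg) are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of FGSM/PGD attack generation,
# a client-side adversarial training step, and server-side FedAvg averaging.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=0.03):
    """Fast gradient sign method: a single signed-gradient step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()


def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Projected gradient descent: iterated FGSM, projected into the eps-ball."""
    x, x_adv = x.detach(), x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project back into the eps-ball
    return x_adv.detach()


def local_adversarial_step(model, optimizer, x, y, attack=pgd_attack):
    """One client-side adversarial training step: perturb the batch, then train
    on the perturbed samples (a mixed strategy would also keep clean samples)."""
    x_adv = attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


def fedavg(client_state_dicts):
    """Server-side FedAvg: elementwise average of the clients' model parameters."""
    return {k: torch.stack([sd[k].float() for sd in client_state_dicts]).mean(dim=0)
            for k in client_state_dicts[0]}
```

In a federated round, each client would run local_adversarial_step over its local batches and send its updated state_dict to the server, which aggregates them with fedavg before broadcasting the new global model.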

Suggested Citation

  • Sapdo Utomo & Adarsh Rouniyar & Hsiu-Chun Hsu & Pao-Ann Hsiung, 2023. "Federated Adversarial Training Strategies for Achieving Privacy and Security in Sustainable Smart City Applications," Future Internet, MDPI, vol. 15(11), pages 1-25, November.
  • Handle: RePEc:gam:jftint:v:15:y:2023:i:11:p:371-:d:1283795

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/15/11/371/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/15/11/371/
    Download Restriction: no
