
A Neural Network Model Secret-Sharing Scheme with Multiple Weights for Progressive Recovery

Author

Listed:
  • Xianhui Wang

    (College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China; Anhui Key Laboratory of Cyberspace Security Situation Awareness and Evaluation, Hefei 230037, China)

  • Hong Shan

    (College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China; Anhui Key Laboratory of Cyberspace Security Situation Awareness and Evaluation, Hefei 230037, China)

  • Xuehu Yan

    (College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China; Anhui Key Laboratory of Cyberspace Security Situation Awareness and Evaluation, Hefei 230037, China)

  • Long Yu

    (College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China; Anhui Key Laboratory of Cyberspace Security Situation Awareness and Evaluation, Hefei 230037, China)

  • Yongqiang Yu

    (College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China; Anhui Key Laboratory of Cyberspace Security Situation Awareness and Evaluation, Hefei 230037, China)

Abstract

With the widespread deployment of deep-learning models in production environments, the value of these models has become increasingly prominent. Two key issues are protecting the rights of model trainers and securing the specific scenarios in which the models are used. In the commercial domain, consumers pay different fees and therefore receive different levels of service, so it is necessary to divide a model into several shadow models with multiple weights. When holders want to use the model, they can recover a model whose performance corresponds to the number and weights of the shadow models they collect, so access to the model can be controlled progressively, i.e., progressive recovery is significant. This paper proposes a neural network model secret-sharing scheme (NNSS) with multiple weights for progressive recovery. The scheme uses Shamir’s polynomial to control the sharing and embedding of model parameters, which in turn enables hierarchical performance control in the secret-model recovery phase. First, the important model parameters are extracted. Then, in the sharing phase, effective shadow parameters are assigned according to the holders’ weights, and t shadow models are generated. In the recovery phase, the holders can recover the secret parameters with a certain probability from the shadow parameters they obtain. This probability is proportional to the number and weights of the shadow models collected, and the performance of the reconstructed model depends on the participants’ weights accordingly; when all t shadow models are obtained, the probability of successfully recovering the shadow parameters is 1, i.e., the reconstructed model reaches the performance of the secret model. A series of experiments on VGG19 verifies the effectiveness of the scheme.
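The paper's exact sharing and embedding construction is given in the full text; purely as an illustration of the Shamir-polynomial mechanism the abstract refers to, the sketch below applies a standard (t, n) Shamir scheme to a single quantized model parameter and models a holder's weight simply as the number of shares issued to that holder. The prime modulus, the fixed-point quantization, and all function names (quantize, share, recover) are assumptions made for this example, not details taken from the paper.

    # Minimal sketch of Shamir-style sharing of one model parameter.
    # Assumptions (not taken from the paper): the parameter is quantized to an
    # integer in a prime field, a holder's "weight" is modeled as the number of
    # shares it receives, and recovery uses Lagrange interpolation at x = 0.
    import random

    PRIME = 2**61 - 1          # field modulus (assumed; any prime larger than the secret works)
    THRESHOLD = 3              # t: number of shares needed for exact recovery

    def quantize(param: float, scale: int = 10**6) -> int:
        """Map a float parameter into the field (simple fixed-point assumption)."""
        return int(round(param * scale)) % PRIME

    def dequantize(value: int, scale: int = 10**6) -> float:
        return value / scale

    def share(secret: int, n_shares: int, t: int = THRESHOLD):
        """Split `secret` into n_shares points on a random degree-(t-1) polynomial."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def recover(points, t: int = THRESHOLD) -> int:
        """Lagrange interpolation at x = 0 using any t of the collected points."""
        points = points[:t]
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    # Example: a higher-weight holder simply receives more shares, so fewer
    # holders are needed before t shares are available for recovery.
    secret = quantize(0.137265)            # one "important" parameter value (illustrative)
    shares = share(secret, n_shares=5)
    holder_a = shares[:2]                  # weight-2 holder gets two shares
    holder_b = shares[2:3]                 # weight-1 holder gets one share
    print(dequantize(recover(holder_a + holder_b)))   # -> 0.137265

In this toy setting, the weight-2 holder contributes two of the three points needed for exact recovery, so combining it with a single weight-1 holder already reconstructs the parameter, whereas fewer shares reveal nothing about it; this mirrors, only in spirit, the weight-dependent progressive recovery described in the abstract.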

Suggested Citation

  • Xianhui Wang & Hong Shan & Xuehu Yan & Long Yu & Yongqiang Yu, 2022. "A Neural Network Model Secret-Sharing Scheme with Multiple Weights for Progressive Recovery," Mathematics, MDPI, vol. 10(13), pages 1-17, June.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:13:p:2231-:d:847963

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/13/2231/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/13/2231/
    Download Restriction: no
