Author
Listed:
- Rui Jiang
(College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
These authors contributed equally to this work.)
- Jiatao Li
(College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
These authors contributed equally to this work.)
- Weifeng Bu
(College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
These authors contributed equally to this work.)
- Chongqing Chen
(College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China)
Abstract
In the era of deep learning as a service, ensuring that model services are sustainable is a key challenge. To achieve sustainability, model services, including but not limited to storage and inference, must maintain model security while preserving system efficiency, and must be applicable to all deep models. To address these issues, we propose a sub-network-based model storage and inference solution that integrates blockchain and IPFS, comprising a highly distributed storage method, a tamper-proof checking method, a double-attribute-based permission management method, and an automatic inference method. We also design a smart contract to deploy these methods on the blockchain. The storage method divides a deep model into intra-sub-network and inter-sub-network information. Sub-network files are stored in IPFS, while their records on the blockchain are designed as a chained structure based on their encrypted addresses. Connections between sub-networks are represented as attributes of their records. This method enhances model security and improves the storage and computational efficiency of the blockchain. The tamper-proof checking method is designed based on the chained structure of sub-network records and includes on-chain checking and IPFS-based checking stages. It efficiently and dynamically monitors model correctness. The permission management method restricts user permissions based on user role and expiration time, further reducing the risk of model attacks and controlling system efficiency. The automatic inference method is designed around looking up the preceding sub-network's encrypted address. It can distribute trusted off-chain computing resources to perform sub-network inference and uses IPFS to store model inputs and sub-network outputs, further alleviating the on-chain storage burden and computational load.
The solution places no restrictions on model architectures, division methods, or sub-network recording orders, making it broadly applicable. In experiments and analyses, we present a use case in intelligent transportation and analyze the security, applicability, and system efficiency of the proposed solution, focusing in particular on on-chain efficiency. The experimental results indicate that the proposed solution can balance security and system efficiency by controlling the number of sub-networks, and is thus a step towards sustainable model services for deep learning.
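The chained sub-network record structure and the on-chain checking stage described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the keyed-hash stand-in for address encryption, the record fields, and the `GENESIS` sentinel are all assumptions introduced here.

```python
import hashlib
from dataclasses import dataclass


def encrypt_addr(ipfs_addr: str, key: str) -> str:
    # Stand-in for address encryption (assumption): a keyed SHA-256 digest
    # of the sub-network file's IPFS address.
    return hashlib.sha256((key + ipfs_addr).encode()).hexdigest()


@dataclass
class SubNetRecord:
    enc_addr: str       # encrypted IPFS address of this sub-network file
    prev_enc_addr: str  # encrypted address of the preceding record (chain link)
    connections: list   # inter-sub-network connection info, kept as attributes


def build_chain(ipfs_addrs: list, key: str) -> list:
    """Record sub-networks as a chain: each record points to the
    encrypted address of its predecessor (GENESIS for the first)."""
    records, prev = [], "GENESIS"
    for addr in ipfs_addrs:
        enc = encrypt_addr(addr, key)
        records.append(SubNetRecord(enc, prev, []))
        prev = enc
    return records


def check_chain(records: list, ipfs_addrs: list, key: str) -> bool:
    """On-chain-style check: re-derive each encrypted address from the
    stored IPFS address and verify every chain link; any mismatch
    signals tampering with a record or with the recorded order."""
    prev = "GENESIS"
    for rec, addr in zip(records, ipfs_addrs):
        if rec.enc_addr != encrypt_addr(addr, key) or rec.prev_enc_addr != prev:
            return False
        prev = rec.enc_addr
    return True
```

Because each record's link is derived from its predecessor's encrypted address, altering any one record breaks the check at that record and at the link of the record that follows it, which is what makes the chained structure cheap to monitor dynamically.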
Corrections
When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:15:y:2023:i:21:p:15435-:d:1270565.
Provider: MDPI (https://www.mdpi.com).