Authors:
- Ambrogio Maria Bernardelli
(Department of Mathematics, University of Pavia, 27100 Pavia, Italy)
- Stefano Gualandi
(Department of Mathematics, University of Pavia, 27100 Pavia, Italy)
- Simone Milanesi
(Department of Mathematics, University of Pavia, 27100 Pavia, Italy)
- Hoong Chuin Lau
(School of Computing and Information Systems, Singapore Management University, Singapore 178902, Singapore)
- Neil Yorke-Smith
(Socio-Technical Algorithmic Research (STAR) Laboratory, Delft University of Technology, 2600 GA Delft, Netherlands)
Abstract
Training neural networks (NNs) using combinatorial optimization solvers has gained attention in recent years. In low-data settings, the use of state-of-the-art mixed-integer linear programming solvers, for instance, has the potential to train an NN exactly while avoiding compute-intensive training and hyperparameter tuning, and simultaneously training and sparsifying the network. We study the case of few-bit discrete-valued neural networks, both binarized neural networks (BNNs), whose values are restricted to ±1, and integer-valued neural networks (INNs), whose values lie in the range {−P, …, P}. Few-bit NNs are receiving increasing recognition because of their lightweight architecture and their ability to run on low-power devices, for example, by being implemented using Boolean operations. This paper proposes new methods to improve the training of BNNs and INNs. Our contribution is a multiobjective ensemble approach based on training a single NN for each possible pair of classes and applying a majority voting scheme to predict the final output. Our approach results in the training of robust sparsified networks whose output is not affected by small perturbations of the input and whose number of active weights is as small as possible. We empirically compare this BeMi approach with the current state of the art in solver-based NN training and with traditional gradient-based training, focusing on BNN learning in few-shot contexts. We compare the benefits and drawbacks of INNs versus BNNs, shedding new light on the distribution of weights over the {−P, …, P} interval. Finally, we compare multiobjective versus single-objective training of INNs, showing that robustness and network simplicity can be achieved simultaneously, thus obtaining better test performance.
Although the previous state-of-the-art approaches achieve an average accuracy of 51.1% on the Modified National Institute of Standards and Technology (MNIST) data set, the BeMi ensemble approach achieves an average accuracy of 68.4% when trained with 10 images per class and 81.8% when trained with 40 images per class, while having up to 75.3% of NN links removed.
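The pairwise ensemble scheme described in the abstract (one classifier per class pair, so L(L−1)/2 models for L classes, combined by majority vote) can be sketched as follows. This is a minimal illustration of one-vs-one majority voting only: the `majority_vote` function, the toy threshold classifiers, and the tie-breaking rule are hypothetical stand-ins, not the paper's actual MILP-trained networks or its exact voting procedure.

```python
from collections import Counter

def majority_vote(pairwise_models, x, classes):
    """Predict a label for input x via one-vs-one majority voting.

    pairwise_models maps each unordered class pair (i, j) to a binary
    classifier that returns either i or j for a given input.
    """
    votes = Counter(model(x) for model in pairwise_models.values())
    # The class that wins the most pairwise "duels" is the prediction;
    # ties are broken here in favor of the smallest class label
    # (an arbitrary choice for this sketch).
    return max(classes, key=lambda c: (votes[c], -c))

# Toy illustration with 3 classes and hand-coded stand-ins for the
# trained pairwise networks (thresholds are arbitrary):
classes = [0, 1, 2]
models = {
    (0, 1): lambda x: 0 if x < 5 else 1,
    (0, 2): lambda x: 0 if x < 5 else 2,
    (1, 2): lambda x: 1 if x < 8 else 2,
}
print(majority_vote(models, 3, classes))  # → 0 (wins both of its duels)
print(majority_vote(models, 9, classes))  # → 2 (wins both of its duels)
```

For L = 10 classes, as in MNIST, this scheme trains 45 small pairwise networks rather than one monolithic 10-class network, which is what makes each individual training problem tractable for an exact solver.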
Suggested Citation
Ambrogio Maria Bernardelli & Stefano Gualandi & Simone Milanesi & Hoong Chuin Lau & Neil Yorke-Smith, 2025.
"Multiobjective Linear Ensembles for Robust and Sparse Training of Few-Bit Neural Networks,"
INFORMS Journal on Computing, INFORMS, vol. 37(3), pages 623-643, May.
Handle:
RePEc:inm:orijoc:v:37:y:2025:i:3:p:623-643
DOI: 10.1287/ijoc.2023.0281