Combining a Regularization Method and the Optimal Brain Damage Method for Reducing a Deep Learning Model Size

Document Type : Research Paper


Computer Science, Hakim Sabzevari University


One of the challenges of convolutional neural networks (CNNs), the main tool of deep learning, is the large size of some of the resulting models. CNNs, inspired by the brain, have millions of connections. Model size is reduced by removing (pruning) the model's redundant connections. Optimal Brain Damage (OBD) and sparse regularization are among the best-known methods in this field. In this study, a deep learning model was trained and the effect of pruning its connections with the aforementioned methods on its performance was investigated. In the proposed approach, the redundant connections were pruned by combining the OBD and regularization methods. The result is a smaller model with a lower memory footprint and computational load than the original, while its performance is no worse than the original model's. The experimental results show that the hybrid approach can be more efficient than either method alone on most of the tested datasets. On one dataset, the proposed method reduced the number of connections by 76% without sacrificing model accuracy, and this reduction in model size decreased processing time by 66%. The smaller the model, the more likely it is to be usable on weaker hardware, which is found everywhere, and in web applications.
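The two ingredients of the hybrid approach can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the L1 penalty as the sparse regularizer, and the diagonal-Hessian input to the OBD saliency are all assumptions made for the example.

```python
import numpy as np

def l1_regularized_loss(weights, data_loss, lam=1e-3):
    # Sparse (L1) regularization: adding lam * sum(|w_i|) to the training
    # loss pushes redundant weights toward zero during training.
    return data_loss + lam * np.sum(np.abs(weights))

def obd_saliency(weights, hessian_diag):
    # OBD estimates the increase in loss from deleting connection i as
    # s_i = 0.5 * h_ii * w_i^2, using only the diagonal of the Hessian.
    return 0.5 * hessian_diag * weights ** 2

def prune_by_saliency(weights, hessian_diag, prune_fraction):
    # Zero out the given fraction of connections with the lowest saliency.
    saliency = obd_saliency(weights, hessian_diag)
    k = int(len(weights) * prune_fraction)
    lowest = np.argsort(saliency)[:k]
    pruned = weights.copy()
    pruned[lowest] = 0.0
    return pruned

# Toy usage: prune half of four weights (assuming a unit Hessian diagonal).
w = np.array([0.1, -2.0, 0.05, 1.5])
h = np.ones_like(w)
print(prune_by_saliency(w, h, prune_fraction=0.5))  # small weights zeroed
```

In the combined scheme, a model would first be trained with the regularized loss so that unimportant weights shrink, and the OBD saliency would then decide which of them to prune, which is why the hybrid can remove more connections than either method alone.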