Publication Date
6-2022
Conference/Sponsorship/Institution
Proceedings of the 19th International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2022)
Description
Neural networks are more expressive when they have multiple layers. In turn, conventional training methods are only successful if the depth does not lead to numerical issues such as exploding or vanishing gradients, which occur less frequently when the layers are sufficiently wide. However, increasing width to attain greater depth requires heavier computational resources and leads to overparameterized models. These issues have been partially addressed by model compression methods such as quantization and pruning, some of which rely on normalization-based regularization of the loss function to make the effect of most parameters negligible. In this work, we propose instead to use regularization to prevent neurons from dying or becoming linear, a technique we denote as jumpstart regularization. In comparison to conventional training, we obtain neural networks that are thinner, deeper, and - most importantly - more parameter-efficient.
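The description above only summarizes the approach; the paper itself defines the regularizer formally. As a rough, non-authoritative illustration, the sketch below shows one way a penalty discouraging dead and always-active (effectively linear) ReLU units could be written in PyTorch, using a hinge on the batch-wise maximum and minimum preactivations of each unit. The function name, the hinge form, and the weighting are assumptions made for illustration, not the authors' exact formulation.

import torch

def jumpstart_penalty(preactivations: torch.Tensor) -> torch.Tensor:
    # preactivations: shape (batch_size, num_units), the values fed into the
    # ReLU of one layer. A unit that is negative on every input in the batch
    # is dead; one that is positive on every input behaves linearly.
    # Hinge penalty for dead units: the largest preactivation should exceed zero.
    dead = torch.relu(-preactivations.max(dim=0).values)
    # Hinge penalty for linear units: the smallest preactivation should fall below zero.
    linear = torch.relu(preactivations.min(dim=0).values)
    return (dead + linear).mean()

# Usage sketch: add the penalty, scaled by a hypothetical coefficient lam,
# to the task loss over each hidden layer's preactivations:
#   loss = criterion(outputs, targets) + lam * sum(jumpstart_penalty(z) for z in hidden_preacts)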
Type
Conference Paper
Department
Analytics & Operations Management
Link to published version
https://link.springer.com/chapter/10.1007/978-3-031-08011-1_23
Recommended Citation
Riera, Carles; Rey, Camilo; Serra, Thiago; Puertas, Eloi; and Pujol, Oriol, "Training Thinner and Deeper Neural Networks: Jumpstart Regularization" (2022). Faculty Conference Papers and Presentations. 67.
https://digitalcommons.bucknell.edu/fac_conf/67