Title

Lossless Compression of Deep Neural Networks

Publication Date

Summer 2020

Conference/Sponsorship/Institution

Proceedings of the 17th International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2020)

Description

Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition, where large neural networks are often used to obtain good accuracy. Consequently, it is challenging to deploy these networks under limited computational resources, such as in mobile devices. In this work, we introduce an algorithm that removes units and layers of a neural network while leaving the produced output unchanged, so the compression is lossless. This algorithm, which we denote as LEO (Lossless Expressiveness Optimization), relies on Mixed-Integer Linear Programming (MILP) to identify Rectified Linear Units (ReLUs) with linear behavior over the input domain. By using L1 regularization to induce such behavior, we can benefit from training a larger architecture than the one later used in the environment where the trained neural network is deployed.
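To illustrate the idea behind this kind of lossless pruning, the sketch below identifies ReLUs that provably behave linearly over a box-shaped input domain and folds them away without changing the network's output. Note that this is not the paper's method: LEO proves such behavior with MILP, whereas this sketch substitutes simple interval bound propagation (a sound but weaker test) purely for illustration, and all function names here are the sketch's own.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def layer_bounds(W, b, lo, hi):
    """Interval bounds on the pre-activation W @ x + b for x in [lo, hi]."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    lower = W_pos @ lo + W_neg @ hi + b
    upper = W_pos @ hi + W_neg @ lo + b
    return lower, upper

def fold_stable_units(W1, b1, W2, lo, hi):
    """Remove hidden units whose ReLU is provably linear on the input box:
    - always-inactive units (upper bound <= 0) output 0 and are dropped;
    - always-active units (lower bound >= 0) act as the identity, so their
      contribution is merged into a direct linear term W_lin @ x + b_lin."""
    lower, upper = layer_bounds(W1, b1, lo, hi)
    inactive = upper <= 0
    active = lower >= 0
    keep = ~(inactive | active)  # only genuinely nonlinear units stay as ReLUs
    W_lin = W2[:, active] @ W1[active]  # linear bypass through active units
    b_lin = W2[:, active] @ b1[active]
    return W1[keep], b1[keep], W2[:, keep], W_lin, b_lin

def compressed_forward(x, W1k, b1k, W2k, W_lin, b_lin, b2):
    """Forward pass of the pruned network plus the folded linear term."""
    return W2k @ relu(W1k @ x + b1k) + W_lin @ x + b_lin + b2
```

On a two-layer network whose hidden units include one always-active and one always-inactive unit over the box, the compressed forward pass reproduces the original output exactly while keeping only the unstable ReLUs.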

Type

Conference Paper

Department

Analytics & Operations Management

Publisher Statement

This is a post-peer-review, pre-copyedit version of an article published in CPAIOR 2020: Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 417–430. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-58942-4_27
