Search Results for author: Thiago Serra

Found 14 papers, 4 papers with code

Optimization Over Trained Neural Networks: Taking a Relaxing Walk

1 code implementation • 7 Jan 2024 • Jiatai Tong, Junyang Cai, Thiago Serra

Beyond its role in training, mathematical optimization is also used in deep learning to model and solve formulations over trained neural networks for purposes such as verification, compression, and optimization with learned constraints.
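
For context, this line of work typically embeds each ReLU neuron y = max(0, wᵀx + b) into the optimization model via the standard big-M formulation, assuming finite pre-activation bounds L ≤ wᵀx + b ≤ U with L < 0 < U (a textbook encoding, not necessarily the exact formulation used in this paper):

```latex
% Big-M encoding of y = max(0, w^T x + b), assuming L <= w^T x + b <= U with L < 0 < U
\begin{align}
y &\ge w^\top x + b, \\
y &\le w^\top x + b - L(1 - z), \\
y &\le U z, \\
y &\ge 0, \qquad z \in \{0, 1\}.
\end{align}
```

Here z = 1 forces y = wᵀx + b (active neuron) and z = 0 forces y = 0 (inactive neuron); relaxing z to 0 ≤ z ≤ 1 gives the LP relaxation that the "relaxing walk" of the title exploits.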

Computational Tradeoffs of Optimization-Based Bound Tightening in ReLU Networks

no code implementations • 27 Dec 2023 • Fabian Badilla, Marcos Goycoolea, Gonzalo Muñoz, Thiago Serra

The use of Mixed-Integer Linear Programming (MILP) models to represent neural networks with Rectified Linear Unit (ReLU) activations has become increasingly widespread in the last decade.
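
The bounds L and U that such MILP models require are what bound tightening computes. The cheapest baseline is interval arithmetic; optimization-based bound tightening (OBBT) instead solves an LP or MILP per neuron to tighten each bound. A minimal numpy sketch of the interval baseline (function and variable names are mine):

```python
import numpy as np

def interval_bounds(weights, biases, l, u):
    """Propagate a box [l, u] through affine + ReLU layers with
    interval arithmetic, returning pre-activation bounds per layer."""
    bounds = []
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        pre_l = W_pos @ l + W_neg @ u + b   # worst-case low
        pre_u = W_pos @ u + W_neg @ l + b   # worst-case high
        bounds.append((pre_l, pre_u))
        l, u = np.maximum(pre_l, 0.0), np.maximum(pre_u, 0.0)  # apply ReLU
    return bounds
```

OBBT replaces each (pre_l, pre_u) with the optimal value of minimizing/maximizing the pre-activation over a relaxation of the network, trading solver time for tightness; quantifying that tradeoff is the subject of the paper.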

When Deep Learning Meets Polyhedral Theory: A Survey

no code implementations • 29 Apr 2023 • Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay

In the past decade, deep learning has become the prevalent methodology for predictive modeling, thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing.

Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions

no code implementations • 19 Jan 2023 • Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra

One surprising trait of neural networks is the extent to which their connections can be pruned with little to no effect on accuracy.

Network Pruning
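
For reference, the most common baseline in this setting is magnitude pruning, sketched below in numpy (the paper's geometry-aware criteria are not this baseline; names are mine):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    flat_idx = np.argsort(np.abs(W), axis=None)[:k]  # k smallest magnitudes
    pruned = W.copy().ravel()
    pruned[flat_idx] = 0.0
    return pruned.reshape(W.shape)
```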

Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm

no code implementations • 7 Jun 2022 • Aidan Good, Jiaqi Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, Jerzy Wieczorek, Thiago Serra

In this work, we study such relative distortions in recall by hypothesizing an intensification effect that is inherent to the model.

Network Pruning
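
Quantifying the distortion only requires per-class recall before and after pruning; a minimal sketch with scikit-learn, using placeholder predictions in place of real model outputs:

```python
import numpy as np
from sklearn.metrics import recall_score

# Placeholder predictions standing in for a dense and a pruned model.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_dense = np.array([0, 0, 1, 1, 2, 2])    # dense model: perfect here
y_pruned = np.array([0, 0, 1, 0, 2, 1])   # pruned model: loses some recall

recall_dense = recall_score(y_true, y_dense, average=None)    # per-class recall
recall_pruned = recall_score(y_true, y_pruned, average=None)
distortion = recall_pruned - recall_dense  # negative where pruning hurts a class
print(distortion)
```

Uneven (rather than uniformly negative) entries in distortion are the relative effect the paper studies.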

Optimal Decision Diagrams for Classification

no code implementations • 28 May 2022 • Alexandre M. Florio, Pedro Martins, Maximilian Schiffer, Thiago Serra, Thibaut Vidal

Decision diagrams for classification have some notable advantages over decision trees, as their internal connections can be determined at training time and their width is not bound to grow exponentially with their depth.

Classification • Fairness

The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks

1 code implementation • 9 Mar 2022 • Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe

We propose a tractable heuristic for solving the combinatorial extension of OBS, in which we select weights for simultaneous removal, as well as a systematic update of the remaining weights.
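
The local model underneath this is the classical Optimal Brain Surgeon (OBS) quadratic: with H the Hessian of the loss at the trained weights w, pruning a set S of k weights together with an update δw of the survivors can be written as (a standard statement of the problem; notation mine):

```latex
\min_{S,\,\delta w} \; \tfrac{1}{2}\, \delta w^\top H \, \delta w
\quad \text{s.t.} \quad w_q + \delta w_q = 0 \;\; \forall q \in S,
\qquad |S| = k .
```

OBS solves the |S| = 1 case in closed form; selecting many weights jointly is what lets the heuristic exploit weights that cancel one another.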

Scaling Up Exact Neural Network Compression by ReLU Stability

1 code implementation • NeurIPS 2021 • Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam

We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable.

Neural Network Compression
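
A hedged numpy sketch of the core folding step for one hidden layer, assuming valid pre-activation bounds pre_l and pre_u on the input domain (function and variable names are mine): stably inactive units are dropped, and stably active units collapse into an affine term.

```python
import numpy as np

def fold_stable_layer(W1, b1, W2, b2, pre_l, pre_u):
    """Compress x -> W2 @ relu(W1 @ x + b1) + b2 exactly on the input box.
    Returns the weights of an equivalent net plus a direct affine skip term."""
    inactive = pre_u <= 0            # ReLU output provably always 0: drop unit
    active = pre_l >= 0              # ReLU provably the identity: fold away
    keep = ~(inactive | active)
    W_skip = W2[:, active] @ W1[active]          # affine path from the input
    b_new = b2 + W2[:, active] @ b1[active]
    return W1[keep], b1[keep], W2[:, keep], b_new, W_skip
```

The compressed map W2' relu(W1' x + b1') + W_skip x + b_new agrees with the original network on every input satisfying the bounds, which is what makes the compression exact.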

Lossless Compression of Deep Neural Networks

no code implementations • 1 Jan 2020 • Thiago Serra, Abhinav Kumar, Srikumar Ramalingam

Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition, where large neural networks are often used to obtain good accuracy.

Equivalent and Approximate Transformations of Deep Neural Networks

no code implementations • 27 May 2019 • Abhinav Kumar, Thiago Serra, Srikumar Ramalingam

On the practical side, we show that certain rectified linear units (ReLUs) can be safely removed from a network if they are always active or inactive for any valid input.
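
The stability test behind this statement can be phrased over the set X of valid inputs (a standard formulation, not a quotation from the paper):

```latex
\min_{x \in X} \, w^\top x + b \;\ge\; 0
\;\Rightarrow\; \mathrm{ReLU}(w^\top x + b) \equiv w^\top x + b,
\qquad
\max_{x \in X} \, w^\top x + b \;\le\; 0
\;\Rightarrow\; \mathrm{ReLU}(w^\top x + b) \equiv 0 .
```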

Empirical Bounds on Linear Regions of Deep Rectifier Networks

no code implementations • ICLR 2019 • Thiago Serra, Srikumar Ramalingam

Our first contribution is a method to sample the activation patterns defined by ReLUs using universal hash functions.
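
For intuition, even plain Monte Carlo sampling of activation patterns yields a lower bound on the number of linear regions; the universal hash functions refine this into a principled estimator. A minimal numpy sketch of the sampling part (architecture and names are placeholders):

```python
import numpy as np

def count_patterns(weights, biases, input_dim, n_samples, seed=0):
    """Lower-bound the number of linear regions of a ReLU net by counting
    distinct activation patterns over uniformly sampled inputs."""
    rng = np.random.default_rng(seed)
    patterns = set()
    for _ in range(n_samples):
        h = rng.uniform(-1.0, 1.0, size=input_dim)
        bits = []
        for W, b in zip(weights, biases):
            z = W @ h + b
            bits.append((z > 0).tobytes())   # activation pattern at this layer
            h = np.maximum(z, 0.0)
        patterns.add(b"".join(bits))
    return len(patterns)
```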

How Could Polyhedral Theory Harness Deep Learning?

no code implementations • 17 Jun 2018 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam

The holy grail of deep learning is to come up with an automatic method to design optimal architectures for different applications.

Bounding and Counting Linear Regions of Deep Neural Networks

no code implementations • 6 Nov 2017 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam

We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions.
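
As a reference point for such counts: a single hidden layer of n ReLUs over inputs in R^{n₀} cuts the input space with n hyperplanes, so the classical arrangement bound applies (Zaslavsky's theorem, standard background for this line of work):

```latex
\#\,\text{linear regions} \;\le\; \sum_{j=0}^{n_0} \binom{n}{j},
```

with equality when the hyperplanes are in general position; the paper concerns bounds of this kind for deeper networks.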
