Search Results for author: Amitabh Basu

Found 10 papers, 2 papers with code

Understanding Deep Neural Networks with Rectified Linear Units

no code implementations ICLR 2018 Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU).
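A well-known fact in this line of work is that ReLU networks compute continuous piecewise linear functions. As a minimal illustration (not taken from the paper), the function max(x, y) is representable because max(x, y) = x + ReLU(y − x), which is a one-hidden-unit ReLU network:

```python
def relu(z):
    # rectified linear unit: max(0, z)
    return max(0.0, z)

def relu_max(x, y):
    # max(x, y) = x + ReLU(y - x): linear pre-activation (y - x),
    # one ReLU, then the affine output layer x + h
    return x + relu(y - x)

for x, y in [(3.0, 5.0), (-1.0, -4.0), (2.0, 2.0)]:
    assert relu_max(x, y) == max(x, y)
```

Composing such gadgets yields maxima of more inputs at greater depth, which is the kind of representability question the paper studies.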

Lower bounds over Boolean inputs for deep neural networks with ReLU gates

no code implementations 8 Nov 2017 Anirbit Mukherjee, Amitabh Basu

We use the method of sign-rank to show lower bounds, exponential in the dimension, for ReLU circuits ending in an LTF gate, of depth up to $O(n^{\xi})$ with $\xi < \frac{1}{8}$, under some restrictions on the weights in the bottommost layer.

Enumerating integer points in polytopes with bounded subdeterminants

no code implementations 19 Feb 2021 Amitabh Basu, Hongyi Jiang

We show that one can enumerate the vertices of the convex hull of integer points in polytopes whose constraint matrices have bounded and nonzero subdeterminants, in time polynomial in the dimension and encoding size of the polytope.
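The paper's contribution is an enumeration algorithm that runs in time polynomial in the dimension under the bounded-subdeterminant assumption. For orientation only, here is a naive brute-force sketch of listing integer points of a polytope {x : Ax ≤ b} over a given integer bounding box (this is exponential in the dimension and is not the paper's method):

```python
from itertools import product

def integer_points(A, b, box):
    """Brute-force enumeration of integer points x with A @ x <= b,
    scanning the integer box given as [(lo, hi), ...] per coordinate.
    Naive illustration only; the paper's algorithm is far more refined."""
    pts = []
    for x in product(*(range(lo, hi + 1) for lo, hi in box)):
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_j
               for row, b_j in zip(A, b)):
            pts.append(x)
    return pts

# Triangle {x >= 0, y >= 0, x + y <= 2}
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 2]
print(integer_points(A, b, [(0, 2), (0, 2)]))
# → [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]
```

The paper concerns the harder task of enumerating only the vertices of the convex hull of these points, in polynomial time when subdeterminants are bounded and nonzero.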

Subjects: Combinatorics; Optimization and Control. MSC classes: 90C10, 90C57, 90C60

Towards Lower Bounds on the Depth of ReLU Neural Networks

1 code implementation NeurIPS 2021 Christoph Hertrich, Amitabh Basu, Marco Di Summa, Martin Skutella

We contribute to a better understanding of the class of functions that can be represented by a neural network with ReLU activations and a given architecture.

Neural networks with linear threshold activations: structure and algorithms

no code implementations 15 Nov 2021 Sammy Khalife, Hongyu Cheng, Amitabh Basu

We precisely characterize the class of functions representable by such neural networks and show that two hidden layers are necessary and sufficient to represent any function in this class.
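A linear threshold (LTF) unit outputs 1 if a linear form of its inputs meets a threshold and 0 otherwise, so such networks compute piecewise constant functions. As an illustrative (hypothetical) example of a two-hidden-layer construction in this spirit, the indicator of the box [0, 1] × [0, 1]:

```python
def ltf(z, t=0.0):
    # linear threshold unit: 1 if z >= t, else 0
    return 1 if z >= t else 0

def in_box(x, y):
    """Two-hidden-layer linear-threshold network computing the
    indicator of the box [0, 1] x [0, 1] (illustrative only)."""
    # hidden layer 1: four halfspace tests
    h = [ltf(x),            # x >= 0
         ltf(-x, -1.0),     # x <= 1
         ltf(y),            # y >= 0
         ltf(-y, -1.0)]     # y <= 1
    # hidden layer 2: AND of the four tests via a threshold on their sum
    return ltf(sum(h), 4.0)

assert in_box(0.5, 0.5) == 1
assert in_box(1.5, 0.5) == 0
```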

On the power of graph neural networks and the role of the activation function

no code implementations 10 Jul 2023 Sammy Khalife, Amitabh Basu

In contrast, it was already known that unbounded GNNs (those whose size is allowed to grow with the graph size) with piecewise polynomial activations can distinguish these vertices in only two iterations.

Data-driven algorithm design using neural networks with applications to branch-and-cut

no code implementations 4 Feb 2024 Hongyu Cheng, Sammy Khalife, Barbara Fiedorowicz, Amitabh Basu

We build on recent work in this line of research by introducing the idea that, instead of selecting a single algorithm with the best overall performance, one may select an algorithm based on the instance to be solved.
