Search Results for author: Amir Joudaki

Found 5 papers, 4 papers with code

Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion

1 code implementation • 3 Oct 2023 • Alexandru Meterez, Amir Joudaki, Francesco Orabona, Alexander Immer, Gunnar Rätsch, Hadi Daneshmand

We answer this question in the affirmative by giving a particular construction of a Multi-Layer Perceptron (MLP) with linear activations and batch normalization that provably has bounded gradients at any depth.
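A minimal PyTorch sketch of the setting described above (not the paper's exact construction): a deep MLP with linear activations interleaved with batch normalization, probing how the norm of the gradient with respect to the input behaves as depth grows. The width, depth, and batch values are illustrative choices.

```python
import torch
import torch.nn as nn

def make_bn_mlp(width: int, depth: int) -> nn.Sequential:
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width, bias=False),  # linear activation
                   nn.BatchNorm1d(width)]                # batch normalization
    return nn.Sequential(*layers)

torch.manual_seed(0)
width, batch = 64, 128
for depth in (4, 16, 64):
    net = make_bn_mlp(width, depth)
    x = torch.randn(batch, width, requires_grad=True)
    net(x).pow(2).sum().backward()  # gradient of a scalar loss w.r.t. the input
    print(f"depth={depth:3d}  input-gradient norm = {x.grad.norm():.3e}")
```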

On the impact of activation and normalization in obtaining isometric embeddings at initialization

1 code implementation • NeurIPS 2023 • Amir Joudaki, Hadi Daneshmand, Francis Bach

In this paper, we explore the structure of the penultimate Gram matrix in deep neural networks, which contains the pairwise inner products of outputs corresponding to a batch of inputs.
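A minimal numpy sketch of the central object: the Gram matrix of the penultimate layer, holding all pairwise inner products of the representations of a batch of inputs. The network here (a tanh MLP at random initialization) is an illustrative stand-in, not the paper's specific setup.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, width, depth = 8, 64, 20

h = rng.standard_normal((batch, width))  # a batch of inputs
for _ in range(depth - 1):               # propagate up to the penultimate layer
    W = rng.standard_normal((width, width)) / np.sqrt(width)
    h = np.tanh(h @ W)

gram = h @ h.T / width  # (batch, batch) matrix of pairwise inner products
print(np.round(gram, 3))  # closeness to the identity signals an isometric embedding
```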

Batch Normalization Orthogonalizes Representations in Deep Random Networks

1 code implementation • NeurIPS 2021 • Hadi Daneshmand, Amir Joudaki, Francis Bach

This paper underlines a subtle property of batch-normalization (BN): Successive batch normalizations with random linear transformations make hidden representations increasingly orthogonal across layers of a deep neural network.
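A minimal numpy sketch of the stated effect, under illustrative choices of width, depth, and initialization: alternating random linear maps with batch normalization, starting from a nearly rank-one batch, and tracking how orthogonal the representations become across layers via the off-diagonal mass of the cosine Gram matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, width = 8, 256

def batch_norm(h, eps=1e-5):
    # per-feature standardization over the batch, as BN acts at initialization
    return (h - h.mean(0)) / np.sqrt(h.var(0) + eps)

# start from a nearly rank-one batch: all samples close to one direction
h = np.ones((batch, width)) + 0.01 * rng.standard_normal((batch, width))

for layer in range(1, 31):
    W = rng.standard_normal((width, width)) / np.sqrt(width)
    h = batch_norm(h @ W)  # random linear transformation, then BN
    if layer % 10 == 0:
        hn = h / np.linalg.norm(h, axis=1, keepdims=True)
        off = np.abs(hn @ hn.T - np.eye(batch)).max()
        print(f"layer {layer:2d}: max |off-diagonal cosine| = {off:.3f}")
```

The printed off-diagonal cosines shrink with depth, which is the increasing orthogonality the abstract describes.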
