Search Results for author: Sourya Basu

Found 10 papers, 5 papers with code

G-RepsNet: A Fast and General Construction of Equivariant Networks for Arbitrary Matrix Groups

no code implementations • 23 Feb 2024 • Sourya Basu, Suhas Lohit, Matthew Brand

Recent work by Finzi et al. (2021) directly solves the equivariance constraint for arbitrary matrix groups to obtain equivariant MLPs (EMLPs).

Image Classification · Inductive Bias
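
For readers new to this line of work, the following minimal sketch shows what "directly solving the equivariance constraint" means for a discrete matrix group: stack the vectorized constraints rho_out(g) W = W rho_in(g) over the group generators and read off the nullspace. The function name equivariant_basis, the C4 rotation example, and the NumPy implementation are illustrative assumptions, not code from the paper or the EMLP library.

import numpy as np

# Hypothetical sketch (not the EMLP code): find all linear maps W with
# rho_out(g) @ W == W @ rho_in(g) for every group generator g, by stacking
# the vectorized constraints and taking the nullspace via an SVD.
def equivariant_basis(gens_in, gens_out, tol=1e-10):
    n_in, n_out = gens_in[0].shape[0], gens_out[0].shape[0]
    rows = []
    for g_in, g_out in zip(gens_in, gens_out):
        # vec(rho_out(g) W - W rho_in(g)) = (I ⊗ rho_out(g) - rho_in(g)^T ⊗ I) vec(W)
        rows.append(np.kron(np.eye(n_in), g_out) - np.kron(g_in.T, np.eye(n_out)))
    _, s, vt = np.linalg.svd(np.concatenate(rows, axis=0))
    return vt[s < tol]  # each row is a vectorized basis element of the solution space

# Example: the cyclic group C4 of 90-degree rotations acting on R^2 (same
# representation on input and output). Equivariant 2x2 maps form a 2-D space.
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
print(equivariant_basis([rot90], [rot90]).shape)  # -> (2, 4)

Constraining only the generators suffices because equivariance propagates to every product of generators; for continuous groups, Finzi et al. impose the analogous constraint on Lie-algebra generators.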

Efficient Model-Agnostic Multi-Group Equivariant Networks

no code implementations • 14 Oct 2023 • Razan Baltaji, Sourya Basu, Lav R. Varshney

Inspired by the first design, we use the notion of the IS property to design a second efficient model-agnostic equivariant design for large product groups acting on a single input.

Fairness · Image Classification · +2
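
As background for "model-agnostic equivariant design": a standard recipe in this area wraps an arbitrary model f into an equivariant one by averaging over the group, f_G(x) = (1/|G|) sum_g rho(g)^{-1} f(rho(g) x). The sketch below shows that generic symmetrization for cyclic shifts of a length-4 vector; the group, the toy base model, and the function names are illustrative assumptions, not the paper's IS-based construction (which aims to be more efficient than this |G|-fold averaging for large product groups).

import numpy as np

# Generic group symmetrization (illustrative, not the paper's design):
# average the model over all cyclic shifts to make it shift-equivariant.
def symmetrize(f, group_size=4):
    def f_equivariant(x):
        outs = [np.roll(f(np.roll(x, k)), -k) for k in range(group_size)]
        return np.mean(outs, axis=0)
    return f_equivariant

# Arbitrary (non-equivariant) base model.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
f = lambda x: np.tanh(W @ x)

f_eq = symmetrize(f)
x = rng.standard_normal(4)
# Equivariance check: shifting the input shifts the output the same way.
print(np.allclose(f_eq(np.roll(x, 1)), np.roll(f_eq(x), 1)))  # True

The check at the end holds because relabeling the sum over group elements moves the shift from the input to the output.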

Transformers are Universal Predictors

no code implementations • 15 Jul 2023 • Sourya Basu, Moulik Choraria, Lav R. Varshney

We find limits to the Transformer architecture for language modeling and show it has a universal prediction property in an information-theoretic sense.

Language Modelling
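
For readers unfamiliar with the term, "universal prediction" in the information-theoretic sense is usually formalized as vanishing per-symbol log-loss regret against a reference class of predictors. The display below states that standard textbook notion (with predictor q, reference class \mathcal{P}, and observed prefix x_1^{t-1}), not the paper's specific theorem.

\[
\frac{1}{n}\left( \sum_{t=1}^{n} -\log q\!\left(x_t \mid x_1^{t-1}\right)
  \;-\; \min_{p \in \mathcal{P}} \sum_{t=1}^{n} -\log p\!\left(x_t \mid x_1^{t-1}\right) \right)
\;\longrightarrow\; 0 \quad \text{as } n \to \infty .
\]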

Equivariant Mesh Attention Networks

1 code implementation • 21 May 2022 • Sourya Basu, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, Taco Cohen

Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research.

Inductive Bias

Autoequivariant Network Search via Group Decomposition

1 code implementation • 10 Apr 2021 • Sourya Basu, Akshayaa Magesh, Harshit Yadav, Lav R. Varshney

We address these problems by proving a new group-theoretic result in the context of equivariant neural networks that shows that a network is equivariant to a large group if and only if it is equivariant to smaller groups from which it is constructed.

Inductive Bias · Neural Architecture Search · +1
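
The composition direction of such a statement can be seen concretely: a layer that commutes with each generator of a group commutes with everything the generators generate. The toy check below uses a symmetric circulant weight matrix on R^4 with a cyclic shift and a reflection as generators; the setup is illustrative and not taken from the paper or its code.

import numpy as np

# Illustrative check: equivariance to each generator implies equivariance
# to their compositions (here, shift and reflection generating a dihedral group).
n = 4
S = np.roll(np.eye(n), 1, axis=0)   # cyclic shift permutation matrix
F = np.eye(n)[::-1]                 # reflection permutation matrix

# Symmetric circulant weights commute with both generators.
W = 1.0 * np.eye(n) + 0.5 * (S + S.T) - 0.25 * (S @ S)

for g, name in [(S, "shift"), (F, "flip"), (F @ S, "flip @ shift"), (S @ S @ F, "shift^2 @ flip")]:
    print(name, np.allclose(W @ g, g @ W))  # all True

The nontrivial content of the quoted result is the converse direction, which is what makes decomposing a large group into smaller factors useful for the search.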

Mirostat: A Neural Text Decoding Algorithm that Directly Controls Perplexity

2 code implementations • ICLR 2021 • Sourya Basu, Govardana Sachitanandam Ramachandran, Nitish Shirish Keskar, Lav R. Varshney

Experiments show that for low values of k and p in top-k and top-p sampling, perplexity drops significantly with generated text length, which is also correlated with excessive repetitions in the text (the boredom trap).

Language Modelling
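
For context on the quoted experiment, here is what plain top-k and top-p (nucleus) sampling look like; these are the baseline decoders whose small-k and small-p regimes the sentence above ties to repetitive, low-perplexity text, not Mirostat itself. The toy next-token distribution and function names are illustrative.

import numpy as np

# Standard top-k and top-p (nucleus) sampling over a next-token distribution.
def top_k_sample(probs, k, rng):
    idx = np.argsort(probs)[::-1][:k]            # k most probable tokens
    p = probs[idx] / probs[idx].sum()
    return rng.choice(idx, p=p)

def top_p_sample(probs, p, rng):
    order = np.argsort(probs)[::-1]
    cdf = np.cumsum(probs[order])
    cutoff = np.searchsorted(cdf, p) + 1         # smallest nucleus with mass >= p
    idx = order[:cutoff]
    q = probs[idx] / probs[idx].sum()
    return rng.choice(idx, p=q)

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])    # toy next-token distribution
print(top_k_sample(probs, k=2, rng=rng), top_p_sample(probs, p=0.8, rng=rng))

Mirostat instead adapts the truncation on the fly so that the observed surprise, and hence the perplexity of the generated text, stays near a target value.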

Succinct Source Coding of Deep Neural Networks

1 code implementation • NIPS Workshop CDNNRIA 2018 • Sourya Basu, Lav R. Varshney

Deep neural networks have shown incredible performance for inference tasks in a variety of domains.

Universal and Succinct Source Coding of Deep Neural Networks

1 code implementation • 9 Apr 2018 • Sourya Basu, Lav R. Varshney

Deep neural networks have shown incredible performance for inference tasks in a variety of domains.
