no code implementations • 14 Jan 2025 • Kusha Sareen, Daniel Levy, Arnab Kumar Mondal, Sékou-Oumar Kaba, Tara Akhound-Sadegh, Siamak Ravanbakhsh
Generative modeling of symmetric densities has a range of applications in AI for science, from drug discovery to physics simulations.
1 code implementation • 17 Jul 2024 • Ayush Kaushal, Tejas Vaidhya, Arnab Kumar Mondal, Tejas Pandey, Aaryan Bhagat, Irina Rish
Rapid advancements in GPU computational power have outpaced memory capacity and bandwidth growth, creating bottlenecks in Large Language Model (LLM) inference.
no code implementations • 3 Jul 2024 • Sanket Gandhi, Atul, Samanyu Mahajan, Vishal Sharma, Rushil Gupta, Arnab Kumar Mondal, Parag Singla
Recent work has shown that object-centric representations can greatly improve the accuracy of learning dynamics while also offering interpretability.
no code implementations • 23 May 2024 • Siba Smarak Panigrahi, Arnab Kumar Mondal
This work introduces a novel approach to achieving architecture-agnostic equivariance in deep learning, addressing both the limitations of traditional layerwise equivariant architectures and the inefficiencies of existing architecture-agnostic methods.
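As background, one generic way to obtain symmetry properties without constraining the backbone is symmetrization over a finite group; the sketch below, with an assumed C4 rotation group and a placeholder backbone, is an illustrative baseline rather than the method proposed in this work.

```python
import torch
import torch.nn as nn

class C4InvariantWrapper(nn.Module):
    """Make any image classifier invariant to 90-degree rotations
    by averaging its predictions over the C4 group (symmetrization).
    Illustrative baseline only, not this paper's method."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any unconstrained network

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rotate the input by 0, 90, 180 and 270 degrees and average the outputs.
        outputs = [self.backbone(torch.rot90(x, k, dims=(-2, -1))) for k in range(4)]
        return torch.stack(outputs, dim=0).mean(dim=0)

# Usage: wrap any CNN; predictions become exactly invariant to 90-degree rotations.
# model = C4InvariantWrapper(torchvision.models.resnet18(num_classes=10))
```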
no code implementations • CVPR 2024 • Arnab Kumar Mondal, Stefano Alletto, Denis Tome
Understanding human motion from video is essential for a range of applications, including pose estimation, mesh recovery and action recognition.
no code implementations • 20 Jun 2023 • Arnab Kumar Mondal, Siba Smarak Panigrahi, Sai Rajeswar, Kaleem Siddiqi, Siamak Ravanbakhsh
We approach this problem from the lens of Koopman theory, where the nonlinear dynamics of the environment can be linearized in a high-dimensional latent space.
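A minimal sketch of the Koopman view, with assumed dimensions and module names that are not taken from the paper: a nonlinear encoder lifts observations into a latent space where a single linear operator advances the dynamics.

```python
import torch
import torch.nn as nn

class KoopmanDynamics(nn.Module):
    """Toy Koopman-style model: nonlinear encoder into a latent space,
    linear evolution in that space, decoder back to observations.
    Shapes and training details are illustrative assumptions."""

    def __init__(self, obs_dim: int = 8, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # linear Koopman operator

    def forward(self, s_t: torch.Tensor) -> torch.Tensor:
        z_t = self.encoder(s_t)      # lift to latent space
        z_next = self.K(z_t)         # linear dynamics in latent space
        return self.decoder(z_next)  # predicted next observation

# Training would minimize e.g. the MSE between forward(s_t) and s_{t+1},
# plus a latent consistency term ||K(encoder(s_t)) - encoder(s_{t+1})||^2.
```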
no code implementations • 23 May 2023 • Harman Singh, Poorva Garg, Mohit Gupta, Kevin Shah, Ashish Goswami, Satyam Modi, Arnab Kumar Mondal, Dinesh Khandelwal, Dinesh Garg, Parag Singla
We are interested in image manipulation via natural language text -- a task that is useful for multiple AI applications but requires complex reasoning over multi-modal spaces.
no code implementations • 11 Nov 2022 • Sékou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, Siamak Ravanbakhsh
Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations.
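For reference, a minimal example of such an architecturally constrained layer, a DeepSets-style permutation-equivariant layer, shown purely as a generic illustration rather than this paper's construction:

```python
import torch
import torch.nn as nn

class PermEquivariantLayer(nn.Module):
    """Classic constrained design: equivariant to permutations of set
    elements by combining an element-wise map with a set-mean map."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.gamma = nn.Linear(dim_in, dim_out)  # acts on each element independently
        self.lam = nn.Linear(dim_in, dim_out)    # acts on the set mean

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_elements, dim_in); permuting elements permutes the output rows.
        return self.gamma(x) + self.lam(x.mean(dim=1, keepdim=True))
```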
no code implementations • 19 Feb 2022 • Mehran Shakerinava, Arnab Kumar Mondal, Siamak Ravanbakhsh
We present a simple non-generative approach to deep representation learning that seeks equivariant deep embeddings through simple objectives.
no code implementations • 11 Feb 2022 • Arna Ghosh, Arnab Kumar Mondal, Kumar Krishna Agrawal, Blake Richards
Access to task-relevant labels at scale is often scarce or expensive, motivating the need to learn from unlabelled datasets with self-supervised learning (SSL).
no code implementations • 29 Sep 2021 • Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, Siamak Ravanbakhsh
We study different notions of equivariance as an inductive bias in Reinforcement Learning (RL) and propose new mechanisms for recovering representations that are equivariant both to an agent’s actions and to symmetry transformations of the state-action pairs.
1 code implementation • 16 Jul 2021 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP
The basic idea in RAEs is to learn a non-linear mapping from the high-dimensional data space to a low-dimensional latent space and vice versa, while simultaneously imposing a distributional prior on the latent space, which introduces a regularization effect.
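A minimal sketch of this idea, assuming a deterministic encoder-decoder pair and an MMD penalty toward a standard normal prior; the layer sizes and the choice of regularizer are illustrative assumptions, not this paper's exact formulation.

```python
import torch
import torch.nn as nn

def rbf_mmd(z: torch.Tensor, z_prior: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Simple RBF-kernel MMD between encoded latents and samples from the prior."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(z, z).mean() + k(z_prior, z_prior).mean() - 2 * k(z, z_prior).mean()

class RegularizedAE(nn.Module):
    """Deterministic encoder/decoder with a distributional penalty on the latents."""

    def __init__(self, x_dim: int = 784, z_dim: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def loss(self, x: torch.Tensor, lam: float = 10.0) -> torch.Tensor:
        z = self.enc(x)
        recon = (self.dec(z) - x).pow(2).mean()      # reconstruction term
        reg = rbf_mmd(z, torch.randn_like(z))        # match latents to a N(0, I) prior
        return recon + lam * reg
```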
no code implementations • 22 Apr 2021 • Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi
Current deep learning models for classification tasks in computer vision are trained using mini-batches.
1 code implementation • 21 Aug 2020 • Arnab Kumar Mondal, Prathosh A. P
The respiration pattern is first extracted from video of the speaker's abdominal-thoracic region using an optical flow-based method.
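A rough sketch of such an extraction step, assuming OpenCV's Farnebäck dense optical flow and a hand-picked crop of the abdominal-thoracic region; the ROI coordinates and the aggregation into a 1-D signal are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np

def respiration_signal(video_path: str, roi=(100, 300, 200, 400)) -> np.ndarray:
    """1-D respiration proxy: mean vertical optical flow inside a
    (hypothetical) abdominal-thoracic crop, one value per frame pair."""
    y0, y1, x0, x1 = roi
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        signal.append(float(flow[..., 1].mean()))  # vertical component ~ chest motion
        prev_gray = gray
    cap.release()
    return np.asarray(signal)
```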
1 code implementation • 1 Jul 2020 • Arnab Kumar Mondal, Pratheeksha Nair, Kaleem Siddiqi
In Reinforcement Learning (RL), Convolutional Neural Networks (CNNs) have been successfully applied as function approximators in Deep Q-Learning algorithms, which seek to learn action-value functions and policies in various environments.
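For context, a typical CNN Q-function approximator of the kind referred to here (DQN-style); the input resolution and layer sizes are standard choices, not details taken from this paper.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """CNN function approximator for Q-learning: maps a stack of frames
    to one action-value per discrete action (DQN-style; sizes assumed)."""

    def __init__(self, in_channels: int = 4, n_actions: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                                  nn.Linear(512, n_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 4, 84, 84) stacked grayscale frames -> (batch, n_actions) Q-values
        return self.head(self.features(x))
```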
no code implementations • 10 Jun 2020 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP
Specifically, we consider the class of RAEs with deterministic encoder-decoder pairs, Wasserstein Auto-Encoders (WAE), and show that having a fixed prior distribution, a priori, oblivious to the dimensionality of the 'true' latent space, will lead to the infeasibility of the optimization problem considered.
no code implementations • 17 May 2020 • Arnab Kumar Mondal, Arnab Bhattacharya, Sudipto Mukherjee, Prathosh AP, Sreeram Kannan, Himanshu Asnani
Estimation of information theoretic quantities such as mutual information and its conditional variant has drawn interest in recent times owing to their multifaceted applications.
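For context, a common neural approach to estimating mutual information uses the Donsker-Varadhan lower bound with a learned critic; the sketch below is a generic MINE-style illustration and not necessarily the estimator studied in this paper.

```python
import math
import torch
import torch.nn as nn

class MICritic(nn.Module):
    """Critic T(x, y) used inside a neural mutual-information lower bound."""

    def __init__(self, x_dim: int, y_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def dv_lower_bound(critic: MICritic, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan bound: I(X;Y) >= E_joint[T] - log E_marginals[exp(T)].
    Product-of-marginals samples are approximated by shuffling y within the batch."""
    joint = critic(x, y).mean()
    y_shuffled = y[torch.randperm(y.shape[0])]
    marginal = torch.logsumexp(critic(x, y_shuffled), dim=0) - math.log(y.shape[0])
    return joint - marginal
```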
no code implementations • 10 Dec 2019 • Arnab Kumar Mondal, Sankalan Pal Chowdhury, Aravind Jayendran, Parag Singla, Himanshu Asnani, Prathosh AP
The field of neural generative models is dominated by the highly successful Generative Adversarial Networks (GANs) despite their challenges, such as training instability and mode collapse.
1 code implementation • 30 Aug 2019 • Arnab Kumar Mondal, Aniket Agarwal, Jose Dolz, Christian Desrosiers
In this work, we study the problem of training deep networks for semantic image segmentation using only a fraction of annotated images, which may significantly reduce human annotation efforts.
1 code implementation • 29 Oct 2018 • Arnab Kumar Mondal, Jose Dolz, Christian Desrosiers
In addition, our work presents a comprehensive analysis of different GAN architectures for semi-supervised segmentation, showing that recent techniques like feature matching yield higher performance than conventional adversarial training approaches.
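For reference, the feature-matching objective mentioned here (Salimans et al., 2016) replaces the usual generator loss with a match between mean discriminator features of real and generated samples; a minimal sketch, with placeholder names for the discriminator feature extractor and generator:

```python
import torch

def feature_matching_loss(disc_features_real: torch.Tensor,
                          disc_features_fake: torch.Tensor) -> torch.Tensor:
    """Match the mean intermediate discriminator features of generated samples
    to those of real samples, instead of fooling the discriminator directly."""
    return (disc_features_real.mean(dim=0) - disc_features_fake.mean(dim=0)).pow(2).mean()

# Typical (hypothetical) use, where D.intermediate and G stand in for a
# discriminator feature extractor and a generator:
# g_loss = feature_matching_loss(D.intermediate(x_real).detach(),
#                                D.intermediate(G(z)))
```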