1 code implementation • 5 Apr 2023 • Rohan Ghosh, Mehul Motani
For a finite $|\mathcal{X}|$, this yields robust entropy measures which satisfy many important properties, such as invariance to bijections, while the same is not true for continuous spaces (where $|\mathcal{X}|=\infty$).
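The bijection-invariance property mentioned above is easy to see for empirical entropy over a finite alphabet: relabeling the symbols with any one-to-one map leaves the probability masses, and hence the entropy, unchanged. A minimal sketch (function names and the toy data are my own, not from the paper):

```python
import math
from collections import Counter

def empirical_entropy(samples):
    """Empirical Shannon entropy (in nats) of a discrete sample."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

data = [0, 0, 1, 2, 2, 2]
mapped = [10 * x + 7 for x in data]  # a bijection on the support

# Relabeling the alphabet does not change the entropy.
print(empirical_entropy(data), empirical_entropy(mapped))
```

For continuous spaces the analogous statement fails: differential entropy picks up a Jacobian term under smooth bijections, which is the contrast the excerpt draws.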
no code implementations • 9 Dec 2022 • Shiyu Liu, Rohan Ghosh, Dylan Tan, Mehul Motani
However, in network pruning, we find that the sparsity introduced by ReLU, which we quantify by a term called dynamic dead neuron rate (DNR), is not beneficial for the pruned network.
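One natural way to quantify ReLU-induced sparsity of the kind the excerpt describes is the fraction of ReLU outputs that are exactly zero over a batch. The sketch below is an illustrative computation under that assumption, not the paper's exact definition of DNR:

```python
import numpy as np

def dead_neuron_rate(pre_activations):
    """Fraction of ReLU outputs that are exactly zero, given a batch of
    pre-activations with shape (batch, neurons)."""
    relu_out = np.maximum(pre_activations, 0.0)
    return float(np.mean(relu_out == 0.0))

rng = np.random.default_rng(0)
z = rng.standard_normal((256, 128))  # zero-mean inputs: roughly half die
rate = dead_neuron_rate(z)
```

For zero-mean Gaussian pre-activations the rate is close to 0.5, since ReLU zeroes out all negative inputs.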
no code implementations • 9 Dec 2022 • Shiyu Liu, Rohan Ghosh, John Tan Chong Min, Mehul Motani
(ii) In addition to the strong theoretical motivation, SILO is empirically optimal in the sense of matching an Oracle, which exhaustively searches for the optimal value of max_lr via grid search.
no code implementations • 9 Dec 2022 • Shiyu Liu, Rohan Ghosh, Mehul Motani
In this paper, we propose a new forecasting strategy called Generative Forecasting (GenF), which generates synthetic data for the next few time steps and then makes long-range forecasts based on generated and observed data.
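The two-stage strategy described above — generate a few synthetic near-term steps, then forecast the long horizon from observed plus generated data — can be sketched as follows. The generator and forecaster here are trivial stand-ins (persistence and a windowed mean) chosen only to make the control flow concrete; they are not GenF's actual models:

```python
import numpy as np

def genf_forecast(history, generate_step, forecaster, n_synthetic, horizon):
    """Sketch of the GenF idea: extend the observed series with a few
    generated steps, then make the long-range forecast from the
    combined (observed + synthetic) sequence."""
    extended = list(history)
    for _ in range(n_synthetic):
        extended.append(generate_step(extended))  # synthetic near-term step
    return forecaster(extended, horizon)          # long-range prediction

gen = lambda seq: seq[-1]                              # persistence generator
fct = lambda seq, h: [float(np.mean(seq[-5:]))] * h    # windowed-mean forecaster
preds = genf_forecast([1.0, 2.0, 3.0], gen, fct, n_synthetic=2, horizon=3)
```

With the toy models, the history [1, 2, 3] is extended to [1, 2, 3, 3, 3] before forecasting.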
no code implementations • 11 Dec 2021 • Vikrant Malik, Rohan Ghosh, Mehul Motani
The advancement of deep learning has led to the development of neural decoders for low latency communications.
1 code implementation • NeurIPS 2021 • Rohan Ghosh, Mehul Motani
Empirical studies find that conventional training of neural networks, unlike network-to-network regularization, leads to networks with higher KG and lower test accuracies.
no code implementations • 17 Oct 2021 • Shiyu Liu, Rohan Ghosh, Mehul Motani
In this paper, we propose a new forecasting strategy called Generative Forecasting (GenF), which generates synthetic data for the next few time steps and then makes long-range forecasts based on generated and observed data.
no code implementations • 1 Jan 2021 • Rohan Ghosh, Mehul Motani
Subsequently, we propose a joint entropy-like measure of complexity between function spaces (classifier and generator), called co-complexity, which leads to tighter bounds on the generalization error in this setting.
no code implementations • 18 Aug 2019 • Rohan Ghosh, Anupam K. Gupta, Mehul Motani
Convolutional Neural Networks (CNNs) have been pivotal to state-of-the-art performance on many classification problems across a wide variety of domains (e.g., vision, speech, graphs, and medical imaging).
1 code implementation • 10 Jun 2019 • Rohan Ghosh, Anupam K. Gupta
Augmenting transformation knowledge onto a convolutional neural network's weights has often yielded significant improvements in performance.
no code implementations • 19 Mar 2019 • Rohan Ghosh, Siyi Tang, Mahdi Rasouli, Nitish Thakor, Sunil Kukreja
Neuromorphic image sensors produce activity-driven spiking output at every pixel.
no code implementations • 17 Mar 2019 • Rohan Ghosh, Anupam Gupta, Andrei Nakagawa, Alcimar Soares, Nitish Thakor
In this work we introduce spatiotemporal filtering in the spike-event domain, as an alternative way of channeling spatiotemporal information through to a convolutional neural network.
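One simple way to channel spatiotemporal spike-event information into a CNN, in the spirit of the excerpt, is to accumulate events into a frame with an exponential decay on event age, so recent spikes weigh more than old ones. This is an illustrative filter of my own choosing, not necessarily the paper's:

```python
import math
import numpy as np

def exp_decay_frame(events, t_now, tau, shape):
    """Accumulate spike events (x, y, t) into a 2D frame where each
    event's contribution decays exponentially with its age; the frame
    can then be fed to a conventional CNN."""
    frame = np.zeros(shape)
    for x, y, t in events:
        frame[y, x] += math.exp(-(t_now - t) / tau)
    return frame

events = [(0, 0, 0.0), (1, 1, 0.9), (1, 1, 1.0)]
frame = exp_decay_frame(events, t_now=1.0, tau=0.5, shape=(2, 2))
```

Here the old event at (0, 0) is heavily attenuated, while the two recent events at (1, 1) dominate the frame.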
no code implementations • 16 Mar 2019 • Rohan Ghosh, Anupam Gupta, Siyi Tang, Alcimar Soares, Nitish Thakor
Unlike conventional frame-based sensors, event-based visual sensors output information through spikes at a high temporal resolution.