
no code implementations • 31 Jan 2023 • Ritumbra Manuvie, Saikat Chatterjee

In recent years, a consensus has grown within academia and in public discourse that social media platforms (SMPs) amplify the spread of hateful and negative-sentiment content.

no code implementations • 19 Jul 2022 • Sandipan Das, Alireza M. Javid, Prakash Borpatra Gohain, Yonina C. Eldar, Saikat Chatterjee

NGP is efficient in selecting $N$ features when $N \ll P$, and it provides a notion of feature importance in descending order following the sequential selection procedure.
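The sequential-selection idea behind this can be illustrated with a generic greedy forward-selection sketch for linear regression: at each step, add the feature that most reduces the least-squares residual, so the selection order itself gives a descending importance ranking. The actual NGP criterion is more involved; everything below is a simplified illustration, not the paper's algorithm.

```python
import numpy as np

def greedy_forward_selection(X, y, n_select):
    """Greedily pick features that most reduce the least-squares residual.

    A generic forward-selection sketch of sequential feature selection;
    the selection order doubles as a feature-importance ranking.
    """
    n, p = X.shape
    selected, residual_norms = [], []
    for _ in range(n_select):
        best_j, best_err = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            A = X[:, selected + [j]]
            # least-squares fit on the candidate feature set
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.linalg.norm(y - A @ coef)
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
        residual_norms.append(best_err)
    return selected, residual_norms

# toy data: y depends only on features 0 and 3 out of P = 6
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3]
sel, errs = greedy_forward_selection(X, y, n_select=2)
print(sorted(sel))  # the two informative features: [0, 3]
```

With $N = 2 \ll P = 6$, the greedy procedure finds the two informative features and the residual drops to (numerically) zero once both are included.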

no code implementations • 14 May 2022 • Sandipan Das, Navid Mahabadi, Saikat Chatterjee, Maurice Fallon

We propose a robust curb detection and filtering technique based on the fusion of camera semantics and dense lidar point clouds.

no code implementations • 4 May 2022 • Anubhab Ghosh, Mohamed Abdalmoaty, Saikat Chatterjee, Håkan Hjalmarsson

Yet, estimating the unknown parameters of stochastic, nonlinear dynamical models remains a challenging problem.

no code implementations • 6 Oct 2021 • Pol Grau Jurado, Xinyue Liang, Alireza M. Javid, Saikat Chatterjee

For the existing SSFN, a part of each weight matrix is trained using a layer-wise convex optimization approach (supervised training), while the other part is chosen as a random matrix instance (unsupervised training).

1 code implementation • 1 Jul 2021 • Anubhab Ghosh, Antoine Honoré, Dong Liu, Gustav Eje Henter, Saikat Chatterjee

For a standard speech phone classification setup involving 39 phones (classes) and the TIMIT dataset, we show that the use of standard features called mel-frequency cepstral coefficients (MFCCs), the proposed generative models, and the decision fusion together can achieve $86.6\%$ accuracy by generative training only.

no code implementations • 15 Feb 2021 • Anubhab Ghosh, Antoine Honoré, Dong Liu, Gustav Eje Henter, Saikat Chatterjee

We test the robustness of a maximum-likelihood (ML) based classifier when the observed sequential data are corrupted by noise.

no code implementations • 15 Dec 2020 • Indranil Biswas, Saikat Chatterjee, Praphulla Koushik, Frank Neumann

We construct and study general connections on Lie groupoids and differentiable stacks as well as on principal bundles over them using Atiyah sequences associated to transversal tangential distributions.

Differential Geometry, Category Theory. Primary 53C08; Secondary 22A22, 58H05, 53D50.

no code implementations • 15 Dec 2020 • Indranil Biswas, Saikat Chatterjee, Praphulla Koushik, Frank Neumann

Let $\mathbb{X}=[X_1\rightrightarrows X_0]$ be a Lie groupoid equipped with a connection, given by a smooth distribution $\mathcal{H} \subset T X_1$ transversal to the fibers of the source map.

Differential Geometry, Category Theory. Primary 53C08; Secondary 22A22, 58H05, 53D50.

no code implementations • 15 Dec 2020 • Saikat Chatterjee, Amitabha Lahiri, Ambar N. Sengupta

We construct and study pushforwards of categorical connections on categorical principal bundles.

Differential Geometry, Mathematical Physics. Primary: 18D05; Secondary: 20C99.

1 code implementation • 18 Nov 2020 • Sandipan Das, Prakash B. Gohain, Alireza M. Javid, Yonina C. Eldar, Saikat Chatterjee

Using a statistical model-based data generation, we develop an experimental setup for the evaluation of neural networks (NNs).

1 code implementation • 22 Oct 2020 • Alireza M. Javid, Sandipan Das, Mikael Skoglund, Saikat Chatterjee

We use a combination of random weights and rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to the trained neural network such that it can achieve a lower training loss.
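The core mechanism can be sketched as follows: lift the trained network's features through a random-weight ReLU layer and refit only the linear readout. In this sketch the lifted features are concatenated with the originals so the least-squares training loss provably cannot increase; the widths, scaling, and the exact way the ReDense layer is attached in the paper are simplified assumptions here.

```python
import numpy as np

def redense_lift(H, width, rng):
    """Random-weight ReLU lift of already-trained features H
    (a ReDense-style layer; scaling choice is illustrative)."""
    W = rng.standard_normal((H.shape[1], width)) / np.sqrt(H.shape[1])
    return np.maximum(H @ W, 0.0)

rng = np.random.default_rng(1)
# pretend H holds features produced by an already-trained network
H = rng.standard_normal((200, 10))
T = rng.standard_normal((200, 3))          # training targets

# baseline: linear readout on the original features
W0, *_ = np.linalg.lstsq(H, T, rcond=None)
loss0 = np.linalg.norm(T - H @ W0) ** 2

# ReDense-style: linear readout on [H, ReLU(H W)] -- a strictly
# richer basis, so the least-squares training loss cannot increase
Z = np.concatenate([H, redense_lift(H, 100, rng)], axis=1)
W1, *_ = np.linalg.lstsq(Z, T, rcond=None)
loss1 = np.linalg.norm(T - Z @ W1) ** 2

print(loss1 <= loss0 + 1e-9)  # True
```

Because only the readout is retrained, the extra cost is a single least-squares solve, which is what makes the layer cheap to add on top of any trained network.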

no code implementations • 29 Sep 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee

We design a low complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).

no code implementations • 7 May 2020 • Alireza M. Javid, Xinyue Liang, Arun Venkitaraman, Saikat Chatterjee

We provide a predictive analysis of the spread of COVID-19, the disease caused by the SARS-CoV-2 virus, using the dataset made publicly available online by Johns Hopkins University.

no code implementations • 10 Apr 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee

In this work, we exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario.

no code implementations • 29 Mar 2020 • Alireza M. Javid, Arun Venkitaraman, Mikael Skoglund, Saikat Chatterjee

We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and therefore can be used to reduce the training cost of any other learning method that employs a linear projection to estimate the target.

no code implementations • 26 Nov 2019 • Arun Venkitaraman, Saikat Chatterjee, Bo Wahlberg

Kernel and linear regression have been recently explored in the prediction of graph signals as the output, given arbitrary input signals that are agnostic to the graph.

no code implementations • 30 Oct 2019 • Antoine Honoré, Dong Liu, David Forsberg, Karen Coste, Eric Herlenius, Saikat Chatterjee, Mikael Skoglund

We explore the use of traditional and contemporary hidden Markov models (HMMs) for sequential physiological data analysis and sepsis prediction in preterm infants.

1 code implementation • 13 Oct 2019 • Dong Liu, Antoine Honoré, Saikat Chatterjee, Lars K. Rasmussen

In the proposed GenHMM, each HMM hidden state is associated with a neural network based generative model that has tractability of exact likelihood and provides efficient likelihood computation.
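Tractable exact likelihood per state means the standard HMM forward algorithm still applies: each state's emission density is just a callable that returns a log-density. The sketch below uses Gaussians as stand-ins for the neural generative models, and verifies the forward recursion against brute-force enumeration of state sequences; it is a generic HMM sketch, not GenHMM's training procedure.

```python
import numpy as np

def log_gauss(x, mu, sigma):
    """Log-density of a univariate Gaussian (stand-in emission model)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def hmm_loglik(x, log_pi, log_A, state_logdensities):
    """Sequence log-likelihood via the forward algorithm in log space.

    state_logdensities[k] is any callable returning log p_k(x_t);
    in GenHMM these would be per-state neural generative models
    with tractable exact likelihood.
    """
    alpha = log_pi + np.array([f(x[0]) for f in state_logdensities])
    for t in range(1, len(x)):
        emit = np.array([f(x[t]) for f in state_logdensities])
        m = alpha.max()  # log-sum-exp over previous states, stabilized
        alpha = emit + m + np.log(np.exp(alpha - m) @ np.exp(log_A))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# two-state toy HMM
log_pi = np.log(np.array([0.6, 0.4]))
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
dens = [lambda v: log_gauss(v, 0.0, 1.0),
        lambda v: log_gauss(v, 3.0, 1.0)]
x = np.array([0.1, 2.9, 3.2])
print(hmm_loglik(x, log_pi, np.log(A), dens))
```

Swapping the Gaussian callables for normalizing-flow log-densities is what makes the construction "generative HMM" while keeping the likelihood computation efficient.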

no code implementations • 23 Aug 2019 • Dong Liu, Nima N. Moghadam, Lars K. Rasmussen, Jinliang Huang, Saikat Chatterjee

Belief propagation (BP) performs exact inference in loop-free graphs, but its performance can be poor in graphs with loops, and its solutions there are not well understood.
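The loop-free case can be made concrete with sum-product message passing on a three-variable chain, where BP's belief matches the brute-force marginal exactly. The potentials below are arbitrary illustrative numbers.

```python
import numpy as np

# Sum-product BP on a binary chain x1 - x2 - x3. The graph has no
# loops, so BP is exact; we check its belief for x2 against
# brute-force enumeration.

phi1 = np.array([0.4, 0.6])           # unary potentials
phi2 = np.array([0.7, 0.3])
phi3 = np.array([0.5, 0.5])
psi12 = np.array([[1.0, 0.3],
                  [0.3, 1.0]])        # pairwise potential on (x1, x2)
psi23 = np.array([[0.8, 0.4],
                  [0.2, 1.1]])        # pairwise potential on (x2, x3)

# messages toward x2
m1_to_2 = psi12.T @ phi1              # sum over x1 of phi1 * psi12
m3_to_2 = psi23 @ phi3                # sum over x3 of phi3 * psi23
belief2 = phi2 * m1_to_2 * m3_to_2
belief2 /= belief2.sum()

# brute-force marginal of x2
p = np.zeros(2)
for x1 in range(2):
    for x2 in range(2):
        for x3 in range(2):
            p[x2] += (phi1[x1] * phi2[x2] * phi3[x3]
                      * psi12[x1, x2] * psi23[x2, x3])
p /= p.sum()
print(np.allclose(belief2, p))  # True
```

On a loopy graph the same message updates would be iterated to a fixed point, and it is exactly this fixed point whose quality and interpretation are hard to characterize.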

1 code implementation • 31 Jul 2019 • Dong Liu, Minh Thành Vu, Saikat Chatterjee, Lars K. Rasmussen

A single latent variable is used as the common input to all the neural networks.

no code implementations • 17 May 2019 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Shumpei Kikuta, Dong Liu, Partha P. Mitra, Mikael Skoglund

We design a self size-estimating feed-forward network (SSFN) using a joint optimization approach for estimation of number of layers, number of nodes and learning of weight matrices.

no code implementations • 16 Nov 2018 • Dong Liu, Minh Thành Vu, Saikat Chatterjee, Lars K. Rasmussen

We investigate the use of entropy-regularized optimal transport (EOT) cost in developing generative models to learn implicit distributions.
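The EOT cost itself is typically computed with Sinkhorn iterations, which the sketch below implements for discrete marginals; the generative-model construction built on top of this cost in the paper is not shown, and the epsilon and iteration count are illustrative choices.

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Returns the transport plan that minimizes the transport cost
    <P, C> plus an entropic penalty with weight eps, subject to the
    marginal constraints given by a (rows) and b (columns).
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):       # alternating marginal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.5])          # source marginal
b = np.array([0.25, 0.75])        # target marginal
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # ground cost
P = sinkhorn(a, b, C, eps=0.1)
print(P.sum(axis=1), P.sum(axis=0))  # marginals close to a and b
```

The entropic term is what makes the cost differentiable and cheap to evaluate, which is why it is attractive as a training objective for implicit generative models.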

no code implementations • 6 Nov 2018 • Arun Venkitaraman, Pascal Frossard, Saikat Chatterjee

In the presence of sparse noise, we propose kernel regression for predicting output vectors that are smooth over a given graph.

no code implementations • 31 Mar 2018 • Ahmed Zaki, Saikat Chatterjee, Partha P. Mitra, Lars K. Rasmussen

Our expectation is that local estimates in each node improve quickly and converge, resulting in a limited demand for communication of estimates between nodes and a reduced processing time.

no code implementations • 15 Mar 2018 • Arun Venkitaraman, Saikat Chatterjee, Peter Händel

We propose Gaussian processes for signals over graphs (GPG), using the a priori knowledge that the target vectors lie over a graph.
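One way to encode that prior is to build the GP covariance from the graph Laplacian, so that smoothness over the graph is assumed a priori. The sketch below uses the covariance $K = (L + \delta I)^{-1}$ and standard GP posterior algebra on a 4-node path graph; the specific kernel construction and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

delta, sigma2 = 0.1, 0.01
K = np.linalg.inv(L + delta * np.eye(4))  # graph-based prior covariance

obs = [0, 3]                              # observed nodes
y = np.array([1.0, -1.0])                 # observed (noisy) values
Koo = K[np.ix_(obs, obs)]                 # covariance among observed nodes
Kao = K[:, obs]                           # cross-covariance, all vs observed

# GP posterior mean over every node given the noisy observations
mean = Kao @ np.linalg.solve(Koo + sigma2 * np.eye(len(obs)), y)
print(np.round(mean, 3))
```

The posterior mean interpolates smoothly along the path: the interior nodes 1 and 2 take intermediate values between the observed endpoints, which is exactly the behavior the graph prior is meant to induce.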

no code implementations • 12 Mar 2018 • Arun Venkitaraman, Saikat Chatterjee, Peter Händel

In this article, we improve extreme learning machines for regression tasks using a graph signal processing based regularization.

no code implementations • 12 Mar 2018 • Arun Venkitaraman, Alireza M. Javid, Saikat Chatterjee

We consider a neural network architecture with randomized features, a sign-splitter, followed by rectified linear units (ReLU).
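The sign-splitter construction can be made concrete: mapping $z$ to $[\mathrm{ReLU}(z); \mathrm{ReLU}(-z)]$ is invertible (since $\mathrm{ReLU}(z) - \mathrm{ReLU}(-z) = z$) and norm-preserving (the two halves have disjoint support). This is one common way to realize such a layer; the paper's exact architecture may differ in details.

```python
import numpy as np

def sign_split_relu(z):
    """Sign-splitter followed by ReLU: z -> [relu(z); relu(-z)].

    relu(z) - relu(-z) = z, so the map is invertible, and because
    relu(z) * relu(-z) = 0 elementwise, it preserves the norm:
    ||[relu(z); relu(-z)]|| = ||z||.
    """
    return np.concatenate([np.maximum(z, 0.0), np.maximum(-z, 0.0)])

def invert(h):
    """Recover z from the split representation."""
    half = h.shape[0] // 2
    return h[:half] - h[half:]

rng = np.random.default_rng(2)
W = rng.standard_normal((8, 5))           # randomized features
x = rng.standard_normal(5)
z = W @ x
h = sign_split_relu(z)

print(np.allclose(invert(h), z))                          # invertible
print(np.isclose(np.linalg.norm(h), np.linalg.norm(z)))   # norm-preserving
```

Both properties hold regardless of the random feature matrix, which is why the nonlinearity can be stacked without losing information.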

no code implementations • 12 Mar 2018 • Arun Venkitaraman, Saikat Chatterjee, Peter Händel

We develop a multi-kernel based regression method for graph signal processing where the target signal is assumed to be smooth over a graph.

no code implementations • 17 Nov 2017 • Antoine Honoré, Veronica Siljehav, Saikat Chatterjee, Eric Herlenius

Even with limited and unbalanced training data, the large neural network provides a detection performance level that is feasible for use in clinical care.

1 code implementation • 23 Oct 2017 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Partha P. Mitra, Mikael Skoglund

The developed network is expected to show good generalization power due to appropriate regularization and use of random weights in the layers.

no code implementations • 22 Sep 2017 • Ahmed Zaki, Partha P. Mitra, Lars K. Rasmussen, Saikat Chatterjee

The algorithm is iterative and exchanges intermediate estimates of a sparse signal over a network.

1 code implementation • 29 Aug 2017 • Martin Sundin, Arun Venkitaraman, Magnus Jansson, Saikat Chatterjee

We especially show how the constraint relates to the distributed consensus problem and graph Laplacian learning.

no code implementations • 23 Jan 2015 • Martin Sundin, Cristian R. Rojas, Magnus Jansson, Saikat Chatterjee

We develop latent variable models for Bayesian learning based low-rank matrix completion and reconstruction from linear measurements.

no code implementations • 12 Jan 2015 • Martin Sundin, Saikat Chatterjee, Magnus Jansson

Through simulations, we show the performance and computational efficiency of the new RVM in several applications: recovery of sparse and block-sparse signals, housing price prediction, and image denoising.

no code implementations • 30 Jun 2014 • Martin Sundin, Saikat Chatterjee, Magnus Jansson, Cristian R. Rojas

In this paper, we develop a new Bayesian inference method for low-rank matrix reconstruction.
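To make the reconstruction task concrete: the paper's method is Bayesian, but the problem it solves can be illustrated with a minimal non-Bayesian baseline, alternating least squares (ALS) for low-rank matrix completion from partial observations. Everything below is that baseline, not the paper's inference method.

```python
import numpy as np

def als_complete(Y, mask, rank, n_iter=100, lam=1e-3):
    """Alternating least squares for low-rank matrix completion.

    A non-Bayesian baseline: factor the matrix as U V^T and
    alternately solve ridge-regularized least squares for U and V
    using only the observed entries (mask is a boolean array).
    """
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(n_iter):
        for i in range(m):
            A = V[mask[i]]
            U[i] = np.linalg.solve(A.T @ A + lam * np.eye(rank),
                                   A.T @ Y[i, mask[i]])
        for j in range(n):
            A = U[mask[:, j]]
            V[j] = np.linalg.solve(A.T @ A + lam * np.eye(rank),
                                   A.T @ Y[mask[:, j], j])
    return U @ V.T

# rank-3 ground truth, ~60% of entries observed
rng = np.random.default_rng(3)
truth = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
mask = rng.random(truth.shape) < 0.6
Y = np.where(mask, truth, 0.0)

X = als_complete(Y, mask, rank=3)
rel_err = np.linalg.norm(X - truth) / np.linalg.norm(truth)
print(rel_err)
```

A Bayesian treatment replaces the fixed rank and ridge weight with priors over the factors, which is one motivation for the approach the paper develops.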
