
1 code implementation • 29 Mar 2022 • Vishnu Suresh Lokhande, Rudrasis Chakraborty, Sathya N. Ravi, Vikas Singh

Pooling multiple neuroimaging datasets across institutions often enables improvements in statistical power when evaluating associations (e.g., between risk factors and disease outcomes) that may otherwise be too weak to detect.
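A back-of-the-envelope sketch of why pooling improves power: the standard error of an estimated effect scales as 1/sqrt(n), so (idealistically assuming the sites are statistically compatible, which is the harmonization problem this line of work addresses) pooling three sites of 100 participants each shrinks it by sqrt(3):

```python
import numpy as np

# Standard error of an estimated effect scales as 1 / sqrt(n).
# Pooling three sites of 100 participants each shrinks it by sqrt(3),
# assuming the sites are statistically compatible (the hard part).
sigma = 1.0
se_single = sigma / np.sqrt(100)   # one site of 100 participants
se_pooled = sigma / np.sqrt(300)   # three pooled sites
```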

no code implementations • 18 Feb 2022 • Jurijs Nazarovs, Rudrasis Chakraborty, Songwong Tasneeyapant, Sathya N. Ravi, Vikas Singh

Panel data involving longitudinal measurements of the same set of participants taken over multiple time points is common in studies to understand childhood development and disease modeling.

no code implementations • 1 Dec 2021 • Zhichun Huang, Rudrasis Chakraborty, Vikas Singh

Generative models which use explicit density modeling (e.g., variational autoencoders, flow-based generative models) involve finding a mapping from a known distribution, e.g., Gaussian, to the unknown input distribution.
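A minimal sketch of the idea: a one-layer affine flow maps data to a standard-normal base distribution, and the change-of-variables formula gives an exact log-density. (This is a toy illustration of explicit density modeling, not the paper's model.)

```python
import numpy as np

def affine_flow_logpdf(x, scale, shift):
    """Exact log-density under a one-layer affine flow z = (x - shift) / scale
    with a standard-normal base distribution (change-of-variables formula)."""
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # N(0, 1) log-pdf, per dimension
    log_det = -np.log(np.abs(scale))                # log |det dz/dx|, per dimension
    return np.sum(log_base + log_det, axis=-1)

x = np.array([[0.5, -1.0]])
scale = np.array([2.0, 0.5])
shift = np.array([1.0, 0.0])
lp = affine_flow_logpdf(x, scale, shift)
```

Since this flow expresses N(shift, scale^2) as a transformation of N(0, 1), its density matches the closed-form Gaussian log-pdf exactly.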

no code implementations • 29 Sep 2021 • Zhichun Huang, Rudrasis Chakraborty, Vikas Singh

Generative models which use explicit density modeling (e.g., variational autoencoders, flow-based generative models) involve finding a mapping from a known distribution, e.g., Gaussian, to the unknown input distribution.

no code implementations • NeurIPS 2021 • Zihang Meng, Rudrasis Chakraborty, Vikas Singh

We present an efficient stochastic algorithm (RSG+) for canonical correlation analysis (CCA) using a reparametrization of the projection matrices.
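For reference, the classical batch formulation that RSG+ accelerates: CCA finds projections of two views whose correlation is maximal, computable by whitening each view and taking an SVD of the whitened cross-covariance. A minimal numpy sketch (not the stochastic RSG+ algorithm itself):

```python
import numpy as np

def cca_svd(X, Y, k=1):
    """Classical CCA via whitening + SVD: a minimal batch sketch,
    not the stochastic Riemannian RSG+ algorithm from the paper."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + 1e-8 * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + 1e-8 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T   # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)       # singular values = canonical correlations
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k], s[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Y = X @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(500, 4))
a, b, corr = cca_svd(X, Y)
```

The whitening step is exactly where the Stiefel/orthogonality constraints arise that the paper reparametrizes.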

1 code implementation • 5 Jun 2021 • Monami Banerjee, Rudrasis Chakraborty, Jose Bouza, Baba C. Vemuri

In this paper, we present a novel higher order Volterra convolutional neural network (VolterraNet) for data defined as samples of functions on Riemannian homogeneous spaces.
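The flat-space (Euclidean) version of the core operation is a higher-order Volterra filter: a linear convolution term plus a quadratic term over pairs of taps. A 1-D sketch of the idea that VolterraNet lifts to Riemannian homogeneous spaces (the weights below are illustrative):

```python
import numpy as np

def volterra2(x, w1, w2):
    """Second-order Volterra filter on a 1-D signal: first-order (linear)
    term w1.x plus second-order (quadratic) term x.w2.x over each window.
    A Euclidean sketch of the operation VolterraNet generalizes."""
    k = len(w1)
    y = np.empty(len(x) - k + 1)
    for i in range(len(y)):
        seg = x[i:i + k]
        y[i] = w1 @ seg + seg @ w2 @ seg   # linear + quadratic response
    return y

x = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
w1 = np.array([0.5, -0.25])
w2 = np.array([[0.1, 0.0], [0.0, -0.1]])
y = volterra2(x, w1, w2)
```

With w2 = 0 this reduces to ordinary cross-correlation, i.e., a standard convolutional layer's linear response.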

1 code implementation • CVPR 2021 • Xingjian Zhen, Rudrasis Chakraborty, Vikas Singh

One strategy for adversarially training a robust model is to maximize its certified radius -- the neighborhood around a given training sample for which the model's prediction remains unchanged.
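One standard way such a certificate is computed is via randomized smoothing (Cohen et al. style), where the certified l2 radius is sigma times the inverse normal CDF of the top-class probability; this is a common construction for the quantity being maximized, not necessarily the exact one used in this paper:

```python
from statistics import NormalDist

def certified_radius(p_a, sigma):
    """Certified l2 radius in the randomized-smoothing style: with Gaussian
    noise level sigma and top-class probability p_a > 0.5, the smoothed
    classifier's prediction is provably unchanged within this distance.
    One standard certificate, used here purely for illustration."""
    return sigma * NormalDist().inv_cdf(p_a)

r = certified_radius(0.99, 0.5)
```

Note how the radius grows with the classifier's margin (p_a) and with the noise level sigma.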

4 code implementations • 7 Feb 2021 • Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh

The scalability of Nyströmformer enables application to longer sequences with thousands of tokens.

Ranked #11 on Semantic Textual Similarity on MRPC (F1 metric)
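The core trick is a Nyström-style approximation of softmax attention: pick m landmark queries/keys so the score matrices are n x m and m x m rather than n x n. A single-head numpy sketch of the idea (segment-mean landmarks; not the paper's exact implementation, which also approximates the pseudo-inverse iteratively):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def nystrom_attention(Q, K, V, m):
    """Nystrom-style approximation of softmax attention with m landmarks
    (segment means), reducing the n x n score matrix to n x m pieces."""
    n, d = Q.shape
    seg = np.array_split(np.arange(n), m)
    Qm = np.stack([Q[s].mean(0) for s in seg])   # m landmark queries
    Km = np.stack([K[s].mean(0) for s in seg])   # m landmark keys
    F = softmax(Q @ Km.T / np.sqrt(d))           # n x m
    A = softmax(Qm @ Km.T / np.sqrt(d))          # m x m
    B = softmax(Qm @ K.T / np.sqrt(d))           # m x n
    return F @ np.linalg.pinv(A) @ (B @ V)       # ~ softmax(QK^T/sqrt(d)) V
```

When m = n, every token is its own landmark and the identity S pinv(S) S = S recovers exact attention; smaller m trades accuracy for linear cost in sequence length.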

no code implementations • 1 Jan 2021 • Zihang Meng, Rudrasis Chakraborty, Vikas Singh

We present an efficient stochastic algorithm (RSG+) for canonical correlation analysis (CCA) derived via a differential geometric perspective of the underlying optimization task.

no code implementations • 1 Jan 2021 • Zhichun Huang, Rudrasis Chakraborty, Xingjian Zhen, Vikas Singh

Flow-based generative models refer to deep generative models with tractable likelihoods, and offer several attractive properties including efficient density estimation and sampling.

1 code implementation • 18 Dec 2020 • Xingjian Zhen, Rudrasis Chakraborty, Liu Yang, Vikas Singh

Partly due to this gap, there are also no modality transfer/translation models for manifold-valued data, whereas numerous such methods based on generative models are available for natural images.

no code implementations • 22 Jun 2020 • Yifei Xing, Rudrasis Chakraborty, Minxuan Duan, Stella Yu

We compare C-SURE with SurReal and a real-valued baseline on complex-valued MSTAR and RadioML datasets.

no code implementations • 30 Mar 2020 • Rudrasis Chakraborty

One of the other remedies for instabilities such as gradient explosion is to use normalization techniques, including batch norm and group norm.

1 code implementation • CVPR 2020 • Jiayun Wang, Yubei Chen, Rudrasis Chakraborty, Stella X. Yu

We develop an efficient approach to impose filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel, instead of the common kernel orthogonality approach, which we show is only necessary but not sufficient for ensuring orthogonal convolutions.
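The necessary-but-not-sufficient claim can be seen directly in 1-D, where the convolution operator is an ordinary Toeplitz matrix (the 1-D analogue of the doubly block-Toeplitz form): a unit-norm kernel gives the operator unit-norm rows, yet overlapping taps still correlate neighboring rows, so the operator is not orthogonal.

```python
import numpy as np

def conv_matrix(kernel, n):
    """Toeplitz matrix of a 1-D 'valid' convolution: each row is the kernel
    slid by one position; the 1-D analogue of the doubly block-Toeplitz
    form of a 2-D convolutional layer."""
    k = len(kernel)
    M = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        M[i, i:i + k] = kernel
    return M

w = np.array([0.6, 0.8])   # unit-norm kernel: "kernel orthogonality" holds
M = conv_matrix(w, 6)
rows_unit_norm = np.allclose(np.diag(M @ M.T), 1.0)
operator_orthogonal = np.allclose(M @ M.T, np.eye(M.shape[0]))
```

Here `rows_unit_norm` is True but `operator_orthogonal` is False: the off-diagonal entries of M M^T equal the kernel's overlap products, which kernel orthogonality alone does not cancel.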

no code implementations • 5 Nov 2019 • Liu Yang, Rudrasis Chakraborty

Although 3D point-cloud processing is not a "go-to" choice in the medical imaging community, it is a canonical way to preserve rotation invariance.

no code implementations • 5 Nov 2019 • Liu Yang, Rudrasis Chakraborty

Experimental validation has been performed to show that the proposed scheme can generate new 3D structures using interpolation techniques, i.e., given two 3D structures represented as point-clouds, we can generate point-clouds in between.
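In its simplest form (known point correspondence, interpolation done directly in coordinate space rather than in the paper's learned latent space), generating in-between structures is just pointwise interpolation:

```python
import numpy as np

def interpolate_clouds(P, Q, t):
    """Pointwise linear interpolation between two corresponding point clouds:
    a toy stand-in for interpolating in a learned latent space."""
    return (1.0 - t) * P + t * Q

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 2.0, 0.0], [1.0, 2.0, 0.0]])
mid = interpolate_clouds(P, Q, 0.5)
```

The latent-space version follows the same recipe, but with P and Q replaced by encoder outputs and the result decoded back to a point cloud.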

no code implementations • 29 Oct 2019 • Liu Yang, Rudrasis Chakraborty, Stella X. Yu

Our proposed model is rotationally invariant and can preserve geometric shape of a 3D point-cloud.
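Rotation invariance of a point-cloud representation can be checked numerically. One simple invariant representation (an illustration of the property, not the paper's model) is the sorted multiset of pairwise distances, which is unchanged by any rigid rotation:

```python
import numpy as np

def pairwise_dist_signature(P):
    """Sorted pairwise distances of a point cloud: a simple
    rotation-invariant signature."""
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(P), k=1)])

rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])   # rotation about the z-axis
invariant = np.allclose(pairwise_dist_signature(P),
                        pairwise_dist_signature(P @ R.T))
```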

no code implementations • 18 Oct 2019 • Rudrasis Chakraborty, Yifei Xing, Stella Yu

We propose to extend the property, rather than the form, of real-valued functions to the complex domain.

1 code implementation • ICCV 2019 • Xingjian Zhen, Rudrasis Chakraborty, Nicholas Vogt, Barbara B. Bendlin, Vikas Singh

Efforts are underway to study ways via which the power of deep neural networks can be extended to non-standard data types such as structured data (e.g., graphs) or manifold-valued data (e.g., unit vectors or special matrices).

1 code implementation • 26 Jun 2019 • Jiayun Wang, Rudrasis Chakraborty, Stella X. Yu

We propose a novel end-to-end approach to learn different non-rigid transformations of the input point cloud so that optimal local neighborhoods can be adopted at each layer.

no code implementations • 24 Jun 2019 • Rudrasis Chakraborty, Jiayun Wang, Stella X. Yu

On RadioML, our model achieves comparable RF modulation classification accuracy at 10% of the baseline model size.

no code implementations • ICLR 2019 • Rudrasis Chakraborty, Jose Bouza, Jonathan Manton, Baba C. Vemuri

To this end, we present a provably convergent recursive computation of the wFM of the given data, where the weights make up the convolution mask to be learned.
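The flavor of such a recursive estimator, sketched on the unit hypersphere: fold each new sample into the running mean along the connecting geodesic, stepping by that sample's share of the cumulative weight. (This illustrates the recursive weighted Fréchet mean idea; the paper's provably convergent update on general manifolds may differ in detail.)

```python
import numpy as np

def slerp(a, b, t):
    """Geodesic (great-circle) interpolation between unit vectors a and b."""
    ang = np.arccos(np.clip(a @ b, -1.0, 1.0))
    if ang < 1e-12:
        return a
    return (np.sin((1.0 - t) * ang) * a + np.sin(t * ang) * b) / np.sin(ang)

def recursive_wfm(points, weights):
    """Recursive weighted Frechet mean estimator on the hypersphere:
    each new sample pulls the running mean along the geodesic by its
    share of the cumulative weight."""
    m, wsum = points[0], float(weights[0])
    for x, w in zip(points[1:], weights[1:]):
        wsum += w
        m = slerp(m, x, w / wsum)
    return m

e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
mean = recursive_wfm([e1, e2], [1.0, 1.0])
```

For two equally weighted points the estimator returns the geodesic midpoint, which on the sphere is the normalized Euclidean midpoint.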

1 code implementation • 11 Sep 2018 • Rudrasis Chakraborty, Jose Bouza, Jonathan Manton, Baba C. Vemuri

Thus, there is a need to generalize deep neural networks to cope with input data that reside on curved manifolds, where vector space operations are not naturally admissible.

no code implementations • 31 May 2018 • Rudrasis Chakraborty, Chun-Hao Yang, Baba C. Vemuri

The other alternative for increasing performance is to learn multiple weak classifiers and boost their performance using a boosting algorithm or a variant thereof.

1 code implementation • NeurIPS 2018 • Rudrasis Chakraborty, Chun-Hao Yang, Xingjian Zhen, Monami Banerjee, Derek Archer, David Vaillancourt, Vikas Singh, Baba C. Vemuri

We show how statistical recurrent network models can be defined in such spaces.

no code implementations • 14 May 2018 • Rudrasis Chakraborty, Monami Banerjee, Baba C. Vemuri

(ii) As a corollary, we prove the equivariance of the correlation operation to group actions admitted by the input domains, which are Riemannian homogeneous manifolds.
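The simplest instance of this equivariance is the familiar flat case: on the circle (a Riemannian homogeneous manifold under cyclic translations), shifting the input and then correlating gives the same result as correlating and then shifting the output. A numpy check of that special case:

```python
import numpy as np

def circ_corr(x, w):
    """Circular cross-correlation of signal x with mask w."""
    n = len(x)
    return np.array([sum(x[(i + j) % n] * w[j] for j in range(len(w)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5])
w = np.array([0.5, -1.0, 0.25])
s = 2
lhs = circ_corr(np.roll(x, s), w)    # act on the input by a translation...
rhs = np.roll(circ_corr(x, w), s)    # ...or translate the output: identical
```

The paper's result generalizes this commuting diagram from cyclic shifts to the group actions of arbitrary Riemannian homogeneous input domains.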

no code implementations • 3 May 2018 • Rudrasis Chakraborty, Monami Banerjee, Baba C. Vemuri

In this paper, we propose a novel information theoretic framework for dictionary learning (DL) and sparse coding (SC) on a statistical manifold (the manifold of probability distributions).

no code implementations • 15 Apr 2018 • Indrasis Chakraborty, Rudrasis Chakraborty, Draguna Vrabie

Traditional classifier-based methods do not perform well because of the inherent difficulty of detecting system-level faults in closed-loop dynamical systems.

no code implementations • ICCV 2017 • Monami Banerjee, Rudrasis Chakraborty, Baba C. Vemuri

In this paper, we present a novel generalization of SPCA, called sparse exact PGA (SEPGA) that can cope with manifold-valued input data and respect the intrinsic geometry of the underlying manifold.

no code implementations • ICCV 2017 • Rudrasis Chakraborty, Vikas Singh, Nagesh Adluru, Baba C. Vemuri

Finally, by using existing algorithms for recursive Frechet mean and exact principal geodesic analysis on the hypersphere, we present several experiments on synthetic and real (vision and medical) data sets showing how group testing on such diversely sampled longitudinal data is possible by analyzing the reconstructed data in the subspace spanned by the first few PGs.

no code implementations • 31 Jul 2017 • Rudrasis Chakraborty, Baba Vemuri

The Stiefel manifold is a Riemannian homogeneous space but not a symmetric space.

no code implementations • CVPR 2017 • Rudrasis Chakraborty, Soren Hauberg, Baba C. Vemuri

We have demonstrated competitive performance of our proposed online subspace algorithm on one synthetic and two real data sets.

no code implementations • 3 Feb 2017 • Rudrasis Chakraborty, Søren Hauberg, Baba C. Vemuri

In this paper, we present a geometric framework for computing the principal linear subspaces in both situations as well as for the robust PCA case, that amounts to computing the intrinsic average on the space of all subspaces: the Grassmann manifold.
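A one-dimensional flavor of subspace averaging, in the spirit of the Grassmann average of Hauberg et al. (a sketch, not the paper's exact intrinsic computation): normalize the samples to the sphere, then iteratively average them with signs aligned to the current estimate, which recovers the leading principal direction robustly.

```python
import numpy as np

def grassmann_average_direction(X, iters=50):
    """Leading principal direction as a sign-aligned average of unit
    samples: a simple Grassmann-average-style iteration over 1-D
    subspaces (points on the Grassmannian Gr(1, d))."""
    U = X / np.linalg.norm(X, axis=1, keepdims=True)   # project samples to the sphere
    q = U[0].copy()
    for _ in range(iters):
        signs = np.sign(U @ q)
        signs[signs == 0] = 1.0
        q = (signs[:, None] * U).mean(axis=0)          # average with aligned signs
        q /= np.linalg.norm(q)
    return q

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 1.0])  # variance dominated by e1
q = grassmann_average_direction(X - X.mean(0))
```

Because each sample contributes only a unit vector, outliers with large magnitude cannot dominate the average, which is the robustness property the intrinsic Grassmann formulation makes precise.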

no code implementations • CVPR 2016 • Monami Banerjee, Rudrasis Chakraborty, Edward Ofori, Michael S. Okun, David E. Vaillancourt, Baba C. Vemuri

With the exception of a few, most existing methods of regression for manifold valued data are limited to geodesic regression which is a generalization of the linear regression in vector-spaces.

no code implementations • 23 Apr 2016 • Rudrasis Chakraborty, Monami Banerjee, Victoria Crawford, Baba C. Vemuri

In this work, we propose a novel information theoretic framework for dictionary learning (DL) and sparse coding (SC) on a statistical manifold (the manifold of probability distributions).

no code implementations • CVPR 2016 • Rudrasis Chakraborty, Dohyung Seo, Baba C. Vemuri

Recently, an alternative called exact PGA was proposed which tries to solve the optimization without any linearization.

no code implementations • ICCV 2015 • Rudrasis Chakraborty, Baba C. Vemuri

In the limit as the number of samples tends to infinity, we prove that GiFME converges to the FM (this is called the weak consistency result on the Grassmann manifold).

Papers With Code is a free resource with all data licensed under CC-BY-SA.