
1 code implementation • 4 Nov 2021 • Ruijie Jiang, Prakash Ishwar, Shuchin Aeron

We analyze a novel min-max framework that seeks a representation which minimizes the maximum (worst-case) generalized contrastive learning loss over all couplings (joint distributions between positive and negative samples subject to marginal constraints) and prove that the resulting min-max optimum representation will be degenerate.

1 code implementation • 1 Nov 2021 • Ruijie Jiang, Julia Gouvea, Eric Miller, David Hammer, Shuchin Aeron

This paper shows that a popular approach to the supervised embedding of documents for classification, namely, contrastive Word Mover's Embedding, can be significantly enhanced by adding interpretability.

no code implementations • 29 Oct 2021 • Shoaib Bin Masud, Shuchin Aeron

We consider the problem of generating valid knockoffs for knockoff filtering, a statistical method that provides provable false discovery rate guarantees for any model selection procedure.

1 code implementation • 26 Oct 2021 • Ahmed Ali Abbasi, Abiy Tasissa, Shuchin Aeron

The unlabeled sensing problem is to solve a noisy linear system of equations under unknown permutation of the measurements.
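
As a toy illustration of the problem setup (not the method of this paper), the scrambled system can be attacked by brute force for tiny sizes: enumerate candidate orderings of the measurements and keep the one with the smallest least-squares residual. The helper names and the 3-by-2 dimensions below are purely illustrative.

```python
import itertools

def solve_2x2_ls(A, y):
    # Solve the normal equations A^T A x = A^T y for a tall m x 2 matrix A.
    g00 = sum(a[0] * a[0] for a in A)
    g01 = sum(a[0] * a[1] for a in A)
    g11 = sum(a[1] * a[1] for a in A)
    b0 = sum(a[0] * yi for a, yi in zip(A, y))
    b1 = sum(a[1] * yi for a, yi in zip(A, y))
    det = g00 * g11 - g01 * g01
    return ((g11 * b0 - g01 * b1) / det, (g00 * b1 - g01 * b0) / det)

def residual(A, y, x):
    return sum((a[0] * x[0] + a[1] * x[1] - yi) ** 2 for a, yi in zip(A, y))

def unlabeled_sensing(A, y_scrambled):
    # Brute force over all orderings of the scrambled measurements;
    # feasible only for tiny m, but it makes the problem statement concrete.
    best = None
    for p in itertools.permutations(y_scrambled):
        x = solve_2x2_ls(A, p)
        r = residual(A, p, x)
        if best is None or r < best[0]:
            best = (r, x)
    return best[1]

# With A = [[1,0],[0,1],[2,1]] and x = (2, 3), the measurements are (2, 3, 7);
# scrambling them to (7, 3, 2) still allows exact recovery of x.
```

Practical algorithms exploit problem structure instead of enumerating the factorially many permutations; the sketch only fixes intuition.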

1 code implementation • NeurIPS 2021 • Kevin C. Cheng, Shuchin Aeron, Michael C. Hughes, Eric L. Miller

We propose a dynamical Wasserstein barycentric (DWB) model that estimates the system state over time as well as the data-generating distributions of pure states in an unsupervised manner.

no code implementations • 4 Sep 2021 • Boyang Lyu, Thuan Nguyen, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron

However, the bound contains a term that is not optimized due to its dual dependence on the representation mapping and the unknown optimal labeling function for the unseen domain.

1 code implementation • 16 Mar 2021 • Shoaib Bin Masud, Boyang Lyu, Shuchin Aeron

In this paper, we extend the recently proposed multivariate rank energy distance, a statistic based on the theory of optimal transport for testing distributional similarity, to a soft rank energy distance.
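
For context, the classical (non-rank) energy distance between two one-dimensional samples can be computed directly. A minimal sketch follows, with the caveat that the rank energy distance applies this statistic to multivariate ranks obtained via optimal transport, and the soft variant regularizes that transport problem; neither refinement is shown here.

```python
def energy_distance(x, y):
    # Empirical energy distance: 2*E|X-Y| - E|X-X'| - E|Y-Y'|.
    # Its population version is zero iff the two distributions coincide.
    def mean_abs(a, b):
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    return 2 * mean_abs(x, y) - mean_abs(x, x) - mean_abs(y, y)
```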

no code implementations • 12 Mar 2021 • Yanting Ma, Petros T. Boufounos, Hassan Mansour, Shuchin Aeron

In several applications, including imaging of deformable objects while in motion, simultaneous localization and mapping, and unlabeled sensing, we encounter the problem of recovering a signal that is measured subject to unknown permutations.

no code implementations • 26 Nov 2020 • Ruijie Jiang, Julia Gouvea, David Hammer, Eric Miller, Shuchin Aeron

This work is a step toward building a statistical machine learning (ML) method for automated support of qualitative analyses of students' writing, here specifically scoring laboratory reports in introductory biology for sophistication of argumentation and reasoning.

no code implementations • 22 Jul 2020 • Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, Toshiaki Koike-Akino, Pierre Moulin

Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples.

no code implementations • ICML 2020 • Anoop Cherian, Shuchin Aeron

To maximize the extraction of such informative cues from the data, we set the problem within the context of contrastive representation learning and, to that end, propose a novel objective via optimal transport.

no code implementations • 2 Jul 2020 • Boyang Lyu, Thao Pham, Giles Blaney, Zachary Haga, Angelo Sassaroli, Sergio Fantini, Shuchin Aeron

Results: In a sample of six subjects, G-W resulted in an alignment accuracy of 68 $\pm$ 4 % (weighted mean $\pm$ standard error) for session-by-session alignment, while FG-W resulted in an alignment accuracy of 55 $\pm$ 2 % for subject-by-subject alignment.

no code implementations • 9 Jun 2020 • Kevin C. Cheng, Eric L. Miller, Michael C. Hughes, Shuchin Aeron

Non-parametric and distribution-free two-sample tests have been the foundation of many change point detection algorithms.
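
A minimal sketch of this idea (not the algorithm of this paper): slide a split point through the series and flag the location where a two-sample Kolmogorov-Smirnov statistic between the two flanking windows is largest. The window size and the choice of statistic are illustrative.

```python
import bisect

def ks_statistic(x, y):
    # Two-sample KS statistic: the largest gap between the empirical CDFs,
    # which for step functions is attained at one of the pooled sample points.
    xs, ys = sorted(x), sorted(y)
    def ecdf(s, v):
        return bisect.bisect_right(s, v) / len(s)
    return max(abs(ecdf(xs, v) - ecdf(ys, v)) for v in set(xs + ys))

def detect_change(series, w=20):
    # Slide a split point t; compare the windows series[t-w:t] and
    # series[t:t+w], and flag the most dissimilar split.
    best_t, best_d = None, -1.0
    for t in range(w, len(series) - w):
        d = ks_statistic(series[t - w:t], series[t:t + w])
        if d > best_d:
            best_t, best_d = t, d
    return best_t, best_d
```

On a series of 40 zeros followed by 40 fives, the detector flags the split at index 40 with the maximal KS value of 1.0.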

no code implementations • 14 Nov 2019 • Ahmed Abbasi, Abiy Tasissa, Shuchin Aeron

Unlabeled sensing is a linear inverse problem where the measurements are scrambled under an unknown permutation leading to loss of correspondence between the measurements and the rows of the sensing matrix.

no code implementations • 4 Nov 2019 • Kevin C. Cheng, Shuchin Aeron, Michael C. Hughes, Erika Hussey, Eric L. Miller

Two common problems in time series analysis are the decomposition of the data stream into disjoint segments that are each in some sense "homogeneous" - a problem known as Change Point Detection (CPD) - and the grouping of similar nonadjacent segments, a problem that we call Time Series Segment Clustering (TSSC).

no code implementations • 2 Aug 2019 • Yanting Ma, Shuchin Aeron, Hassan Mansour

In this article, we study the convergence of Mirror Descent (MD) and Optimistic Mirror Descent (OMD) for saddle point problems satisfying the notion of coherence proposed in Mertikopoulos et al. We prove convergence of OMD with exact gradients for coherent saddle point problems, and show that monotone convergence occurs only after a sufficiently large number of iterations.
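
The flavor of the optimistic update can be seen on the classic bilinear toy problem min_x max_y xy, where plain gradient descent-ascent cycles outward, while the optimistic variant, which extrapolates using the previous gradient, converges to the saddle point at the origin. A sketch with Euclidean geometry (step size and iteration count are arbitrary):

```python
def omd_bilinear(eta=0.1, steps=2000):
    # Optimistic gradient descent-ascent on f(x, y) = x*y.
    # Descent in x uses grad_x f = y; ascent in y is written as descent
    # with the negated gradient gy = -x, so both updates share one form.
    x, y = 1.0, 1.0
    gx_prev, gy_prev = y, -x
    for _ in range(steps):
        gx, gy = y, -x                  # gradients at the current iterate
        x -= eta * (2 * gx - gx_prev)   # optimistic step: 2*g_t - g_{t-1}
        y -= eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y
```

Starting from (1, 1), the iterates spiral into the unique saddle point (0, 0).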

no code implementations • 20 Aug 2018 • Anuththari Gamage, Brian Rappaport, Shuchin Aeron, Xiaozhe Hu

This data is well represented by multi-view graphs, which consist of several distinct sets of edges over the same nodes.

no code implementations • 13 Mar 2018 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron

Tensor train is a hierarchical tensor network structure that helps alleviate the curse of dimensionality by parameterizing large-scale multidimensional data via a network of low-rank tensors.
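
The parameter saving is easy to quantify: a d-mode tensor with n values per mode has n^d entries, while its tensor train (TT) representation stores d cores of shape (r_{k-1}, n_k, r_k) with boundary ranks equal to 1. A small counting sketch (dimensions and ranks below are illustrative):

```python
def tt_param_count(dims, ranks):
    # TT cores G_k have shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1;
    # dims lists the mode sizes n_k, ranks the d-1 internal TT ranks.
    r = [1] + list(ranks) + [1]
    return sum(r[k] * dims[k] * r[k + 1] for k in range(len(dims)))

# An 8-mode tensor with 10 values per mode has 10**8 entries, but a TT
# representation with all internal ranks equal to 4 needs only 1040 parameters.
```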

no code implementations • ICLR 2018 • Eric Bailey, Charles Meyer, Shuchin Aeron

We present two new word embedding techniques based on tensor factorization and show that they outperform common methods on several semantic NLP tasks when given the same data.

no code implementations • 3 Dec 2017 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron

In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace.

no code implementations • 26 Aug 2017 • Brian Rappaport, Anuththari Gamage, Shuchin Aeron

VEC employs a novel application of the state-of-the-art word2vec model to embed a graph in Euclidean space via random walks on the nodes of the graph.
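
The random-walk step of this recipe can be sketched in a few lines (DeepWalk-style; the walk length, walks-per-node count, and adjacency-dict format are illustrative choices). Each walk plays the role of a "sentence" of node ids that a word2vec-style skip-gram model then embeds:

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    # adj maps each node to a list of its neighbors.
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                if not adj[cur]:
                    break  # dead end: truncate the walk
                cur = rng.choice(adj[cur])
                walk.append(cur)
            walks.append(walk)
    return walks
```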

no code implementations • ICCV 2017 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron

Using the matrix product state (MPS) representation of the recently proposed tensor ring decompositions, in this paper we propose a tensor completion algorithm that alternately minimizes over the factors in the MPS representation.

no code implementations • 18 Jun 2017 • Mohammadhossein Chaghazardi, Shuchin Aeron

Our main tool is the use of tensor subspaces, i.e., subspaces with a Kronecker structure, for embedding the data into lower dimensions.

1 code implementation • 10 Apr 2017 • Eric Bailey, Shuchin Aeron

We show that embeddings based on tensor factorization can be used to discern the various meanings of polysemous words without being explicitly trained to do so, and we motivate the intuition for why this works where existing methods do not.

no code implementations • 15 Oct 2016 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron

Similar to the Union of Subspaces (UOS) model, where the data in each subspace is generated from an (unknown) basis, in the UOPC model the data in each cone is assumed to be generated from a finite number of (unknown) extreme rays. To cluster data under this model, we consider several algorithms, namely (a) Sparse Subspace Clustering by Non-negative constraints Lasso (NCL), (b) Least squares approximation (LSA), and (c) the K-nearest neighbor (KNN) algorithm, to arrive at an affinity between data points.

no code implementations • 5 Oct 2016 • Xiao-Yang Liu, Shuchin Aeron, Vaneet Aggarwal, Xiaodong Wang

The low-tubal-rank tensor model has been recently proposed for real-world multidimensional data.

no code implementations • 29 Sep 2016 • Josh Girson, Shuchin Aeron

In this context, we modify an existing algorithm, namely label propagation, into a variant that weights the propagation by the distance between nodes, in order to identify the categories.
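
For context, a minimal sketch of the unweighted baseline (the variant studied here additionally weights neighbor votes by inter-node distance; the seed labels, tie-break rule, and iteration count below are illustrative):

```python
def label_propagation(adj, seeds, iters=10):
    # seeds: node -> label for the initially labeled nodes; these stay clamped.
    # Every other node repeatedly takes the majority label of its neighbors.
    cur = dict(seeds)
    for _ in range(iters):
        for node, nbrs in adj.items():
            if node in seeds:
                continue
            votes = {}
            for nb in nbrs:
                if nb in cur:
                    votes[cur[nb]] = votes.get(cur[nb], 0) + 1
            if votes:
                # deterministic tie-break: lexicographically smallest label wins
                cur[node] = max(sorted(votes), key=lambda lab: votes[lab])
    return cur
```

On a five-node path with labeled endpoints, the labels spread inward from the seeds until every node is assigned.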

no code implementations • 19 Sep 2016 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron

Using the matrix product state (MPS) representation of tensor train decompositions, in this paper we propose a tensor completion algorithm which alternates over the matrices (tensors) in the MPS representation.

no code implementations • 11 Jul 2016 • Wenqi Wang, Shuchin Aeron, Vaneet Aggarwal

In this paper we present deterministic conditions for success of sparse subspace clustering (SSC) under missing data, when data is assumed to come from a Union of Subspaces (UoS) model.

no code implementations • 15 Apr 2016 • Wenqi Wang, Shuchin Aeron, Vaneet Aggarwal

We provide an extensive set of simulation results for clustering, as well as completion of data under missing entries, under the UoS model.

no code implementations • 31 Dec 2015 • Zemin Zhang, Shuchin Aeron

In this paper a new dictionary learning algorithm for multidimensional data is proposed.

no code implementations • 21 Dec 2015 • Eric Kernfeld, Nathan Majumder, Shuchin Aeron, Misha Kilmer

In this paper we present a new model and an algorithm for unsupervised clustering of 2-D data such as images.

no code implementations • 15 Oct 2015 • Shuchin Aeron, Eric Kernfeld

In this paper we consider the problem of group-invariant subspace clustering, where the data is assumed to come from a union of group-invariant subspaces of a vector space, i.e., subspaces that are invariant with respect to the action of a given group.

no code implementations • 14 Aug 2015 • Vaneet Aggarwal, Shuchin Aeron

In this short note we extend some of the recent results on matrix completion under the assumption that the columns of the matrix can be grouped (clustered) into subspaces (not necessarily disjoint or independent).

no code implementations • 10 Aug 2015 • Xiao-Yang Liu, Shuchin Aeron, Vaneet Aggarwal, Xiaodong Wang, Min-You Wu

In contrast to several existing works that rely on random sampling, this paper shows that adaptivity in sampling can lead to significant improvements in localization accuracy.

no code implementations • 28 Jul 2015 • John Pothier, Josh Girson, Shuchin Aeron

Then, following a construction similar to that in [3], we exploit this algorithm to propose an online algorithm for learning and prediction of tensors with provable regret guarantees.

no code implementations • 16 Feb 2015 • Zemin Zhang, Shuchin Aeron

Using this factorization, one can derive a notion of tensor rank, referred to as the tensor tubal rank, which has optimality properties similar to those of the matrix rank derived from the SVD.
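
The building block here is the t-product, under which a third-order tensor acts like a matrix whose "scalars" are tubes (mode-3 fibers) multiplied by circular convolution. A small pure-Python sketch on nested lists (practical implementations instead work tube-wise in the Fourier domain via the FFT):

```python
def circ_conv(a, b):
    # Circular convolution of two length-n tubes; this is exactly the
    # t-product of two 1 x 1 x n tensors.
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

def t_product(A, B):
    # A is m x k x n and B is k x p x n, stored as nested lists; entry (i, j)
    # of A * B is the sum over l of circ_conv(A[i][l], B[l][j]).
    m, k, n, p = len(A), len(B), len(A[0][0]), len(B[0])
    C = [[[0] * n for _ in range(p)] for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for l in range(k):
                t = circ_conv(A[i][l], B[l][j])
                for s in range(n):
                    C[i][j][s] += t[s]
    return C
```

The identity tensor for this product carries the tube (1, 0, ..., 0) on its diagonal, and t-multiplying by it leaves a tensor unchanged.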

no code implementations • 22 Dec 2014 • Eric Kernfeld, Shuchin Aeron, Misha Kilmer

In this paper, we develop a method for unsupervised clustering of two-way (matrix) data by combining two recent innovations from different fields: the Sparse Subspace Clustering (SSC) algorithm [10], which groups points coming from a union of subspaces into their respective subspaces, and the t-product [18], which was introduced to provide a matrix-like multiplication for third order tensors.

2 code implementations • CVPR 2014 • Zemin Zhang, Gregory Ely, Shuchin Aeron, Ning Hao, Misha Kilmer

Based on the t-SVD, the notion of multilinear rank and a related tensor nuclear norm were proposed in [11] to characterize the informational and structural complexity of multilinear data.

no code implementations • 24 Mar 2014 • Jason Gejie Liu, Shuchin Aeron

Nonnegative matrix factorization (NMF) has been shown to be identifiable under the separability assumption, under which all the columns (or rows) of the input data matrix belong to the convex cone generated by only a few of those columns (or rows) [1].
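
Under separability, the anchor columns can be found greedily. Below is a sketch of the well-known Successive Projection Algorithm (SPA), one standard way to exploit the assumption, and not necessarily the algorithm of the paper cited here:

```python
def spa(X, r):
    # Successive Projection Algorithm: repeatedly pick the column of largest
    # norm, then project all columns onto its orthogonal complement; under
    # separability the picked indices are the anchor ("pure") columns.
    cols = [list(c) for c in zip(*X)]   # column-major copy of X
    picked = []
    for _ in range(r):
        j = max(range(len(cols)), key=lambda i: sum(v * v for v in cols[i]))
        picked.append(j)
        u = list(cols[j])               # copy: deflation below mutates cols[j]
        nu = sum(v * v for v in u)
        for c in cols:
            coef = sum(a * b for a, b in zip(c, u)) / nu
            for t in range(len(c)):
                c[t] -= coef * u[t]
    return sorted(picked)
```

For a 2 x 3 matrix whose first two columns are scaled coordinate vectors and whose third column is their mixture, SPA recovers the first two columns as anchors.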

no code implementations • 8 Jan 2014 • Jason Gejie Liu, Shuchin Aeron

This paper presents a robust algorithm for non-negative matrix factorization (NMF) aimed at large-scale data for which the separability assumption is satisfied.

no code implementations • 2 Jul 2013 • Zemin Zhang, Gregory Ely, Shuchin Aeron, Ning Hao, Misha Kilmer

In this paper we propose novel methods for compression and recovery of multilinear data under limited sampling.

Papers With Code is a free resource with all data licensed under CC-BY-SA.