no code implementations • 16 Sep 2022 • Boris Landa, Xiuyuan Cheng

In this work, we investigate this normalization in a setting where points are sampled from an unknown density on a low-dimensional manifold embedded in high-dimensional space and corrupted by possibly strong, non-identically distributed, sub-Gaussian noise.

1 code implementation • 7 Jul 2022 • Matthew Repasky, Xiuyuan Cheng, Yao Xie

Theoretically, we relate the training dynamics under a large regularization weight to the kernel regression optimization of the "lazy training" regime at early training times.

1 code implementation • 22 Jun 2022 • Xiuyuan Cheng, Boris Landa

This paper proves the convergence of the bi-stochastically normalized graph Laplacian to the manifold (weighted-)Laplacian with rates when the $n$ data points are i.i.d.
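
To make the bi-stochastic normalization concrete, the sketch below symmetrically rescales a Gaussian affinity matrix with a Sinkhorn-style fixed-point iteration so that its rows and columns sum to one, then forms the corresponding Laplacian. This is a generic NumPy illustration, not the authors' released code; the bandwidth, iteration count, and damping are arbitrary choices.

```python
import numpy as np

def bistochastic_normalize(W, n_iter=500, tol=1e-12):
    """Symmetric Sinkhorn-type scaling: find d > 0 so that
    diag(d) @ W @ diag(d) has (approximately) unit row/column sums."""
    d = np.ones(W.shape[0])
    for _ in range(n_iter):
        # geometric-mean damping of the fixed point d = 1 / (W d)
        d_new = np.sqrt(d / (W @ d))
        if np.max(np.abs(d_new - d)) < tol:
            d = d_new
            break
        d = d_new
    return (d[:, None] * W) * d[None, :]

# Toy example: Gaussian affinity on random points, then the Laplacian I - W_bs
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W_bs = bistochastic_normalize(np.exp(-sq / 0.5))
L = np.eye(len(X)) - W_bs
```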

1 code implementation • 14 Jun 2022 • Ziyu Chen, Yingzhou Li, Xiuyuan Cheng

The current paper introduces a new neural network approach, named SpecNet2, to compute spectral embedding which optimizes an equivalent objective of the eigen-problem and removes the orthogonalization layer in SpecNet1.

1 code implementation • 2 Jun 2022 • Chen Xu, Xiuyuan Cheng, Yao Xie

In this work, we address conditional generation using deep invertible neural networks.

1 code implementation • 17 Feb 2022 • Chen Xu, Xiuyuan Cheng, Yao Xie

Despite the vast empirical success of neural networks, theoretical understanding of the training procedures remains limited, especially in providing guarantees on test performance, due to the non-convex nature of the optimization problem.

no code implementations • 8 Feb 2022 • Sarah Huestis-Mitchell, Xiuyuan Cheng, Yao Xie

We analyze a large corpus of police incident narrative documents to understand the spatial distribution of the topics.

no code implementations • NeurIPS 2021 • Zichen Miao, Ze Wang, Xiuyuan Cheng, Qiang Qiu

In this paper, we introduce spatiotemporal joint filter decomposition to decouple spatial and temporal learning, while preserving spatiotemporal dependency in a video.

no code implementations • 29 Sep 2021 • Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

In other words, a CNN is now reduced to layers of filter atoms, typically a few hundred parameters per layer, with a common block of subspace coefficients shared across layers.

1 code implementation • ICLR 2022 • Shixiang Zhu, Haoyun Wang, Zheng Dong, Xiuyuan Cheng, Yao Xie

In this paper, we introduce a novel and general neural network-based non-stationary influence kernel with high expressiveness for handling complex discrete events data while providing theoretical performance guarantees.

1 code implementation • NeurIPS 2021 • Xiuyuan Cheng, Yao Xie

We present a novel neural network Maximum Mean Discrepancy (MMD) statistic by identifying a new connection between neural tangent kernel (NTK) and MMD.

no code implementations • 7 May 2021 • Xiuyuan Cheng, Yao Xie

Specifically, we show that when data densities are supported on a $d$-dimensional sub-manifold $\mathcal{M}$ embedded in an $m$-dimensional space, the kernel two-sample test for data sampled from a pair of distributions $(p, q)$ that are Hölder with order $\beta$ is consistent and powerful when the number of samples $n$ is greater than $\delta_2(p, q)^{-2-d/\beta}$ up to a certain constant, where $\delta_2$ is the squared $\ell_2$-divergence between the two distributions on the manifold.

no code implementations • 28 Feb 2021 • Yixing Zhang, Xiuyuan Cheng, Galen Reeves

The Gaussian-smoothed optimal transport (GOT) framework, recently proposed by Goldfeld et al., scales to high dimensions in estimation and provides an alternative to entropy regularization.

1 code implementation • 25 Jan 2021 • Xiuyuan Cheng, Nan Wu

The result holds for un-normalized and random-walk graph Laplacians when data are uniformly sampled on the manifold, as well as the density-corrected graph Laplacian (where the affinity matrix is normalized by the degree matrix from both sides) with non-uniformly sampled data.
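
As a concrete reading of the parenthetical, the density-corrected construction first divides the affinity by the degree on both sides and then builds a random-walk Laplacian from the corrected affinity. A minimal NumPy sketch, assuming a symmetric kernel affinity matrix W (not the paper's code):

```python
import numpy as np

def density_corrected_laplacian(W):
    """Density-corrected graph Laplacian (sketch): the affinity is normalized
    by the degree matrix from both sides, W~ = D^{-1} W D^{-1}, and a
    random-walk Laplacian is built from W~."""
    d = W.sum(axis=1)
    W_tilde = W / np.outer(d, d)
    d_tilde = W_tilde.sum(axis=1)
    return np.eye(W.shape[0]) - W_tilde / d_tilde[:, None]
```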

no code implementations • 3 Nov 2020 • Xiuyuan Cheng, Hau-Tieng Wu

This paper proves the convergence of the graph Laplacian operator $L_N$ to the manifold (weighted-)Laplacian for a new family of kNN self-tuned kernels $W^{(\alpha)}_{ij} = k_0( \frac{ \| x_i - x_j \|^2}{ \epsilon \hat{\rho}(x_i) \hat{\rho}(x_j)})/\hat{\rho}(x_i)^\alpha \hat{\rho}(x_j)^\alpha$, where $\hat{\rho}$ is the bandwidth function estimated by kNN, and the limiting operator is also parametrized by $\alpha$.
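
A minimal sketch of the self-tuned kernel written above, taking $\hat{\rho}(x_i)$ to be the distance to the $k$-th nearest neighbor and $k_0$ to be a Gaussian profile; both are illustrative choices, and the paper treats a family of such kernels.

```python
import numpy as np

def self_tuned_kernel(X, k=10, eps=1.0, alpha=1.0):
    """W_ij = k0(||xi - xj||^2 / (eps * rho_i * rho_j)) / (rho_i * rho_j)^alpha,
    with rho_i the distance to the k-th nearest neighbor and k0 a Gaussian."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    rho = np.sqrt(np.sort(sq, axis=1)[:, k])   # column 0 is the point itself
    scaled = sq / (eps * np.outer(rho, rho))
    return np.exp(-scaled) / np.outer(rho, rho) ** alpha
```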

no code implementations • 4 Sep 2020 • Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

We then explicitly regularize CNN kernels by enforcing the decomposed coefficients to be shared across sub-structures, while leaving each sub-structure only its own dictionary atoms, typically a few hundred parameters, which leads to dramatic model reductions.

2 code implementations • ICLR 2021 • Xiuyuan Cheng, Zichen Miao, Qiang Qiu

Recent deep models using graph convolutions provide an appropriate framework to handle such non-Euclidean data, but many of them, particularly those based on global graph Laplacians, lack expressiveness to capture local features required for representation of signals lying on the non-Euclidean grid.

1 code implementation • 9 Dec 2019 • Zhongshu Xu, Yingzhou Li, Xiuyuan Cheng

Structured CNNs designed using prior information about the problem can potentially improve efficiency over conventional CNNs in various tasks, such as solving PDEs and inverse problems in signal processing.

no code implementations • 25 Sep 2019 • Xiuyuan Cheng, Alexander Cloninger

The recent success of generative adversarial networks and variational learning suggests training a classifier network may work well in addressing the classical two-sample problem.

no code implementations • 25 Sep 2019 • Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

Domain shifts are frequently encountered in real-world scenarios.

no code implementations • 25 Sep 2019 • Wei Zhu, Qiang Qiu, Robert Calderbank, Guillermo Sapiro, Xiuyuan Cheng

Encoding the input scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many vision tasks especially when dealing with multiscale input signals.

no code implementations • ICLR 2020 • Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space.

no code implementations • NeurIPS 2020 • Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

In this paper, we consider domain-invariant deep learning by explicitly modeling domain shifts with only a small amount of domain-specific parameters in a Convolutional Neural Network (CNN).

no code implementations • 24 Sep 2019 • Wei Zhu, Qiang Qiu, Robert Calderbank, Guillermo Sapiro, Xiuyuan Cheng

Encoding the scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many computer vision tasks especially when dealing with multiscale inputs.

1 code implementation • ECCV 2020 • Henry Li, Ofir Lindenbaum, Xiuyuan Cheng, Alexander Cloninger

Variational autoencoders (VAEs) and generative adversarial networks (GANs) enjoy an intuitive connection to manifold learning: in training the decoder/generator is optimized to approximate a homeomorphism between the data distribution and the sampling space.

1 code implementation • 25 Oct 2018 • Xiuyuan Cheng, Gal Mishne

The extraction of clusters from a dataset which includes multiple clusters and a significant background component is a non-trivial task of practical importance.

1 code implementation • 18 May 2018 • Yingzhou Li, Xiuyuan Cheng, Jianfeng Lu

Theoretical analysis of the approximation power of Butterfly-Net to the Fourier representation of input data shows that the error decays exponentially as the depth increases.

no code implementations • ICLR 2019 • Xiuyuan Cheng, Qiang Qiu, Robert Calderbank, Guillermo Sapiro

Explicit encoding of group actions in deep features makes it possible for convolutional neural networks (CNNs) to handle global deformations of images, which is critical to success in many vision tasks.

1 code implementation • 28 Mar 2018 • Uri Shaham, James Garritano, Yutaro Yamada, Ethan Weinberger, Alex Cloninger, Xiuyuan Cheng, Kelly Stanton, Yuval Kluger

We study the effectiveness of various approaches that defend against adversarial attacks on deep networks via manipulations based on basis function representations of images.

1 code implementation • ICML 2018 • Qiang Qiu, Xiuyuan Cheng, Robert Calderbank, Guillermo Sapiro

In this paper, we suggest to decompose convolutional filters in CNN as a truncated expansion with pre-fixed bases, namely the Decomposed Convolutional Filters network (DCFNet), where the expansion coefficients remain learned from data.
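
The sketch below illustrates the general idea of a convolutional layer whose filters are a truncated expansion over a small set of fixed bases, with only the expansion coefficients trained. The random bases here are placeholders for illustration only; DCFNet itself uses analytic bases such as Fourier-Bessel, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedConv2d(nn.Module):
    """Convolution whose filters are an expansion over K fixed spatial bases;
    only the expansion coefficients are learned (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, kernel_size=5, num_bases=6):
        super().__init__()
        # Fixed bases: (num_bases, kernel_size, kernel_size), not trained
        self.register_buffer("bases", torch.randn(num_bases, kernel_size, kernel_size))
        # Learned expansion coefficients: (out_ch, in_ch, num_bases)
        self.coeff = nn.Parameter(torch.randn(out_ch, in_ch, num_bases) * 0.1)

    def forward(self, x):
        # Reconstruct the filters from coefficients and fixed bases
        filters = torch.einsum("oik,khw->oihw", self.coeff, self.bases)
        return F.conv2d(x, filters, padding=filters.shape[-1] // 2)
```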

1 code implementation • 14 Sep 2017 • Xiuyuan Cheng, Alexander Cloninger, Ronald R. Coifman

The paper introduces a new kernel-based Maximum Mean Discrepancy (MMD) statistic for measuring the distance between two distributions given finitely-many multivariate samples.
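
For reference, the standard unbiased estimator of the squared kernel MMD between two samples, here with a Gaussian kernel; this is a textbook sketch for orientation, not the specific statistic constructed in the paper.

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of MMD^2(p, q) from samples X ~ p and Y ~ q,
    using a Gaussian kernel (textbook estimator, for illustration)."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2.0 * Kxy.mean()
```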

no code implementations • 5 Jun 2017 • Xiuyuan Cheng, Gal Mishne, Stefan Steinerberger

Let $(M, g)$ be a compact manifold and let $-\Delta \phi_k = \lambda_k \phi_k$ be the sequence of Laplacian eigenfunctions.

no code implementations • 24 May 2017 • Bowei Yan, Purnamrita Sarkar, Xiuyuan Cheng

Community detection is a fundamental unsupervised learning problem for unlabeled networks, with a broad range of applications.

no code implementations • 9 Nov 2016 • Xiuyuan Cheng, Manas Rachh, Stefan Steinerberger

We study directed, weighted graphs $G=(V, E)$ and consider the (not necessarily symmetric) averaging operator $$ (\mathcal{L}u)(i) = -\sum_{j \sim i} p_{ij} (u(j) - u(i)),$$ where $p_{ij}$ are normalized edge weights.
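
Written out, the operator acts as $\mathcal{L}u = (\mathrm{diag}(P\mathbf{1}) - P)\,u$, where $P = (p_{ij})$ is the possibly non-symmetric matrix of normalized edge weights; a two-line NumPy sketch:

```python
import numpy as np

def averaging_operator(P, u):
    """(L u)(i) = -sum_j P_ij (u(j) - u(i)) = ((diag(P @ 1) - P) @ u)(i),
    for a (possibly non-symmetric) matrix P of normalized edge weights."""
    return P.sum(axis=1) * u - P @ u
```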

1 code implementation • 6 Feb 2016 • Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph Chang, Yuval Kluger

We show how deep learning methods can be applied in the context of crowdsourcing and unsupervised ensemble learning.

no code implementations • 30 Sep 2015 • Xiuyuan Cheng, Xu Chen, Stephane Mallat

An orthogonal Haar scattering transform is a deep network, computed with a hierarchy of additions, subtractions and absolute values, over pairs of coefficients.
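
One layer of such a transform maps each pair of coefficients $(a, b)$ to $(a + b, |a - b|)$. The sketch below cascades this over consecutive pairs; the paper's orthogonal Haar scattering additionally optimizes the pairings, which is not reproduced here.

```python
import numpy as np

def haar_scattering_layer(x):
    """Map consecutive coefficient pairs (a, b) to (a + b, |a - b|).
    Pairing by adjacency is an illustrative choice; the paper learns pairings."""
    a, b = x[..., 0::2], x[..., 1::2]
    return np.concatenate([a + b, np.abs(a - b)], axis=-1)

# Cascade over a small batch of signals of length 8
x = np.random.default_rng(0).normal(size=(4, 8))
for _ in range(3):
    x = haar_scattering_layer(x)
```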

no code implementations • NeurIPS 2014 • Xu Chen, Xiuyuan Cheng, Stéphane Mallat

The classification of high-dimensional data defined on graphs is particularly difficult when the graph geometry is unknown.
