Search Results for author: Xiuyuan Cheng

Found 47 papers, 22 papers with code

Unsupervised Deep Haar Scattering on Graphs

no code implementations NeurIPS 2014 Xu Chen, Xiuyuan Cheng, Stéphane Mallat

The classification of high-dimensional data defined on graphs is particularly difficult when the graph geometry is unknown.

Classification Dimensionality Reduction +1

Deep Haar Scattering Networks

no code implementations 30 Sep 2015 Xiuyuan Cheng, Xu Chen, Stéphane Mallat

An orthogonal Haar scattering transform is a deep network, computed with a hierarchy of additions, subtractions and absolute values, over pairs of coefficients.

Classification General Classification
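A minimal sketch of the building block described in the abstract above: one Haar scattering layer maps each pair of coefficients $(a, b)$ to the sum $a + b$ and the absolute difference $|a - b|$, and layers are stacked hierarchically. The fixed adjacent pairing below is an illustrative assumption; the papers optimize the pairings.

```python
import numpy as np

def haar_scattering_layer(x, pairs):
    """One orthogonal Haar scattering layer: each pair (i, j) of
    coefficients yields a sum x[i] + x[j] and an absolute
    difference |x[i] - x[j]|."""
    sums = np.array([x[i] + x[j] for i, j in pairs])
    diffs = np.array([abs(x[i] - x[j]) for i, j in pairs])
    return np.concatenate([sums, diffs])

# Toy usage: pair adjacent coordinates of a length-8 signal,
# then stack two layers to form the hierarchy.
x = np.random.randn(8)
pairs = [(2 * k, 2 * k + 1) for k in range(4)]
s1 = haar_scattering_layer(x, pairs)
s2 = haar_scattering_layer(s1, pairs)
```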

A Deep Learning Approach to Unsupervised Ensemble Learning

1 code implementation 6 Feb 2016 Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph Chang, Yuval Kluger

We show how deep learning methods can be applied in the context of crowdsourcing and unsupervised ensemble learning.

Ensemble Learning

On the Diffusion Geometry of Graph Laplacians and Applications

no code implementations 9 Nov 2016 Xiuyuan Cheng, Manas Rachh, Stefan Steinerberger

We study directed, weighted graphs $G=(V, E)$ and consider the (not necessarily symmetric) averaging operator $$(\mathcal{L}u)(i) = -\sum_{j \sim i} p_{ij} \, (u(j) - u(i)),$$ where $p_{ij}$ are normalized edge weights.
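For concreteness, here is a direct NumPy transcription of this operator on a toy directed graph; the row-stochastic normalization of $p_{ij}$ is our assumption.

```python
import numpy as np

# Weighted adjacency of a small directed graph: A[i, j] > 0 iff j ~ i.
A = np.array([[0., 1., 2.],
              [1., 0., 0.],
              [0., 3., 0.]])
P = A / A.sum(axis=1, keepdims=True)   # normalized edge weights p_ij

def averaging_operator(u, P):
    """(L u)(i) = -sum_{j ~ i} p_ij (u(j) - u(i))."""
    return -(P @ u - P.sum(axis=1) * u)

u = np.array([1.0, -2.0, 0.5])
print(averaging_operator(u, P))
```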

Provable Estimation of the Number of Blocks in Block Models

no code implementations 24 May 2017 Bowei Yan, Purnamrita Sarkar, Xiuyuan Cheng

Community detection is a fundamental unsupervised learning problem for unlabeled networks which has a broad range of applications.

Clustering Community Detection

The Geometry of Nodal Sets and Outlier Detection

no code implementations 5 Jun 2017 Xiuyuan Cheng, Gal Mishne, Stefan Steinerberger

Let $(M, g)$ be a compact manifold and let $-\Delta \phi_k = \lambda_k \phi_k$ be the sequence of Laplacian eigenfunctions.

Outlier Detection

Two-sample Statistics Based on Anisotropic Kernels

1 code implementation 14 Sep 2017 Xiuyuan Cheng, Alexander Cloninger, Ronald R. Coifman

The paper introduces a new kernel-based Maximum Mean Discrepancy (MMD) statistic for measuring the distance between two distributions given finitely many multivariate samples.

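As background for the statistic the paper builds on, here is the standard unbiased estimate of the squared MMD with a plain isotropic Gaussian kernel; the paper's anisotropic-kernel construction is not reproduced here.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of MMD^2 between samples X ~ p and Y ~ q."""
    n, m = len(X), len(Y)
    Kxx, Kyy = gaussian_kernel(X, X, sigma), gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()

X = np.random.randn(100, 2)
Y = np.random.randn(100, 2) + 0.5   # shifted distribution
print(mmd2_unbiased(X, Y))          # calibrate via permutations in practice
```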

DCFNet: Deep Neural Network with Decomposed Convolutional Filters

1 code implementation ICML 2018 Qiang Qiu, Xiuyuan Cheng, Robert Calderbank, Guillermo Sapiro

In this paper, we propose decomposing convolutional filters in CNNs as a truncated expansion with pre-fixed bases, namely the Decomposed Convolutional Filters network (DCFNet), where the expansion coefficients are learned from data.

General Classification Image Classification
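The decomposition in miniature: each filter is synthesized as a truncated expansion over $K$ fixed bases with learned coefficients, so only the coefficients are trained. Random orthonormal bases stand in here for the Fourier-Bessel-type bases the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

# K fixed 5x5 basis filters (illustrative random orthonormal bases).
K, size = 3, 5
bases = rng.standard_normal((K, size, size))
bases /= np.linalg.norm(bases.reshape(K, -1), axis=1)[:, None, None]

# Learned part: one coefficient per basis per (output, input) channel pair.
out_ch, in_ch = 8, 4
coeffs = rng.standard_normal((out_ch, in_ch, K)) * 0.1

# Reconstruct the full filters as the truncated expansion sum_k c_k * b_k.
filters = np.einsum('oik,kxy->oixy', coeffs, bases)
print(filters.shape)   # (8, 4, 5, 5), built from 8*4*3 learned coefficients
```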

Defending against Adversarial Images using Basis Functions Transformations

1 code implementation 28 Mar 2018 Uri Shaham, James Garritano, Yutaro Yamada, Ethan Weinberger, Alex Cloninger, Xiuyuan Cheng, Kelly Stanton, Yuval Kluger

We study the effectiveness of various approaches that defend against adversarial attacks on deep networks via manipulations based on basis function representations of images.

RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks

no code implementations ICLR 2019 Xiuyuan Cheng, Qiang Qiu, Robert Calderbank, Guillermo Sapiro

Explicit encoding of group actions in deep features makes it possible for convolutional neural networks (CNNs) to handle global deformations of images, which is critical to success in many vision tasks.

Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks

1 code implementation 18 May 2018 Yingzhou Li, Xiuyuan Cheng, Jianfeng Lu

Theoretical analysis of the approximation power of Butterfly-Net to the Fourier representation of input data shows that the error decays exponentially as the depth increases.

Spectral Embedding Norm: Looking Deep into the Spectrum of the Graph Laplacian

1 code implementation 25 Oct 2018 Xiuyuan Cheng, Gal Mishne

The extraction of clusters from a dataset which includes multiple clusters and a significant background component is a non-trivial task of practical importance.

Anomaly Detection Clustering +1

Variational Diffusion Autoencoders with Random Walk Sampling

1 code implementation ECCV 2020 Henry Li, Ofir Lindenbaum, Xiuyuan Cheng, Alexander Cloninger

Variational autoencoders (VAEs) and generative adversarial networks (GANs) enjoy an intuitive connection to manifold learning: in training, the decoder/generator is optimized to approximate a homeomorphism between the data distribution and the sampling space.

Scaling-Translation-Equivariant Networks with Decomposed Convolutional Filters

no code implementations 24 Sep 2019 Wei Zhu, Qiang Qiu, Robert Calderbank, Guillermo Sapiro, Xiuyuan Cheng

Encoding the scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many computer vision tasks especially when dealing with multiscale inputs.

Image Classification Translation

A Dictionary Approach to Domain-Invariant Learning in Deep Networks

no code implementations NeurIPS 2020 Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

In this paper, we consider domain-invariant deep learning by explicitly modeling domain shifts with only a small amount of domain-specific parameters in a Convolutional Neural Network (CNN).

Domain Adaptation

Classification Logit Two-sample Testing by Neural Networks

1 code implementation 25 Sep 2019 Xiuyuan Cheng, Alexander Cloninger

The recent success of generative adversarial networks and variational learning suggests training a classifier network may work well in addressing the classical two-sample problem.

Classification General Classification +2
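A hedged sketch of the idea: train a classifier to separate the two samples, then use the difference of the mean classifier logits on held-out data as the test statistic, calibrated by permutation. Logistic regression stands in for the neural network here, and the split/statistic details are our simplifications.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logit_two_sample_stat(X_tr, Y_tr, X_te, Y_te):
    """Difference of mean logits on held-out data, after training a
    classifier to separate the two training samples."""
    clf = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_tr, Y_tr]),
        np.r_[np.zeros(len(X_tr)), np.ones(len(Y_tr))])
    return clf.decision_function(Y_te).mean() - clf.decision_function(X_te).mean()

rng = np.random.default_rng(1)
X, Y = rng.normal(0, 1, (200, 5)), rng.normal(0.3, 1, (200, 5))
stat = logit_two_sample_stat(X[:100], Y[:100], X[100:], Y[100:])
print(stat)   # compare against a permutation null to get a p-value
```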

Scale-Equivariant Neural Networks with Decomposed Convolutional Filters

no code implementations 25 Sep 2019 Wei Zhu, Qiang Qiu, Robert Calderbank, Guillermo Sapiro, Xiuyuan Cheng

Encoding the input scale information explicitly into the representation learned by a convolutional neural network (CNN) is beneficial for many vision tasks especially when dealing with multiscale input signals.

Image Classification

Butterfly-Net2: Simplified Butterfly-Net and Fourier Transform Initialization

1 code implementation 9 Dec 2019 Zhongshu Xu, Yingzhou Li, Xiuyuan Cheng

Structured CNNs designed using prior information about the problem can improve efficiency over conventional CNNs in various tasks, such as solving PDEs and inverse problems in signal processing.

Deblurring Denoising

Graph Convolution with Low-rank Learnable Local Filters

2 code implementations ICLR 2021 Xiuyuan Cheng, Zichen Miao, Qiang Qiu

Recent deep models using graph convolutions provide an appropriate framework to handle such non-Euclidean data, but many of them, particularly those based on global graph Laplacians, lack the expressiveness to capture local features required to represent signals lying on a non-Euclidean grid.

Action Recognition Facial Expression Recognition +2

ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution

no code implementations 4 Sep 2020 Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

We then explicitly regularize CNN kernels by enforcing decomposed coefficients to be shared across sub-structures, while leaving each sub-structure only its own dictionary atoms, typically a few hundred parameters, which leads to dramatic reductions in model size.

Image Classification

Convergence of Graph Laplacian with kNN Self-tuned Kernels

no code implementations 3 Nov 2020 Xiuyuan Cheng, Hau-Tieng Wu

This paper proves the convergence of the graph Laplacian operator $L_N$ to the manifold (weighted-)Laplacian for a new family of kNN self-tuned kernels $W^{(\alpha)}_{ij} = k_0\big( \frac{\|x_i - x_j\|^2}{\epsilon \hat{\rho}(x_i) \hat{\rho}(x_j)} \big) / \hat{\rho}(x_i)^\alpha \hat{\rho}(x_j)^\alpha$, where $\hat{\rho}$ is the bandwidth function estimated by kNN, and the limiting operator is also parametrized by $\alpha$.
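A direct implementation of this affinity, assuming $\hat{\rho}(x_i)$ is the distance from $x_i$ to its $k$-th nearest neighbor and $k_0$ is a Gaussian profile (both standard choices; the paper's exact estimator may differ):

```python
import numpy as np

def knn_self_tuned_affinity(X, k=7, eps=1.0, alpha=1.0):
    """W_ij = k0(||x_i - x_j||^2 / (eps rho_i rho_j)) / (rho_i rho_j)^alpha,
    with rho_i the distance from x_i to its k-th nearest neighbor."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    rho = np.sqrt(np.sort(D2, axis=1)[:, k])        # kNN bandwidth estimate
    W = np.exp(-D2 / (eps * np.outer(rho, rho)))    # Gaussian profile k0
    return W / np.outer(rho ** alpha, rho ** alpha)

W = knn_self_tuned_affinity(np.random.randn(200, 3))
```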

Eigen-convergence of Gaussian kernelized graph Laplacian by manifold heat interpolation

1 code implementation 25 Jan 2021 Xiuyuan Cheng, Nan Wu

The result holds for un-normalized and random-walk graph Laplacians when data are uniformly sampled on the manifold, as well as the density-corrected graph Laplacian (where the affinity matrix is normalized by the degree matrix from both sides) with non-uniformly sampled data.
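The density-corrected normalization mentioned above, in a few lines: divide a Gaussian affinity by the degree from both sides, then form the random-walk Laplacian of the corrected affinity. The Gaussian kernel and bandwidth below are illustrative choices.

```python
import numpy as np

def density_corrected_laplacian(X, eps=0.5):
    """Random-walk Laplacian of the density-corrected affinity
    W_corr = D^{-1} W D^{-1}, where D = diag(W 1)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / eps)
    d = W.sum(axis=1)
    W_corr = W / np.outer(d, d)       # normalize by the degree from both sides
    return np.eye(len(X)) - W_corr / W_corr.sum(axis=1, keepdims=True)

L = density_corrected_laplacian(np.random.randn(100, 2))
```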

Convergence of Gaussian-smoothed optimal transport distance with sub-gamma distributions and dependent samples

no code implementations 28 Feb 2021 Yixing Zhang, Xiuyuan Cheng, Galen Reeves

The Gaussian-smoothed optimal transport (GOT) framework, recently proposed by Goldfeld et al., scales to high dimensions in estimation and provides an alternative to entropy regularization.

Kernel Two-Sample Tests for Manifold Data

1 code implementation 7 May 2021 Xiuyuan Cheng, Yao Xie

Specifically, when data densities $p$ and $q$ are supported on a $d$-dimensional sub-manifold ${M}$ embedded in an $m$-dimensional space and are Hölder with order $\beta$ (up to 2) on ${M}$, we prove a guarantee of the test power for finite sample size $n$ that exceeds a threshold depending on $d$, $\beta$, and $\Delta_2$, the squared $L^2$-divergence between $p$ and $q$ on the manifold, with a properly chosen kernel bandwidth $\gamma$.


Neural Tangent Kernel Maximum Mean Discrepancy

1 code implementation NeurIPS 2021 Xiuyuan Cheng, Yao Xie

We present a novel neural network Maximum Mean Discrepancy (MMD) statistic by identifying a new connection between neural tangent kernel (NTK) and MMD.

Neural Spectral Marked Point Processes

1 code implementation ICLR 2022 Shixiang Zhu, Haoyun Wang, Zheng Dong, Xiuyuan Cheng, Yao Xie

In this paper, we introduce a novel and general neural network-based non-stationary influence kernel with high expressiveness for handling complex discrete events data while providing theoretical performance guarantees.

Point Processes
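The influence-kernel form of the conditional intensity that the paper generalizes: $\lambda(t) = \mu + \sum_{t_i < t} k(t_i, t)$. The paper learns a non-stationary neural $k(s, t)$; the exponential kernel below is a stand-in for illustration.

```python
import numpy as np

def intensity(t, events, mu=0.1, k=lambda s, t: 0.5 * np.exp(-(t - s))):
    """lambda(t) = mu + sum_{t_i < t} k(t_i, t) over past event times t_i."""
    past = events[events < t]
    return mu + k(past, t).sum()

events = np.array([0.5, 1.2, 2.0, 2.1])
print(intensity(2.5, events))   # conditional intensity just after the cluster
```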

A Joint Subspace View to Convolutional Neural Networks

no code implementations 29 Sep 2021 Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

In other words, a CNN is now reduced to layers of filter atoms, typically a few hundred parameters per layer, with a common block of subspace coefficients shared across layers.

Spatiotemporal Joint Filter Decomposition in 3D Convolutional Neural Networks

no code implementations NeurIPS 2021 Zichen Miao, Ze Wang, Xiuyuan Cheng, Qiang Qiu

In this paper, we introduce spatiotemporal joint filter decomposition to decouple spatial and temporal learning, while preserving spatiotemporal dependency in a video.

Action Recognition

Crime Hot-Spot Modeling via Topic Modeling and Relative Density Estimation

no code implementations 8 Feb 2022 Jonathan Zhou, Sarah Huestis-Mitchell, Xiuyuan Cheng, Yao Xie

We present a method to capture groupings of similar calls and determine their relative spatial distribution from a collection of crime record narratives.

Density Estimation Density Ratio Estimation

An alternative approach to train neural networks using monotone variational inequality

1 code implementation 17 Feb 2022 Chen Xu, Xiuyuan Cheng, Yao Xie

We propose an alternative approach to neural network training using the monotone vector field, an idea inspired by the seminal work of Juditsky and Nemirovski [Juditsky & Nemirovski, 2019], developed originally to solve parameter estimation problems for generalized linear models (GLMs) by reducing the original non-convex problem to a convex problem of solving a monotone variational inequality (VI).

Invertible Neural Networks for Graph Prediction

1 code implementation 2 Jun 2022 Chen Xu, Xiuyuan Cheng, Yao Xie

The proposed model consists of an invertible sub-network that maps one-to-one from data to an intermediate encoded feature, which allows forward prediction by a linear classification sub-network as well as efficient generation from output labels via a parametric mixture model.

Anomaly Detection

SpecNet2: Orthogonalization-free spectral embedding by neural networks

1 code implementation 14 Jun 2022 Ziyu Chen, Yingzhou Li, Xiuyuan Cheng

The current paper introduces a new neural network approach, named SpecNet2, to compute spectral embedding which optimizes an equivalent objective of the eigen-problem and removes the orthogonalization layer in SpecNet1.

Computational Efficiency

Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise

1 code implementation 22 Jun 2022 Xiuyuan Cheng, Boris Landa

This paper proves the convergence of the bi-stochastically normalized graph Laplacian to the manifold (weighted-)Laplacian with rates, when $n$ data points are i.i.d.
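A sketch of the bi-stochastic normalization, assuming a symmetric Sinkhorn-type scaling of a Gaussian affinity: at the fixed point the scaling vector $\eta$ satisfies $\eta_i (W\eta)_i = 1$, so $\mathrm{diag}(\eta)\, W\, \mathrm{diag}(\eta)$ has unit row and column sums.

```python
import numpy as np

def bistochastic_laplacian(X, eps=0.5, iters=200):
    """Scale a Gaussian affinity to be bi-stochastic via a damped
    symmetric Sinkhorn iteration, then return I - W_bs."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / eps)
    eta = np.ones(len(X))
    for _ in range(iters):
        eta = np.sqrt(eta / (W @ eta))   # fixed point: eta_i * (W @ eta)_i = 1
    W_bs = eta[:, None] * W * eta[None, :]
    return np.eye(len(X)) - W_bs

L = bistochastic_laplacian(np.random.randn(100, 2))
```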

Neural Stein critics with staged $L^2$-regularization

1 code implementation 7 Jul 2022 Matthew Repasky, Xiuyuan Cheng, Yao Xie

In this paper, we investigate the role of $L^2$ regularization in training a neural network Stein critic so as to distinguish between data sampled from an unknown probability distribution and a nominal model distribution.

Robust Inference of Manifold Density and Geometry by Doubly Stochastic Scaling

no code implementations 16 Sep 2022 Boris Landa, Xiuyuan Cheng

The Gaussian kernel and its traditional normalizations (e.g., row-stochastic) are popular approaches for assessing similarities between data points.

Neural network-based CUSUM for online change-point detection

no code implementations 31 Oct 2022 Tingnan Gong, Junghwan Lee, Xiuyuan Cheng, Yao Xie

Change-point detection, the task of detecting an abrupt change in the data distribution from sequential observations, is a fundamental problem in statistics and machine learning.

Change Point Detection Computational Efficiency

Spatio-temporal point processes with deep non-stationary kernels

no code implementations 21 Nov 2022 Zheng Dong, Xiuyuan Cheng, Yao Xie

Another popular type of deep model for point process data is based on representing the influence kernel (rather than the intensity function) by neural networks.

Computational Efficiency Point Processes

Normalizing flow neural networks by JKO scheme

1 code implementation NeurIPS 2023 Chen Xu, Xiuyuan Cheng, Yao Xie

Normalizing flow is a class of deep generative models for efficient sampling and likelihood estimation, which achieves attractive performance, particularly in high dimensions.

The G-invariant graph Laplacian

no code implementations 29 Mar 2023 Eitan Rosen, Paulina Hoyos, Xiuyuan Cheng, Joe Kileel, Yoel Shkolnisky

We introduce the G-invariant graph Laplacian that generalizes the graph Laplacian by accounting for the action of the group on the data set.

Denoising Dimensionality Reduction

Computing high-dimensional optimal transport by flow neural networks

no code implementations 19 May 2023 Chen Xu, Xiuyuan Cheng, Yao Xie

Flow-based models are widely used in generative tasks, including normalizing flow, where a neural network transports a data distribution $P$ to a normal distribution.

Density Ratio Estimation Image-to-Image Translation +1

Neural Differential Recurrent Neural Network with Adaptive Time Steps

1 code implementation 2 Jun 2023 Yixuan Tan, Liyan Xie, Xiuyuan Cheng

We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states, and we adaptively select time steps based on the steepness of changes of the data over time so as to train the model more efficiently for the "spike-like" time series.

Time Series
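A toy version of the adaptive time-step selection described above: keep a time point whenever the series has moved by more than a tolerance since the last kept point, so "spike-like" regions receive dense steps and flat regions coarse ones. The rule and tolerance are our simplifications.

```python
import numpy as np

def adaptive_steps(x, tol=0.2):
    """Indices of kept time points: a point is kept when the signal has
    changed by more than `tol` since the last kept point."""
    keep = [0]
    for i in range(1, len(x)):
        if abs(x[i] - x[keep[-1]]) > tol:
            keep.append(i)
    return np.array(keep)

t = np.linspace(0, 10, 1000)
x = np.exp(-(t - 5) ** 2 / 0.05)   # a spike around t = 5
idx = adaptive_steps(x)
print(len(idx), "of", len(t), "time steps kept")   # dense only near the spike
```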

G-invariant diffusion maps

no code implementations 12 Jun 2023 Eitan Rosen, Xiuyuan Cheng, Yoel Shkolnisky

The diffusion maps embedding of data lying on a manifold has shown success in tasks ranging from dimensionality reduction and clustering to data visualization.

Data Visualization Dimensionality Reduction

Deep graph kernel point processes

no code implementations 20 Jun 2023 Zheng Dong, Matthew Repasky, Xiuyuan Cheng, Yao Xie

Point process models are widely used for continuous asynchronous event data, where each data point includes time and additional information called "marks", which can be locations, nodes, or event types.

Point Processes

Convergence of flow-based generative models via proximal gradient descent in Wasserstein space

no code implementations 26 Oct 2023 Xiuyuan Cheng, Jianfeng Lu, Yixin Tan, Yao Xie

Flow-based generative models enjoy certain advantages in computing the data generation and the likelihood, and have recently shown competitive empirical performance.

Flow-based Distributionally Robust Optimization

1 code implementation 30 Oct 2023 Chen Xu, JongHyeok Lee, Xiuyuan Cheng, Yao Xie

We present a computationally efficient framework, called $\texttt{FlowDRO}$, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets while aiming to find continuous worst-case distribution (also called the Least Favorable Distribution, LFD) and sample from it.
