Search Results for author: Yizhe Zhu

Found 20 papers, 6 papers with code

Sparse random hypergraphs: Non-backtracking spectra and community detection

no code implementations • 14 Mar 2022 • Ludovic Stephan, Yizhe Zhu

We prove that a spectral method based on the non-backtracking operator for hypergraphs works with high probability down to the generalized Kesten-Stigum detection threshold conjectured by Angelini et al. We characterize the spectrum of the non-backtracking operator for the sparse HSBM, and provide an efficient dimension reduction procedure using the Ihara-Bass formula for hypergraphs.
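
To make the object concrete, here is a minimal sketch of the classic graph non-backtracking matrix and the usual eigenvector-based partition; this is only the graph-case analogue, and the paper's hypergraph operator and Ihara-Bass dimension reduction are not reproduced. The generator parameters and the use of networkx are illustrative assumptions.

```python
# Illustrative sketch (graph case only): the non-backtracking matrix B is indexed
# by directed edges, with B[(u->v),(v->w)] = 1 iff the middle vertex matches and w != u.
import numpy as np
import networkx as nx

# two planted communities, chosen above the Kesten-Stigum threshold
G = nx.planted_partition_graph(2, 100, p_in=0.06, p_out=0.01, seed=0)
directed_edges = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
index = {e: i for i, e in enumerate(directed_edges)}

m = len(directed_edges)
B = np.zeros((m, m))
for (u, v) in directed_edges:
    for w in G.neighbors(v):
        if w != u:
            B[index[(u, v)], index[(v, w)]] = 1.0

# The second eigenvector of B, aggregated over incoming edges, suggests a partition.
eigvals, eigvecs = np.linalg.eig(B)
order = np.argsort(-eigvals.real)
second = eigvecs[:, order[1]].real
score = np.zeros(G.number_of_nodes())
for (u, v) in directed_edges:
    score[v] += second[index[(u, v)]]
labels = (score > 0).astype(int)  # estimated community labels
```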

Community Detection • Dimensionality Reduction +1

Partial recovery and weak consistency in the non-uniform hypergraph Stochastic Block Model

no code implementations • 22 Dec 2021 • Ioana Dumitriu, Haixiao Wang, Yizhe Zhu

When the random hypergraph has bounded expected degrees, we provide a spectral algorithm that outputs a partition with at least a $\gamma$ fraction of the vertices classified correctly, where $\gamma\in (0.5, 1)$ depends on the signal-to-noise ratio (SNR) of the model.
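
A minimal sketch of a generic spectral partition of this flavor is below; it assumes the hypergraph is reduced to a weighted co-occurrence (adjacency-like) matrix by counting pairs inside hyperedges, which is a common device but not necessarily the paper's exact construction, and the function and parameter names are illustrative.

```python
# Generic spectral partition sketch (not the paper's exact algorithm):
# reduce a hypergraph to a weighted co-occurrence matrix, then cluster
# the top eigenvectors. Hyperedges are lists of vertex ids in [0, n).
import numpy as np
from itertools import combinations
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def spectral_partition(hyperedges, n_vertices, k):
    A = lil_matrix((n_vertices, n_vertices))
    for e in hyperedges:
        for u, v in combinations(e, 2):
            A[u, v] += 1.0
            A[v, u] += 1.0
    # top-k eigenvectors of the symmetric co-occurrence matrix
    vals, vecs = eigsh(A.tocsr(), k=k, which='LA')
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vecs)
```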

Community Detection • Stochastic Block Model

PIVQGAN: Posture and Identity Disentangled Image-to-Image Translation via Vector Quantization

no code implementations • 29 Sep 2021 • Bingchen Liu, Yizhe Zhu, Xiao Yang, Ahmed Elgammal

The VQSN module enables a finer separation of posture and identity, while the training scheme ensures that the VQSN module learns pose-related representations.

Disentanglement • Image-to-Image Translation +2

Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks

no code implementations • 20 Sep 2021 • Zhichao Wang, Yizhe Zhu

In this paper, we study the two-layer fully connected neural network given by $f(X)=\frac{1}{\sqrt{d_1}}\boldsymbol{a}^\top\sigma\left(WX\right)$, where $X\in\mathbb{R}^{d_0\times n}$ is a deterministic data matrix, $W\in\mathbb{R}^{d_1\times d_0}$ and $\boldsymbol{a}\in\mathbb{R}^{d_1}$ are random Gaussian weights, and $\sigma$ is a nonlinear activation function.
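
The setup in the abstract is easy to simulate directly; the sketch below mirrors the notation, under the assumption (suggested by the title but not stated in the snippet) that the quantity of interest is the empirical spectrum of the conjugate kernel $\frac{1}{d_1}\sigma(WX)^\top\sigma(WX)$ in the ultra-wide regime. Dimensions and the choice of nonlinearity are illustrative.

```python
# Simulate the random-feature model from the abstract and inspect the
# empirical spectrum of (1/d1) * sigma(WX)^T sigma(WX).
import numpy as np

rng = np.random.default_rng(0)
d0, d1, n = 400, 4000, 800                        # "ultra-wide": d1 >> n
X = rng.standard_normal((d0, n)) / np.sqrt(d0)    # fixed data matrix, columns of norm ~1
W = rng.standard_normal((d1, d0))                 # random Gaussian weights
a = rng.standard_normal(d1)

sigma = np.tanh                                   # a nonlinear activation
H = sigma(W @ X)                                  # hidden representation, d1 x n
f = (a @ H) / np.sqrt(d1)                         # network output f(X), length n

CK = H.T @ H / d1                                 # conjugate kernel, n x n
spectrum = np.linalg.eigvalsh(CK)                 # its empirical eigenvalue distribution
```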

Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis

5 code implementations • ICLR 2021 • Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal

Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images.

Image Generation

Self-Supervised Sketch-to-Image Synthesis

1 code implementation • 16 Dec 2020 • Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal

Moreover, with the proposed sketch generator, the model shows promising performance on style mixing and style transfer, which require synthesized images to be both style-consistent and semantically meaningful.

Image Generation • Self-Supervised Learning +1

Global eigenvalue fluctuations of random biregular bipartite graphs

no code implementations • 26 Aug 2020 • Ioana Dumitriu, Yizhe Zhu

We compute the eigenvalue fluctuations of uniformly distributed random biregular bipartite graphs with fixed and growing degrees for a large class of analytic functions.
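
A rough simulation sketch of the objects involved is below: it samples a biregular bipartite multigraph by random stub matching, which only approximates the uniform simple biregular model studied in the paper, and then evaluates one example linear eigenvalue statistic. The degrees and the chosen test function are arbitrary illustrations.

```python
# Stub-matching sketch of a (dL, dR)-biregular bipartite graph and a linear
# eigenvalue statistic tr f(A); multi-edges are possible, so this is only an
# approximation of the uniform simple-graph model.
import numpy as np

rng = np.random.default_rng(1)
nL, dL = 300, 4
dR = 6
nR = nL * dL // dR                       # nL*dL == nR*dR stubs on each side

left_stubs = np.repeat(np.arange(nL), dL)
right_stubs = np.repeat(np.arange(nR), dR)
rng.shuffle(right_stubs)

# biadjacency matrix from the random stub matching
Bmat = np.zeros((nL, nR))
for u, v in zip(left_stubs, right_stubs):
    Bmat[u, v] += 1

A = np.block([[np.zeros((nL, nL)), Bmat],
              [Bmat.T, np.zeros((nR, nR))]])
eigs = np.linalg.eigvalsh(A)
stat = np.sum(eigs ** 3)                 # example linear statistic tr A^3
```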

Probability • Combinatorics

TIME: Text and Image Mutual-Translation Adversarial Networks

no code implementations • 27 May 2020 • Bingchen Liu, Kunpeng Song, Yizhe Zhu, Gerard de Melo, Ahmed Elgammal

Focusing on text-to-image (T2I) generation, we propose Text and Image Mutual-Translation Adversarial Networks (TIME), a lightweight but effective model that jointly learns a T2I generator G and an image captioning discriminator D under the Generative Adversarial Network framework.

Image Captioning • Language Modelling +2

S3VAE: Self-Supervised Sequential VAE for Representation Disentanglement and Data Generation

no code implementations • CVPR 2020 • Yizhe Zhu, Martin Renqiang Min, Asim Kadav, Hans Peter Graf

We propose a sequential variational autoencoder to learn disentangled representations of sequential data (e.g., videos and audio) under self-supervision.

Disentanglement

Federated Adversarial Domain Adaptation

no code implementations • ICLR 2020 • Xingchao Peng, Zijun Huang, Yizhe Zhu, Kate Saenko

In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node.

Disentanglement • Domain Adaptation +3

Deterministic tensor completion with hypergraph expanders

2 code implementations • 23 Oct 2019 • Kameron Decker Harris, Yizhe Zhu

We provide a novel analysis of low-rank tensor completion based on hypergraph expanders.

OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization

1 code implementation • 26 May 2019 • Bingchen Liu, Yizhe Zhu, Zuohui Fu, Gerard de Melo, Ahmed Elgammal

Exploring the potential of GANs for unsupervised disentanglement learning, this paper proposes a novel GAN-based disentanglement framework with One-Hot Sampling and Orthogonal Regularization (OOGAN).
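
The two named ingredients have standard implementations, sketched below as a toy version: a one-hot control code concatenated with continuous noise, and an orthogonality penalty on a weight matrix. How exactly OOGAN wires these into the GAN may differ, and the function names, dimensions, and the final commented loss line are illustrative assumptions.

```python
# Common implementations of one-hot latent sampling and orthogonal
# regularization; a sketch, not OOGAN's exact architecture.
import torch
import torch.nn.functional as F

def sample_latent(batch, c_dim=10, z_dim=118):
    """Concatenate a one-hot control code c with continuous noise z."""
    idx = torch.randint(0, c_dim, (batch,))
    c = F.one_hot(idx, num_classes=c_dim).float()
    z = torch.randn(batch, z_dim)
    return torch.cat([c, z], dim=1)

def orthogonal_regularization(weight):
    """Penalize ||W W^T - I||_F^2, pushing the rows of W toward orthogonality."""
    W = weight.view(weight.size(0), -1)
    gram = W @ W.t()
    eye = torch.eye(gram.size(0), device=W.device)
    return ((gram - eye) ** 2).sum()

latent = sample_latent(16)  # would feed the generator
# total_loss = gan_loss + lambda_ortho * orthogonal_regularization(some_layer.weight)
```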

Disentanglement

Learning Feature-to-Feature Translator by Alternating Back-Propagation for Generative Zero-Shot Learning

1 code implementation • ICCV 2019 • Yizhe Zhu, Jianwen Xie, Bingchen Liu, Ahmed Elgammal

We investigate learning feature-to-feature translator networks by alternating back-propagation as a general-purpose solution to zero-shot learning (ZSL) problems.
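
Alternating back-propagation, in its generic form, alternates Langevin-style inference of per-example latent codes with gradient updates of the generator; the toy sketch below only illustrates that alternating scheme, not the paper's class-semantics-conditioned feature-to-feature translator, and the architecture and hyperparameters are assumptions.

```python
# Generic alternating back-propagation sketch: (1) Langevin updates of latent
# codes z_i, then (2) gradient updates of the generator parameters.
import torch
import torch.nn as nn

feat_dim, z_dim, n = 64, 16, 256
features = torch.randn(n, feat_dim)          # stand-in for visual features
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
z = torch.zeros(n, z_dim, requires_grad=True)
opt_theta = torch.optim.Adam(G.parameters(), lr=1e-3)
sigma, step_z = 0.3, 0.1

for epoch in range(50):
    # (1) inferential step: a few Langevin updates on the latent codes
    for _ in range(5):
        recon = G(z)
        logp = -((features - recon) ** 2).sum() / (2 * sigma ** 2) - (z ** 2).sum() / 2
        grad_z, = torch.autograd.grad(logp, z)
        with torch.no_grad():
            z += 0.5 * step_z ** 2 * grad_z + step_z * torch.randn_like(z)
    # (2) learning step: update generator parameters given the inferred codes
    opt_theta.zero_grad()
    loss = ((features - G(z.detach())) ** 2).sum() / (2 * sigma ** 2)
    loss.backward()
    opt_theta.step()
```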

Zero-Shot Learning

Semantic-Guided Multi-Attention Localization for Zero-Shot Learning

no code implementations • NeurIPS 2019 • Yizhe Zhu, Jianwen Xie, Zhiqiang Tang, Xi Peng, Ahmed Elgammal

Zero-shot learning extends conventional object classification to unseen-class recognition by introducing semantic representations of classes.

Zero-Shot Learning

Exact Recovery in the Hypergraph Stochastic Block Model: a Spectral Algorithm

no code implementations • 16 Nov 2018 • Sam Cole, Yizhe Zhu

We consider the exact recovery problem in the hypergraph stochastic block model (HSBM) with $k$ blocks of equal size.

Stochastic Block Model

Link the head to the "beak": Zero Shot Learning from Noisy Text Description at Part Precision

no code implementations • CVPR 2017 • Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, Ahmed Elgammal

We propose a learning framework that is able to connect text terms to their relevant parts and suppress connections to non-visual text terms without any part-text annotations.

Zero-Shot Learning

A Multilayer-Based Framework for Online Background Subtraction with Freely Moving Cameras

no code implementations • ICCV 2017 • Yizhe Zhu, Ahmed Elgammal

The exponentially increasing use of moving platforms for video capture creates an urgent need for general background subtraction algorithms that can handle moving backgrounds.
