no code implementations • 28 Dec 2022 • Ryan Aponte, Ryan A. Rossi, Shunan Guo, Jane Hoffswell, Nedim Lipka, Chang Xiao, Gromit Chan, Eunyee Koh, Nesreen Ahmed
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph.
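The key idea above, that a node's embedding depends on which hyperedge it is viewed through, can be sketched in a few lines (the dimension, random initialization, and mean aggregator here are illustrative assumptions, not HNN's actual parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension (illustrative)

# A toy hypergraph: each hyperedge is a set of nodes.
hyperedges = {"e1": [0, 1, 2], "e2": [1, 3]}

# Hyperedge-dependent node embeddings: node 1 gets a *different*
# vector in e1 than in e2, which is the core idea in the snippet above.
node_emb = {(v, e): rng.normal(size=d)
            for e, nodes in hyperedges.items() for v in nodes}

# A simple hyperedge embedding: the mean of its members'
# edge-dependent vectors (a placeholder aggregator, not HNN's).
edge_emb = {e: np.mean([node_emb[(v, e)] for v in nodes], axis=0)
            for e, nodes in hyperedges.items()}
```

A learned model would train these vectors end to end; the sketch only shows the data layout that makes the embeddings hyperedge-dependent.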
no code implementations • 15 Sep 2021 • Henrique Teles Maia, Chang Xiao, Dingzeyu Li, Eitan Grinspun, Changxi Zheng
We find that evaluating each layer component produces an identifiable magnetic signature, from which layer topology, width, function type, and sequence order can be inferred using a suitably trained classifier and a joint consistency optimization based on integer programming.
1 code implementation • 17 Jul 2021 • Hengguan Huang, Hongfu Liu, Hao Wang, Chang Xiao, Ye Wang
In this paper, we present a probabilistic ordinary differential equation (ODE), called STochastic boundaRy ODE (STRODE), that learns both the timings and the dynamics of time series data without requiring any timing annotations during training.
Automatic Speech Recognition (ASR) +4
1 code implementation • ICCV 2021 • Rundi Wu, Chang Xiao, Changxi Zheng
We present the first 3D generative model for a drastically different shape representation: describing a shape as a sequence of computer-aided design (CAD) operations.
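A sequence-of-operations shape representation can be sketched as an ordered list of parameterized commands that a generative model would emit token by token (the operation names and parameters below are illustrative, not the paper's exact command vocabulary):

```python
# A shape as an ordered list of CAD operations (illustrative schema).
cad_sequence = [
    {"op": "sketch_circle", "center": (0.0, 0.0), "radius": 1.0},
    {"op": "extrude", "distance": 0.5},
    {"op": "sketch_rect", "corner": (0.2, 0.2), "w": 0.3, "h": 0.3},
    {"op": "extrude", "distance": -0.2},  # negative distance = cut
]

def tokenize(seq):
    """Flatten a CAD sequence into tokens a sequence model could predict."""
    tokens = []
    for cmd in seq:
        tokens.append(cmd["op"])
        tokens.extend(f"{k}={v}" for k, v in cmd.items() if k != "op")
    return tokens
```

Because the representation is a sequence, autoregressive generation applies directly, which is what distinguishes it from voxel or point-cloud generative models.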
no code implementations • 22 Jun 2020 • Jingtian Peng, Chang Xiao, YiFan Li
We introduce RP2K, a new large-scale retail product dataset for fine-grained image classification.
1 code implementation • CVPR 2020 • Chang Xiao, Changxi Zheng
To defend against adversarial examples, a plausible idea is to obfuscate the network's gradient with respect to the input image.
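The input gradient being obfuscated is the quantity that white-box attacks such as FGSM follow. A finite-difference sketch on a toy linear "network" (illustrative only, not the paper's model) shows what an attacker computes:

```python
import numpy as np

# Toy differentiable model: squared error of a linear score (illustrative).
w = np.array([0.3, -0.8, 0.5])
loss = lambda x: float((w @ x - 1.0) ** 2)

def input_grad(f, x, h=1e-5):
    """Central-difference gradient of f with respect to the input x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([1.0, 2.0, 0.5])
g = input_grad(loss, x)
x_adv = x + 0.1 * np.sign(g)  # one FGSM-style perturbation step
```

A gradient-obfuscating defense aims to make `g` uninformative at test time; the sketch only illustrates the attack surface, not any defense.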
1 code implementation • ICLR 2020 • Chang Xiao, Peilin Zhong, Changxi Zheng
In all cases, the robustness of k-WTA networks outperforms that of traditional networks under white-box attacks.
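The k-WTA (k-winners-take-all) activation referenced above keeps only the k largest activations in a layer and zeroes the rest; a minimal numpy sketch (not the paper's implementation):

```python
import numpy as np

def k_wta(x, k):
    """k-winners-take-all: keep the k largest entries of x, zero the rest."""
    x = np.asarray(x, dtype=float)
    idx = np.argpartition(x, -k)[-k:]  # indices of the k winners
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

k_wta([3.0, -1.0, 0.5, 2.0], k=2)  # -> [3., 0., 0., 2.]
```

The activation is discontinuous in its input: an infinitesimal change can swap a winner, which disrupts the gradient information that white-box attacks rely on.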
no code implementations • NeurIPS 2019 • Peilin Zhong, Yuchen Mo, Chang Xiao, Peng-Yu Chen, Changxi Zheng
The conventional wisdom is to reduce, through training, a statistical distance (such as an $f$-divergence) between the generated distribution and the provided data distribution.
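As a concrete instance of such a statistical distance, KL divergence (one member of the $f$-divergence family) between two discrete distributions can be computed in a few lines (the epsilon smoothing is an illustrative numerical convenience):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, one member of the
    f-divergence family mentioned above."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

kl_divergence([0.5, 0.5], [0.5, 0.5])  # identical distributions -> 0.0
```

Note that KL is asymmetric: KL(p||q) and KL(q||p) penalize mismatches differently, which is one reason different divergence choices behave differently with respect to mode coverage.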
1 code implementation • NeurIPS 2018 • Chang Xiao, Peilin Zhong, Changxi Zheng
This paper addresses the mode collapse for generative adversarial networks (GANs).
no code implementations • 28 Jul 2017 • Chang Xiao, Cheng Zhang, Changxi Zheng
We then introduce an algorithm that embeds a user-provided message in the text document and produces an encoded document whose appearance is minimally perturbed from the original document.