no code implementations • 10 Mar 2023 • Si-An Chen, Chun-Liang Li, Nate Yoder, Sercan O. Arik, Tomas Pfister
Our results underline the importance of efficiently utilizing cross-variate and auxiliary information for improving the performance of time series forecasting.
1 code implementation • 6 Feb 2023 • Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
Existing methods rely on supervised learning of CIR models using labeled triplets consisting of the query image, text specification, and the target image.
no code implementations • 12 Jan 2023 • Ruoxi Sun, Chun-Liang Li, Sercan O. Arik, Michael W. Dusenberry, Chen-Yu Lee, Tomas Pfister
Accurate estimation of output quantiles is crucial in many use cases where modeling the range of possible outcomes is desired.
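Quantile estimates of this kind are typically trained with the pinball (quantile) loss; the following is a minimal sketch of that loss, not necessarily the paper's exact objective:

```python
def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss for a target quantile level tau in (0, 1).

    Under-prediction is penalized with weight tau and over-prediction
    with weight (1 - tau), so minimizing the expected loss yields the
    tau-quantile of the target distribution.
    """
    diff = y_true - y_pred
    return tau * diff if diff >= 0 else (tau - 1) * diff
```

For example, with tau = 0.9 an under-prediction of 1.0 costs 0.9 while an over-prediction of 1.0 costs only 0.1, which pushes the predictor toward the upper tail.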
1 code implementation • 1 Dec 2022 • Songwei Ge, Shlok Mishra, Simon Kornblith, Chun-Liang Li, David Jacobs
To exploit such a structure, we propose a contrastive learning framework where a Euclidean loss is used to learn object representations and a hyperbolic loss is used to encourage representations of scenes to lie close to representations of their constituent objects in a hyperbolic space.
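The hyperbolic half of this objective rests on a geodesic distance in hyperbolic space; a minimal sketch assuming the Poincaré-ball model (the paper's actual loss, curvature, and embedding dimensions may differ), where pulling a scene embedding toward a constituent object means minimizing this distance:

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points strictly inside the unit
    Poincare ball, d(u, v) = arcosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2))).
    """
    sq_norm = lambda x: sum(c * c for c in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1 - sq_norm(u)) * (1 - sq_norm(v))
    return math.acosh(1 + 2 * diff / denom)
```

Distances blow up near the ball's boundary, which is what lets the hierarchy (scenes near the origin, objects toward the boundary, or vice versa) be encoded geometrically.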
no code implementations • 30 Nov 2022 • Jinsung Yoon, Kihyuk Sohn, Chun-Liang Li, Sercan O. Arik, Tomas Pfister
Semi-supervised anomaly detection is a common problem, as often the datasets containing anomalies are partially labeled.
no code implementations • 2 Jun 2022 • Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
However, a naive unification of the real caption and the prompt sentences could complicate learning, as the distribution shift in text may not be handled properly by the language encoder.
no code implementations • 30 Mar 2022 • Yuliang Zou, Zizhao Zhang, Chun-Liang Li, Han Zhang, Tomas Pfister, Jia-Bin Huang
We propose a test-time adaptation method for cross-domain image segmentation.
no code implementations • ACL 2022 • Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, Tomas Pfister
Sequence modeling has demonstrated state-of-the-art performance on natural language and document understanding tasks.
1 code implementation • 4 Feb 2022 • Sana Tonekaboni, Chun-Liang Li, Sercan Arik, Anna Goldenberg, Tomas Pfister
Learning representations that capture the factors contributing to this variability enables a better understanding of the data via its underlying generative process and improves performance on downstream machine learning tasks.
no code implementations • 21 Dec 2021 • Kihyuk Sohn, Jinsung Yoon, Chun-Liang Li, Chen-Yu Lee, Tomas Pfister
We define a distance function between images, each of which is represented as a bag of embeddings, as the Euclidean distance between their weighted-average embeddings.
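That distance function is simple enough to sketch directly; the embeddings and weights below are illustrative placeholders for whatever patch encoder and weighting scheme produce them:

```python
import numpy as np

def bag_distance(emb_a, w_a, emb_b, w_b):
    """Distance between two images, each represented as a bag of
    embeddings: the Euclidean distance between the weighted-average
    embedding of each bag."""
    mean_a = np.average(emb_a, axis=0, weights=w_a)
    mean_b = np.average(emb_b, axis=0, weights=w_b)
    return float(np.linalg.norm(mean_a - mean_b))
```

With uniform weights this reduces to the distance between mean embeddings; non-uniform weights let salient patches dominate the comparison.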
no code implementations • 3 Nov 2021 • Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin
To improve GAN in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of pre-trained classifiers using the original data.
1 code implementation • NeurIPS 2021 • Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin
Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions.
1 code implementation • NeurIPS 2021 • Songwei Ge, Shlok Mishra, Haohan Wang, Chun-Liang Li, David Jacobs
We also show that model bias favors texture and shape features differently under different test settings.
no code implementations • 29 Sep 2021 • Justin Lazarow, Kihyuk Sohn, Chun-Liang Li, Zizhao Zhang, Chen-Yu Lee, Tomas Pfister
While remarkable progress in imbalanced supervised learning has been made recently, less attention has been given to imbalanced semi-supervised learning (SSL), where not only are few labeled examples provided, but the underlying data distribution can also be severely imbalanced.
1 code implementation • NeurIPS 2021 • Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin
Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations.
no code implementations • ACL 2021 • Chen-Yu Lee, Chun-Liang Li, Chu Wang, Renshen Wang, Yasuhisa Fujii, Siyang Qin, Ashok Popat, Tomas Pfister
Natural reading orders of words are crucial for information extraction from form-like documents.
no code implementations • 11 Jun 2021 • Jinsung Yoon, Kihyuk Sohn, Chun-Liang Li, Sercan O. Arik, Chen-Yu Lee, Tomas Pfister
We demonstrate our method on various unsupervised AD tasks with image and tabular data.
1 code implementation • ICLR 2022 • Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard
Explaining deep learning model inferences is a promising avenue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and beyond, as argued by many scholars.
2 code implementations • CVPR 2021 • Chun-Liang Li, Kihyuk Sohn, Jinsung Yoon, Tomas Pfister
We aim to construct a high-performance model for defect detection that detects unknown anomalous patterns in an image without using anomalous data.
Ranked #8 on Unsupervised Anomaly Detection on DAGM2007 (using extra training data)
1 code implementation • ICLR 2021 • Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, Tomas Pfister
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
Ranked #7 on Anomaly Detection on One-class CIFAR-100
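The two-stage recipe above (self-supervised representations first, a one-class classifier on top) can be sketched with a simple nearest-neighbor scorer standing in for the second-stage classifier; this is a generic stand-in, not the paper's specific one-class model, and the choice of k is illustrative:

```python
import numpy as np

def knn_anomaly_score(train_feats, query_feat, k=5):
    """Anomaly score for a query representation: the mean distance to
    its k nearest neighbors among the one-class (normal-only) training
    representations. Larger scores indicate more anomalous inputs."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return float(np.sort(dists)[:k].mean())
```

A threshold on this score, calibrated on held-out normal data, then turns it into a detector.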
2 code implementations • ICLR 2021 • Yuliang Zou, Zizhao Zhang, Han Zhang, Chun-Liang Li, Xiao Bian, Jia-Bin Huang, Tomas Pfister
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
3 code implementations • ICLR 2021 • Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
Contrastive representation learning has been shown to be effective for learning representations from unlabeled data.
no code implementations • NeurIPS 2020 • Sercan O. Arik, Chun-Liang Li, Jinsung Yoon, Rajarishi Sinha, Arkady Epshteyn, Long T. Le, Vikas Menon, Shashank Singh, Leyou Zhang, Nate Yoder, Martin Nikoltchev, Yash Sonthalia, Hootan Nakhost, Elli Kanal, Tomas Pfister
We propose a novel approach that integrates machine learning into compartmental disease modeling to predict the progression of COVID-19.
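The compartmental side of this approach is a standard SEIR-type model; a minimal Euler-integrated step is sketched below. The paper learns time-varying, covariate-dependent rates, whereas the constant rates here are purely illustrative:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, n, dt=1.0):
    """One Euler step of the SEIR compartmental model.

    s, e, i, r: susceptible, exposed, infectious, recovered counts.
    beta: transmission rate, sigma: incubation rate (E -> I),
    gamma: recovery rate (I -> R), n: total population."""
    new_exposed = beta * s * i / n
    new_infectious = sigma * e
    new_recovered = gamma * i
    return (s - new_exposed * dt,
            e + (new_exposed - new_infectious) * dt,
            i + (new_infectious - new_recovered) * dt,
            r + new_recovered * dt)
```

Because every term moves mass between compartments, the total population is conserved at each step, which is a useful sanity check when fitting the rates from data.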
no code implementations • 6 Jul 2020 • Wei-Cheng Chang, Chun-Liang Li, Youssef Mroueh, Yiming Yang
NCK is crucial for successful inference with SVGD in high dimension, as it adapts the kernel to the noise level of the score estimate.
6 code implementations • 10 May 2020 • Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang, Chen-Yu Lee, Tomas Pfister
Semi-supervised learning (SSL) has the potential to improve the predictive performance of machine learning models using unlabeled data.
Ranked #10 on Semi-Supervised Object Detection on COCO 100% labeled data (using extra training data)
no code implementations • 27 Jan 2020 • Chenghui Zhou, Chun-Liang Li, Barnabas Poczos
However, they struggle with the inherent sparsity of meaningful programs in the search space.
24 code implementations • NeurIPS 2020 • Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.
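The core of this approach is confidence-thresholded pseudo-labeling: the prediction on a weakly augmented view supplies a hard pseudo-label, which supervises the strongly augmented view only when the confidence clears a threshold. A minimal per-example sketch (the 0.95 threshold and plain probability lists are illustrative simplifications):

```python
import math

def unlabeled_loss(weak_probs, strong_probs, threshold=0.95):
    """Confidence-thresholded pseudo-labeling loss for one unlabeled
    example: take the hard pseudo-label from the weakly-augmented
    prediction and, if its confidence clears the threshold, apply
    cross-entropy to the strongly-augmented prediction; otherwise
    the example is masked out (zero loss)."""
    confidence = max(weak_probs)
    if confidence < threshold:
        return 0.0
    pseudo_label = weak_probs.index(confidence)
    return -math.log(strong_probs[pseudo_label])
```

Masking low-confidence examples is what keeps early, noisy pseudo-labels from dominating training; the threshold trades label quality against the fraction of unlabeled data used.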
no code implementations • 8 Dec 2019 • Austin Dill, Chun-Liang Li, Songwei Ge, Eunsu Kang
In this work, we explore the idea that effective generative models for point clouds under the autoencoding framework must acknowledge the relationship between a continuous surface, a discretized mesh, and a set of points sampled from the surface.
no code implementations • 8 Dec 2019 • Austin Dill, Songwei Ge, Eunsu Kang, Chun-Liang Li, Barnabas Poczos
The typical approach for incorporating this creative process is to interpolate in a learned latent space so as to avoid the problem of generating unrealistic instances by exploiting the model's learned structure.
2 code implementations • NeurIPS 2020 • Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations.
no code implementations • 25 Sep 2019 • Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister
Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts.
no code implementations • 20 Aug 2019 • Songwei Ge, Austin Dill, Eunsu Kang, Chun-Liang Li, Lingyao Zhang, Manzil Zaheer, Barnabas Poczos
We explore the intersection of human and machine creativity by generating sculptural objects through machine learning.
no code implementations • CVPR 2019 • Chun-Liang Li, Tomas Simon, Jason Saragih, Barnabás Póczos, Yaser Sheikh
As input, we take a sequence of point clouds to be registered as well as an artist-rigged mesh, i.e., a template mesh equipped with a linear-blend skinning (LBS) deformation space parameterized by a skeleton hierarchy.
no code implementations • 26 Feb 2019 • Chun-Liang Li, Wei-Cheng Chang, Youssef Mroueh, Yiming Yang, Barnabás Póczos
While learning the kernel in a data-driven way has been investigated, in this paper we explore learning the spectral distribution of the kernel via implicit generative models parameterized by deep neural networks.
2 code implementations • ICLR 2019 • Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, Barnabás Póczos
Detecting the emergence of abrupt property changes in time series is a challenging problem.
no code implementations • 13 Nov 2018 • Chun-Liang Li, Eunsu Kang, Songwei Ge, Lingyao Zhang, Austin Dill, Manzil Zaheer, Barnabas Poczos
Our approach extends DeepDream from images to 3D point clouds.
1 code implementation • 13 Oct 2018 • Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, Ruslan Salakhutdinov
In this paper, we first show that a straightforward extension of existing GAN algorithms is not applicable to point clouds, because the constraint required for discriminators is undefined for set data.
no code implementations • ICLR 2019 • Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, Alec Jacobson
As such, we propose the direct perturbation of physical parameters that underlie image formation: lighting and geometry.
no code implementations • NeurIPS 2018 • Shashank Singh, Ananya Uppal, Boyue Li, Chun-Liang Li, Manzil Zaheer, Barnabás Póczos
We study minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which, besides classical $\mathcal{L}^p$ losses, includes maximum mean discrepancy (MMD), Wasserstein distance, and total variation distance.
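Of the adversarial losses named above, MMD has a closed-form sample estimator; a minimal sketch of the unbiased squared-MMD estimator under a Gaussian RBF kernel (the bandwidth is an illustrative assumption, and the paper's analysis covers a much broader class of losses):

```python
import numpy as np

def mmd_squared(x, y, bandwidth=1.0):
    """Unbiased estimator of squared MMD between samples x and y
    (arrays of shape (n, d) and (m, d)) under a Gaussian RBF kernel."""
    def gram(a, b):
        sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / (2 * bandwidth ** 2))
    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    n, m = len(x), len(y)
    # Exclude diagonal self-pairs from the within-sample terms
    # to make the estimator unbiased.
    return ((kxx.sum() - np.trace(kxx)) / (n * (n - 1))
            + (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
            - 2 * kxy.mean())
```

The estimate is near zero (and can be slightly negative, being unbiased) when the two samples come from the same distribution, and grows as the distributions separate.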
3 code implementations • 5 Apr 2018 • Xi Ouyang, Yu Cheng, Yifan Jiang, Chun-Liang Li, Pan Zhou
The results show that our framework can smoothly synthesize pedestrians onto background images with varied appearance and different levels of detail.
Ranked #2 on Scene Text Recognition on MSDA
1 code implementation • ICLR 2018 • Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, Yu Cheng
We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional Cumulative Distribution Functions (CDFs) of each coordinate on a leave-one-out basis.
1 code implementation • ICCV 2017 • J. H. Rick Chang, Chun-Liang Li, Barnabas Poczos, B. V. K. Vijaya Kumar, Aswin C. Sankaranarayanan
While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks.
2 code implementations • NeurIPS 2017 • Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, Barnabás Póczos
In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN.
no code implementations • 23 May 2017 • Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, Barnabas Poczos
Large-scale kernel approximation is an important problem in machine learning research.
2 code implementations • 29 Mar 2017 • J. H. Rick Chang, Chun-Liang Li, Barnabas Poczos, B. V. K. Vijaya Kumar, Aswin C. Sankaranarayanan
On the other hand, traditional methods using signal priors can be used in all linear inverse problems but often have worse performance on challenging tasks.
1 code implementation • 8 Mar 2017 • Francois Lanusse, Quanbin Ma, Nan Li, Thomas E. Collett, Chun-Liang Li, Siamak Ravanbakhsh, Rachel Mandelbaum, Barnabas Poczos
We find on our simulated data set that for a rejection rate of non-lenses of 99%, a completeness of 90% can be achieved for lenses with Einstein radii larger than 1.4" and S/N larger than 20 on individual $g$-band LSST exposures.
Instrumentation and Methods for Astrophysics • Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies
no code implementations • 11 Nov 2016 • Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos
Due to the numerical stability and quantifiability of the likelihood, the RBM is commonly used with Bernoulli units.
no code implementations • 4 Jun 2015 • Chun-Liang Li, Hsuan-Tien Lin, Chi-Jen Lu
In this paper, we analyze the convergence rate of a representative algorithm with decayed learning rate (Oja and Karhunen, 1985) in the first family for the general $k>1$ case.
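The algorithm analyzed here is Oja's rule; for the k = 1 case it can be sketched as a streaming update with a decayed learning rate (the rate schedule, per-step renormalization, and hyperparameters below are illustrative choices, not those of the analysis):

```python
import numpy as np

def oja_top_eigenvector(samples, lr0=0.1, decay=0.01):
    """Oja's rule (Oja and Karhunen, 1985) with a decayed learning rate
    lr_t = lr0 / (1 + decay * t). The iterate converges to the top
    eigenvector of the data covariance (the k = 1 case)."""
    d = samples.shape[1]
    w = np.ones(d) / np.sqrt(d)
    for t, x in enumerate(samples):
        lr = lr0 / (1 + decay * t)
        y = x @ w                      # projection onto current estimate
        w += lr * y * (x - y * w)      # Hebbian term with self-normalizing correction
        w /= np.linalg.norm(w)         # renormalize for numerical stability
    return w
```

On data stretched along one axis, the returned vector aligns (up to sign) with that axis, which is the behavior whose convergence rate the paper quantifies.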