Search Results for author: Chun-Liang Li

Found 48 papers, 22 papers with code

TSMixer: An all-MLP Architecture for Time Series Forecasting

no code implementations · 10 Mar 2023 · Si-An Chen, Chun-Liang Li, Nate Yoder, Sercan O. Arik, Tomas Pfister

Our results underline the importance of efficiently utilizing cross-variate and auxiliary information for improving the performance of time series forecasting.

Time Series Forecasting

Pic2Word: Mapping Pictures to Words for Zero-shot Composed Image Retrieval

1 code implementation · 6 Feb 2023 · Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister

Existing methods rely on supervised learning of CIR models using labeled triplets consisting of the query image, text specification, and the target image.

Image Retrieval · Retrieval

Neural Spline Search for Quantile Probabilistic Modeling

no code implementations · 12 Jan 2023 · Ruoxi Sun, Chun-Liang Li, Sercan O. Arik, Michael W. Dusenberry, Chen-Yu Lee, Tomas Pfister

Accurate estimation of output quantiles is crucial in many use cases, where it is desired to model the range of possibilities.

regression · Time Series Forecasting

Hyperbolic Contrastive Learning for Visual Representations beyond Objects

1 code implementation · 1 Dec 2022 · Songwei Ge, Shlok Mishra, Simon Kornblith, Chun-Liang Li, David Jacobs

To exploit such a structure, we propose a contrastive learning framework where a Euclidean loss is used to learn object representations and a hyperbolic loss is used to encourage representations of scenes to lie close to representations of their constituent objects in a hyperbolic space.

Contrastive Learning · Image Classification +4
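The hyperbolic loss above rests on geodesic distance in hyperbolic space. As orientation only, here is a minimal NumPy sketch of the Poincaré-ball distance such a loss could be built on; the function name and the `eps` guard are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the Poincare ball (||x|| < 1)."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / (denom + eps))

# Near the origin the metric is almost Euclidean; near the boundary
# distances blow up, which is what lets scenes "contain" their objects.
origin = np.zeros(2)
near = np.array([0.1, 0.0])
far = np.array([0.9, 0.0])
assert poincare_distance(origin, far) > poincare_distance(origin, near)
```

A quick sanity check: for a point at radius r, the distance to the origin is 2·artanh(r), so pushing scene embeddings toward the boundary makes them far from everything except points on the same ray.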

Prefix Conditioning Unifies Language and Label Supervision

no code implementations · 2 Jun 2022 · Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister

However, a naive unification of the real caption and the prompt sentences could lead to a complication in learning, as the distribution shift in text may not be handled properly in the language encoder.

Contrastive Learning

Decoupling Local and Global Representations of Time Series

1 code implementation · 4 Feb 2022 · Sana Tonekaboni, Chun-Liang Li, Sercan Arik, Anna Goldenberg, Tomas Pfister

Learning representations that capture the factors contributing to this variability enables a better understanding of the data via its underlying generative process and improves performance on downstream machine learning tasks.

Time Series Analysis

Anomaly Clustering: Grouping Images into Coherent Clusters of Anomaly Types

no code implementations · 21 Dec 2021 · Kihyuk Sohn, Jinsung Yoon, Chun-Liang Li, Chen-Yu Lee, Tomas Pfister

We define a distance function between images, each of which is represented as a bag of embeddings, by the Euclidean distance between weighted averaged embeddings.

Anomaly Detection · Deep Clustering +1
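The sentence above fully specifies the image-to-image distance. A minimal NumPy sketch, with names and shapes as illustrative assumptions rather than the authors' code:

```python
import numpy as np

def bag_distance(bag_a, bag_b, weights_a, weights_b):
    """Euclidean distance between weighted averages of two bags of embeddings.

    bag_*: (n_i, d) arrays of per-patch embeddings for one image.
    weights_*: (n_i,) nonnegative weights over the patches.
    """
    mean_a = np.average(bag_a, axis=0, weights=weights_a)
    mean_b = np.average(bag_b, axis=0, weights=weights_b)
    return np.linalg.norm(mean_a - mean_b)

# Uniform weights reduce this to the distance between plain mean embeddings.
a = np.array([[0.0, 0.0], [2.0, 0.0]])
b = np.array([[1.0, 1.0]])
d = bag_distance(a, b, np.ones(2), np.ones(1))
```

Non-uniform weights let the distance emphasize the (rare) anomalous patches instead of the dominant normal background.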

Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration

no code implementations · 3 Nov 2021 · Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin

To improve GAN in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of pre-trained classifiers using the original data.

A Unified View of cGANs with and without Classifiers

1 code implementation · NeurIPS 2021 · Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin

Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow sampling from class-conditional distributions.

Unifying Distribution Alignment as a Loss for Imbalanced Semi-supervised Learning

no code implementations · 29 Sep 2021 · Justin Lazarow, Kihyuk Sohn, Chun-Liang Li, Zizhao Zhang, Chen-Yu Lee, Tomas Pfister

While remarkable progress in imbalanced supervised learning has been made recently, less attention has been given to the setting of imbalanced semi-supervised learning (SSL), where not only is little labeled data provided, but the underlying data distribution can also be severely imbalanced.

Pseudo Label

Object-aware Contrastive Learning for Debiased Scene Representation

1 code implementation · NeurIPS 2021 · Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin

Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations.

Contrastive Learning · Representation Learning +1

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals

1 code implementation · ICLR 2022 · Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard

Explaining deep learning model inferences is a promising avenue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and beyond, as argued by many scholars.

Fairness · Interpretability Techniques for Deep Learning +1

CutPaste: Self-Supervised Learning for Anomaly Detection and Localization

2 code implementations · CVPR 2021 · Chun-Liang Li, Kihyuk Sohn, Jinsung Yoon, Tomas Pfister

We aim to construct a high-performance model for defect detection that detects unknown anomalous patterns in an image without anomalous data.

Ranked #8 on Unsupervised Anomaly Detection on DAGM2007 (using extra training data)

Data Augmentation · Defect Detection +4

Kernel Stein Generative Modeling

no code implementations · 6 Jul 2020 · Wei-Cheng Chang, Chun-Liang Li, Youssef Mroueh, Yiming Yang

NCK is crucial for successful inference with SVGD in high dimension, as it adapts the kernel to the noise level of the score estimate.

Bayesian Inference
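The SVGD referenced above is Stein variational gradient descent. As a toy reference point, here is one SVGD update with a plain RBF kernel in NumPy; this is the vanilla baseline, not the paper's noise-conditional kernel (NCK), and all names are illustrative:

```python
import numpy as np

def svgd_step(particles, score, sigma=1.0, step=0.1):
    """One Stein variational gradient descent (SVGD) update with an RBF kernel.

    particles: (n, d) array; score(x) returns grad log p(x) for each row of x.
    """
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]   # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    # Attractive term pulls particles toward high density; the kernel-gradient
    # term pushes nearby particles apart so the set covers the distribution.
    attract = K @ score(particles)                         # K is symmetric
    repel = -np.einsum('ji,jid->id', K, diff) / sigma ** 2
    return particles + step * (attract + repel) / n

# Toy run: a single particle under a standard normal target (score = -x)
# drifts toward the mode at 0 (from 2.0 to 1.8 with step 0.1).
p = svgd_step(np.array([[2.0]]), lambda x: -x)
```

Adapting `sigma` to the noise level of the score estimate is, per the abstract, what the paper's NCK contributes on top of this baseline.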

A Simple Semi-Supervised Learning Framework for Object Detection

6 code implementations · 10 May 2020 · Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang, Chen-Yu Lee, Tomas Pfister

Semi-supervised learning (SSL) has the potential to improve the predictive performance of machine learning models using unlabeled data.

Ranked #10 on Semi-Supervised Object Detection on COCO 100% labeled data (using extra training data)

Data Augmentation · Image Classification +3

Getting Topology and Point Cloud Generation to Mesh

no code implementations · 8 Dec 2019 · Austin Dill, Chun-Liang Li, Songwei Ge, Eunsu Kang

In this work, we explore the idea that effective generative models for point clouds under the autoencoding framework must acknowledge the relationship between a continuous surface, a discretized mesh, and a set of points sampled from the surface.

Point Cloud Generation

Learned Interpolation for 3D Generation

no code implementations · 8 Dec 2019 · Austin Dill, Songwei Ge, Eunsu Kang, Chun-Liang Li, Barnabas Poczos

The typical approach for incorporating this creative process is to interpolate in a learned latent space so as to avoid the problem of generating unrealistic instances by exploiting the model's learned structure.

On Completeness-aware Concept-Based Explanations in Deep Neural Networks

2 code implementations · NeurIPS 2020 · Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar

Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations.

On Concept-Based Explanations in Deep Neural Networks

no code implementations · 25 Sep 2019 · Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister

Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts.

Developing Creative AI to Generate Sculptural Objects

no code implementations · 20 Aug 2019 · Songwei Ge, Austin Dill, Eunsu Kang, Chun-Liang Li, Lingyao Zhang, Manzil Zaheer, Barnabas Poczos

We explore the intersection of human and machine creativity by generating sculptural objects through machine learning.

Generating 3D Point Clouds

LBS Autoencoder: Self-supervised Fitting of Articulated Meshes to Point Clouds

no code implementations · CVPR 2019 · Chun-Liang Li, Tomas Simon, Jason Saragih, Barnabás Póczos, Yaser Sheikh

As input, we take a sequence of point clouds to be registered as well as an artist-rigged mesh, i.e., a template mesh equipped with a linear-blend skinning (LBS) deformation space parameterized by a skeleton hierarchy.

Implicit Kernel Learning

no code implementations · 26 Feb 2019 · Chun-Liang Li, Wei-Cheng Chang, Youssef Mroueh, Yiming Yang, Barnabás Póczos

While learning the kernel in a data-driven way has been investigated, in this paper we explore learning the spectral distribution of the kernel via implicit generative models parametrized by deep neural networks.

Text Generation

Point Cloud GAN

1 code implementation · 13 Oct 2018 · Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, Ruslan Salakhutdinov

In this paper, we first show that a straightforward extension of existing GAN algorithms is not applicable to point clouds, because the constraint required for discriminators is undefined for set data.

Object Recognition

Nonparametric Density Estimation under Adversarial Losses

no code implementations · NeurIPS 2018 · Shashank Singh, Ananya Uppal, Boyue Li, Chun-Liang Li, Manzil Zaheer, Barnabás Póczos

We study minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which, besides classical $\mathcal{L}^p$ losses, includes maximum mean discrepancy (MMD), Wasserstein distance, and total variation distance.

Density Estimation
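MMD, named above as one of the adversarial losses, has a simple closed-form sample estimate. A minimal NumPy sketch of the biased (V-statistic) estimator with a Gaussian kernel; the function name and kernel choice are illustrative assumptions:

```python
import numpy as np

def mmd2_biased(x, y, sigma=1.0):
    """Biased V-statistic estimate of squared MMD with a Gaussian kernel.

    x: (n, d) samples from P; y: (m, d) samples from Q.
    """
    def gram(a, b):
        # Pairwise squared distances, then the Gaussian kernel matrix.
        d2 = (np.sum(a ** 2, axis=1)[:, None]
              + np.sum(b ** 2, axis=1)[None, :]
              - 2.0 * a @ b.T)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

# Identical samples give (numerically) zero; shifting one sample set
# increases the estimate.
x = np.linspace(-1.0, 1.0, 50)[:, None]
same = mmd2_biased(x, x)
shifted = mmd2_biased(x, x + 3.0)
```

The "adversarial" view in the paper treats such losses as a sup over a function class; for MMD that class is the unit ball of an RKHS, which is what makes the estimator above closed-form.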

Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond

3 code implementations · 5 Apr 2018 · Xi Ouyang, Yu Cheng, Yifan Jiang, Chun-Liang Li, Pan Zhou

The results show that our framework can smoothly synthesize pedestrians onto varied background images with different levels of detail.

Pedestrian Detection · Scene Text Recognition

Sobolev GAN

1 code implementation · ICLR 2018 · Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, Yu Cheng

We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional Cumulative Distribution Functions (CDF) of each coordinate on a leave one out basis.

Text Generation

One Network to Solve Them All -- Solving Linear Inverse Problems Using Deep Projection Models

1 code implementation · ICCV 2017 · J. H. Rick Chang, Chun-Liang Li, Barnabas Poczos, B. V. K. Vijaya Kumar, Aswin C. Sankaranarayanan

While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks.

Compressive Sensing · Image Inpainting +1

MMD GAN: Towards Deeper Understanding of Moment Matching Network

2 code implementations · NeurIPS 2017 · Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, Barnabás Póczos

In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN.

Data-driven Random Fourier Features using Stein Effect

no code implementations · 23 May 2017 · Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, Barnabas Poczos

Large-scale kernel approximation is an important problem in machine learning research.
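The kernel approximation in question is random Fourier features. For orientation, a minimal NumPy sketch of the classic data-independent construction for a Gaussian kernel; the data-driven, Stein-effect variant proposed in the paper changes how the frequencies `W` are chosen, which this sketch does not attempt:

```python
import numpy as np

def rff(x, n_features=2000, sigma=1.0, seed=0):
    """Random Fourier features z(x) with z(x) . z(y) ~ exp(-||x-y||^2 / (2 sigma^2)).

    x: (n, d) data matrix. Frequencies are drawn from the Gaussian kernel's
    spectral distribution N(0, 1/sigma^2).
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(x.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

# Inner products of the features approximate the exact Gaussian kernel.
x = np.array([[0.0], [0.5]])
z = rff(x)
approx = z[0] @ z[1]
exact = np.exp(-0.25 / 2.0)  # k(x0, x1) at distance 0.5 with sigma = 1
```

The approximation error shrinks roughly as 1/sqrt(n_features); improving that constant for a fixed feature budget is exactly what a data-driven choice of `W` targets.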

One Network to Solve Them All --- Solving Linear Inverse Problems using Deep Projection Models

2 code implementations · 29 Mar 2017 · J. H. Rick Chang, Chun-Liang Li, Barnabas Poczos, B. V. K. Vijaya Kumar, Aswin C. Sankaranarayanan

On the other hand, traditional methods using signal priors can be used in all linear inverse problems but often have worse performance on challenging tasks.

Compressive Sensing · Image Inpainting +1

CMU DeepLens: Deep Learning For Automatic Image-based Galaxy-Galaxy Strong Lens Finding

1 code implementation · 8 Mar 2017 · Francois Lanusse, Quanbin Ma, Nan Li, Thomas E. Collett, Chun-Liang Li, Siamak Ravanbakhsh, Rachel Mandelbaum, Barnabas Poczos

We find on our simulated data set that for a rejection rate of non-lenses of 99%, a completeness of 90% can be achieved for lenses with Einstein radii larger than 1.4" and S/N larger than 20 on individual $g$-band LSST exposures.

Instrumentation and Methods for Astrophysics · Cosmology and Nongalactic Astrophysics · Astrophysics of Galaxies
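The quoted operating point (99% rejection of non-lenses at 90% completeness) is a single point on the classifier's ROC curve. A sketch of how such a number is read off raw classifier scores; this is an illustrative helper, not the paper's evaluation code:

```python
import numpy as np

def completeness_at_rejection(scores_lens, scores_nonlens, rejection=0.99):
    """Fraction of true lenses kept at the score threshold that rejects
    `rejection` of the non-lenses (i.e. TPR at a fixed 1% FPR)."""
    thresh = np.quantile(scores_nonlens, rejection)
    return np.mean(scores_lens > thresh)

# Toy example: non-lens scores spread over [0, 1]; one lens scores above
# the 99th-percentile threshold, one below.
neg = np.linspace(0.0, 1.0, 101)
pos = np.array([0.995, 0.5])
c = completeness_at_rejection(pos, neg)
```

Sweeping `rejection` traces out the full completeness-versus-contamination trade-off reported in such studies.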

Annealing Gaussian into ReLU: a New Sampling Strategy for Leaky-ReLU RBM

no code implementations · 11 Nov 2016 · Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos

Due to the numerical stability and quantifiability of the likelihood, the RBM is commonly used with Bernoulli units.

Rivalry of Two Families of Algorithms for Memory-Restricted Streaming PCA

no code implementations · 4 Jun 2015 · Chun-Liang Li, Hsuan-Tien Lin, Chi-Jen Lu

In this paper, we analyze the convergence rate of a representative algorithm with decayed learning rate (Oja and Karhunen, 1985) in the first family for the general $k>1$ case.
