Search Results for author: Xingchao Liu

Found 11 papers, 6 papers with code

ALLSH: Active Learning Guided by Local Sensitivity and Hardness

1 code implementation10 May 2022 Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou

Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data.

Active Learning Few-Shot Learning
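
A minimal sketch (not the authors' code) of the local-sensitivity idea behind ALLSH: rank unlabeled inputs by how much the model's prediction changes under augmentation, then query the most sensitive ones. `model` and `augment` are hypothetical placeholders.

```python
# Sketch of a local-sensitivity acquisition score: rank unlabeled inputs by
# the KL divergence between predictions on an example and on an augmented
# copy of it. `model` and `augment` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def sensitivity_scores(model, augment, x_unlabeled: torch.Tensor) -> torch.Tensor:
    """Higher score = prediction changes more under augmentation."""
    with torch.no_grad():
        log_p = F.log_softmax(model(x_unlabeled), dim=-1)          # p(y|x)
        log_q = F.log_softmax(model(augment(x_unlabeled)), dim=-1) # p(y|x')
    # KL(p || q), computed per example
    return (log_p.exp() * (log_p - log_q)).sum(dim=-1)

def select_for_annotation(model, augment, x_unlabeled, k: int):
    scores = sensitivity_scores(model, augment, x_unlabeled)
    return torch.topk(scores, k).indices  # the k most sensitive examples
```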

FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization

1 code implementation2 Dec 2021 Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu

We approach text-to-image generation by combining the power of the pretrained CLIP representation with an off-the-shelf image generator (GAN), optimizing in the latent space of the GAN to find images that achieve the maximum CLIP score with the given input text.

Text-to-Image Generation Zero-Shot Text-to-Image Generation
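
A minimal sketch of the training-free mechanism the abstract describes: freeze a pretrained GAN and CLIP, and run gradient ascent on the GAN latent to maximize the CLIP score. `generator` and `clip_score` are assumed interfaces; FuseDream itself adds AugCLIP scoring, careful initialization, and latent over-parameterization, none of which is reproduced here.

```python
# Sketch: optimize a GAN latent to maximize a differentiable CLIP similarity.
# Assumptions: `generator(z)` maps latents to images, `clip_score(img, text)`
# returns a differentiable similarity.
import torch

def optimize_latent(generator, clip_score, text: str,
                    latent_dim: int = 512, steps: int = 200, lr: float = 0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = generator(z)             # frozen, off-the-shelf GAN
        loss = -clip_score(image, text)  # ascend the CLIP score
        loss.backward()
        opt.step()
    return generator(z.detach())
```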

Sampling with Trustworthy Constraints: A Variational Gradient Framework

1 code implementation NeurIPS 2021 Xingchao Liu, Xin Tong, Qiang Liu

In this work, we propose a family of constrained sampling algorithms which generalize Langevin Dynamics (LD) and Stein Variational Gradient Descent (SVGD) to incorporate a moment constraint specified by a general nonlinear function.

Bayesian Inference Fairness
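
A rough sketch of constrained sampling under a moment constraint, shown here as a primal-dual Langevin scheme rather than the paper's variational gradient framework: particles follow Langevin dynamics on a tilted target while a multiplier ascends on the average constraint violation E[g(x)] <= 0. All function arguments are hypothetical user-supplied callables.

```python
# Sketch (not the paper's algorithm): primal-dual Langevin sampling from
# pi(x) subject to a moment constraint E[g(x)] <= 0. `grad_log_pi`, `g`, and
# `grad_g` operate on an (n_particles, dim) array, returning per-particle values.
import numpy as np

def constrained_langevin(grad_log_pi, g, grad_g, n_particles=500, dim=2,
                         steps=2000, step_size=1e-2, dual_lr=1e-1, rng=None):
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal((n_particles, dim))
    lam = 0.0
    for _ in range(steps):
        drift = grad_log_pi(x) - lam * grad_g(x)         # tilted score
        noise = rng.standard_normal(x.shape)
        x = x + step_size * drift + np.sqrt(2 * step_size) * noise
        lam = max(0.0, lam + dual_lr * g(x).mean())      # dual ascent, lam >= 0
    return x, lam
```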

Automatic and Harmless Regularization with Constrained and Lexicographic Optimization: A Dynamic Barrier Approach

no code implementations NeurIPS 2021 Chengyue Gong, Xingchao Liu, Qiang Liu

In this work, we consider constrained optimization as a more principled approach for trading off two losses, with a special emphasis on lexicographic optimization, a degenerate limit of constrained optimization which optimizes a secondary loss inside the optimal set of the main loss.
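
A minimal sketch of a dynamic-barrier gradient step for this lexicographic setup: descend the secondary loss f while guaranteeing descent on the main loss g, by picking the smallest nonnegative coefficient that makes the combined direction satisfy a barrier condition. The simple barrier phi = alpha * ||grad g||^2 used below is an assumption; the paper studies more refined choices.

```python
# Sketch of one dynamic-barrier update: d = grad_f + lam * grad_g with lam
# the smallest value >= 0 such that <d, grad_g> >= phi, guaranteeing descent
# on the main loss g while following the secondary loss f.
import numpy as np

def dynamic_barrier_step(x, grad_f, grad_g, lr=1e-2, alpha=0.5):
    gf, gg = grad_f(x), grad_g(x)
    gg_sq = float(gg @ gg)
    phi = alpha * gg_sq                                   # simple barrier choice
    lam = max(0.0, (phi - float(gf @ gg)) / (gg_sq + 1e-12))
    d = gf + lam * gg                                     # combined direction
    return x - lr * d
```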

Profiling Pareto Front With Multi-Objective Stein Variational Gradient Descent

1 code implementation NeurIPS 2021 Xingchao Liu, Xin Tong, Qiang Liu

Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO).

Conflict-Averse Gradient Descent for Multi-task Learning

2 code implementations NeurIPS 2021 Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu

The goal of multi-task learning is to enable more efficient learning than single-task learning by sharing model structures across a diverse set of tasks.

Multi-Task Learning
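
A hedged sketch of a conflict-averse update for two task gradients, following one reading of CAGrad's dual form: pick a convex combination g_w minimizing g_w.g0 + c*||g0||*||g_w|| (here by crude grid search), then move along d = g0 + (c*||g0||/||g_w||) * g_w, where g0 is the average gradient. Treat the exact formulas as assumptions, not the authors' reference code.

```python
# Sketch of a conflict-averse combination of two task gradients g1, g2.
# Grid-searches the 1-D simplex; the closed forms are assumptions.
import numpy as np

def cagrad_two_tasks(g1: np.ndarray, g2: np.ndarray, c: float = 0.5) -> np.ndarray:
    g0 = 0.5 * (g1 + g2)                      # average gradient
    rho = c * np.linalg.norm(g0)
    best_w, best_obj = 0.0, np.inf
    for w in np.linspace(0.0, 1.0, 101):      # crude 1-D grid search
        gw = w * g1 + (1 - w) * g2
        obj = gw @ g0 + rho * np.linalg.norm(gw)
        if obj < best_obj:
            best_w, best_obj = w, obj
    gw = best_w * g1 + (1 - best_w) * g2
    return g0 + (rho / (np.linalg.norm(gw) + 1e-12)) * gw
```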

Centroid Transformers: Learning to Abstract with Attention

no code implementations17 Feb 2021 Lemeng Wu, Xingchao Liu, Qiang Liu

Self-attention, as the key block of transformers, is a powerful mechanism for extracting features from the inputs.

Abstractive Text Summarization Image Classification
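
A minimal sketch of the "abstract n tokens into m < n centroids" idea: m learnable queries cross-attend to the n input tokens, so the cost is O(n*m) rather than O(n^2). This is the generic inducing-point attention pattern, not the paper's exact centroid construction.

```python
# Sketch: attention layer that maps n input tokens to m < n centroid outputs
# via m learned queries. Details differ from the paper's construction.
import torch
import torch.nn as nn

class CentroidAttention(nn.Module):
    def __init__(self, dim: int, n_centroids: int, n_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_centroids, dim))  # learned centroids
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, n, dim)
        q = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        out, _ = self.attn(q, x, x)                       # (batch, m, dim)
        return out
```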

Fast Training of Contrastive Learning with Intermediate Contrastive Loss

no code implementations1 Jan 2021 Chengyue Gong, Xingchao Liu, Qiang Liu

We apply our method to the recently proposed MoCo, SimCLR, and SwAV, and find that we can reduce the computational cost with little loss in performance on ImageNet linear classification and other downstream tasks.

Contrastive Learning
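
A minimal sketch (not the authors' recipe) of the intermediate-loss idea: attach an InfoNCE-style contrastive loss to an intermediate feature in addition to the final embedding, so earlier layers receive a direct training signal. `backbone` is an assumed interface returning (intermediate, final) embeddings for two augmented views stacked as (2N, ...).

```python
# Sketch: InfoNCE loss applied at both an intermediate and the final layer.
import torch
import torch.nn.functional as F

def info_nce(z: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z: (2N, d); rows i and i+N are the two views of the same image."""
    z = F.normalize(z, dim=-1)
    n = z.size(0) // 2
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float('-inf'))          # exclude self-similarity
    targets = torch.arange(2 * n).roll(n)      # positive = the other view
    return F.cross_entropy(sim, targets)

def combined_loss(backbone, views: torch.Tensor, w_mid: float = 0.5):
    z_mid, z_final = backbone(views)           # hypothetical backbone interface
    return info_nce(z_final) + w_mid * info_nce(z_mid)
```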

Certified Monotonic Neural Networks

1 code implementation NeurIPS 2020 Xingchao Liu, Xing Han, Na Zhang, Qiang Liu

In this work, we propose to certify the monotonicity of general piecewise linear neural networks by solving a mixed integer linear programming problem. This provides a new general approach for learning monotonic neural networks with arbitrary model structures.

Fairness
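
A much weaker, conservative stand-in for the paper's MILP certificate (a sketch, not the authors' method): lower-bound the partial derivative of a one-hidden-layer ReLU network over a box via interval arithmetic. A nonnegative bound certifies monotonicity on that box; the MILP formulation is exact where this bound is merely sufficient.

```python
# Sketch: interval-arithmetic lower bound on d f / d x_i for
# f(x) = w2 @ relu(W1 @ x + b1) + b2 over the box [lo, hi].
import numpy as np

def monotone_lower_bound(W1, b1, w2, lo, hi, i):
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    pre_mid = W1 @ center + b1
    pre_rad = np.abs(W1) @ radius
    pre_lo, pre_hi = pre_mid - pre_rad, pre_mid + pre_rad  # pre-activation bounds
    contrib = w2 * W1[:, i]        # per-neuron slope if the neuron is active
    bound = 0.0
    for j in range(len(b1)):
        if pre_lo[j] >= 0:         # certainly active
            bound += contrib[j]
        elif pre_hi[j] > 0:        # state unknown: assume the worst case
            bound += min(0.0, contrib[j])
        # certainly inactive neurons contribute 0
    return bound                   # >= 0 certifies monotonicity in x_i
```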

Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision

no code implementations20 Feb 2020 Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu

We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods that approximate each weight using a single low-precision number.

Object Detection Quantization
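
A minimal sketch of the multipoint idea: approximate a full-precision vector by a sum of scaled low-bit vectors, fitting each new term to the remaining residual. The greedy residual fit below is an illustration, not necessarily the paper's exact selection rule.

```python
# Sketch: greedy multipoint quantization of a weight vector into a linear
# combination of signed low-bit integer vectors, one scale per term.
import numpy as np

def quantize_to_bits(v: np.ndarray, bits: int):
    """Uniformly quantize v to signed `bits`-bit integers with one scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(v).max() / qmax if np.abs(v).max() > 0 else 1.0
    q = np.clip(np.round(v / scale), -qmax, qmax)
    return q, scale

def multipoint_quantize(w: np.ndarray, bits: int = 2, n_points: int = 4):
    residual, terms = w.copy(), []
    for _ in range(n_points):
        q, scale = quantize_to_bits(residual, bits)
        terms.append((q, scale))            # one low-bit vector + its scale
        residual = residual - scale * q     # fit the next term to what's left
    approx = sum(s * q for q, s in terms)
    return terms, approx
```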

Transfer Value or Policy? A Value-centric Framework Towards Transferrable Continuous Reinforcement Learning

no code implementations27 Sep 2018 Xingchao Liu, Tongzhou Mu, Hao Su

In this paper, we investigate the problem of transfer learning across environments with different dynamics while accomplishing the same task in the continuous control domain.

Continuous Control reinforcement-learning +1
