1 code implementation • 10 May 2022 • Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou
Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data.
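As a concrete illustration of the acquisition step this line describes, here is a minimal sketch of one standard strategy, max-entropy uncertainty sampling. It is a generic baseline for selecting informative unlabeled data, not necessarily the selection rule of this particular paper.

```python
import numpy as np

def max_entropy_acquisition(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` pool examples whose predictive distributions
    have the highest entropy, i.e. where the model is most uncertain.

    probs: (n_pool, n_classes) softmax outputs of the current model.
    Returns indices into the unlabeled pool to send for annotation.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:budget]

# Query the 2 most uncertain of 4 pool examples.
pool_probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.6, 0.4], [0.99, 0.01]])
print(max_entropy_acquisition(pool_probs, budget=2))  # -> [1 2]
```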
1 code implementation • 2 Dec 2021 • Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu
We approach text-to-image generation by combining the power of the pretrained CLIP representation with an off-the-shelf image generator (GAN), optimizing in the latent space of the GAN to find images that achieve the maximum CLIP score for the given input text (a sketch of this loop follows the leaderboard line below).
Ranked #14 on Text-to-Image Generation on COCO
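A minimal sketch of the latent-optimization loop described above: gradient-ascend a GAN latent on the CLIP image-text similarity. Here `generator` and `encode_image` stand in for a pretrained GAN and a CLIP image encoder; both handles, and all hyperparameters, are illustrative assumptions rather than the paper's exact setup.

```python
import torch

def clip_guided_latent_search(generator, encode_image, text_features,
                              latent_dim=128, steps=200, lr=0.05):
    """Optimize a GAN latent z so the generated image's CLIP embedding
    aligns with the target text embedding (cosine similarity)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        image = generator(z)                      # (1, 3, H, W)
        img_features = encode_image(image)
        score = torch.cosine_similarity(img_features, text_features).mean()
        opt.zero_grad()
        (-score).backward()                       # maximize the CLIP score
        opt.step()
    return z.detach()
```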
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
In this work, we propose a family of constrained sampling algorithms which generalize Langevin Dynamics (LD) and Stein Variational Gradient Descent (SVGD) to incorporate a moment constraint specified by a general nonlinear function.
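For intuition, here is a sketch of the simplest penalty-based variant of constrained Langevin dynamics, where a quadratic penalty on the constraint function `g` is added to the target log-density. The paper derives a more principled variational-gradient treatment, so treat this as a generic baseline under assumed interfaces `grad_logp`, `g`, and `grad_g`.

```python
import numpy as np

def penalized_langevin(grad_logp, g, grad_g, x0,
                       step=1e-3, lam=10.0, n_iters=5000):
    """Langevin dynamics on log p(x) - lam * g(x)^2: the quadratic
    penalty softly enforces the moment constraint g(x) ~ 0."""
    x = x0.copy()
    for _ in range(n_iters):
        drift = grad_logp(x) - 2.0 * lam * g(x) * grad_g(x)
        x = x + step * drift + np.sqrt(2 * step) * np.random.randn(*x.shape)
    return x
```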
no code implementations • NeurIPS 2021 • Chengyue Gong, Xingchao Liu, Qiang Liu
In this work, we consider constrained optimization as a more principled approach for trading off two losses, with special emphasis on lexicographic optimization, a degenerate limit of constrained optimization that optimizes a secondary loss within the optimal set of the main loss.
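A sketch of the lexicographic idea in its simplest form: descend the main loss `f` until it is near-optimal, then descend the secondary loss `g` with its gradient projected so that `f` does not increase. The threshold rule and projection here are illustrative assumptions, not the paper's exact dynamics.

```python
import numpy as np

def lexicographic_step(theta, f, grad_f, grad_g, f_star, tol=1e-3, lr=0.1):
    """One update of a simple lexicographic scheme over losses f >> g."""
    gf, gg = grad_f(theta), grad_g(theta)
    if f(theta) > f_star + tol:
        direction = gf                      # main loss not yet optimal
    else:
        # Cancel any component of grad g that would increase f:
        # afterwards, gf . direction >= 0, so f is non-increasing.
        coef = min(0.0, np.dot(gg, gf)) / (np.dot(gf, gf) + 1e-12)
        direction = gg - coef * gf
    return theta - lr * direction
```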
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO).
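As a point of reference for what finding diverse Pareto solutions means operationally, here is a naive baseline that traces a two-objective front by sweeping linear scalarization weights; the paper instead spreads a set of particles over the front with an SVGD-style update, so this is only a contrast case with assumed gradient handles.

```python
import numpy as np

def scalarized_pareto_sweep(grad_l1, grad_l2, theta0,
                            n_points=10, lr=0.05, steps=500):
    """Approximate the Pareto front of (L1, L2) by minimizing
    w*L1 + (1-w)*L2 for a grid of weights w in [0, 1]."""
    solutions = []
    for w in np.linspace(0.0, 1.0, n_points):
        theta = theta0.copy()
        for _ in range(steps):
            theta -= lr * (w * grad_l1(theta) + (1 - w) * grad_l2(theta))
        solutions.append(theta)
    return solutions
```

Linear scalarization can miss points on non-convex fronts and tends to cluster solutions, which is exactly the diversity problem the paper targets.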
2 code implementations • NeurIPS 2021 • Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu
The goal of multi-task learning is to enable more efficient learning than single task learning by sharing model structures for a diverse set of tasks.
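For context, a minimal hard-parameter-sharing setup of the kind this line alludes to: one shared trunk with per-task heads, trained on a sum of task losses. This is the standard baseline architecture, not the paper's gradient-manipulation method, which additionally resolves conflicts between the per-task gradients of the shared parameters.

```python
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared trunk + one output head per task."""
    def __init__(self, in_dim, hidden, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_out_dims)

    def forward(self, x):
        h = self.trunk(x)                    # features shared by all tasks
        return [head(h) for head in self.heads]

# e.g. a 10-class classification task and a scalar regression task:
model = HardSharingMTL(in_dim=16, hidden=32, task_out_dims=[10, 1])
```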
no code implementations • 17 Feb 2021 • Lemeng Wu, Xingchao Liu, Qiang Liu
Self-attention, the key building block of Transformers, is a powerful mechanism for extracting features from the inputs (a reference implementation is sketched after the leaderboard line below).
Ranked #349 on Image Classification on ImageNet
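For reference, the standard single-head scaled dot-product self-attention that the entry above refers to, in plain NumPy:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Each position attends to every position, weighted by the
    softmax of scaled query-key similarity.

    X: (n, d) input features; Wq, Wk, Wv: (d, d_k) projection matrices.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V
```

The quadratic (n, n) score matrix is what makes self-attention expensive on long inputs.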
no code implementations • 1 Jan 2021 • Chengyue Gong, Xingchao Liu, Qiang Liu
We apply our method to the recently proposed MoCo, SimCLR, and SwAV, and find that we can reduce the computational cost with little loss in performance on ImageNet linear classification and other downstream tasks.
1 code implementation • NeurIPS 2020 • Xingchao Liu, Xing Han, Na Zhang, Qiang Liu
In this work, we propose to certify the monotonicity of general piecewise-linear neural networks by solving a mixed integer linear programming (MILP) problem. This provides a new general approach for learning monotonic neural networks with arbitrary model structures.
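A sketch of the MILP certification idea for a one-hidden-layer ReLU network over the unit box: the partial derivative is piecewise constant in the activation pattern, so minimizing it over all reachable patterns with big-M constraints yields a certificate when the minimum is non-negative. The single-layer setup, box domain, and `pulp`/CBC solver are illustrative assumptions; the paper handles general piecewise-linear architectures.

```python
import numpy as np
import pulp

def certify_monotone(W1, b1, w2, coord):
    """Certify that f(x) = w2 . relu(W1 x + b1) is non-decreasing in
    x[coord] over the box [0, 1]^d via a big-M MILP over ReLU patterns."""
    h, d = W1.shape
    # Pre-activation bounds over the box give valid big-M constants.
    ub = np.maximum(W1, 0).sum(axis=1) + b1
    lb = np.minimum(W1, 0).sum(axis=1) + b1
    prob = pulp.LpProblem("monotonicity", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{k}", lowBound=0, upBound=1) for k in range(d)]
    z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(h)]
    for j in range(h):
        pre = pulp.lpSum(W1[j, k] * x[k] for k in range(d)) + b1[j]
        prob += pre <= ub[j] * z[j]          # z_j = 0 forces ReLU off
        prob += pre >= lb[j] * (1 - z[j])    # z_j = 1 forces ReLU on
    # Partial derivative wrt x[coord] under activation pattern z.
    prob += pulp.lpSum(w2[j] * W1[j, coord] * z[j] for j in range(h))
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective) >= -1e-6   # small solver tolerance
```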
no code implementations • 20 Feb 2020 • Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu
We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods, which approximate each weight using a single low-precision number.
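A greedy sketch of the multipoint idea: repeatedly quantize the current residual onto a coarse uniform grid and fit the best least-squares scale for that term. The greedy residual scheme and the uniform grid are assumptions for illustration; the paper's method may choose the low-bit vectors differently.

```python
import numpy as np

def multipoint_quantize(w, n_points=3, n_levels=4):
    """Approximate w by sum_i a_i * q_i, where each q_i takes values on a
    coarse uniform grid (low-bit) and a_i is a fitted full-precision scale."""
    residual = w.astype(float).copy()
    terms = []
    for _ in range(n_points):
        lo, hi = residual.min(), residual.max()
        step = (hi - lo) / (n_levels - 1)
        if step == 0.0:
            step = 1.0  # constant residual: grid degenerates to one level
        q = lo + step * np.round((residual - lo) / step)   # snap to grid
        a = float(np.dot(residual, q) / (np.dot(q, q) + 1e-12))  # best scale
        terms.append((a, q))
        residual = residual - a * q
    approx = sum(a * q for a, q in terms)
    return approx, terms
```

Each extra term reduces the approximation residual, trading a little storage and compute for accuracy.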
no code implementations • 27 Sep 2018 • Xingchao Liu, Tongzhou Mu, Hao Su
In this paper, we investigate the problem of transfer learning across environments with different dynamics while accomplishing the same task in the continuous control domain.