1 code implementation • 12 Sep 2023 • Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, Qiang Liu
Leveraging our new pipeline, we create, to the best of our knowledge, the first one-step diffusion-based text-to-image generator with SD-level image quality, achieving an FID (Frechet Inception Distance) of $23.3$ on MS COCO 2017-5k, surpassing the previous state-of-the-art technique, progressive distillation, by a significant margin ($37.2 \rightarrow 23.3$ in FID).
no code implementations • 4 May 2023 • Shujian Zhang, Chengyue Gong, Lemeng Wu, Xingchao Liu, Mingyuan Zhou
Ultimately, with this prompt paragraph, AutoML-GPT automatically conducts the experiments, from data processing to model architecture design and hyperparameter tuning, and produces a predicted training log.
no code implementations • CVPR 2023 • Xingchao Liu, Lemeng Wu, Shujian Zhang, Chengyue Gong, Wei Ping, Qiang Liu
To further accelerate back-propagation, we propose a non-uniform discretization to approximate the ODE trajectory: we measure how straight the trajectory is and merge the straight parts into a single discretization step.
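A minimal sketch of this straightness-based step merging (a generic illustration, not the paper's exact criterion; `merge_straight_segments` and its tolerance are hypothetical):

```python
import numpy as np

def merge_straight_segments(traj, tol=1e-3):
    """Greedily merge consecutive trajectory points whose chord stays
    within `tol` of the intermediate points (i.e. the piece is nearly
    straight), yielding a non-uniform discretization.

    traj: (T, D) array of ODE states at uniform time steps.
    Returns the indices of the retained time steps.
    """
    kept = [0]
    start = 0
    for end in range(2, len(traj)):
        # Unit chord connecting traj[start] and traj[end].
        chord = traj[end] - traj[start]
        chord /= (np.linalg.norm(chord) + 1e-12)
        seg = traj[start + 1:end] - traj[start]
        # Distance of each intermediate point from the chord line.
        proj = seg @ chord
        dist = np.linalg.norm(seg - np.outer(proj, chord), axis=1)
        if dist.max() > tol:
            kept.append(end - 1)   # close the straight piece here
            start = end - 1
    kept.append(len(traj) - 1)
    return np.array(kept)
```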
1 code implementation • CVPR 2023 • Lemeng Wu, Dilin Wang, Chengyue Gong, Xingchao Liu, Yunyang Xiong, Rakesh Ranjan, Raghuraman Krishnamoorthi, Vikas Chandra, Qiang Liu
We perform evaluations on multiple 3D tasks and find that our PSF performs comparably to the standard diffusion model, outperforming other efficient 3D point cloud generation methods.
no code implementations • 2 Nov 2022 • Shujian Zhang, Chengyue Gong, Xingchao Liu
Experiments on different tasks across open question answering, dialogue conversation, and fact verification show that our method consistently outperforms its baselines.
no code implementations • 6 Oct 2022 • Yan Zheng, Lemeng Wu, Xingchao Liu, Zhen Chen, Qiang Liu, QiXing Huang
We first propose a diffusion-based generative model that tackles this problem by generating voxelized shapes with realistic outlines and structures.
2 code implementations • 7 Sep 2022 • Xingchao Liu, Chengyue Gong, Qiang Liu
The idea of rectified flow is to learn an ODE that follows the straight paths connecting points drawn from $\pi_0$ and $\pi_1$ as closely as possible.
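This objective has a very compact form. Below is a minimal PyTorch sketch of the rectified flow training loss, where `v_net` is a placeholder velocity network: pairs $(x_0, x_1)$ are interpolated linearly and the network is regressed onto the straight-line direction $x_1 - x_0$.

```python
import torch

def rectified_flow_loss(v_net, x0, x1):
    """One training step of rectified flow: the velocity network
    v_net(x_t, t) is regressed onto the straight-line direction
    x1 - x0 along the linear interpolation x_t = t*x1 + (1-t)*x0.
    """
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device).view(b, *([1] * (x0.dim() - 1)))
    x_t = t * x1 + (1.0 - t) * x0        # linear interpolation
    target = x1 - x0                     # constant "straight" velocity
    pred = v_net(x_t, t.view(b))         # predicted velocity
    return ((pred - target) ** 2).mean()
```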
no code implementations • 2 Sep 2022 • Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, Qiang Liu
AI-based molecule generation provides a promising approach to a wide range of problems in biomedical science and engineering, such as antibody design, hydrolase engineering, and vaccine development.
no code implementations • 31 Aug 2022 • Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions in terms of conceptual understanding, theoretical analysis, algorithm improvement and extensions to discrete, structured, non-Euclidean domains.
1 code implementation • 20 Jun 2022 • Ruqi Zhang, Xingchao Liu, Qiang Liu
We propose discrete Langevin proposal (DLP), a simple and scalable gradient-based proposal for sampling complex high-dimensional discrete distributions.
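For binary variables, a proposal in this spirit can be sketched as follows (a NumPy illustration; the paper's exact parallel categorical form may differ, and a Metropolis-Hastings correction is needed for exactness):

```python
import numpy as np

def dlp_propose(x, grad_f, alpha=0.5, rng=np.random.default_rng()):
    """Gradient-based proposal for binary x in {0,1}^d, in the spirit
    of the discrete Langevin proposal: each coordinate is flipped
    independently with probability
        sigmoid(0.5 * grad_f(x) * (1 - 2x) - 1/(2*alpha)),
    so flips that increase the log-density f are favored, with alpha
    playing the role of the Langevin stepsize.
    """
    g = grad_f(x)                               # gradient of log-density at x
    flip_logit = 0.5 * g * (1 - 2 * x) - 1.0 / (2 * alpha)
    flip_prob = 1.0 / (1.0 + np.exp(-flip_logit))
    flips = rng.random(x.shape) < flip_prob
    return np.where(flips, 1 - x, x)
```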
no code implementations • Findings (NAACL) 2022 • Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou
Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data.
1 code implementation • 2 Dec 2021 • Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu
We approach text-to-image generation by combining the power of the pretrained CLIP representation with an off-the-shelf image generator (GAN), optimizing in the GAN's latent space to find images that achieve the maximum CLIP score with the given input text.
Ranked #46 on Text-to-Image Generation on COCO
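A minimal sketch of the CLIP-guided latent search (the `generator` and `clip_image_enc` callables are placeholders, and the paper's augmentation and over-parameterization tricks are omitted):

```python
import torch

def clip_guided_search(generator, clip_image_enc, text_emb,
                       steps=200, lr=0.05, latent_dim=512):
    """Optimize a GAN latent z so that the CLIP embedding of G(z)
    has maximum cosine similarity with the text embedding.
    `generator` and `clip_image_enc` stand in for a pretrained GAN
    and CLIP image encoder; not a specific library API.
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)                    # (1, 3, H, W) image
        img_emb = clip_image_enc(img)         # (1, D) CLIP feature
        sim = torch.nn.functional.cosine_similarity(img_emb, text_emb)
        loss = -sim.mean()                    # maximize the CLIP score
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```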
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO).
no code implementations • NeurIPS 2021 • Chengyue Gong, Xingchao Liu, Qiang Liu
In this work, we consider constrained optimization as a more principled approach for trading off two losses, with a special emphasis on lexicographic optimization, a degenerate limit of constrained optimization that optimizes a secondary loss inside the optimal set of the main loss.
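A generic way to realize such a lexicographic update (a hedged sketch, not the paper's exact algorithm): descend the main loss until it is near its optimum, and otherwise descend the secondary loss with any component that would ascend the main loss projected out.

```python
import torch

def lexicographic_step(params, f_main, f_sec, main_opt, lr=1e-2, tol=1e-3):
    """One hedged gradient step for lexicographic optimization:
    minimize f_sec inside the (approximate) optimal set of f_main.
    A generic projection heuristic, not the paper's exact update.
    """
    g1 = torch.autograd.grad(f_main(params), params)[0]
    if f_main(params).item() > main_opt + tol:
        step = g1        # first restore (near-)optimality of the main loss
    else:
        g2 = torch.autograd.grad(f_sec(params), params)[0]
        # Project out the component of g2 that would ascend f_main:
        # moving along -step then leaves f_main non-increasing to first order.
        coef = torch.clamp((g2 * g1).sum() / (g1 * g1).sum().clamp_min(1e-12),
                           max=0.0)
        step = g2 - coef * g1
    with torch.no_grad():
        params -= lr * step
    return params
```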
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
In this work, we propose a family of constrained sampling algorithms which generalize Langevin Dynamics (LD) and Stein Variational Gradient Descent (SVGD) to incorporate a moment constraint specified by a general nonlinear function.
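A generic primal-dual illustration of moment-constrained sampling (a sketch under simplifying assumptions, not the exact LD/SVGD variants proposed in the paper): run Langevin dynamics on a tilted density $p(x)\exp(-\lambda g(x))$ while adapting the multiplier $\lambda$ so that the running average of the moment function $g$ drifts toward zero.

```python
import numpy as np

def num_grad(f, x, h=1e-5):
    """Finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def constrained_langevin(grad_logp, g, x0, steps=5000, eps=1e-3, eta=1e-2,
                         rng=np.random.default_rng()):
    """Hedged primal-dual sketch of moment-constrained Langevin
    sampling: Langevin steps on log p(x) - lam * g(x), with dual
    updates on lam pushing the running average of g toward zero
    (enforcing E[g(x)] = 0)."""
    x = np.array(x0, dtype=float)
    lam, g_avg, samples = 0.0, 0.0, []
    for t in range(1, steps + 1):
        grad = grad_logp(x) - lam * num_grad(g, x)
        x = x + eps * grad + np.sqrt(2 * eps) * rng.standard_normal(x.shape)
        g_avg += (g(x) - g_avg) / t     # running moment estimate
        lam += eta * g_avg              # dual ascent on the constraint
        samples.append(x.copy())
    return np.array(samples)
```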
3 code implementations • NeurIPS 2021 • Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu
The goal of multi-task learning is to enable more efficient learning than single-task learning by sharing model structures across a diverse set of tasks.
no code implementations • 17 Feb 2021 • Lemeng Wu, Xingchao Liu, Qiang Liu
Self-attention, as the key block of transformers, is a powerful mechanism for extracting features from the inputs.
Ranked #576 on Image Classification on ImageNet
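For reference, the mechanism in question fits in a few lines; a minimal NumPy sketch of single-head self-attention (note the $O(n^2)$ attention matrix, which efficiency-oriented variants aim to reduce):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every token attends to every other
    token, with weights given by softmaxed scaled dot products.
    X: (n, d) token features; Wq/Wk/Wv: (d, d_k) projections.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # (n, n) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (n, d_k) outputs
```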
no code implementations • 1 Jan 2021 • Chengyue Gong, Xingchao Liu, Qiang Liu
We apply our method to the recently proposed MoCo, SimCLR, and SwAV, and find that we can reduce the computational cost with little loss in performance on ImageNet linear classification and other downstream tasks.
1 code implementation • NeurIPS 2020 • Xingchao Liu, Xing Han, Na Zhang, Qiang Liu
In this work, we propose to certify the monotonicity of general piecewise-linear neural networks by solving a mixed-integer linear programming (MILP) problem, providing a new general approach for learning monotonic neural networks with arbitrary model structures.
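A hedged MILP sketch for a one-hidden-layer ReLU network using PuLP (a generic big-M encoding, not the paper's code): certify that $f(x) = w_2^\top \mathrm{relu}(W_1 x + b_1)$ is non-decreasing in coordinate $i$ over a box by minimizing the directional derivative over all reachable activation patterns.

```python
import pulp

def certify_monotone(W1, b1, w2, i, box=(0.0, 1.0), M=1e3):
    """Certify monotonicity of f(x) = w2 . relu(W1 x + b1) in x_i over
    the box by minimizing sum_j w2[j] * delta[j] * W1[j, i], where the
    binary delta[j] encode ReLU activation states via big-M linking
    (M must upper-bound |pre-activations| over the box). If the MILP
    minimum is nonnegative, f is certified non-decreasing in x_i.
    """
    h, d = W1.shape
    prob = pulp.LpProblem("monotonicity", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{k}", box[0], box[1]) for k in range(d)]
    delta = [pulp.LpVariable(f"d{j}", cat="Binary") for j in range(h)]
    for j in range(h):
        pre = pulp.lpSum(W1[j, k] * x[k] for k in range(d)) + b1[j]
        prob += pre <= M * delta[j]          # delta=0 forces pre <= 0
        prob += pre >= -M * (1 - delta[j])   # delta=1 forces pre >= 0
    # Objective: directional derivative of f w.r.t. x_i for pattern delta.
    prob += pulp.lpSum(w2[j] * W1[j, i] * delta[j] for j in range(h))
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective) >= -1e-9   # small numerical tolerance
```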
no code implementations • 20 Feb 2020 • Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu
We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods, which approximate each weight with a single low-precision number.
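A greedy residual-fitting sketch of this idea (hedged; not necessarily the paper's exact fitting procedure): repeatedly quantize the current residual to a low-bit vector and fit an optimal scalar coefficient for it.

```python
import numpy as np

def multipoint_quantize(w, num_points=3, bits=2):
    """Approximate a full-precision weight vector w by a linear
    combination sum_k a_k * q_k of `num_points` low-bit vectors q_k,
    fitted greedily on the residual."""
    levels = 2 ** bits
    scales, codes = [], []
    residual = w.astype(float).copy()
    for _ in range(num_points):
        # Quantize the residual to `levels` uniform levels.
        lo, hi = residual.min(), residual.max()
        step = (hi - lo) / (levels - 1) + 1e-12
        q = lo + step * np.round((residual - lo) / step)
        # Least-squares optimal scalar coefficient for this vector.
        a = float(residual @ q) / (float(q @ q) + 1e-12)
        scales.append(a); codes.append(q)
        residual = residual - a * q
    approx = sum(a * q for a, q in zip(scales, codes))
    return approx, scales, codes
```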
no code implementations • 27 Sep 2018 • Xingchao Liu, Tongzhou Mu, Hao Su
In this paper, we investigate the problem of transfer learning across environments with different dynamics while accomplishing the same task in the continuous control domain.