no code implementations • 27 Feb 2024 • Shuchen Xue, Zhaoqiang Liu, Fei Chen, Shifeng Zhang, Tianyang Hu, Enze Xie, Zhenguo Li
While this is a significant development, most sampling methods still employ uniform time steps, which is not optimal when using a small number of steps.
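As background for why step placement matters, here is a minimal sketch contrasting uniform time steps with the polynomial spacing of Karras et al. (2022), one common non-uniform alternative; this is illustrative only and is not the optimization procedure proposed in the paper.

```python
import numpy as np

def uniform_steps(t_min=0.002, t_max=80.0, n=10):
    """Evenly spaced time steps (the common default)."""
    return np.linspace(t_max, t_min, n)

def karras_steps(t_min=0.002, t_max=80.0, n=10, rho=7.0):
    """Polynomial spacing from Karras et al. (2022): concentrates
    steps near t_min, where few-step samplers lose the most accuracy."""
    i = np.arange(n)
    return (t_max ** (1 / rho)
            + i / (n - 1) * (t_min ** (1 / rho) - t_max ** (1 / rho))) ** rho

print(uniform_steps())  # equal gaps across [t_min, t_max]
print(karras_steps())   # dense near small t, sparse near t_max
```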
no code implementations • 26 Feb 2024 • Xuantong Liu, Tianyang Hu, Wenjia Wang, Kenji Kawaguchi, Yuan YAO
In this work, we aim to address this alignment challenge for conditional generation tasks.
no code implementations • 23 Feb 2024 • Jiajun Ma, Shuchen Xue, Tianyang Hu, Wenjia Wang, Zhaoqiang Liu, Zhenguo Li, Zhi-Ming Ma, Kenji Kawaguchi
Surprisingly, the improvement persists when we increase the number of sampling steps and can even surpass the best result from EDM-2 (1.58) with only 39 NFEs (1.57).
no code implementations • 21 Feb 2024 • Yihang Gao, Chuanyang Zheng, Enze Xie, Han Shi, Tianyang Hu, Yu Li, Michael K. Ng, Zhenguo Li, Zhaoqiang Liu
Previous works attempt to explain this from the perspectives of expressive power and capability, showing that standard transformers can perform some algorithms.
1 code implementation • 17 Oct 2023 • Jiajun Ma, Tianyang Hu, Wenjia Wang, Jiacheng Sun
Guidance in conditional diffusion generation is of great importance for sample quality and controllability.
Ranked #1 on Conditional Image Generation on ImageNet 128x128
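For context, the standard classifier-free guidance rule that such work builds on can be sketched as follows; the `model(x_t, t, cond=...)` signature is hypothetical, and this is the textbook combination rather than the paper's specific design.

```python
import torch

def guided_eps(model, x_t, t, cond, w=3.0):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one by scale w.
    w = 0 recovers unconditional sampling; w = 1 plain conditional."""
    eps_uncond = model(x_t, t, cond=None)
    eps_cond = model(x_t, t, cond=cond)
    return eps_uncond + w * (eps_cond - eps_uncond)
```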
no code implementations • 4 Jul 2023 • Weijian Luo, Hao Jiang, Tianyang Hu, Jiacheng Sun, Zhenguo Li, Zhihua Zhang
In image generation experiments, the proposed DCD is capable of training an energy-based model to generate the CelebA $32\times 32$ dataset, with quality comparable to existing EBMs.
no code implementations • 15 Jun 2023 • Paweł Piwek, Adam Klukowski, Tianyang Hu
Classic learning theory suggests that proper regularization is the key to good generalization and robustness.
no code implementations • 5 Jun 2023 • Yimeng Chen, Tianyang Hu, Fengwei Zhou, Zhenguo Li, Zhi-Ming Ma
The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models.
1 code implementation • NeurIPS 2023 • Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, Zhihua Zhang
To demonstrate the effectiveness and universality of Diff-Instruct, we consider two scenarios: distilling pre-trained diffusion models and refining existing GAN models.
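A heavily simplified, single-noise-level sketch of score-based distillation in this spirit is given below; the full objective integrates over all noise levels and also involves the student's own score, and `teacher_score(x, sigma)`, returning the teacher's score at noise level sigma, is an assumed interface.

```python
import torch

def distill_step(generator, teacher_score, opt, z, sigma=1.0):
    """One toy distillation step: nudge generator samples toward
    higher teacher density using the teacher's score at a noised sample."""
    x = generator(z)
    x_noisy = x + sigma * torch.randn_like(x)
    with torch.no_grad():
        g = -teacher_score(x_noisy, sigma)  # negative score = surrogate-loss gradient
    loss = (x * g).sum()  # d(loss)/dx = g, so a descent step moves x along +score
    opt.zero_grad()
    loss.backward()
    opt.step()
```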
1 code implementation • 18 May 2023 • Shoukang Hu, Kaichen Zhou, Kaiyu Li, Longhui Yu, Lanqing Hong, Tianyang Hu, Zhenguo Li, Gim Hee Lee, Ziwei Liu
In this paper, we propose ConsistentNeRF, a method that leverages depth information to regularize both multi-view and single-view 3D consistency among pixels.
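One simple way such depth-based regularization can be expressed in code is sketched below; this is illustrative only, with `mask` marking pixels whose reference depth is trusted.

```python
import torch

def depth_consistency_loss(rendered_depth, reference_depth, mask):
    """Penalize rendered depths that deviate from reference depths
    on reliable pixels, encouraging 3D-consistent geometry."""
    diff = (rendered_depth - reference_depth) ** 2
    return (diff * mask).sum() / mask.sum().clamp(min=1)
```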
1 code implementation • 9 May 2023 • Haonan Wang, Minbin Huang, Runhui Huang, Lanqing Hong, Hang Xu, Tianyang Hu, Xiaodan Liang, Zhenguo Li, Hong Cheng, Kenji Kawaguchi
In this work, we present HELIP, a cost-effective strategy tailored to enhance the performance of existing CLIP models without the need for training a model from scratch or collecting additional data.
no code implementations • 5 May 2023 • Liang Ding, Tianyang Hu, Jiahang Jiang, Donghao Li, Wenjia Wang, Yuan YAO
In this paper, we aim to bridge this gap by presenting a framework for random smoothing regularization that can adaptively and effectively learn a wide range of ground truth functions belonging to the classical Sobolev spaces.
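The basic mechanism is easy to state in code: replace the empirical loss with a Monte-Carlo average over Gaussian perturbations of the input (a minimal sketch, not the paper's full estimator).

```python
import torch

def smoothed_loss(model, x, y, loss_fn, sigma=0.1, k=4):
    """Monte-Carlo estimate of a randomly smoothed objective:
    average the loss over k Gaussian perturbations of the input."""
    total = 0.0
    for _ in range(k):
        total = total + loss_fn(model(x + sigma * torch.randn_like(x)), y)
    return total / k
```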
no code implementations • CVPR 2023 • Hao Yang, Lanqing Hong, Aoxue Li, Tianyang Hu, Zhenguo Li, Gim Hee Lee, Liwei Wang
In this work, we first investigate the effects of synthetic data in synthetic-to-real novel view synthesis and surprisingly observe that models trained with synthetic data tend to produce sharper but less accurate volume densities.
1 code implementation • 24 Feb 2023 • Xuantong Liu, Jianfeng Zhang, Tianyang Hu, He Cao, Lujia Pan, Yuan YAO
One of the reasons is that the learned representations (i.e., features) from imbalanced datasets are less effective than those from balanced datasets.
no code implementations • 28 Jan 2023 • Jiajun Ma, Tianyang Hu, Wenjia Wang
In this work, we systematically investigate the role of the projection head in SSL.
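For readers new to the setup, the projection head sits between the encoder and the contrastive loss; downstream tasks typically discard it and use the pre-head features. A toy PyTorch version, with arbitrary placeholder dimensions:

```python
import torch.nn as nn

class SSLModel(nn.Module):
    """Encoder plus projection head: the SSL loss sees z = head(h),
    while transfer learning usually keeps the representation h."""
    def __init__(self, in_dim=784, dim=512, proj_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, dim), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, proj_dim))

    def forward(self, x):
        h = self.encoder(x)  # representation kept for downstream tasks
        z = self.head(h)     # projection used only by the contrastive loss
        return h, z
```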
no code implementations • 17 Oct 2022 • Qishi Dong, Awais Muhammad, Fengwei Zhou, Chuanlong Xie, Tianyang Hu, Yongxin Yang, Sung-Ho Bae, Zhenguo Li
We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods and up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms the state-of-the-art methods, and the accuracy on the most challenging task, DomainNet, is improved from 46.5% to 50.6%.
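A generic stand-in for this kind of cheap model ranking is a linear probe on frozen features, scored on held-out data; the sketch below is not the paper's scoring rule, just the baseline idea of ranking candidates without fine-tuning each one.

```python
from sklearn.linear_model import LogisticRegression

def rank_models(feature_sets, y_train, y_val):
    """Rank pretrained models by the validation accuracy of a linear
    probe on their frozen features, avoiding full fine-tuning.
    feature_sets: {model_name: (train_features, val_features)}."""
    scores = {}
    for name, (f_train, f_val) in feature_sets.items():
        clf = LogisticRegression(max_iter=1000).fit(f_train, y_train)
        scores[name] = clf.score(f_val, y_val)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```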
1 code implementation • 11 Oct 2022 • Longhui Yu, Tianyang Hu, Lanqing Hong, Zhen Liu, Adrian Weller, Weiyang Liu
It has been observed that neural networks perform poorly when the data or tasks are presented sequentially.
2 code implementations • 30 May 2022 • Tianyang Hu, Zhili Liu, Fengwei Zhou, Wenjia Wang, Weiran Huang
Contrastive learning, especially self-supervised contrastive learning (SSCL), has achieved great success in extracting powerful features from unlabeled data.
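The workhorse objective in SSCL is the InfoNCE loss; a compact, one-directional version for paired views:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """InfoNCE over a batch: z1[i] and z2[i] are embeddings of two
    augmented views of the same example; all other pairs act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

Many implementations symmetrize this by averaging the loss over both view orderings.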
no code implementations • 7 Dec 2021 • Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li
Compared to cross-entropy, square loss has comparable generalization error but noticeable advantages in robustness and model calibration.
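Concretely, square loss here means regressing the network output onto one-hot labels, a drop-in replacement for cross-entropy; the minimal version below is illustrative, and the paper analyzes variants of this idea.

```python
import torch
import torch.nn.functional as F

def square_loss(logits, y, num_classes):
    """Square loss against one-hot targets; compare with
    F.cross_entropy(logits, y) in an otherwise identical pipeline."""
    target = F.one_hot(y, num_classes).float()
    return ((logits - target) ** 2).mean()
```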
no code implementations • 6 Jul 2020 • Tianyang Hu, Wenjia Wang, Cong Lin, Guang Cheng
Overparametrized neural networks trained by gradient descent (GD) can provably overfit any training data.
no code implementations • 19 Jan 2020 • Tianyang Hu, Zuofeng Shang, Guang Cheng
In this paper, we attempt to understand this empirical success in high-dimensional classification by deriving convergence rates for the excess risk.
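For reference, the quantity being bounded is the standard excess risk of a classifier $\hat f$ over the Bayes-optimal one:

```latex
\mathcal{E}(\hat f) = R(\hat f) - \inf_{f} R(f),
\qquad R(f) = \mathbb{P}\bigl(f(X) \neq Y\bigr),
```

and a convergence rate describes how fast this gap shrinks as the sample size grows.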
1 code implementation • 8 Oct 2018 • Tianyang Hu, Zixiang Chen, Hanxi Sun, Jincheng Bai, Mao Ye, Guang Cheng
We propose two novel samplers to generate high-quality samples from a given (un-normalized) probability density.
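The proposed samplers themselves are beyond a short snippet, but the task they address can be illustrated with the classical unadjusted Langevin baseline, which needs only the un-normalized log-density; this is a standard reference method, not the paper's contribution.

```python
import torch

def langevin_sample(log_prob, x0, steps=1000, eta=1e-2):
    """Unadjusted Langevin dynamics: a classical baseline for sampling
    from an un-normalized density proportional to exp(log_prob)."""
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(log_prob(x).sum(), x)[0]
        x = x + eta * grad + (2 * eta) ** 0.5 * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()
```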