Search Results for author: Tianyang Hu

Found 22 papers, 7 papers with code

Accelerating Diffusion Sampling with Optimized Time Steps

no code implementations • 27 Feb 2024 • Shuchen Xue, Zhaoqiang Liu, Fei Chen, Shifeng Zhang, Tianyang Hu, Enze Xie, Zhenguo Li

While this is a significant development, most sampling methods still employ uniform time steps, which is not optimal when using a small number of steps.

Image Generation
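To make the non-uniform-step idea concrete, here is a minimal sketch contrasting uniform time steps with a common hand-crafted non-uniform schedule (the EDM rho-schedule of Karras et al., used purely as a stand-in; the paper optimizes its steps rather than using this formula):

```python
import numpy as np

def uniform_steps(t_min=0.002, t_max=80.0, n=10):
    # Evenly spaced time steps -- the baseline the paper argues is suboptimal
    # when the step budget is small.
    return np.linspace(t_max, t_min, n)

def rho_steps(t_min=0.002, t_max=80.0, n=10, rho=7.0):
    # A hand-crafted non-uniform schedule (EDM); a stand-in here for the
    # paper's *optimized* steps, which this sketch does not reproduce.
    i = np.arange(n)
    return (t_max ** (1 / rho)
            + i / (n - 1) * (t_min ** (1 / rho) - t_max ** (1 / rho))) ** rho

print(uniform_steps())  # spends steps evenly across noise levels
print(rho_steps())      # concentrates steps near t_min, where detail forms
```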

The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling

no code implementations • 23 Feb 2024 • Jiajun Ma, Shuchen Xue, Tianyang Hu, Wenjia Wang, Zhaoqiang Liu, Zhenguo Li, Zhi-Ming Ma, Kenji Kawaguchi

Surprisingly, the improvement persists when we increase the number of sampling steps and can even surpass the best result from EDM-2 (1.58) with only 39 NFEs (1.57).

Image Generation
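The mechanism is simple to sketch: rescale a UNet's skip connections at sampling time. A toy illustration follows; the module and the `skip_scale` value are hypothetical, while the actual method tunes per-resolution scales on a pretrained diffusion UNet without any retraining:

```python
import torch
import torch.nn as nn

class TinyUNetBlock(nn.Module):
    """Toy encoder/decoder pair; `skip_scale` is the knob Skip-Tuning turns."""

    def __init__(self, dim=64, skip_scale=1.0):
        super().__init__()
        self.enc = nn.Conv2d(dim, dim, 3, padding=1)
        self.dec = nn.Conv2d(dim, dim, 3, padding=1)
        self.skip_scale = skip_scale  # < 1.0 down-weights the skip branch

    def forward(self, x):
        h = torch.relu(self.enc(x))
        # Skip-Tuning: rescale the skip connection instead of adding it as-is.
        return self.dec(h) + self.skip_scale * x

block = TinyUNetBlock(skip_scale=0.8)  # hypothetical value, not from the paper
out = block(torch.randn(1, 64, 8, 8))
```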

On the Expressive Power of a Variant of the Looped Transformer

no code implementations • 21 Feb 2024 • Yihang Gao, Chuanyang Zheng, Enze Xie, Han Shi, Tianyang Hu, Yu Li, Michael K. Ng, Zhenguo Li, Zhaoqiang Liu

Previous works try to explain this from the perspective of expressive power, showing that standard transformers are capable of performing some algorithms.
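For intuition, a looped transformer reuses one block across depth, so a forward pass resembles running iterations of an algorithm. A minimal sketch using a standard PyTorch layer as a stand-in (the paper analyzes a particular variant that this does not reproduce):

```python
import torch
import torch.nn as nn

# One shared block applied repeatedly: "depth" becomes iteration count.
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)

def looped_forward(x, n_loops=8):
    for _ in range(n_loops):  # weights are shared across all loops
        x = layer(x)
    return x

y = looped_forward(torch.randn(2, 16, 32))  # (batch, tokens, d_model)
```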

Training Energy-Based Models with Diffusion Contrastive Divergences

no code implementations • 4 Jul 2023 • Weijian Luo, Hao Jiang, Tianyang Hu, Jiacheng Sun, Zhenguo Li, Zhihua Zhang

In image generation experiments, the proposed DCD is capable of training an energy-based model for generating CelebA $32\times 32$ images that is comparable to existing EBMs.

Image Denoising • Image Generation
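A generic contrastive-divergence step is easy to sketch; DCD's contribution lies in how the negative samples are produced (via diffusion processes), which the placeholder below does not implement:

```python
import torch
import torch.nn as nn

energy = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 1))

def cd_loss(x_data, x_neg):
    # Contrastive-divergence objective: push energy down on data, up on
    # negatives. DCD changes *where* x_neg comes from, not this form.
    return energy(x_data).mean() - energy(x_neg).mean()

x_data = torch.randn(128, 2)
# Stand-in negatives: a single noisy perturbation (hypothetical; the paper
# obtains negatives through diffusion-based sampling instead).
x_neg = (x_data + 0.5 * torch.randn_like(x_data)).detach()
loss = cd_loss(x_data, x_neg)
loss.backward()
```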

Exact Count of Boundary Pieces of ReLU Classifiers: Towards the Proper Complexity Measure for Classification

no code implementations • 15 Jun 2023 • Paweł Piwek, Adam Klukowski, Tianyang Hu

Classic learning theory suggests that proper regularization is the key to good generalization and robustness.

Learning Theory
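In one dimension the exact count is easy to illustrate: a one-hidden-layer ReLU network is piecewise linear with kinks at -b_i/w_i, so its decision-boundary points on an interval can be counted exactly from sign changes. A toy analogue (the paper's counting in higher dimensions is substantially harder):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 8
w, b = rng.normal(size=n_hidden), rng.normal(size=n_hidden)
v, c = rng.normal(size=n_hidden), 0.1

def f(x):
    # One-hidden-layer ReLU net on R: piecewise linear, kinks at -b_i/w_i.
    return np.maximum(np.outer(x, w) + b, 0.0) @ v + c

R = 10.0
knots = np.sort(-b / w)
xs = np.concatenate(([-R], knots[(knots > -R) & (knots < R)], [R]))
# f is affine between consecutive kinks, so on [-R, R] every sign change
# between neighbors marks exactly one decision-boundary point, and there
# are no others -- an exact count, not an estimate.
signs = np.sign(f(xs))
print(np.count_nonzero(np.diff(signs) != 0))
```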

Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization

no code implementations • 5 Jun 2023 • Yimeng Chen, Tianyang Hu, Fengwei Zhou, Zhenguo Li, ZhiMing Ma

The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models.

Domain Generalization • Out-of-Distribution Generalization

Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models

no code implementations • NeurIPS 2023 • Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, Zhihua Zhang

To demonstrate the effectiveness and universality of Diff-Instruct, we consider two scenarios: distilling pre-trained diffusion models and refining existing GAN models.
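Schematically, this kind of distillation diffuses the generator's output and nudges it along the disagreement between two score estimates. The sketch below is a rough score-distillation skeleton with placeholder score functions, not the paper's exact Integral-KL objective:

```python
import torch

def distill_step(generator, teacher_score, fake_score, z, t, sigma):
    # Diffuse the generator output, then move it against the gap between
    # an auxiliary "fake" score and the pretrained teacher's score.
    x0 = generator(z)
    xt = x0 + sigma * torch.randn_like(x0)
    with torch.no_grad():
        direction = fake_score(xt, t) - teacher_score(xt, t)
    # Gradient flows only through x0; `direction` acts as a fixed target.
    return (direction * x0).sum()

g = torch.nn.Linear(4, 4)         # toy "generator"
s = lambda x, t: -x               # stand-in score of a standard Gaussian
loss = distill_step(g, s, s, torch.randn(8, 4), t=0.5, sigma=0.5)
loss.backward()
```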

ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for Sparse View Synthesis

1 code implementation • 18 May 2023 • Shoukang Hu, Kaichen Zhou, Kaiyu Li, Longhui Yu, Lanqing Hong, Tianyang Hu, Zhenguo Li, Gim Hee Lee, Ziwei Liu

In this paper, we propose ConsistentNeRF, a method that leverages depth information to regularize both multi-view and single-view 3D consistency among pixels.

3D Reconstruction • SSIM

Boosting Visual-Language Models by Exploiting Hard Samples

1 code implementation • 9 May 2023 • Haonan Wang, Minbin Huang, Runhui Huang, Lanqing Hong, Hang Xu, Tianyang Hu, Xiaodan Liang, Zhenguo Li, Hong Cheng, Kenji Kawaguchi

In this work, we present HELIP, a cost-effective strategy tailored to enhance the performance of existing CLIP models without the need for training a model from scratch or collecting additional data.

Retrieval • Zero-Shot Learning
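A core ingredient is mining "hard" pairs from the data already at hand. A hypothetical sketch, where the pair-similarity criterion is an illustrative choice and may not match the paper's exact definition:

```python
import torch
import torch.nn.functional as F

def mine_hard_pairs(img_emb, txt_emb, k=5):
    # For each image-text pair, find the k most similar *other* pairs; such
    # hard pairs are folded back into training. Pair similarity here is the
    # product of image-image and text-text cosine similarities.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    sim = (img_emb @ img_emb.T) * (txt_emb @ txt_emb.T)
    sim.fill_diagonal_(-float("inf"))   # a pair is not its own hard pair
    return sim.topk(k, dim=-1).indices  # indices of each pair's hard pairs

hard = mine_hard_pairs(torch.randn(32, 512), torch.randn(32, 512))
```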

Random Smoothing Regularization in Kernel Gradient Descent Learning

no code implementations • 5 May 2023 • Liang Ding, Tianyang Hu, Jiahang Jiang, Donghao Li, Wenjia Wang, Yuan YAO

In this paper, we aim to bridge this gap by presenting a framework for random smoothing regularization that can adaptively and effectively learn a wide range of ground truth functions belonging to the classical Sobolev spaces.

Data Augmentation
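In its simplest form, random smoothing regularization perturbs training inputs with Gaussian noise before kernel learning. A minimal sketch with an RBF kernel and early-stopped kernel gradient descent (illustrative; the paper's theory covers more general noise distributions and quantifies the attainable Sobolev smoothness):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=64)

def rbf(A, B, h=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h ** 2))

# Random smoothing: replicate each input with Gaussian perturbations, then
# run kernel gradient descent on the augmented sample.
m, tau = 10, 0.05
Xa = np.repeat(X, m, axis=0) + tau * rng.normal(size=(64 * m, 1))
ya = np.repeat(y, m)
K = rbf(Xa, Xa)
alpha = np.zeros(len(ya))
for _ in range(500):                        # early-stopped kernel GD
    alpha -= 1.0 / len(ya) * (K @ alpha - ya)
f_hat = lambda Xq: rbf(Xq, Xa) @ alpha      # fitted regression function
```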

ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning

no code implementations • CVPR 2023 • Hao Yang, Lanqing Hong, Aoxue Li, Tianyang Hu, Zhenguo Li, Gim Hee Lee, LiWei Wang

In this work, we first investigate the effects of synthetic data in synthetic-to-real novel view synthesis and surprisingly observe that models trained with synthetic data tend to produce sharper but less accurate volume densities.

Contrastive Learning • Generalizable Novel View Synthesis • +2

Inducing Neural Collapse in Deep Long-tailed Learning

1 code implementation • 24 Feb 2023 • Xuantong Liu, Jianfeng Zhang, Tianyang Hu, He Cao, Lujia Pan, Yuan YAO

One of the reasons is that the learned representations (i.e., features) from imbalanced datasets are less effective than those from balanced datasets.
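One common way to induce neural collapse is to fix the classifier to a simplex equiangular tight frame (ETF), the geometry collapse converges to on balanced data. A sketch of the construction (the paper's full recipe may differ in its details):

```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    # C unit vectors in R^dim with equal pairwise cosine -1/(C-1): the
    # simplex ETF that neural collapse drives class means/weights toward.
    assert dim >= num_classes
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.normal(size=(dim, num_classes)))  # orthonormal cols
    M = np.sqrt(num_classes / (num_classes - 1)) * (
        np.eye(num_classes) - np.ones((num_classes, num_classes)) / num_classes
    )
    return U @ M  # (dim, num_classes): column i is class i's fixed weight

W = simplex_etf(num_classes=10, dim=64)
print(np.round(W.T @ W, 3))  # diagonal 1.0, off-diagonal all -1/9
```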

ZooD: Exploiting Model Zoo for Out-of-Distribution Generalization

no code implementations • 17 Oct 2022 • Qishi Dong, Awais Muhammad, Fengwei Zhou, Chuanlong Xie, Tianyang Hu, Yongxin Yang, Sung-Ho Bae, Zhenguo Li

We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods and up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms the state-of-the-art methods, and the accuracy on the most challenging task, DomainNet, is improved from 46.5% to 50.6%.

feature selection • Out-of-Distribution Generalization

Continual Learning by Modeling Intra-Class Variation

1 code implementation • 11 Oct 2022 • Longhui Yu, Tianyang Hu, Lanqing Hong, Zhen Liu, Adrian Weller, Weiyang Liu

It has been observed that neural networks perform poorly when the data or tasks are presented sequentially.

Continual Learning

Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding

2 code implementations • 30 May 2022 • Tianyang Hu, Zhili Liu, Fengwei Zhou, Wenjia Wang, Weiran Huang

Contrastive learning, especially self-supervised contrastive learning (SSCL), has achieved great success in extracting powerful features from unlabeled data.

Contrastive Learning • Data Augmentation • +2
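The connection is visible directly in the InfoNCE loss: the softmax over pairwise similarities defines a neighbor distribution of exactly the kind stochastic neighbor embedding matches. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temp=0.1):
    # Each anchor's positive is its augmented view; the rest of the batch
    # serves as negatives. The row-wise softmax over similarities is an
    # SNE-style neighbor distribution -- the paper's key observation.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temp              # pairwise similarities
    targets = torch.arange(len(z1))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```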

Understanding Square Loss in Training Overparametrized Neural Network Classifiers

no code implementations • 7 Dec 2021 • Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li

Compared to cross-entropy, square loss has comparable generalization error but noticeable advantages in robustness and model calibration.
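Training with square loss amounts to regressing logits onto one-hot targets instead of applying cross-entropy. A minimal sketch (the paper additionally analyzes rescaled target encodings):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
x, y = torch.randn(64, 20), torch.randint(0, 10, (64,))

logits = model(x)
# Square loss: mean-squared error against the one-hot label encoding.
loss_sq = F.mse_loss(logits, F.one_hot(y, 10).float())
loss_ce = F.cross_entropy(logits, y)  # the usual baseline, for contrast
loss_sq.backward()
```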

Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network

no code implementations • 6 Jul 2020 • Tianyang Hu, Wenjia Wang, Cong Lin, Guang Cheng

Overparametrized neural networks trained by gradient descent (GD) can provably overfit any training data.

Sharp Rate of Convergence for Deep Neural Network Classifiers under the Teacher-Student Setting

no code implementations • 19 Jan 2020 • Tianyang Hu, Zuofeng Shang, Guang Cheng

In this paper, we attempt to understand this empirical success in high-dimensional classification by deriving the convergence rates of excess risk.

General Classification

Stein Neural Sampler

1 code implementation • 8 Oct 2018 • Tianyang Hu, Zixiang Chen, Hanxi Sun, Jincheng Bai, Mao Ye, Guang Cheng

We propose two novel samplers to generate high-quality samples from a given (un-normalized) probability density.
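One natural training signal for such a sampler is the kernelized Stein discrepancy (KSD), which needs only the score of the unnormalized density. A self-contained sketch with an RBF kernel (the paper's two samplers may parametrize their objectives differently):

```python
import torch

def score(x, log_p):
    # s(x) = grad_x log p~(x); the normalizing constant drops out.
    x = x.detach().requires_grad_(True)
    return torch.autograd.grad(log_p(x).sum(), x)[0]

def ksd(x, log_p, h=1.0):
    # Kernelized Stein discrepancy with an RBF kernel; it approaches zero
    # for large samples drawn from p, so minimizing it trains a sampler.
    s = score(x, log_p)
    d = x.shape[1]
    diff = x[:, None, :] - x[None, :, :]
    r2 = (diff ** 2).sum(-1)
    k = torch.exp(-r2 / (2 * h ** 2))
    term1 = (s @ s.T) * k                                    # s(x)^T s(y) k
    term2 = (s[:, None, :] * diff).sum(-1) * k / h ** 2      # s(x)^T grad_y k
    term3 = (s[None, :, :] * -diff).sum(-1) * k / h ** 2     # grad_x k^T s(y)
    term4 = (d / h ** 2 - r2 / h ** 4) * k                   # tr grad_x grad_y k
    return (term1 + term2 + term3 + term4).mean()

log_p = lambda x: -0.5 * (x ** 2).sum(-1)  # unnormalized standard Gaussian
print(ksd(torch.randn(256, 2), log_p))
```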
