Search Results for author: Hanting Chen

Found 33 papers, 16 papers with code

Distilling Semantic Priors from SAM to Efficient Image Restoration Models

no code implementations 25 Mar 2024 Quan Zhang, Xiaoyu Liu, Wei Li, Hanting Chen, Junchao Liu, Jie Hu, Zhiwei Xiong, Chun Yuan, Yunhe Wang

SPD leverages a self-distillation scheme to transfer the fused semantic priors and boost the performance of the original IR models.

Deblurring Denoising +2

Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models

1 code implementation 6 Feb 2024 Jianyuan Guo, Hanting Chen, Chengcheng Wang, Kai Han, Chang Xu, Yunhe Wang

Recent advancements in large language models have sparked interest in their extraordinary and near-superhuman capabilities, leading researchers to explore methods for evaluating and optimizing these abilities, an effort known as superalignment.

Few-Shot Learning Knowledge Distillation +1

PanGu-$\pi$: Enhancing Language Model Architectures via Nonlinearity Compensation

no code implementations 27 Dec 2023 Yunhe Wang, Hanting Chen, Yehui Tang, Tianyu Guo, Kai Han, Ying Nie, Xutao Wang, Hailin Hu, Zheyuan Bai, Yun Wang, Fangcheng Liu, Zhicheng Liu, Jianyuan Guo, Sinan Zeng, Yinchen Zhang, Qinghua Xu, Qun Liu, Jun Yao, Chao Xu, DaCheng Tao

We then demonstrate, through carefully designed ablations, that the proposed approach is significantly effective for enhancing model nonlinearity; we thus present PanGu-$\pi$, a new efficient architecture for modern language models.

Language Modelling

CBQ: Cross-Block Quantization for Large Language Models

no code implementations 13 Dec 2023 Xin Ding, Xiaoyu Liu, Zhijun Tu, Yun Zhang, Wei Li, Jie Hu, Hanting Chen, Yehui Tang, Zhiwei Xiong, Baoqun Yin, Yunhe Wang

Post-training quantization (PTQ) has played a key role in compressing large language models (LLMs) at ultra-low cost.

Quantization

GenDet: Towards Good Generalizations for AI-Generated Image Detection

1 code implementation 12 Dec 2023 Mingjian Zhu, Hanting Chen, Mouxiao Huang, Wei Li, Hailin Hu, Jie Hu, Yunhe Wang

The misuse of AI imagery can have harmful societal effects, prompting the creation of detectors to combat issues like the spread of fake news.

Anomaly Detection

Towards Higher Ranks via Adversarial Weight Pruning

1 code implementation NeurIPS 2023 Yuchuan Tian, Hanting Chen, Tianyu Guo, Chao Xu, Yunhe Wang

To this end, we propose a Rank-based PruninG (RPG) method to maintain the ranks of sparse weights in an adversarial manner.

Model Compression Network Pruning

IFT: Image Fusion Transformer for Ghost-free High Dynamic Range Imaging

no code implementations 26 Sep 2023 Hailing Wang, Wei Li, Yuanyuan Xi, Jie Hu, Hanting Chen, Longyu Li, Yunhe Wang

By matching similar patches between frames, objects with large motion ranges in dynamic scenes can be aligned, which can effectively alleviate the generation of artifacts.

Data Upcycling Knowledge Distillation for Image Super-Resolution

no code implementations 25 Sep 2023 Yun Zhang, Wei Li, Simiao Li, Jie Hu, Hanting Chen, Hailing Wang, Zhijun Tu, Wenjia Wang, BingYi Jing, Yunhe Wang

In this paper, we put forth an approach from the perspective of effective data utilization, namely Data Upcycling Knowledge Distillation (DUKD), which transfers the teacher's prior knowledge to the student model through upcycled in-domain data derived from the input images.

Image Super-Resolution Knowledge Distillation +1

Multiscale Positive-Unlabeled Detection of AI-Generated Texts

3 code implementations 29 May 2023 Yuchuan Tian, Hanting Chen, Xutao Wang, Zheyuan Bai, Qinghua Zhang, Ruifeng Li, Chao Xu, Yunhe Wang

Recent releases of Large Language Models (LLMs), e.g., ChatGPT, are astonishingly good at generating human-like texts, but they may compromise the authenticity of texts.

Language Modelling text-classification +2

VanillaNet: the Power of Minimalism in Deep Learning

4 code implementations NeurIPS 2023 Hanting Chen, Yunhe Wang, Jianyuan Guo, DaCheng Tao

In this study, we introduce VanillaNet, a neural network architecture that embraces elegance in design.

Philosophy

Toward Accurate Post-Training Quantization for Image Super Resolution

2 code implementations CVPR 2023 Zhijun Tu, Jie Hu, Hanting Chen, Yunhe Wang

In this paper, we study post-training quantization (PTQ) for image super resolution using only a few unlabeled calibration images.

Image Super-Resolution Quantization
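
The setting above quantizes a pretrained SR network after training, calibrating the quantizers with only a handful of unlabeled images. As a hedged illustration of the general PTQ calibration step (a plain min-max range estimate, not the paper's specific procedure), a sketch could look like this:

import torch

def calibrate_activation_scale(model, layer, calib_images, n_bits=8):
    # Generic min-max activation calibration for post-training quantization,
    # sketched with a few unlabeled low-resolution images. The clipping range
    # below is a plain running min/max; the paper's actual PTQ method is more
    # involved, so treat this only as an illustration of the setting.
    lo, hi = float("inf"), float("-inf")

    def hook(_module, _inputs, output):
        nonlocal lo, hi
        lo = min(lo, output.min().item())
        hi = max(hi, output.max().item())

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        for img in calib_images:           # a few unlabeled images, no ground truth needed
            model(img.unsqueeze(0))
    handle.remove()

    scale = (hi - lo) / (2 ** n_bits - 1)  # uniform quantization step size
    zero_point = int(round(-lo / scale)) if scale > 0 else 0
    return scale, zero_point

def fake_quantize(x, scale, zero_point, n_bits=8):
    # Quantize-dequantize a tensor so the quantization error can be simulated.
    q = torch.clamp(torch.round(x / scale) + zero_point, 0, 2 ** n_bits - 1)
    return (q - zero_point) * scale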

RefSR-NeRF: Towards High Fidelity and Super Resolution View Synthesis

1 code implementation CVPR 2023 Xudong Huang, Wei Li, Jie Hu, Hanting Chen, Yunhe Wang

We present Reference-guided Super-Resolution Neural Radiance Field (RefSR-NeRF) that extends NeRF to super resolution and photorealistic novel view synthesis.

Neural Rendering Novel View Synthesis +1

Brain-inspired Multilayer Perceptron with Spiking Neurons

4 code implementations CVPR 2022 Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang

However, due to the simplicity of their structures, performance depends heavily on the mechanism for communicating local features.

Inductive Bias

Adder Attention for Vision Transformer

4 code implementations NeurIPS 2021 Han Shu, Jiahao Wang, Hanting Chen, Lin Li, Yujiu Yang, Yunhe Wang

With the new operation, vision transformers constructed using additions can also provide powerful feature representations.

Positive and Unlabeled Federated Learning

no code implementations 29 Sep 2021 Lin Xinyang, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, Yunhe Wang

We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client labels only a small portion of its dataset due to limited resources and time.

Federated Learning

Federated Learning with Positive and Unlabeled Data

1 code implementation 21 Jun 2021 Xinyang Lin, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, Yunhe Wang

We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client labels only a small portion of its dataset due to limited resources and time.

Federated Learning
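
Both entries above study clients that hold a few labeled positives plus a large unlabeled pool. For context only, a minimal sketch of the standard non-negative PU risk estimator (Kiryo et al., 2017) that such a client could minimize locally is shown below; the papers' federated objective builds on PU learning but is not reproduced here.

import torch
import torch.nn.functional as F

def nnpu_risk(scores_pos, scores_unl, prior):
    # Non-negative PU risk with the logistic (softplus) loss.
    # scores_*: raw classifier outputs on labeled-positive / unlabeled samples.
    # prior: assumed class prior P(y = +1); in practice it must be estimated.
    risk_pos = F.softplus(-scores_pos).mean()            # positives treated as positives
    risk_neg = F.softplus(scores_unl).mean() \
               - prior * F.softplus(scores_pos).mean()   # unlabeled treated as negatives, corrected
    return prior * risk_pos + torch.clamp(risk_neg, min=0.0)  # non-negative correction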

Data-Free Knowledge Distillation for Image Super-Resolution

no code implementations CVPR 2021 Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang

Experiments on various datasets and architectures demonstrate that the proposed method can effectively learn portable student networks without the original data, e.g., with only a 0.16 dB PSNR drop on Set5 for x2 super resolution.

Data-free Knowledge Distillation Image Super-Resolution +1

Learning Student Networks in the Wild

1 code implementation CVPR 2021 Hanting Chen, Tianyu Guo, Chang Xu, Wenshuo Li, Chunjing Xu, Chao Xu, Yunhe Wang

Experiments on various datasets demonstrate that the student networks learned by the proposed method achieve performance comparable to those trained on the original dataset.

Knowledge Distillation Model Compression

Universal Adder Neural Networks

no code implementations 29 May 2021 Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Chunjing Xu, Tong Zhang

The widely used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values.

Winograd Algorithm for AdderNet

no code implementations 12 May 2021 Wenshuo Li, Hanting Chen, Mingqiang Huang, Xinghao Chen, Chunjing Xu, Yunhe Wang

Adder neural network (AdderNet) is a new kind of deep model that replaces the original massive multiplications in convolutions by additions while preserving the high performance.


AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence

no code implementations 25 Jan 2021 Yunhe Wang, Mingqiang Huang, Kai Han, Hanting Chen, Wei zhang, Chunjing Xu, DaCheng Tao

With a comprehensive comparison of performance, power consumption, hardware resource consumption, and network generalization capability, we conclude that AdderNet is able to surpass all the other competitors, including the classical CNN, the novel memristor network, XNOR-Net, and the shift-kernel based network, indicating its great potential in future high-performance and energy-efficient artificial intelligence applications.

Quantization

A Survey on Visual Transformer

no code implementations 23 Dec 2020 Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, DaCheng Tao

Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism.

Image Classification Inductive Bias

Pre-Trained Image Processing Transformer

6 code implementations CVPR 2021 Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao

To maximally excavate the capability of the transformer, we propose to utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs.

 Ranked #1 on Single Image Deraining on Rain100L (using extra training data)

Color Image Denoising Contrastive Learning +2
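
The excerpt above builds pre-training data by synthetically degrading clean ImageNet images into (corrupted, clean) pairs. Below is a hedged sketch of that data-generation idea; the specific degradations shown (bicubic downscaling and Gaussian noise) are common choices assumed here for illustration, not necessarily the exact ones used in the paper.

import torch
import torch.nn.functional as F

def make_corrupted_pair(clean, task="sr_x2", noise_sigma=30.0 / 255.0):
    # Turn one clean image (C, H, W) with values in [0, 1] into a training pair.
    if task == "sr_x2":
        h, w = clean.shape[-2:]
        corrupted = F.interpolate(clean.unsqueeze(0), size=(h // 2, w // 2),
                                  mode="bicubic", align_corners=False).squeeze(0)
    elif task == "denoise":
        corrupted = (clean + noise_sigma * torch.randn_like(clean)).clamp(0.0, 1.0)
    else:
        raise ValueError(f"unknown task: {task}")
    return corrupted, clean   # (input to the model, supervision target)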

AdderSR: Towards Energy Efficient Image Super-Resolution

no code implementations CVPR 2021 Dehua Song, Yunhe Wang, Hanting Chen, Chang Xu, Chunjing Xu, DaCheng Tao

To this end, we thoroughly analyze the relationship between an adder operation and the identity mapping and insert shortcuts to enhance the performance of SR models using adder networks.

Image Classification Image Super-Resolution

A Semi-Supervised Assessor of Neural Architectures

no code implementations CVPR 2020 Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and their relation modeled by the graph.

Neural Architecture Search
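
The excerpt describes a graph convolutional network that regresses an architecture's performance from its learned representation and its relations to other architectures. A generic sketch of such a GCN predictor is given below; the layer sizes and propagation rule are assumptions for illustration, not the paper's exact assessor.

import torch
import torch.nn as nn

class ArchPerformanceGCN(nn.Module):
    # Tiny two-layer GCN that predicts a scalar performance score per architecture.
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x, adj):
        # x:   (N, in_dim) architecture encodings
        # adj: (N, N) normalized adjacency modeling relations between architectures
        h = torch.relu(adj @ self.fc1(x))   # propagate features over the graph
        h = torch.relu(adj @ self.fc2(h))
        return self.head(h).squeeze(-1)     # predicted performance per architecture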

Distilling portable Generative Adversarial Networks for Image Translation

no code implementations 7 Mar 2020 Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu

To promote the capability of the student generator, we include a student discriminator to measure the distances between real images and images generated by the student and teacher generators.

Image-to-Image Translation Knowledge Distillation +1

Widening and Squeezing: Towards Accurate and Efficient QNNs

no code implementations 3 Feb 2020 Chuanjian Liu, Kai Han, Yunhe Wang, Hanting Chen, Qi Tian, Chunjing Xu

Quantization neural networks (QNNs) are very attractive to the industry because of their extremely cheap calculation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.

Quantization

AdderNet: Do We Really Need Multiplications in Deep Learning?

7 code implementations CVPR 2020 Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

The widely used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values.
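
The excerpt contrasts multiplication-heavy cross-correlation with an addition-only similarity measure. A minimal, unoptimized sketch of an adder layer that scores each patch against each filter with a negative L1 distance (so the similarity itself uses no multiplications) might look like the following; it is written with unfold for clarity and is not the authors' released implementation. Only the forward similarity is sketched; the paper additionally addresses the normalization and gradient handling needed to train such networks.

import torch
import torch.nn.functional as F

def adder2d(x, weight, stride=1, padding=0):
    # x: (N, C_in, H, W), weight: (C_out, C_in, kH, kW), shapes as in nn.Conv2d.
    n, _, h, w = x.shape
    c_out, _, kh, kw = weight.shape
    # Extract sliding patches: (N, C_in*kH*kW, L) with L output locations.
    patches = F.unfold(x, kernel_size=(kh, kw), stride=stride, padding=padding)
    w_flat = weight.view(c_out, -1)                 # (C_out, C_in*kH*kW)
    # Negative L1 distance between every patch and every filter (additions only).
    out = -(patches.unsqueeze(1) - w_flat[None, :, :, None]).abs().sum(dim=2)
    h_out = (h + 2 * padding - kh) // stride + 1
    w_out = (w + 2 * padding - kw) // stride + 1
    return out.view(n, c_out, h_out, w_out)

x = torch.randn(2, 3, 8, 8)
w = torch.randn(16, 3, 3, 3)
print(adder2d(x, w, padding=1).shape)   # torch.Size([2, 16, 8, 8])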

Positive-Unlabeled Compression on the Cloud

2 code implementations NeurIPS 2019 Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, DaCheng Tao, Chang Xu

In practice, only a small portion of the original training set is required as positive examples, and more useful training examples can be obtained from the massive unlabeled data on the cloud through a PU classifier with an attention-based multi-scale feature extractor.

Knowledge Distillation

Co-Evolutionary Compression for Unpaired Image Translation

2 code implementations ICCV 2019 Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Qi Tian, Chang Xu

Generative adversarial networks (GANs) have been successfully used for many computer vision tasks, especially image-to-image translation.

Image-to-Image Translation Translation

Data-Free Learning of Student Networks

3 code implementations ICCV 2019 Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian

Learning portable neural networks is essential for computer vision, so that pre-trained heavy deep models can be effectively deployed on edge devices such as mobile phones and micro sensors.

Neural Network Compression

Learning Student Networks via Feature Embedding

no code implementations 17 Dec 2018 Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, DaCheng Tao

Experiments on benchmark datasets and well-trained networks suggest that the proposed algorithm is superior to state-of-the-art teacher-student learning methods in terms of computational and storage complexity.

Knowledge Distillation
