Search Results for author: Zhongzhan Huang

Found 23 papers, 11 papers with code

AnimateZoo: Zero-shot Video Generation of Cross-Species Animation via Subject Alignment

no code implementations 7 Apr 2024 Yuanfeng Xu, Yuhao Chen, Zhongzhan Huang, Zijian He, Guangrun Wang, Philip Torr, Liang Lin

In this paper, we present AnimateZoo, a zero-shot diffusion-based video generator that addresses the challenging cross-species animation problem, aiming to produce accurate animal animations while preserving the background.

Video Editing, Video Generation

Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima

1 code implementation 17 Feb 2024 Shanshan Zhong, Zhongzhan Huang, Daifeng Li, Wushao Wen, Jinghui Qin, Liang Lin

This strategy can implicitly enhance the model's robustness during the optimization process, mitigating instability risks arising from multimodal information inputs.

Multimodal Recommendation
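
The entry above describes steering optimization toward flat local minima to make a multimodal recommender more robust. As a rough illustration of the flat-minima idea, below is a minimal sketch using the well-known sharpness-aware minimization (SAM) update; it is not the paper's Mirror Gradient rule, and the toy model and perturbation radius are assumptions.

```python
# Sketch of a flat-minima-seeking update in the spirit of sharpness-aware
# minimization (SAM, Foret et al. 2021). NOT the paper's Mirror Gradient rule;
# it only illustrates the general "prefer flat minima" idea.
import torch

model = torch.nn.Linear(8, 1)          # toy stand-in for a recommender
criterion = torch.nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
rho = 0.05                             # hypothetical perturbation radius

def sam_step(x, y):
    # 1) ascend to a nearby "sharp" point within an L2 ball of radius rho
    criterion(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    # 2) compute the gradient at the perturbed point ...
    opt.zero_grad()
    criterion(model(x), y).backward()
    # 3) ... undo the perturbation and update with that gradient
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    opt.step()
    opt.zero_grad()

x, y = torch.randn(32, 8), torch.randn(32, 1)
sam_step(x, y)
print(criterion(model(x), y).item())   # loss after one flat-minima-style step
```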

Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation

1 code implementation 5 Dec 2023 Shanshan Zhong, Zhongzhan Huang, Shanghua Gao, Wushao Wen, Liang Lin, Marinka Zitnik, Pan Zhou

To this end, we study LLMs on the popular Oogiri game, which requires participants to exercise creativity and strong associative thinking to respond unexpectedly and humorously to a given image, text, or both, and is therefore well suited to studying LoT.

Logical Reasoning

ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection

2 code implementations NeurIPS 2023 Zhongzhan Huang, Pan Zhou, Shuicheng Yan, Liang Lin

We also observe theoretical benefits of scaling the LSC coefficients of UNet for the stability of hidden features and gradients, as well as for robustness.
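
A minimal sketch of what scaling a long skip connection (LSC) looks like in a UNet-style merge; the constant coefficient kappa and the toy tensor shapes are illustrative assumptions, and the paper's actual coefficient schedule is more refined.

```python
# Toy illustration of scaling a UNet long skip connection (LSC) before it is
# merged with the decoder path; kappa < 1 damps the skip branch.
import torch

kappa = 0.7                                   # hypothetical scaling coefficient
enc_feat = torch.randn(2, 64, 32, 32)         # encoder feature carried by the LSC
dec_feat = torch.randn(2, 64, 32, 32)         # upsampled decoder feature

merged_plain  = torch.cat([enc_feat,         dec_feat], dim=1)  # vanilla UNet
merged_scaled = torch.cat([kappa * enc_feat, dec_feat], dim=1)  # scaled LSC
print(merged_plain.shape, merged_scaled.shape)
```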

Understanding Self-attention Mechanism via Dynamical System Perspective

no code implementations ICCV 2023 Zhongzhan Huang, Mingfu Liang, Jinghui Qin, Shanshan Zhong, Liang Lin

The self-attention mechanism (SAM) is widely used in various fields of artificial intelligence and has successfully boosted the performance of different models.
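
For reference, the standard single-head scaled dot-product self-attention computation that such analyses start from (the toy shapes below are assumptions):

```python
# Minimal single-head scaled dot-product self-attention.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v                  # project tokens
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v              # weighted sum of values

x = torch.randn(4, 16, 32)                                # (batch, tokens, dim)
w = [torch.randn(32, 32) for _ in range(3)]
print(self_attention(x, *w).shape)                        # torch.Size([4, 16, 32])
```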

LSAS: Lightweight Sub-attention Strategy for Alleviating Attention Bias Problem

1 code implementation 9 May 2023 Shanshan Zhong, Wushao Wen, Jinghui Qin, Qiangpu Chen, Zhongzhan Huang

In computer vision, the performance of deep neural networks (DNNs) is highly related to their feature extraction ability, i.e., the ability to recognize and focus on key pixel regions in an image.

SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models

1 code implementation 9 May 2023 Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin

Our approach makes text-to-image diffusion models easier to use and improves the user experience, demonstrating its potential to advance user-friendly text-to-image generation by bridging the semantic gap between simple narrative prompts and complex keyword-based prompts.

Knowledge Distillation, Text-to-Image Generation

ASR: Attention-alike Structural Re-parameterization

no code implementations 13 Apr 2023 Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin

Through such transformations, the extra costs paid during training for better performance, such as additional parameters and inference time, can be removed at inference, so SRP has great potential for industrial and practical applications.
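
For context, a minimal sketch of the general structural re-parameterization (SRP) idea using the well-known RepVGG-style branch merging; it is not the paper's attention-alike construction. A 1x1 branch used during training is folded into a single 3x3 convolution at inference, so the extra branch costs nothing at deployment.

```python
# RepVGG-style structural re-parameterization: fold a parallel 1x1 conv branch
# into an equivalent single 3x3 conv for inference.
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)
w3 = torch.randn(8, 8, 3, 3)      # 3x3 branch weights
w1 = torch.randn(8, 8, 1, 1)      # 1x1 branch weights (training-time extra)

# Training-time structure: two parallel branches.
y_train = F.conv2d(x, w3, padding=1) + F.conv2d(x, w1, padding=0)

# Inference-time structure: one 3x3 conv whose kernel absorbs the 1x1 branch.
w_merged = w3 + F.pad(w1, [1, 1, 1, 1])          # place the 1x1 kernel at the center
y_infer = F.conv2d(x, w_merged, padding=1)

print(torch.allclose(y_train, y_infer, atol=1e-4))  # True
```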

On Robust Numerical Solver for ODE via Self-Attention Mechanism

no code implementations 5 Feb 2023 Zhongzhan Huang, Mingfu Liang, Liang Lin

With the development of deep learning techniques, AI-enhanced numerical solvers are expected to become a new paradigm for solving differential equations due to their versatility and effectiveness in alleviating the accuracy-speed trade-off in traditional numerical solvers.
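
To make the accuracy-speed trade-off of traditional solvers concrete, below is a small comparison of forward Euler and RK4 on dy/dt = -y (illustrative only; the paper's attention-based enhancement is not reproduced here):

```python
# Classical accuracy-speed trade-off on dy/dt = -y, y(0) = 1 (exact: exp(-t)).
# Per step, forward Euler needs 1 function evaluation, RK4 needs 4, but RK4 is
# far more accurate at the same step size.
import math

def f(t, y):
    return -y

def euler(y, t, h):
    return y + h * f(t, y)

def rk4(y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, T = 0.1, 2.0
for name, step in [("euler", euler), ("rk4", rk4)]:
    y, t = 1.0, 0.0
    while t < T - 1e-12:
        y, t = step(y, t, h), t + h
    print(name, abs(y - math.exp(-T)))   # rk4's error is orders of magnitude smaller
```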

Deepening Neural Networks Implicitly and Locally via Recurrent Attention Strategy

no code implementations 27 Oct 2022 Shanshan Zhong, Wushao Wen, Jinghui Qin, Zhongzhan Huang

Growing empirical and theoretical evidence shows that deepening neural networks can effectively improve their performance under suitable training settings.

A Generic Shared Attention Mechanism for Various Backbone Neural Networks

no code implementations 27 Oct 2022 Zhongzhan Huang, Senwei Liang, Mingfu Liang, Liang Lin

The self-attention mechanism has emerged as a critical component for improving the performance of various backbone neural networks.

Data Augmentation, Image Classification +3

Causal Inference for Chatting Handoff

1 code implementation 6 Oct 2022 Shanshan Zhong, Jinghui Qin, Zhongzhan Huang, Daifeng Li

However, most existing methods focus mainly on the dialogue context or rely on global satisfaction prediction via multi-task learning, ignoring the grounded relationships among causal variables such as user state and labor cost.

Causal Inference, Chatbot +2

On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver

1 code implementation 7 Aug 2022 Zhongzhan Huang, Senwei Liang, Hong Zhang, Haizhao Yang, Liang Lin

The large-scale simulation of dynamical systems is critical in numerous scientific and engineering disciplines.

Computational Efficiency

The Lottery Ticket Hypothesis for Self-attention in Convolutional Neural Network

no code implementations 16 Jul 2022 Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang, Liang Lin

Recently, many plug-and-play self-attention modules (SAMs) have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).

Crowd Counting
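
A representative example of the kind of plug-and-play self-attention module (SAM) referred to above is the squeeze-and-excitation block (Hu et al., 2018), sketched below; the channel count and reduction ratio are arbitrary choices.

```python
# Squeeze-and-excitation (SE) block: a typical plug-and-play self-attention
# module that recalibrates CNN channels from their global statistics.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)          # excite: per-channel reweighting

x = torch.randn(2, 64, 8, 8)
print(SEBlock(64)(x).shape)                    # torch.Size([2, 64, 8, 8])
```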

Stiffness-aware neural network for learning Hamiltonian systems

no code implementations ICLR 2022 Senwei Liang, Zhongzhan Huang, Hong Zhang

We propose stiffness-aware neural network (SANN), a new method for learning Hamiltonian dynamical systems from data.
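
As background, a minimal sketch of learning Hamiltonian dynamics with a neural network in the generic Hamiltonian-network formulation (Greydanus et al., 2019); SANN's stiffness-aware treatment is not reproduced here, and the network size is an arbitrary choice.

```python
# Learn a scalar Hamiltonian H_theta(q, p); the dynamics follow from autograd:
#   dq/dt = dH/dp,   dp/dt = -dH/dq.
import torch
import torch.nn as nn

h_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def hamiltonian_field(q, p):
    qp = torch.stack([q, p], dim=-1).requires_grad_(True)
    H = h_net(qp).sum()
    dH = torch.autograd.grad(H, qp, create_graph=True)[0]
    dHdq, dHdp = dH[..., 0], dH[..., 1]
    return dHdp, -dHdq                     # (dq/dt, dp/dt)

q, p = torch.randn(16), torch.randn(16)
dq, dp = hamiltonian_field(q, p)
print(dq.shape, dp.shape)                  # torch.Size([16]) torch.Size([16])
```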

AlterSGD: Finding Flat Minima for Continual Learning by Alternative Training

no code implementations 13 Jul 2021 Zhongzhan Huang, Mingfu Liang, Senwei Liang, Wei He

Deep neural networks suffer from catastrophic forgetting when learning new knowledge sequentially, and a growing number of approaches have been proposed to mitigate this problem.

Continual Learning, Semantic Segmentation

Blending Pruning Criteria for Convolutional Neural Networks

no code implementations 11 Jul 2021 Wei He, Zhongzhan Huang, Mingfu Liang, Senwei Liang, Haizhao Yang

A filter can be important according to one criterion yet unnecessary according to another, which indicates that each criterion captures only a partial view of a filter's comprehensive "importance".

Clustering, Network Pruning
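
To illustrate why pruning criteria can disagree, the sketch below scores the filters of the same (randomly initialized, hypothetical) conv layer with two simple criteria: an L1-magnitude score and a redundancy-style score based on distance to the average filter. The two rankings typically differ.

```python
# Two filter-importance criteria on the same conv layer can rank filters
# differently, so each criterion captures only a partial view of "importance".
import torch
import torch.nn as nn

conv = nn.Conv2d(16, 32, kernel_size=3)
w = conv.weight.detach().flatten(1)              # (num_filters, fan_in)

l1_score = w.abs().sum(dim=1)                    # magnitude-based criterion
dist_score = (w - w.mean(dim=0)).norm(dim=1)     # simplified redundancy-style
                                                 # criterion (distance to the
                                                 # "average" filter)

print(l1_score.argsort()[:5])                    # 5 "least important" by L1
print(dist_score.argsort()[:5])                  # often a different set
```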

Rethinking the Pruning Criteria for Convolutional Neural Network

no code implementations NeurIPS 2021 Zhongzhan Huang, Xinjiang Wang, Ping Luo

Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), and various pruning criteria have been proposed to remove the redundant filters of CNNs.

Continuous Transition: Improving Sample Efficiency for Continuous Control Problems via MixUp

1 code implementation 30 Nov 2020 Junfan Lin, Zhongzhan Huang, Keze Wang, Xiaodan Liang, Weiwei Chen, Liang Lin

Although deep reinforcement learning (RL) has been successfully applied to a variety of robotic control tasks, applying it to real-world tasks remains challenging due to poor sample efficiency.

Continuous Control, Reinforcement Learning (RL)
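
A minimal sketch of building an interpolated transition from two consecutive transitions via mixup, which is the general idea the title suggests; the Beta-distributed mixing coefficient and toy dimensions are assumptions, and the paper's adaptive scheme is not reproduced.

```python
# Build a synthetic "continuous transition" by linearly interpolating two
# consecutive transitions (s, a, r, s') from the same trajectory.
import numpy as np

rng = np.random.default_rng(0)

def mix_transitions(t0, t1, lam):
    # element-wise convex combination of consecutive transitions
    return tuple(lam * a + (1.0 - lam) * b for a, b in zip(t0, t1))

s0, a0, r0, s1 = rng.normal(size=4), rng.normal(size=2), 0.5, rng.normal(size=4)
s1b, a1, r1, s2 = s1, rng.normal(size=2), 1.0, rng.normal(size=4)

lam = rng.beta(0.4, 0.4)                      # hypothetical mixing coefficient
synthetic = mix_transitions((s0, a0, r0, s1), (s1b, a1, r1, s2), lam)
print([np.shape(x) for x in synthetic])       # [(4,), (2,), (), (4,)]
```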

Efficient Attention Network: Accelerate Attention by Searching Where to Plug

1 code implementation 28 Nov 2020 Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang

Recently, many plug-and-play self-attention modules have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).

Convolution-Weight-Distribution Assumption: Rethinking the Criteria of Channel Pruning

no code implementations 24 Apr 2020 Zhongzhan Huang, Wenqi Shao, Xinjiang Wang, Liang Lin, Ping Luo

Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), where various pruning criteria have been proposed to remove the redundant filters.

Instance Enhancement Batch Normalization: an Adaptive Regulator of Batch Noise

2 code implementations 12 Aug 2019 Senwei Liang, Zhongzhan Huang, Mingfu Liang, Haizhao Yang

Batch Normalization (BN) (Ioffe and Szegedy, 2015) normalizes the features of an input image using the statistics of a batch of images, and hence BN introduces noise into the gradient of the training loss.

Image Classification
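
For reference, the per-channel batch statistics that make BN's output for one image depend on the other images in the batch (the source of the noise mentioned above):

```python
# Per-channel batch normalization: every image is normalized with statistics of
# the whole batch, so changing the batch changes (adds noise to) the output.
import torch

x = torch.randn(8, 3, 4, 4)                            # (batch, channels, H, W)
mean = x.mean(dim=(0, 2, 3), keepdim=True)             # batch statistics
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
x_hat = (x - mean) / torch.sqrt(var + 1e-5)

# Matches torch's functional BN in training mode (no affine parameters).
ref = torch.nn.functional.batch_norm(x, None, None, training=True, eps=1e-5)
print(torch.allclose(x_hat, ref, atol=1e-5))           # True
```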

DIANet: Dense-and-Implicit Attention Network

3 code implementations 25 May 2019 Zhongzhan Huang, Senwei Liang, Mingfu Liang, Haizhao Yang

Attention networks have successfully boosted the performance in various vision problems.

Ranked #139 on Image Classification on CIFAR-100 (using extra training data)

Image Classification
