no code implementations • 7 Apr 2024 • Yuanfeng Xu, Yuhao Chen, Zhongzhan Huang, Zijian He, Guangrun Wang, Philip Torr, Liang Lin
In this paper, we present AnimateZoo, a zero-shot diffusion-based video generator that addresses the challenging cross-species animation problem, aiming to produce accurate animal animations while preserving the background.
1 code implementation • 17 Feb 2024 • Shanshan Zhong, Zhongzhan Huang, Daifeng Li, Wushao Wen, Jinghui Qin, Liang Lin
This strategy can implicitly enhance the model's robustness during the optimization process, mitigating instability risks arising from multimodal information inputs.
1 code implementation • 5 Dec 2023 • Shanshan Zhong, Zhongzhan Huang, ShangHua Gao, Wushao Wen, Liang Lin, Marinka Zitnik, Pan Zhou
To this end, we study LLMs on the popular Oogiri game, which requires participants to respond unexpectedly and humorously to a given image, text, or both, demanding strong creativity and associative thinking, and is thus well suited to studying LoT.
2 code implementations • NeurIPS 2023 • Zhongzhan Huang, Pan Zhou, Shuicheng Yan, Liang Lin
Besides, we also observe theoretical benefits of coefficient scaling on the long skip connections (LSC) of UNet for the stability of hidden features and gradients, as well as for robustness.
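A minimal sketch of the idea, with a toy two-level UNet (the layer sizes and the value of kappa here are illustrative assumptions, not the paper's configuration): each long skip connection is damped by a constant coefficient before being merged into the decoder.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy two-level UNet whose long skip connections (LSC) are scaled.

    Each skip feature is multiplied by a constant coefficient before
    concatenation with the decoder feature; kappa < 1 damps the skip
    magnitudes, which is the kind of scaling the result refers to.
    """

    def __init__(self, ch=16, kappa=0.7):
        super().__init__()
        self.kappa = kappa
        self.enc1 = nn.Conv2d(3, ch, 3, padding=1)
        self.enc2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.mid = nn.Conv2d(ch, ch, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec2 = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.dec1 = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, x):
        h1 = torch.relu(self.enc1(x))                    # outer skip
        h2 = torch.relu(self.enc2(h1))                   # inner skip
        m = torch.relu(self.mid(h2))
        d2 = torch.relu(self.dec2(torch.cat([m, self.kappa * h2], 1)))
        return self.dec1(torch.cat([self.up(d2), self.kappa**2 * h1], 1))

y = TinyUNet()(torch.randn(1, 3, 32, 32))                # (1, 3, 32, 32)
```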
no code implementations • ICCV 2023 • Zhongzhan Huang, Mingfu Liang, Jinghui Qin, Shanshan Zhong, Liang Lin
The self-attention mechanism (SAM) is widely used in various fields of artificial intelligence and has successfully boosted the performance of different models.
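For reference, the standard scaled dot-product formulation that such SAMs build on, as a self-contained sketch (the projection sizes are arbitrary):

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence.

    x: (batch, seq, dim); w_q, w_k, w_v: (dim, dim) projections.
    Each output position is a weighted mix of all value vectors.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v   # attention rows sum to 1

x = torch.randn(2, 5, 8)
out = self_attention(x, *(torch.randn(8, 8) for _ in range(3)))  # (2, 5, 8)
```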
1 code implementation • 9 May 2023 • Shanshan Zhong, Wushao Wen, Jinghui Qin, Qiangpu Chen, Zhongzhan Huang
In computer vision, the performance of deep neural networks (DNNs) is highly related to the feature extraction ability, i.e., the ability to recognize and focus on key pixel regions in an image.
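A generic spatial-attention gate illustrates the "focus on key pixel regions" mechanism; this is a common pattern sketched for illustration, not necessarily the module proposed in this paper:

```python
import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    """Generic spatial attention: weight each pixel location in [0, 1].

    Pools the feature map across channels (mean and max), predicts a
    per-pixel mask with a small conv, and rescales the input so the
    network can emphasize key regions.
    """

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                        # x: (B, C, H, W)
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.max(1, keepdim=True).values], dim=1)
        mask = torch.sigmoid(self.conv(pooled))  # (B, 1, H, W)
        return x * mask

out = SpatialGate()(torch.randn(2, 32, 16, 16))  # same shape, reweighted
```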
1 code implementation • 9 May 2023 • Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin
Our approach makes text-to-image diffusion models easier to use and improves the user experience, demonstrating its potential to advance user-friendly text-to-image generation by bridging the semantic gap between simple narrative prompts and complex keyword-based prompts.
no code implementations • 13 Apr 2023 • Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin
Through these inference-time transformations, SRP removes the extra costs incurred for performance improvement during training, such as parameter count and inference time, and therefore has great potential for industrial and practical applications.
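A standard instance of such an inference-time transformation is folding a BatchNorm into its preceding convolution; the sketch below shows the pattern generically and is not SRP itself:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm into the preceding conv for inference.

    y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    equals a single conv with rescaled weights and a shifted bias.
    """
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
bn.eval()
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)
```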
no code implementations • 5 Feb 2023 • Zhongzhan Huang, Mingfu Liang, Liang Lin
With the development of deep learning techniques, AI-enhanced numerical solvers are expected to become a new paradigm for solving differential equations due to their versatility and effectiveness in alleviating the accuracy-speed trade-off in traditional numerical solvers.
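One common pattern behind such AI-enhanced solvers, sketched generically (the network size and the residual-correction form are assumptions, not this paper's method): a cheap classical step is refined by a small learned correction.

```python
import torch
import torch.nn as nn

class CorrectedEuler(nn.Module):
    """Generic learned-corrector pattern for ODE solving.

    A coarse explicit Euler step is refined by a small network that
    would be trained to predict the local truncation error, trading a
    little inference cost for accuracy at large step sizes.
    """

    def __init__(self, dim):
        super().__init__()
        self.corrector = nn.Sequential(
            nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))

    def step(self, f, y, h):
        euler = y + h * f(y)                   # cheap classical step
        return euler + h * self.corrector(y)  # learned residual correction

f = lambda y: -y                               # dy/dt = -y
solver = CorrectedEuler(dim=1)
y = torch.ones(1)
for _ in range(10):
    y = solver.step(f, y, h=0.1)               # untrained here, a sketch only
```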
no code implementations • 27 Oct 2022 • Shanshan Zhong, Wushao Wen, Jinghui Qin, Zhongzhan Huang
A growing body of empirical and theoretical evidence shows that deepening neural networks can effectively improve their performance under suitable training settings.
no code implementations • 27 Oct 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Liang Lin
The self-attention mechanism has emerged as a critical component for improving the performance of various backbone neural networks.
1 code implementation • 6 Oct 2022 • Shanshan Zhong, Jinghui Qin, Zhongzhan Huang, Daifeng Li
However, most existing methods focus only on the dialogue context or rely on multi-task learning for global satisfaction prediction, ignoring the grounded relationships among causal variables such as user state and labor cost.
1 code implementation • 7 Aug 2022 • Zhongzhan Huang, Senwei Liang, Hong Zhang, Haizhao Yang, Liang Lin
The large-scale simulation of dynamical systems is critical in numerous scientific and engineering disciplines.
no code implementations • 16 Jul 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang, Liang Lin
Recently, many plug-and-play self-attention modules (SAMs) have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).
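As a concrete reference for what a plug-and-play SAM looks like, here is a minimal squeeze-and-excitation-style channel-attention block (a representative example, not this paper's contribution):

```python
import torch
import torch.nn as nn

class SEModule(nn.Module):
    """Squeeze-and-excitation: a typical plug-and-play SAM for CNNs.

    It pools each channel to a scalar, passes the vector through a
    small bottleneck MLP, and rescales channels by the resulting
    weights; it can be dropped in after any conv block.
    """

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze -> (B, C)
        return x * w[:, :, None, None]         # excite: rescale channels

out = SEModule(32)(torch.randn(2, 32, 8, 8))   # same shape as input
```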
no code implementations • ICLR 2022 • Senwei Liang, Zhongzhan Huang, Hong Zhang
We propose stiffness-aware neural network (SANN), a new method for learning Hamiltonian dynamical systems from data.
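The stiffness-aware part is beyond a snippet, but the underlying Hamiltonian-learning setup can be sketched: a network represents the scalar H(q, p), and the dynamics are read off its symplectic gradient (a generic sketch; the layer sizes are assumptions).

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    """Learn a scalar Hamiltonian H(q, p) from data; the dynamics are
    its symplectic gradient, dq/dt = dH/dp and dp/dt = -dH/dq, so the
    learned energy is conserved along the flow by construction."""

    def __init__(self):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

    def time_derivative(self, x):              # x: (batch, 2), rows (q, p)
        x = x.requires_grad_(True)
        grad = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
        dHdq, dHdp = grad.unbind(dim=1)
        return torch.stack([dHdp, -dHdq], dim=1)

xdot = HamiltonianNet().time_derivative(torch.randn(4, 2))   # (4, 2)
```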
no code implementations • 13 Jul 2021 • Zhongzhan Huang, Mingfu Liang, Senwei Liang, Wei He
Deep neural networks suffer from catastrophic forgetting when learning new knowledge sequentially, and a growing number of approaches have been proposed to mitigate this problem.
no code implementations • 11 Jul 2021 • Wei He, Zhongzhan Huang, Mingfu Liang, Senwei Liang, Haizhao Yang
A filter can be important under one criterion yet unnecessary under another, which indicates that each criterion offers only a partial view of a filter's comprehensive "importance".
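The disagreement is easy to reproduce: in this toy example, the same two filters are ranked oppositely by two standard criteria (l1 vs. l2 weight norm).

```python
import torch

# Two standard filter-importance criteria applied to the same layer:
# the l1 and l2 norms of each filter's weights.  Spread-out weights
# score high under l1, while a single large weight dominates l2, so
# the two rankings can disagree.
w = torch.tensor([[0.5, 0.5, 0.5, 0.5],      # filter 0: spread-out weights
                  [1.4, 0.0, 0.0, 0.0]])     # filter 1: one large weight

l1 = w.abs().sum(dim=1)                      # [2.0, 1.4] -> filter 0 wins
l2 = w.pow(2).sum(dim=1).sqrt()              # [1.0, 1.4] -> filter 1 wins
print(l1.argmax().item(), l2.argmax().item())  # 0 1: the criteria disagree
```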
no code implementations • NeurIPS 2021 • Zhongzhan Huang, Xinjiang Wang, Ping Luo
Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), and various pruning criteria have been proposed to remove the redundant filters of CNNs.
1 code implementation • 30 Nov 2020 • Junfan Lin, Zhongzhan Huang, Keze Wang, Xiaodan Liang, Weiwei Chen, Liang Lin
Although deep reinforcement learning (RL) has been successfully applied to a variety of robotic control tasks, it remains challenging to apply to real-world tasks due to poor sample efficiency.
1 code implementation • 28 Nov 2020 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang
Recently, many plug-and-play self-attention modules have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).
no code implementations • 24 Apr 2020 • Zhongzhan Huang, Wenqi Shao, Xinjiang Wang, Liang Lin, Ping Luo
Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), where various pruning criteria have been proposed to remove the redundant filters.
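A minimal version of criterion-based filter pruning, using the classic l1-norm criterion (a generic sketch; the paper studies pruning criteria rather than this particular recipe):

```python
import torch
import torch.nn as nn

def prune_filters(conv: nn.Conv2d, keep_ratio=0.5) -> nn.Conv2d:
    """Prune output filters of a conv by the classic l1-norm criterion.

    Ranks each output filter by the l1 norm of its weights and keeps
    the top fraction; downstream layers would need matching surgery.
    """
    scores = conv.weight.abs().sum(dim=(1, 2, 3))    # one score per filter
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    keep = scores.topk(n_keep).indices
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned

small = prune_filters(nn.Conv2d(3, 16, 3), keep_ratio=0.25)  # 16 -> 4 filters
```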
2 code implementations • 12 Aug 2019 • Senwei Liang, Zhongzhan Huang, Mingfu Liang, Haizhao Yang
Batch Normalization (BN) (Ioffe and Szegedy, 2015) normalizes the features of an input image via statistics computed over a batch of images, and hence introduces noise into the gradient of the training loss.
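The batch dependence is visible in a direct implementation: the same image normalizes differently depending on its batchmates, which is the noise the snippet refers to (toy 1-D features for brevity).

```python
import torch

def batch_norm(x, eps=1e-5):
    """Normalize features with statistics computed across the batch."""
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

torch.manual_seed(0)
img = torch.randn(8)
# The same image gets different normalized features in different batches:
out_a = batch_norm(torch.stack([img, torch.randn(8)]))[0]
out_b = batch_norm(torch.stack([img, torch.randn(8)]))[0]
print((out_a - out_b).abs().max())  # nonzero: batch statistics inject noise
```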
3 code implementations • 25 May 2019 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Haizhao Yang
Attention networks have successfully boosted the performance in various vision problems.
Ranked #139 on Image Classification on CIFAR-100 (using extra training data)