no code implementations • 19 Jan 2023 • Shizun Wang, Weihong Zeng, Xu Wang, Hao Yang, Li Chen, Yi Yuan, Yunzhao Zeng, Min Zheng, Chuang Zhang, Ming Wu
To this end, we propose SwiftAvatar, a novel avatar auto-creation framework that clearly outperforms previous works.
1 code implementation • 22 Aug 2022 • Jie Qin, Jie Wu, Ming Li, Xuefeng Xiao, Min Zheng, Xingang Wang
Consequently, we offer the first attempt to provide lightweight SSSS models via a novel multi-granularity distillation (MGD) scheme, where multi-granularity is captured from three aspects: i) complementary teacher structure; ii) labeled-unlabeled data cooperative distillation; iii) hierarchical and multi-level loss settings.
Knowledge Distillation • Semi-Supervised Semantic Segmentation
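The generic mechanism behind any distillation scheme like MGD can be illustrated with the standard soft-target loss: the student matches the teacher's temperature-softened class distribution via a KL divergence. This is a minimal sketch of that generic loss, not the paper's exact multi-granularity formulation; all shapes and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, T)          # soft teacher targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

teacher = np.array([[4.0, 1.0, 0.5]])
student = np.array([[2.5, 1.5, 1.0]])
loss = distill_loss(student, teacher)       # positive while they differ
zero_loss = distill_loss(teacher, teacher)  # ~0 when student matches teacher
```

In a semi-supervised setting this term is typically applied to unlabeled images alongside a supervised loss on labeled ones.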
3 code implementations • 12 Jul 2022 • Jiashi Li, Xin Xia, Wei Li, Huixia Li, Xing Wang, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan
Then, Next Hybrid Strategy (NHS) is designed to stack NCB and NTB in an efficient hybrid paradigm, which boosts performance in various downstream tasks.
Ranked #253 on Image Classification on ImageNet
no code implementations • 22 Jun 2022 • Ming Li, Jie Wu, Jinhang Cai, Jie Qin, Yuxi Ren, Xuefeng Xiao, Min Zheng, Rui Wang, Xin Pan
Recently, synthetic data-based instance segmentation has become a favorable optimization paradigm, since it leverages simulation rendering and physics to generate high-quality image-annotation pairs.
1 code implementation • 25 May 2022 • Hailong Ma, Xin Xia, Xing Wang, Xuefeng Xiao, Jiashi Li, Min Zheng
Recently, Transformer networks have achieved impressive results on a variety of vision tasks.
no code implementations • 19 May 2022 • Xin Xia, Jiashi Li, Jie Wu, Xing Wang, Xuefeng Xiao, Min Zheng, Rui Wang
We revisit existing Transformers from the perspective of practical application.
no code implementations • CVPR 2022 • Xin Dong, Fuwei Zhao, Zhenyu Xie, Xijin Zhang, Daniel K. Du, Min Zheng, Xiang Long, Xiaodan Liang, Jianchao Yang
While significant progress has been made in garment transfer, one of the most applicable directions of human-centric image generation, existing works overlook in-the-wild imagery, presenting severe garment-person misalignment as well as noticeable degradation in fine texture details.
1 code implementation • 29 Mar 2022 • Wei Li, Xing Wang, Xin Xia, Jie Wu, Xuefeng Xiao, Min Zheng, Shiping Wen
SepViT helps to carry out the information interaction within and among the windows via a depthwise separable self-attention.
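The within-window part of that interaction can be sketched with plain window-based self-attention: tokens attend only to tokens inside the same non-overlapping window. This toy single-head sketch illustrates the locality, not SepViT's exact depthwise separable operator; window size and dimensions are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def window_self_attention(x, window):
    """x: (num_tokens, dim); attention is computed per non-overlapping window."""
    n, d = x.shape
    out = np.empty_like(x)
    for start in range(0, n, window):
        w = x[start:start + window]           # tokens of one window
        attn = softmax(w @ w.T / np.sqrt(d))  # scaled dot-product attention
        out[start:start + window] = attn @ w  # mixing stays within the window
    return out

tokens = np.random.default_rng(0).normal(size=(8, 4))
y = window_self_attention(tokens, window=4)   # same shape as the input
```

Because windows are independent here, a separate "pointwise" step (as in SepViT's among-window branch) is what exchanges information across windows.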
2 code implementations • 21 Mar 2022 • Rui Yang, Hailong Ma, Jie Wu, Yansong Tang, Xuefeng Xiao, Min Zheng, Xiu Li
The vanilla self-attention mechanism inherently relies on pre-defined, fixed computational dimensions.
no code implementations • 21 Jan 2022 • Feng Ren, Xiao Ding, Min Zheng, Mikhail Korzinkin, Xin Cai, Wei Zhu, Alexey Mantsyzov, Alex Aliper, Vladimir Aladinskiy, Zhongying Cao, Shanshan Kong, Xi Long, Bonnie Hei Man Liu, Yingtao Liu, Vladimir Naumov, Anastasia Shneyderman, Ivan V. Ozerov, Ju Wang, Frank W. Pun, Alan Aspuru-Guzik, Michael Levitt, Alex Zhavoronkov
The AlphaFold computer program predicted protein structures for the whole human genome, which has been considered a remarkable breakthrough in both artificial intelligence (AI) applications and structural biology.
no code implementations • CVPR 2021 • Dongsheng Ruan, Daiyin Wang, Yuan Zheng, Nenggan Zheng, Min Zheng
These approaches commonly learn the relationship between global contexts and attention activations by using fully-connected layers or linear transformations.
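The pattern described here, global context mapped through fully-connected layers to per-channel attention activations, is the squeeze-and-excitation idiom. This is a minimal NumPy sketch of that idiom; the reduction ratio and weight shapes are illustrative assumptions, not the paper's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gate(feat, w1, w2):
    """feat: (C, H, W). Rescale each channel by a learned gate in (0, 1)."""
    z = feat.mean(axis=(1, 2))       # squeeze: global context, shape (C,)
    h = np.maximum(w1 @ z, 0.0)      # FC + ReLU (channel reduction)
    g = sigmoid(w2 @ h)              # FC + sigmoid -> attention activations
    return feat * g[:, None, None], g

rng = np.random.default_rng(1)
C, r = 8, 2                          # channels, reduction ratio (assumed)
feat = rng.normal(size=(C, 6, 6))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
out, gates = se_gate(feat, w1, w2)   # out has the same shape as feat
```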
no code implementations • 15 May 2020 • Xin Xia, Xuefeng Xiao, Xing Wang, Min Zheng
In this way, PAD-NAS can automatically design the operations for each layer and achieve a trade-off between search space quality and model diversity.
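Designing "the operations for each layer" under a budget can be illustrated with the simplest possible search loop: sample per-layer operations at random and keep the best architecture that fits a latency constraint. The op names, latencies, and scoring proxy below are hypothetical; PAD-NAS's actual search strategy is not reproduced here.

```python
import numpy as np

OPS = {"conv3x3": 3.0, "conv5x5": 5.0, "skip": 0.1}  # assumed latencies (ms)

def score(arch):
    # Hypothetical accuracy proxy: heavier ops "help" in this toy model.
    return sum(OPS[o] for o in arch) * 0.7

def random_search(num_layers, latency_budget, trials=200, seed=0):
    """Pick one op per layer; keep the best architecture within budget."""
    rng = np.random.default_rng(seed)
    names = list(OPS)
    best, best_score = None, -np.inf
    for _ in range(trials):
        arch = [names[i] for i in rng.integers(len(names), size=num_layers)]
        latency = sum(OPS[o] for o in arch)
        if latency <= latency_budget and score(arch) > best_score:
            best, best_score = arch, score(arch)
    return best

arch = random_search(num_layers=4, latency_budget=12.0)
```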
no code implementations • 6 Sep 2019 • Dongsheng Ruan, Jun Wen, Nenggan Zheng, Min Zheng
In this work, we first revisit the SE block, and then present a detailed empirical study of the relationship between global context and attention distribution, based on which we propose a simple yet effective module, called Linear Context Transform (LCT) block.
no code implementations • 16 May 2019 • Zahra Ebrahimzadeh, Min Zheng, Selcuk Karakas, Samantha Kleinberg
Many real-world time series, such as in health, have changepoints where the system's structure or parameters change.
no code implementations • ICLR 2019 • Zahra Ebrahimzadeh, Min Zheng, Selcuk Karakas, Samantha Kleinberg
To address this, we show how changepoint detection can be treated as a supervised learning problem, and propose a new deep neural network architecture that can efficiently identify both abrupt and gradual changes at multiple scales.
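The supervised framing can be sketched without any neural network: slide a window over the series, compute a feature per window, and flag a changepoint when the feature clears a threshold. The thresholded mean-shift "classifier" below is a stand-in assumption, not the paper's deep architecture.

```python
import numpy as np

def window_features(series, window):
    """One feature per window: |mean(second half) - mean(first half)|."""
    feats = []
    half = window // 2
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        feats.append(abs(w[half:].mean() - w[:half].mean()))
    return np.array(feats)

def detect_changepoint(series, window=20, threshold=1.0):
    """Return the index of the strongest detected change, or None."""
    feats = window_features(series, window)
    i = int(feats.argmax())
    return (i + window // 2) if feats[i] > threshold else None

rng = np.random.default_rng(2)
# Synthetic series with an abrupt mean shift at index 100.
series = np.concatenate([rng.normal(0, 0.3, 100), rng.normal(3, 0.3, 100)])
cp = detect_changepoint(series)
```

A learned model replaces the hand-picked feature and threshold, which is what lets it also catch gradual changes at multiple scales.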
no code implementations • 28 Feb 2013 • Min Zheng, Mingshen Sun, John C. S. Lui
In this paper, we present the design and implementation of DroidAnalytics, a signature-based analytic system to automatically collect, manage, analyze, and extract Android malware.
Cryptography and Security
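The core of any signature-based scheme is hashing code artifacts and matching them against a database of known-malicious signatures. This toy sketch illustrates that matching step only; DroidAnalytics itself builds a multi-level signature over app, class, and method structure, and the database contents and helper names here are hypothetical.

```python
import hashlib

def method_signature(opcodes):
    """Hash a method's opcode sequence into a hex signature."""
    return hashlib.md5(" ".join(opcodes).encode()).hexdigest()

# Hypothetical signature database of known-malicious methods.
MALWARE_DB = {method_signature(["invoke-virtual", "move-result", "const-string"])}

def scan(app_methods):
    """Return indices of methods whose signatures hit the database."""
    return [i for i, m in enumerate(app_methods)
            if method_signature(m) in MALWARE_DB]

app = [
    ["const/4", "return-void"],
    ["invoke-virtual", "move-result", "const-string"],  # matches the DB entry
]
hits = scan(app)  # -> [1]
```

Hashing at the method level (rather than the whole APK) is what lets such systems catch repackaged apps that reuse malicious components.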