no code implementations • 17 Apr 2024 • Yiqun Xie, Zhihao Wang, Weiye Chen, Zhili Li, Xiaowei Jia, Yanhua Li, Ruichen Wang, Kangyang Chai, Ruohan Li, Sergii Skakun
This work aims to enhance the understanding of the status and suitability of foundation models for pixel-level classification using multispectral imagery at moderate resolution, through comparisons with traditional machine learning (ML) and regular-size deep learning models.
no code implementations • 21 Mar 2024 • Zhihao Wang, Yulin Zhou, Ningyu Zhang, Xiaosong Yang, Jun Xiao, Zhao Wang
We believe our work provides a novel perspective on uncertainty quality for the general motion prediction task and will encourage further studies in this field.
no code implementations • 27 Jan 2024 • Zhihao Wang, Yiqun Xie, Zhili Li, Xiaowei Jia, Zhe Jiang, Aolin Jia, Shuo Xu
Fairness-awareness has emerged as an essential building block for the responsible use of artificial intelligence in real applications.
no code implementations • 16 Oct 2023 • Lihui Xue, Zhihao Wang, Xueqian Wang, Gang Li
In addition, our method reduces the memory costs of the subsequent pixel-level CD processing stage by more than 60%.
1 code implementation • 31 May 2023 • Tong Li, Zhihao Wang, Liangying Shao, Xuling Zheng, Xiaoli Wang, Jinsong Su
Specifically, in addition to a text encoder encoding the input text, our model is equipped with a table header generator that first outputs a table header, i.e., the first row of the table, in the manner of sequence generation.
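As a rough illustration of header generation as sequence generation (all names and the toy scorer below are hypothetical, not the paper's code), a greedy decoder can emit the first row token by token until an end-of-header marker:

```python
# Hypothetical sketch: generate a table header (the first row) as a
# token sequence, stopping at an end-of-header marker.
def generate_header(score_next, vocab, eos="</header>", max_len=8):
    """Greedy sequence generation: repeatedly pick the highest-scoring
    next token given the prefix, until the end marker is produced."""
    header = []
    while len(header) < max_len:
        token = max(vocab, key=lambda t: score_next(header, t))
        if token == eos:
            break
        header.append(token)
    return header

# Toy scorer that prefers a fixed column order, then ends the row.
ORDER = ["Name", "Year", "Score", "</header>"]
def toy_scorer(prefix, token):
    want = ORDER[min(len(prefix), len(ORDER) - 1)]
    return 1.0 if token == want else 0.0

print(generate_header(toy_scorer, ORDER))  # ['Name', 'Year', 'Score']
```

In the real model the scorer would be a learned decoder conditioned on the encoded input text; here it is a stand-in to show the control flow.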
1 code implementation • 25 May 2023 • Zhihao Wang, Longyue Wang, Jinsong Su, Junfeng Yao, Zhaopeng Tu
Experimental results on the large-scale WMT20 En-De task show that the asymmetric architecture (e.g., a bigger encoder and a smaller decoder) achieves performance comparable to the scaled model while maintaining the decoding-speed advantage of standard NAT models.
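A back-of-envelope parameter count makes the intuition concrete (the 9/3 layer split and the 12·d² per-layer estimate are illustrative assumptions, not the paper's exact configuration): shifting depth from decoder to encoder keeps total capacity fixed while decoding cost, which scales with decoder depth, drops.

```python
# Rough Transformer parameter estimate: each layer costs about
# 12 * d_model^2 parameters (4*d^2 attention projections plus an
# 8*d^2 feed-forward block; biases and embeddings ignored).
def stack_params(n_layers, d_model):
    return n_layers * 12 * d_model ** 2

d = 512
symmetric  = stack_params(6, d) + stack_params(6, d)  # 6-6 baseline
asymmetric = stack_params(9, d) + stack_params(3, d)  # bigger encoder, smaller decoder

print(symmetric == asymmetric)                    # True: same total capacity
print(stack_params(3, d) / stack_params(6, d))    # 0.5: decoder half as deep
```

The encoder runs once per sentence, so its extra depth is cheap relative to the repeated decoder passes, which is why the asymmetry preserves NAT decoding speed.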
1 code implementation • 28 Dec 2022 • Zhihao Wang, Zongyu Lin, Peiqi Liu, Guidong Zheng, Junjie Wen, Xianxin Chen, Yujun Chen, Zhilin Yang
Label noise is ubiquitous in various machine learning scenarios such as self-labeling with model predictions and erroneous data annotation.
no code implementations • 7 Oct 2022 • Zhihao Wang, Chuang Zhu
In TCNL, the shallow feature extractor first extracts preliminary features.
no code implementations • 21 Jul 2022 • Boyang Xia, Zhihao Wang, Wenhao Wu, Haoran Wang, Jungong Han
For each category, its common pattern is employed as a query, and the most salient frames are retrieved in response to it.
Ranked #5 on Action Recognition on ActivityNet
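A minimal sketch of this query-response idea (function name, feature values, and the dot-product scoring are assumptions for illustration, not the paper's implementation): the category's common pattern scores each frame feature, a softmax normalizes the scores, and the highest-weight frames are returned as the salient ones.

```python
import math

def salient_frames(frames, pattern, top_k=2):
    """Use a category's common pattern as a query: score each frame
    feature by dot product, softmax over frames, and return the indices
    of the most salient (highest-weight) frames."""
    scores = [sum(f * p for f, p in zip(frame, pattern)) for frame in frames]
    m = max(scores)                                   # stabilize the softmax
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    ranked = sorted(range(len(frames)), key=lambda i: -weights[i])
    return ranked[:top_k]

frames = [[1.0, 0.0],   # frame 0: matches the pattern
          [0.0, 1.0],   # frame 1: orthogonal to it
          [0.9, 0.1],   # frame 2: near match
          [0.5, 0.5]]   # frame 3: mixed
pattern = [1.0, 0.0]    # hypothetical "common pattern" for one class
print(salient_frames(frames, pattern))  # [0, 2]
```

In practice the pattern and frame features would both be learned embeddings; the ranking step is the same.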
1 code implementation • 27 Jun 2022 • Zechen Wang, Liangzhen Zheng, Sheng Wang, Mingzhi Lin, Zhihao Wang, Adams Wai-Kin Kong, Yuguang Mu, Yanjie Wei, Weifeng Li
In this work, we propose a fully differentiable framework for ligand pose optimization based on a hybrid scoring function (SF) that combines a multi-layer perceptron (DeepRMSD) with the traditional AutoDock Vina SF.
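The core mechanism can be sketched with a toy example (this is NOT the DeepRMSD + Vina scoring function; the quadratic "score" and all names below are stand-ins): because the scoring function is differentiable in the pose parameters, the pose can be refined by plain gradient descent.

```python
# Toy sketch of differentiable pose optimization: gradient descent on a
# hand-made differentiable "scoring function" (lower score = better pose).
def score(pose, target):
    # Squared distance of the pose parameters to a hypothetical optimum.
    return sum((p - t) ** 2 for p, t in zip(pose, target))

def grad(pose, target):
    # Analytic gradient of the quadratic score above.
    return [2 * (p - t) for p, t in zip(pose, target)]

def optimize_pose(pose, target, lr=0.1, steps=100):
    for _ in range(steps):
        g = grad(pose, target)
        pose = [p - lr * gi for p, gi in zip(pose, g)]
    return pose

start   = [3.0, -2.0, 1.0]   # e.g. a translation offset in x, y, z
optimum = [0.0, 0.0, 0.0]
final = optimize_pose(start, optimum)
print(round(score(final, optimum), 6))  # ~0: pose has converged
```

In the actual framework the gradient would come from backpropagation through the learned SF rather than a hand-derived formula, but the update loop has the same shape.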
no code implementations • 22 Mar 2022 • Zhihao Wang, Tangjian Duan, ZiHao Wang, Minghui Yang, Zujie Wen, Yongliang Wang
Context modeling plays a significant role in building multi-turn dialogue systems.
no code implementations • 14 Jul 2021 • Meng Xu, Zhihao Wang, Jiasong Zhu, Xiuping Jia, Sen Jia
The main body of the generator contains two blocks: one is the pyramidal convolution in residual-dense block (PCRDB), and the other is the attention-based upsample (AUP) block.
1 code implementation • 4 Jul 2021 • Zhihao Wang, Yanwei Yu, Yibo Wang, Haixu Long, Fazheng Wang
Offline Chinese handwriting text recognition is a long-standing research topic in the field of pattern recognition.
no code implementations • ECCV 2020 • Chenghao Liu, Zhihao Wang, Doyen Sahoo, Yuan Fang, Kun Zhang, Steven C. H. Hoi
Meta-learning methods have been extensively studied and applied in computer vision, especially for few-shot classification tasks.
5 code implementations • 16 Feb 2019 • Zhihao Wang, Jian Chen, Steven C. H. Hoi
Image Super-Resolution (SR) is an important class of image processing techniques to enhance the resolution of images and videos in computer vision.