1 code implementation • 11 Apr 2023 • Danwei Li, Zhengyu Zhang, Siyang Yuan, Mingze Gao, Weilin Zhang, Chaofei Yang, Xi Liu, Jiyan Yang
However, MTL research faces two challenges: 1) effectively modeling the relationships between tasks to enable knowledge sharing, and 2) jointly learning task-specific and shared knowledge.
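As a hedged illustration of the second challenge only (not this paper's architecture), the sketch below shows the common shared-bottom pattern: a shared encoder captures knowledge common to all tasks while small per-task towers capture task-specific knowledge, and both are trained jointly. All module names and layer sizes here are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class SharedBottomMTL(nn.Module):
    """Minimal multi-task sketch: a shared encoder learns common knowledge,
    per-task towers learn task-specific knowledge. Sizes are illustrative."""
    def __init__(self, in_dim=64, shared_dim=32, num_tasks=2):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.ReLU())
        self.towers = nn.ModuleList(
            nn.Linear(shared_dim, 1) for _ in range(num_tasks)
        )

    def forward(self, x):
        h = self.shared(x)                           # knowledge shared across tasks
        return [tower(h) for tower in self.towers]   # one prediction per task

# Joint training sums the per-task losses, so shared and task-specific
# parameters are updated together.
model = SharedBottomMTL()
x = torch.randn(8, 64)
targets = [torch.randn(8, 1), torch.randn(8, 1)]
preds = model(x)
loss = sum(nn.functional.mse_loss(p, t) for p, t in zip(preds, targets))
loss.backward()
```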
no code implementations • 23 May 2022 • Youjun Xu, Jinchuan Xiao, Chia-Han Chou, Jianhang Zhang, Jintao Zhu, Qiwan Hu, Hemin Li, Ningsheng Han, Bingyu Liu, Shuaipeng Zhang, Jinyu Han, Zhen Zhang, Shuhao Zhang, Weilin Zhang, Luhua Lai, Jianfeng Pei
Due to a decades-long backlog and an ever-increasing volume of printed literature, there is high demand for translating printed depictions into machine-readable formats, a task known as Optical Chemical Structure Recognition (OCSR).
no code implementations • 11 Mar 2022 • Buyun Zhang, Liang Luo, Xi Liu, Jay Li, Zeliang Chen, Weilin Zhang, Xiaohan Wei, Yuchen Hao, Michael Tsang, Wenjun Wang, Yang Liu, Huayu Li, Yasmine Badr, Jongsoo Park, Jiyan Yang, Dheevatsa Mudigere, Ellie Wen
To overcome the training challenge posed by DHEN's deeper, multi-layer structure, we propose a novel co-designed training system that further improves the training efficiency of DHEN.
no code implementations • 21 Oct 2021 • Wei Ma, Qin Xie, Jianhang Zhang, Shiliang Li, Youjun Xu, Xiaobing Deng, Weilin Zhang
However, the majority of compounds receive low docking scores, and evaluating them consumes most of the computational resources.
no code implementations • CVPR 2021 • Weilin Zhang, Yu-Xiong Wang
One critical factor in improving few-shot detection is to address the lack of variation in training data.
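As a generic, hedged illustration of adding variation (not the method proposed in this paper), one simple option is feature-space jittering of the few available box features; the function name and the noise level below are assumptions for the sketch.

```python
import torch

def augment_box_features(feats, num_aug=5, noise_std=0.1):
    """Jitter each few-shot box feature with Gaussian noise to add variation.
    Illustrative only: noise_std is an assumed hyperparameter, not a value
    taken from the paper."""
    noisy = [feats + noise_std * torch.randn_like(feats) for _ in range(num_aug)]
    return torch.cat([feats] + noisy, dim=0)

# A handful of novel-class box features (e.g., 3 shots, 256-d) become a
# larger, more varied training set for the detector's classifier head.
few_shot_feats = torch.randn(3, 256)
augmented = augment_box_features(few_shot_feats)
print(augmented.shape)  # torch.Size([18, 256])
```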
no code implementations • 19 Nov 2020 • Weilin Zhang, Yu-Xiong Wang, David A. Forsyth
Learning to detect an object in an image from very few training examples - few-shot object detection - is challenging, because the classifier that sees proposal boxes has very little training data.
1 code implementation • 20 Nov 2018 • Ke Chen, Weilin Zhang, Shlomo Dubnov, Gus Xia, Wei Li
With recent breakthroughs in artificial neural networks, deep generative models have become one of the leading techniques for computational creativity.