1 code implementation • 3 Apr 2024 • Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan Li, Xiaohan Zhang, Zihan Wang, Aohan Zeng, Zhengxiao Du, Wenyi Zhao, Jie Tang, Yuxiao Dong
Large language models (LLMs) have shown excellent mastery of human language, but still struggle in real-world applications that require mathematical problem-solving.
1 code implementation • 8 Oct 2023 • Wei Li, Ruifeng Bian, Wenyi Zhao, Weijin Xu, Huihua Yang
To address these concerns, we propose a novel Cross-head Mutual Mean-Teaching Network (CMMT-Net) that incorporates strong-weak data augmentation, thereby benefiting both self-training and consistency learning.
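The mean-teaching idea referenced above can be sketched in a few lines: a teacher model tracks the student via an exponential moving average (EMA) of weights, and a consistency loss pushes the student's prediction on a strongly augmented input toward the teacher's prediction on a weakly augmented one. This is a minimal illustrative sketch, not the authors' CMMT-Net implementation; the toy linear "model" and the augmentation noise levels are assumptions.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Teacher weights track the student via an exponential moving average."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def consistency_loss(teacher_pred, student_pred):
    """Mean squared error between teacher (weak-aug) and student (strong-aug) outputs."""
    return float(np.mean((teacher_pred - student_pred) ** 2))

rng = np.random.default_rng(0)
student_w = rng.normal(size=4)
teacher_w = student_w.copy()

# One unsupervised training step on an unlabeled sample:
x = rng.normal(size=4)
weak_x = x + 0.01 * rng.normal(size=4)    # weak augmentation: light noise
strong_x = x + 0.30 * rng.normal(size=4)  # strong augmentation: heavy noise

teacher_pred = teacher_w * weak_x         # toy elementwise "model"
student_pred = student_w * strong_x

loss = consistency_loss(teacher_pred, student_pred)
# Gradient step on the consistency loss (up to a constant factor):
student_w = student_w - 0.1 * (student_pred - teacher_pred) * strong_x
teacher_w = ema_update(teacher_w, student_w)
```

In practice the teacher is never updated by gradients, only by EMA, which is what makes its targets more stable than the student's own predictions.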
1 code implementation • 21 Jun 2023 • Wentao Liu, Tong Tian, Lemeng Wang, Weijin Xu, Lei Li, Haoyuan Li, Wenyi Zhao, Siyu Tian, Xipeng Pan, Huihua Yang, Feng Gao, Yiming Deng, Ruisheng Su
In this paper, we introduce DIAS, a dataset specifically developed for IA segmentation in DSA sequences.
1 code implementation • 17 Oct 2022 • Wenhe Jia, Lu Yang, Zilong Jia, Wenyi Zhao, Yilin Zhou, Qing Song
More importantly, spatial segmentation and temporal association, the fundamental model abilities demanded by the task, remain understudied in both evaluation and interaction mechanisms.
no code implementations • 23 Sep 2020 • Zhen Zheng, Pengzhan Zhao, Guoping Long, Feiwen Zhu, Kai Zhu, Wenyi Zhao, Lansong Diao, Jun Yang, Wei Lin
We show in this work that memory-intensive computations can result in severe performance problems, due to off-chip memory access and CPU-GPU context-switch overheads, across a wide range of deep learning models.
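The off-chip-memory problem above is the standard motivation for operator fusion: a chain of elementwise ops executed one at a time writes a full temporary array after each step and reads it back for the next, so the work is dominated by memory traffic rather than arithmetic. The following is a conceptual sketch (not the paper's system) contrasting the unfused chain with a single fused pass; in NumPy the loop is of course slow, and real fusion is done by a kernel compiler, but the data-movement difference is the point.

```python
import numpy as np

def unfused(x):
    """Three elementwise ops, each materializing a full temporary array.

    Each intermediate (a, b) is written out to memory and read back by the
    next op -- three full passes over the data.
    """
    a = x * 2.0
    b = a + 1.0
    return np.maximum(b, 0.0)

def fused(x):
    """Same computation in one pass over the data.

    Each element is loaded once, transformed through the whole chain, and
    stored once -- what a fused GPU kernel would do for this op chain.
    """
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = max(x[i] * 2.0 + 1.0, 0.0)
    return out

x = np.linspace(-2.0, 2.0, 8)
```

For a memory-bound chain like this, fusion cuts the number of array reads/writes from roughly one pair per op to a single load/store per element, which is why it targets exactly the overheads the abstract describes.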