1 code implementation • 10 Oct 2023 • Cong Yang, Bipin Indurkhya, John See, Bo Gao, Yan Ke, Zeyd Boukhers, Zhenyu Yang, Marcin Grzegorzek
However, most existing shape and image datasets lack skeleton ground truth (GT) and apply inconsistent GT standards.
no code implementations • 19 Mar 2023 • Yang Chen, Zhenyu Yang, Jingtong Zhao, Justus Adamson, Yang Sheng, Fang-Fang Yin, Chunhao Wang
Four deep neural networks following the U-Net architecture were trained as sub-models to segment a region of interest (ROI): each sub-model takes the mp-MRI and 1 of the 4 PCs as a 5-channel input and operates in 2D.
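The 5-channel input construction described above (mp-MRI sequences plus one principal component per sub-model) can be sketched as follows; the array shapes, the assumption of 4 mp-MRI channels, and all variable names are illustrative, not the authors' code:

```python
import numpy as np

# Hypothetical shapes: 4 mp-MRI sequences and 4 principal-component (PC)
# maps, each a 2D slice of size H x W (random values as placeholders).
H, W = 128, 128
mp_mri = np.random.rand(4, H, W)   # e.g. 4 MRI sequences (an assumption)
pcs = np.random.rand(4, H, W)      # the 4 PC maps

def make_submodel_input(mp_mri, pc):
    """Stack the mp-MRI channels with one PC map into a 5-channel input."""
    return np.concatenate([mp_mri, pc[None, ...]], axis=0)

# One 5-channel input per sub-model, one sub-model per PC.
inputs = [make_submodel_input(mp_mri, pcs[k]) for k in range(4)]
print(inputs[0].shape)  # (5, 128, 128)
```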
no code implementations • 14 Jan 2023 • Zhenyu Yang, Ge Zhang, Jia Wu, Jian Yang, Quan Z. Sheng, Shan Xue, Chuan Zhou, Charu Aggarwal, Hao Peng, Wenbin Hu, Edwin Hancock, Pietro Liò
Traditional approaches to learning a set of graphs heavily rely on hand-crafted features, such as substructures.
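A typical hand-crafted substructure feature of the kind referenced above is a triangle count, computable from the adjacency matrix; this generic example is not tied to the survey's specific methods:

```python
import numpy as np

def triangle_count(A):
    """Count triangles via trace(A^3) / 6 for an undirected simple graph;
    closed walks of length 3 visit each triangle 6 times."""
    return int(np.trace(A @ A @ A) // 6)

# A 4-node graph: one triangle (0-1-2) plus a pendant edge 2-3.
A = np.zeros((4, 4), int)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(triangle_count(A))  # 1
```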
no code implementations • 29 Nov 2022 • Xiaochuan Ni, Xiaoling Zhang, Xu Zhan, Zhenyu Yang, Jun Shi, Shunjun Wei, Tianjiao Zeng
To avoid missed tracking, a detection method based on deep learning is designed to thoroughly learn shadows' features, thereby improving estimation accuracy.
no code implementations • 12 Oct 2022 • Zhenyu Yang, Kyle Lafata, Eugene Vaios, Zongsheng Hu, Trey Mullikin, Fang-Fang Yin, Chunhao Wang
The SPU-Net model was compared with (1) the classic U-Net model with test-time augmentation (TTA) and (2) linear scaling-based U-Net (LSU-Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score).
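The uncertainty quantities compared above (uncertainty map and uncertainty score) are commonly derived from repeated stochastic predictions; the sketch below uses predictive entropy over sampled probability maps and a standard Dice coefficient, as a generic illustration rather than the SPU-Net implementation:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks."""
    inter = np.sum(pred * gt)
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def uncertainty_map(prob_samples, eps=1e-8):
    """Voxel-wise predictive entropy from repeated stochastic predictions
    (e.g. test-time augmentation or dropout samples)."""
    p = np.mean(prob_samples, axis=0)  # mean foreground probability
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

rng = np.random.default_rng(0)
gt = np.zeros((32, 32))
gt[8:24, 8:24] = 1
# Hypothetical stack of 8 sampled probability maps around the true mask.
samples = np.clip(gt + 0.2 * rng.standard_normal((8, 32, 32)), 0, 1)

seg = (samples.mean(axis=0) > 0.5).astype(float)   # consensus segmentation
umap = uncertainty_map(samples)                    # uncertainty map
score = float(umap.mean())                         # scalar uncertainty score
```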
no code implementations • 21 Sep 2022 • Zhenyu Yang, Xiaoling Zhang, Xu Zhan
The existing Video Synthetic Aperture Radar (ViSAR) moving target shadow detection methods based on deep neural networks mostly generate numerous false alarms and missed detections because of foreground-background indistinguishability.
1 code implementation • 8 Aug 2022 • Jian Guan, Zhenyu Yang, Rongsheng Zhang, Zhipeng Hu, Minlie Huang
Despite advances in generating fluent texts, existing pretraining models tend to attach incoherent event sequences to involved entities when generating narratives such as stories and news.
no code implementations • 7 Jul 2022 • Xiaowo Xu, Xiaoling Zhang, Tianwen Zhang, Zhenyu Yang, Jun Shi, Xu Zhan
Moving target shadows among video synthetic aperture radar (Video-SAR) images are often interfered with by low-scattering backgrounds and clutter noise, causing poor detection-tracking accuracy.
1 code implementation • 6 Jun 2022 • Pei Ke, Haozhe Ji, Zhenyu Yang, Yi Huang, Junlan Feng, Xiaoyan Zhu, Minlie Huang
Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, the generation performance is largely restricted by the amount of labeled data in downstream tasks, particularly in data-to-text generation tasks.
1 code implementation • NAACL 2022 • Haozhe Ji, Rongsheng Zhang, Zhenyu Yang, Zhipeng Hu, Minlie Huang
Although Transformers with fully connected self-attention are powerful at modeling long-term dependencies, they struggle to scale to long texts with thousands of words in language modeling.
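One common remedy for this scaling problem is to restrict each token's attention to a local window via a banded mask; this is a generic sketch of that idea, not necessarily the mechanism proposed in the paper, and the window size is an assumption:

```python
import numpy as np

def local_attention_mask(n, window):
    """Boolean mask allowing position i to attend only to positions j with
    |i - j| <= window: a banded approximation to full self-attention whose
    cost grows linearly in sequence length instead of quadratically."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = local_attention_mask(6, 1)  # each row allows at most 3 positions
```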
no code implementations • 1 Mar 2022 • Zhenyu Yang, Zongsheng Hu, Hangjie Ji, Kyle Lafata, Scott Floyd, Fang-Fang Yin, Chunhao Wang
Methods: By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we designed a novel deep learning model, neural ODE, in which deep feature extraction was governed by an ODE without explicit expression.
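The continuous-feature-extraction hypothesis above amounts to integrating dh/dt = f(h) over a pseudo-time axis; the minimal sketch below uses a tiny hypothetical tanh dynamics function and a fixed-step explicit Euler solver, standing in for the paper's learned ODE network:

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 8))  # hypothetical dynamics parameters

def f(h):
    """Time-independent feature dynamics dh/dt = f(h) (illustrative)."""
    return np.tanh(h @ W)

def odeint_euler(h0, t1=1.0, steps=20):
    """Integrate dh/dt = f(h) from t=0 to t=t1 with fixed-step Euler,
    so depth becomes a continuous variable rather than discrete layers."""
    h, dt = h0.copy(), t1 / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = rng.standard_normal((4, 8))  # a batch of 4 input feature vectors
h1 = odeint_euler(h0)             # features after continuous evolution
```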
no code implementations • 19 Jul 2021 • Zongsheng Hu, Zhenyu Yang, Kyle J. Lafata, Fang-Fang Yin, Chunhao Wang
To develop a deep-learning model that integrates radiomics analysis for enhanced performance of COVID-19 and Non-COVID-19 pneumonia detection using chest X-ray images, two deep-learning models were trained based on a pre-trained VGG-16 architecture: in the 1st model, the X-ray image was the sole input; in the 2nd model, the X-ray image and 2 radiomic feature maps (RFM), selected by saliency map analysis of the 1st model, were stacked as the input.
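The stacked input of the 2nd model can be sketched as channel-stacking the X-ray with the two selected RFMs so the result matches a VGG-16-style 3-channel input; the shapes and names below are illustrative assumptions:

```python
import numpy as np

# Hypothetical 224 x 224 inputs (VGG-16's usual spatial size); random
# values stand in for the real X-ray and radiomic feature maps (RFMs).
H, W = 224, 224
xray = np.random.rand(H, W)
rfm1, rfm2 = np.random.rand(H, W), np.random.rand(H, W)

def stack_input(xray, rfms):
    """Channel-stack the X-ray image with the selected RFMs."""
    return np.stack([xray, *rfms], axis=-1)

x = stack_input(xray, [rfm1, rfm2])
print(x.shape)  # (224, 224, 3)
```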
no code implementations • 6 Jun 2021 • Yinhe Zheng, Yida Wang, Pei Ke, Zhenyu Yang, Minlie Huang
This paper proposes combining pretrained language models with the modular dialogue paradigm for open-domain dialogue modeling.
no code implementations • 31 Jan 2021 • Liqun Yang, Yijun Yang, Yao Wang, Zhenyu Yang, Wei Zeng
When applying neural networks, we need to select a suitable model based on the problem complexity and the dataset scale.
no code implementations • frontiers 2020 • Xueli Xu, Zhongming Xie, Zhenyu Yang, Dongfang Li, Ximing Xu
This study presented a t-SNE-based classification approach for compositional microbiome data, which enables building classifiers and classifying new samples in the reduced-dimensional space produced by t-SNE.
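Classifying a new sample in a reduced 2D space can be sketched as a centered log-ratio (CLR) transform of the compositional data followed by nearest-neighbor voting in the embedding; the CLR step and the plain k-NN vote are common choices assumed here, not necessarily the study's exact pipeline:

```python
import numpy as np

def clr(x, eps=1e-8):
    """Centered log-ratio transform for compositional (relative-abundance)
    data: log-abundances centered per sample so rows sum to zero."""
    logx = np.log(x + eps)
    return logx - logx.mean(axis=1, keepdims=True)

def knn_predict(train_xy, train_labels, query, k=3):
    """Classify a query point in the reduced space by majority vote among
    its k nearest training samples (Euclidean k-NN)."""
    d = np.linalg.norm(train_xy - query, axis=1)
    nearest = train_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy 2D embedding standing in for t-SNE output, two well-separated classes.
pts = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
labels = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(pts, labels, np.array([0.2, 0.2])))  # 0
print(knn_predict(pts, labels, np.array([5.5, 5.5])))  # 1
```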