Search Results for author: Ran Zhou

Found 7 papers, 6 papers with code

WAL-Net: Weakly supervised auxiliary task learning network for carotid plaques classification

1 code implementation • 25 Jan 2024 • Haitao Gan, Lingchao Fu, Ran Zhou, Weiyan Gan, Furong Wang, Xiaoyan Wu, Zhi Yang, Zhongwei Huang

Specifically, the accuracy of mixed-echoic plaque classification increased by approximately 3.3%, demonstrating the effectiveness of our approach.

Classification Segmentation +1

GroundingGPT: Language Enhanced Multi-modal Grounding Model

2 code implementations • 11 Jan 2024 • Zhaowei Li, Qi Xu, Dong Zhang, Hang Song, Yiqing Cai, Qi Qi, Ran Zhou, Junting Pan, Zefeng Li, Van Tu Vu, Zhida Huang, Tao Wang

Beyond capturing global information like other multi-modal models, our proposed model excels at tasks demanding a detailed understanding of local information within the input.

Language Modelling Large Language Model

A region and category confidence-based multi-task network for carotid ultrasound image segmentation and classification

no code implementations • 2 Jul 2023 • Haitao Gan, Ran Zhou, Yanghan Ou, Furong Wang, Xinyao Cheng, Aaron Fenster

The segmentation and classification of carotid plaques in ultrasound images play important roles in the treatment of atherosclerosis and the assessment of stroke risk.

Classification Image Segmentation +3

Improving Self-training for Cross-lingual Named Entity Recognition with Contrastive and Prototype Learning

1 code implementation • 23 May 2023 • Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Chunyan Miao

In cross-lingual named entity recognition (NER), self-training is commonly used to bridge the linguistic gap by training on pseudo-labeled target-language data.

Cross-Lingual NER named-entity-recognition +4

ConNER: Consistency Training for Cross-lingual Named Entity Recognition

1 code implementation • 17 Nov 2022 • Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao

We propose ConNER as a novel consistency training framework for cross-lingual NER, which comprises: (1) translation-based consistency training on unlabeled target-language data, and (2) dropout-based consistency training on labeled source-language data.

Cross-Lingual NER Knowledge Distillation +3

MReD: A Meta-Review Dataset for Structure-Controllable Text Generation

1 code implementation • Findings (ACL) 2022 • Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, Luo Si

A more useful text generator should leverage both the input text and the control signal to guide the generation, which can only be built with a deep understanding of the domain knowledge.

Text Generation Text Summarization
