no code implementations • NAACL (SIGTYP) 2021 • Zhong Zhou, Alexander Waibel
In other words, given a text in 124 source languages, we translate it into a severely low-resource language using only ~1,000 lines of low-resource data without any external help.
1 code implementation • 26 Jun 2024 • Zhaowei Wu, Binyi Su, Hua Zhang, Zhong Zhou
To mitigate this issue, we develop a two-stage open-set object detection framework with prompt learning, which delves into conditional evidence decoupling for unknown rejection.
no code implementations • 29 Jan 2024 • Zhong Zhou
Performance gains come from massive source parallelism through a careful choice of closely related language families, style-consistent corpus-level paraphrases within the same language, and strategic adaptation of existing large pretrained multilingual models, first to the domain and then to the language.
no code implementations • 5 May 2023 • Zhong Zhou, Jan Niehues, Alex Waibel
We examine two approaches: (1) selecting the best seed sentences to jump-start translation in a new language so that they generalize well to the remainder of a larger targeted text, and (2) adapting large general multilingual translation engines from many other languages to focus on a specific text in a new, unknown language.
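For illustration only, a minimal greedy sketch of one way seed sentences might be picked, here by maximizing vocabulary coverage of the targeted text; the coverage criterion, line budget, and function names are assumptions, not the paper's actual selection method.

```python
# Illustrative sketch: greedy seed-sentence selection by vocabulary coverage.
# The coverage criterion and all names here are assumptions, not the paper's method.
def select_seed_sentences(candidates, budget=1000):
    """Greedily pick up to `budget` sentences that cover the most unseen word types."""
    covered = set()
    selected = []
    remaining = list(candidates)
    for _ in range(min(budget, len(remaining))):
        # Score each remaining sentence by how many new word types it would add.
        best = max(remaining, key=lambda s: len(set(s.split()) - covered))
        gain = set(best.split()) - covered
        if not gain:  # nothing new left to cover; stop early
            break
        covered |= gain
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy "targeted text" of three lines, picking two seeds:
text = ["the king spoke to the people", "the people listened", "a new law was read"]
print(select_seed_sentences(text, budget=2))
```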
no code implementations • 2 Feb 2023 • Weimin Shi, Mingchen Zhuge, Dehong Gao, Zhong Zhou, Ming-Ming Cheng, Deng-Ping Fan
Daily images may convey abstract meanings that require us to memorize and infer profound information from them.
2 code implementations • 28 Oct 2022 • Binyi Su, Hua Zhang, Jingzhi Li, Zhong Zhou
In this paper, we seek a solution to generalized few-shot open-set object detection (G-FOOD), which aims to avoid detecting unknown classes as known classes with a high confidence score while maintaining the performance of few-shot detection.
no code implementations • MTSummit 2021 • Zhong Zhou, Alex Waibel
We compare the portion-based approach that optimizes coherence of the text locally with the random sampling approach that increases coverage of the text globally.
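A minimal sketch contrasting the two sampling strategies described above, assuming the text is a list of lines; the budget and function names are illustrative only.

```python
import random

def portion_sample(lines, budget=1000, start=0):
    """Portion-based: one contiguous block, so the sample stays locally coherent."""
    return lines[start:start + budget]

def random_sample(lines, budget=1000, seed=0):
    """Random sampling: lines spread over the whole text for global coverage."""
    rng = random.Random(seed)
    idx = sorted(rng.sample(range(len(lines)), min(budget, len(lines))))
    return [lines[i] for i in idx]
```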
no code implementations • 12 Apr 2021 • Zhong Zhou, Alex Waibel
In other words, given a text in 124 source languages, we translate it into a severely low-resource language using only ~1,000 lines of low-resource data without any external help.
no code implementations • 11 Apr 2021 • Binyi Su, Zhong Zhou, Haiyong Chen, Xiaochun Cao
Moreover, we release a new solar cell EL image dataset named EL-2019, which includes three types of images: crack, finger interruption, and defect-free.
1 code implementation • 19 Dec 2020 • Binyi Su, Haiyong Chen, Zhong Zhou
Finally, experimental results on a large-scale EL dataset of 3,629 images, 2,129 of which are defective, show that the proposed method achieves 98.70% (F-measure), 88.07% (mAP), and 73.29% (IoU) for multi-scale defect classification and detection in raw PV cell EL images.
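For reference, the reported F-measure and IoU follow their standard definitions; a small self-contained sketch (the example values and boxes are made up):

```python
def f_measure(precision, recall):
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(f_measure(0.95, 0.90))                      # 0.9243...
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))        # 0.1428...
```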
no code implementations • 23 Oct 2020 • Qichuan Geng, Hong Zhang, Na Jiang, Xiaojuan Qi, Liangjun Zhang, Zhong Zhou
As a consequence, augmenting features with such prior knowledge can effectively improve the classification and localization performance.
no code implementations • 29 Jan 2020 • Zhong Zhou, Isak Czeresnia Etinger, Florian Metze, Alexander Hauptmann, Alexander Waibel
We obtain interesting results both in bounding the shooter and in detecting the gun smoke.
no code implementations • 19 Jan 2020 • Qichuan Geng, Hong Zhang, Xiaojuan Qi, Ruigang Yang, Zhong Zhou, Gao Huang
Semantic segmentation is a challenging task that needs to handle large variations in scale, deformations, and viewpoints.
no code implementations • 7 Nov 2019 • Zhong Zhou, Lori Levin, David R. Mortensen, Alex Waibel
First, we pool interlinear glossed text (IGT) for 1,497 languages in ODIN (54,545 glosses) and 70,918 glosses in Arapaho, and train a gloss-to-target NMT system from IGT to English with a BLEU score of 25.94.
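BLEU here is the standard corpus-level metric; a minimal sanity check with the sacreBLEU library, assuming the sacrebleu package is installed (the example sentences are invented):

```python
# Minimal corpus-level BLEU computation; the strings are illustrative only.
import sacrebleu

hypotheses = ["the people listened to the king"]
references = [["the people listened to the king"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```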
no code implementations • 2 Aug 2019 • Xinjian Li, Zhong Zhou, Siddharth Dalmia, Alan W. Black, Florian Metze
In this work, we present SANTLR: Speech Annotation Toolkit for Low Resource Languages.
no code implementations • 27 Nov 2018 • Qichuan Geng, Hong Zhang, Xinyu Huang, Sen Wang, Feixiang Lu, Xinjing Cheng, Zhong Zhou, Ruigang Yang
As it is labor-intensive to annotate semantic parts on real street views, we propose a specific approach to implicitly transfer part features from synthesized images to real street views.
no code implementations • ACL 2019 • Zhong Zhou, Matthias Sperber, Alex Waibel
Our multi-paraphrase NMT, trained on only two languages, outperforms the multilingual baselines.
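A minimal sketch of one way paraphrase-augmented training pairs could be assembled, pairing each source-side paraphrase with each target-side paraphrase; the pairing scheme and example sentences are assumptions, not necessarily the paper's exact setup:

```python
from itertools import product

def multi_paraphrase_pairs(source_paraphrases, target_paraphrases):
    """Pair every source-side paraphrase with every target-side paraphrase,
    multiplying the effective amount of parallel training data."""
    return [(src, tgt) for src, tgt in product(source_paraphrases, target_paraphrases)]

# Toy example: two English paraphrases aligned to two French paraphrases.
src = ["In the beginning God created the heavens and the earth.",
       "When God began to create the heavens and the earth,"]
tgt = ["Au commencement, Dieu créa les cieux et la terre.",
       "Au commencement, Dieu créa le ciel et la terre."]
print(len(multi_paraphrase_pairs(src, tgt)))  # 4 training pairs
```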
no code implementations • 1 Aug 2018 • Qichuan Geng, Xinyu Huang, Zhong Zhou, Ruigang Yang
Confusing classes that are ubiquitous in the real world often degrade performance in many vision-related applications such as object detection, classification, and segmentation.
no code implementations • WS 2018 • Zhong Zhou, Matthias Sperber, Alex Waibel
The main challenges we identify are the lack of low-resource language data, effective methods for cross-lingual transfer, and the variable-binding problem that is common in neural systems.
1 code implementation • ACL 2016 • Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, William W. Cohen
Text from social media provides a set of challenges that can cause traditional NLP approaches to fail.