no code implementations • RANLP 2021 • Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
Character-aware neural language models can capture the relationship between words by exploiting character-level information and are particularly effective for languages with rich morphology.
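The core idea — building word representations out of character-level information so that morphologically related words share parameters — can be illustrated with a minimal sketch. The composition here (a plain mean over randomly initialized character embeddings) is a toy assumption for illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy character inventory and embedding table; dimensions are arbitrary.
chars = "abcdefghijklmnopqrstuvwxyz"
char_to_id = {c: i for i, c in enumerate(chars)}
dim = 8
char_emb = rng.normal(size=(len(chars), dim))

def word_vector(word: str) -> np.ndarray:
    """Compose a word representation from its character embeddings.
    A simple mean is used here; character-aware LMs typically use
    CNNs or RNNs over the character sequence instead."""
    ids = [char_to_id[c] for c in word if c in char_to_id]
    return char_emb[ids].mean(axis=0)

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that share characters ("run" / "running") share parameters in
# their composed vectors, which is what helps in morphology-rich languages.
v_run, v_running = word_vector("run"), word_vector("running")
```

Because every word is built from the same small character table, unseen inflected forms still get a representation, unlike a closed word-level vocabulary.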
no code implementations • 10 Jan 2024 • Yukun Feng, Yangming Shi, Fengze Liu, Tan Yan
By implementing MGTC with a masking ratio of 25%, we further improve accuracy by 0.1 and simultaneously reduce computational costs by over 31% on Kinetics-400.
no code implementations • 30 Dec 2022 • Yukun Feng, Ming Tu, Rui Xia, Chuanzeng Huang, Yuxuan Wang
Recent studies have shown that using an external Language Model (LM) benefits end-to-end Automatic Speech Recognition (ASR).
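The most common baseline for combining an external LM with an end-to-end ASR model is shallow fusion, which interpolates log-probabilities at decoding time. The paper's specific integration method may differ; this sketch only shows the standard shallow-fusion scoring rule, with made-up candidate probabilities:

```python
import math

def shallow_fusion_score(asr_logprob: float, lm_logprob: float,
                         lam: float = 0.3) -> float:
    """Shallow fusion: score(y) = log p_ASR(y|x) + lam * log p_LM(y),
    where lam is a tuned interpolation weight."""
    return asr_logprob + lam * lm_logprob

# Toy example: the acoustic model slightly prefers "there", but the LM
# strongly prefers "their", so fusion flips the decision.
candidates = {
    "their": (math.log(0.40), math.log(0.60)),
    "there": (math.log(0.45), math.log(0.10)),
}
best = max(candidates, key=lambda w: shallow_fusion_score(*candidates[w]))
```

In practice `lam` is tuned on a development set, and the LM is trained on much larger text-only corpora than the paired speech data.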
Tasks: Automatic Speech Recognition (ASR) +2
no code implementations • 20 Oct 2022 • Yukun Feng, Patrick Xia, Benjamin Van Durme, João Sedoc
Building pretrained language models is considered expensive and data-intensive, but must we increase dataset size to achieve better performance?
no code implementations • Findings (NAACL) 2022 • Yukun Feng, Feng Li, Ziang Song, Boyuan Zheng, Philipp Koehn
We conduct experiments on three popular datasets for document-level machine translation and our model has an average improvement of 0.91 s-BLEU over the sentence-level baseline.
no code implementations • 16 Feb 2022 • Guanghui Qin, Yukun Feng, Benjamin Van Durme
Transformer models cannot easily scale to long sequences due to their O(N^2) time and space complexity.
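The O(N^2) cost comes from dense self-attention materializing a full N×N score matrix. A minimal NumPy sketch of vanilla attention (not the paper's proposed method) makes the quadratic term visible:

```python
import numpy as np

def self_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Naive dense self-attention. The (n x n) `scores` matrix is the
    source of the O(N^2) time and space cost: doubling the sequence
    length quadruples its size."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # shape (n, n)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

n, d = 16, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = self_attention(Q, K, V)
```

Long-sequence Transformer variants attack exactly this bottleneck, e.g. by sparsifying, compressing, or approximating the N×N interaction.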
no code implementations • 20 Dec 2021 • Yong Cao, Yukun Feng, Shaohui Kuang, Gu Xu
In almost all text generation applications, word sequences are constructed in a left-to-right (L2R) or right-to-left (R2L) manner, as natural language sentences are written either L2R or R2L.
no code implementations • EACL 2021 • Chenlong Hu, Yukun Feng, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
This work presents multi-modal deep SVDD (mSVDD) for one-class text classification.
1 code implementation • AACL 2020 • Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
We propose a simple and effective method for incorporating word clusters into the Continuous Bag-of-Words (CBOW) model.
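One plausible way to feed cluster information into CBOW is to average the context words' cluster embeddings alongside their word embeddings; this sketch is an illustrative assumption, not the paper's actual combination scheme. The toy vocabulary and the `cluster_of` map are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "dog", "sat", "ran"]
word_to_id = {w: i for i, w in enumerate(vocab)}
# Hypothetical hard clustering of the vocabulary (e.g. from Brown clusters).
cluster_of = {"the": 0, "cat": 1, "dog": 1, "sat": 2, "ran": 2}

dim = 8
word_emb = rng.normal(size=(len(vocab), dim))
cluster_emb = rng.normal(size=(3, dim))

def cbow_input(context: list) -> np.ndarray:
    """Hidden representation for CBOW: average the context word vectors
    and the embeddings of their clusters, then sum the two averages.
    (One possible scheme; the paper may combine them differently.)"""
    w = np.mean([word_emb[word_to_id[t]] for t in context], axis=0)
    c = np.mean([cluster_emb[cluster_of[t]] for t in context], axis=0)
    return w + c

h = cbow_input(["the", "cat", "sat"])
```

The appeal of such a scheme is that rare words still receive a reasonably trained cluster embedding shared with frequent words in the same cluster.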
no code implementations • CoNLL 2019 • Yukun Feng, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
Our injection method can also be used together with previous methods.
2 code implementations • NAACL 2019 • Alexander Erdmann, David Joseph Wrisley, Benjamin Allen, Christopher Brown, Sophie Cohen-Bodénès, Micha Elsner, Yukun Feng, Brian Joseph, Béatrice Joyeux-Prunel, Marie-Catherine de Marneffe
Scholars in inter-disciplinary fields like the Digital Humanities are increasingly interested in semantic annotation of specialized corpora.
no code implementations • WS 2018 • Kaiyin Zhou, Sheng Zhang, Xiangyu Meng, Qi Luo, Yuxing Wang, Ke Ding, Yukun Feng, Mo Chen, Kevin Cohen, Jingbo Xia
Sequence labeling of biomedical entities, e.g., side effects or phenotypes, has been a long-term task in the BioNLP and MedNLP communities.
no code implementations • SemEval 2017 • Yukun Feng, Dong Yu, Jian Xu, Chunhua Liu
This paper explores the automatic learning of distributed representations of the target's context for semantic frame labeling with a target-based neural model.