Search Results for author: Yukun Feng

Found 16 papers, 2 papers with code

Improving Character-Aware Neural Language Model by Warming up Character Encoder under Skip-gram Architecture

no code implementations · RANLP 2021 · Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Character-aware neural language models can capture the relationship between words by exploiting character-level information and are particularly effective for languages with rich morphology.

Language Modelling
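
The snippet above names the general character-aware setup: a word representation is composed from character-level features. As a minimal sketch of that generic idea only (not this paper's model; the skip-gram warm-up of the character encoder is not detailed in the snippet, and all names here are hypothetical), a character-CNN word encoder in PyTorch:

```python
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Generic sketch: build a word vector from its characters."""
    def __init__(self, n_chars=128, char_dim=16, word_dim=64, kernel=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # 1-D convolution over the character sequence, max-pooled over positions
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size=kernel, padding=1)

    def forward(self, char_ids):           # char_ids: (batch, word_len)
        x = self.char_emb(char_ids)        # (batch, word_len, char_dim)
        x = self.conv(x.transpose(1, 2))   # (batch, word_dim, word_len)
        return x.relu().max(dim=2).values  # one (word_dim) vector per word

# Encode the word "cats" from its byte values.
enc = CharWordEncoder()
word = torch.tensor([[ord(c) for c in "cats"]])
print(enc(word).shape)  # torch.Size([1, 64])
```

Because the encoder sees subword units, morphologically related words ("cat", "cats") share features by construction, which is the property the abstract attributes to character-aware models.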

Motion Guided Token Compression for Efficient Masked Video Modeling

no code implementations · 10 Jan 2024 · Yukun Feng, Yangming Shi, Fengze Liu, Tan Yan

Implementing MGTC with a masking ratio of 25%, we further improve accuracy by 0.1 while reducing computational costs by over 31% on Kinetics-400.

Video Compression · Video Recognition
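
The snippet gives the 25% masking ratio but not the selection rule. Purely as an illustration of the motion-guided idea the title names (the function and its scoring rule are hypothetical, not the paper's algorithm), one plausible sketch scores patch tokens by inter-frame pixel change and drops the lowest-motion quarter:

```python
import numpy as np

def motion_guided_keep(frames, mask_ratio=0.25, patch=16):
    """Hypothetical sketch: rank patch tokens by inter-frame motion
    and drop the `mask_ratio` fraction with the least motion."""
    T, H, W = frames.shape
    motion = np.abs(np.diff(frames, axis=0)).sum(axis=0)  # (H, W) motion map
    # Sum motion energy inside each patch to get one score per token.
    scores = motion.reshape(H // patch, patch, W // patch, patch).sum(axis=(1, 3))
    flat = scores.ravel()
    keep = int(round(flat.size * (1 - mask_ratio)))
    return np.argsort(flat)[::-1][:keep]  # indices of the tokens kept

frames = np.random.rand(8, 64, 64)  # 8 grayscale frames, 4x4 = 16 patch tokens
kept = motion_guided_keep(frames)   # 12 of 16 tokens survive at a 25% ratio
print(len(kept))
```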

Automatic Document Selection for Efficient Encoder Pretraining

no code implementations · 20 Oct 2022 · Yukun Feng, Patrick Xia, Benjamin Van Durme, João Sedoc

Building pretrained language models is considered expensive and data-intensive, but must we increase dataset size to achieve better performance?

Sentence

Learn To Remember: Transformer with Recurrent Memory for Document-Level Machine Translation

no code implementations · Findings (NAACL) 2022 · Yukun Feng, Feng Li, Ziang Song, Boyuan Zheng, Philipp Koehn

We conduct experiments on three popular document-level machine translation datasets, and our model achieves an average improvement of 0.91 s-BLEU over the sentence-level baseline.

Document Level Machine Translation · Machine Translation +2
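
The snippet reports results only; the mechanism the title names is a recurrent memory carried across sentences. A generic sketch of that idea under assumed design choices (the GRU update and the prepend-as-token conditioning are both hypothetical, not the paper's architecture):

```python
import torch
import torch.nn as nn

class RecurrentMemory(nn.Module):
    """Generic sketch: carry a document-level memory vector across sentences."""
    def __init__(self, d_model=512):
        super().__init__()
        self.update = nn.GRUCell(d_model, d_model)  # assumed memory update rule

    def forward(self, sent_encodings):  # list of (sent_len, d_model) tensors
        mem = torch.zeros(1, sent_encodings[0].size(-1))
        augmented = []
        for sent in sent_encodings:
            # Condition each sentence on its predecessors by prepending
            # the memory as an extra "token" to the sentence encoding.
            augmented.append(torch.cat([mem, sent], dim=0))
            mem = self.update(sent.mean(dim=0, keepdim=True), mem)  # refresh memory
        return augmented

sents = [torch.randn(7, 512), torch.randn(5, 512)]
out = RecurrentMemory()(sents)
print([tuple(o.shape) for o in out])  # [(8, 512), (6, 512)]
```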

The NLP Task Effectiveness of Long-Range Transformers

no code implementations · 16 Feb 2022 · Guanghui Qin, Yukun Feng, Benjamin Van Durme

Transformer models cannot easily scale to long sequences due to their O(N^2) time and space complexity.
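
The quadratic term comes from the attention score matrix: every query attends to every key, so the intermediate QKᵀ product has N×N entries. A numpy sketch of vanilla attention that makes the cost explicit:

```python
import numpy as np

def attention(Q, K, V):
    """Vanilla attention: the (N, N) score matrix is the O(N^2) bottleneck."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])         # (N, N) -- quadratic memory
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

N, d = 4096, 64
Q = K = V = np.random.randn(N, d).astype(np.float32)
out = attention(Q, K, V)
# The score matrix alone holds N*N floats: 4096^2 * 4 bytes = 64 MiB,
# and doubling the sequence length quadruples it.
print(out.shape)  # (4096, 64)
```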

Spiral Language Modeling

no code implementations · 20 Dec 2021 · Yong Cao, Yukun Feng, Shaohui Kuang, Gu Xu

In almost all text generation applications, word sequences are constructed in a left-to-right (L2R) or right-to-left (R2L) manner, as natural language sentences are written either L2R or R2L.

Language Modelling · Machine Translation +2
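
To make the L2R/R2L contrast in the snippet concrete, a toy greedy decoder that grows a sequence from either end (the step function is a stand-in for a trained language model; the spiral ordering itself is not described in the snippet):

```python
def greedy_decode(step_fn, max_len=5, right_to_left=False):
    """Build a sequence one token at a time in either direction.
    `step_fn` maps the current partial sequence to the next token."""
    seq = []
    for _ in range(max_len):
        tok = step_fn(seq)
        seq = seq + [tok] if not right_to_left else [tok] + seq
    return seq

# Toy "model": the next token is the current length of the sequence.
print(greedy_decode(lambda s: len(s)))                      # L2R: [0, 1, 2, 3, 4]
print(greedy_decode(lambda s: len(s), right_to_left=True))  # R2L: [4, 3, 2, 1, 0]
```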

Semantic Frame Labeling with Target-based Neural Model

no code implementations · SEMEVAL 2017 · Yukun Feng, Dong Yu, Jian Xu, Chunhua Liu

This paper explores the automatic learning of distributed representations of the target's context for semantic frame labeling with a target-based neural model.

Feature Engineering · Sentence +2
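
As a deliberately simple stand-in for "distributed representations of the target's context" (hypothetical function, not the paper's neural model), one could average the embeddings in a fixed window around the target word:

```python
import numpy as np

def target_context_vector(token_vecs, target_idx, window=2):
    """Hypothetical sketch: represent a target word by the average
    embedding of the tokens in a window around it (target excluded)."""
    lo = max(0, target_idx - window)
    hi = min(len(token_vecs), target_idx + window + 1)
    ctx = [v for i, v in enumerate(token_vecs[lo:hi], start=lo) if i != target_idx]
    return np.mean(ctx, axis=0)

sent = np.random.randn(6, 50)  # 6 tokens, 50-dim embeddings
vec = target_context_vector(sent, target_idx=3)
print(vec.shape)               # (50,)
```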
