no code implementations • EMNLP 2020 • Ming Wang, Yinglin Wang
Contextual embeddings have proven overwhelmingly effective for the task of Word Sense Disambiguation (WSD) compared with other sense representation techniques.
no code implementations • EMNLP 2021 • Ming Wang, Jianzhang Zhang, Yinglin Wang
In previous similarity-based WSD systems, much effort has been devoted to learning comprehensive sense embeddings from contextual representations and knowledge sources.
1 code implementation • 25 Apr 2023 • Jianzhang Zhang, Yiyang Chen, Nan Niu, Yinglin Wang, Chuang Liu
Our evaluation of ChatGPT on requirements IR under the zero-shot setting provides preliminary evidence for designing or developing more effective LLM-based requirements IR methods and tools.
1 code implementation • ACL 2021 • Ming Wang, Yinglin Wang
Recently proposed Word Sense Disambiguation (WSD) systems have approached the estimated upper bound of the task on standard evaluation benchmarks.
1 code implementation • ACL 2021 • Mingyue Han, Yinglin Wang
We mitigate this problem by simply adding a regularization loss. Experimental results show that this solution not only improves the model's generalization ability, but also helps models perform more robustly on BCOPA-CE, a challenging dataset with an unbiased token distribution that makes it harder for models to distinguish cause from effect.
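The snippet above only names the mechanism, not the specific regularizer. As a minimal sketch of the general pattern of "adding a regularization loss", the following combines a task loss with a weighted penalty term; the L2 penalty and the `reg_weight` parameter are placeholders for illustration, not the paper's actual formulation.

```python
# Generic pattern: augment a task loss with a weighted regularization term.
# The paper's actual regularizer is unspecified here; a plain L2 penalty
# on model parameters stands in purely as a placeholder.

def total_loss(task_loss, params, reg_weight=0.01):
    """Combine the primary task loss with an L2 regularization term."""
    reg_loss = sum(p * p for p in params)  # placeholder L2 penalty
    return task_loss + reg_weight * reg_loss

# Usage: a smaller reg_weight shrinks the penalty's influence on training.
loss = total_loss(task_loss=1.5, params=[0.5, -2.0], reg_weight=0.01)
# 1.5 + 0.01 * (0.25 + 4.0) = 1.5425
```

In practice the penalty would be computed on model tensors inside the training loop, but the additive structure of the combined objective is the same.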
no code implementations • 29 Feb 2020 • Yinglin Wang, Ming Wang, Hamido Fujita
Word Sense Disambiguation (WSD) has been a fundamental and ongoing issue since its introduction to the natural language processing (NLP) community.
Ranked #1 on Word Sense Disambiguation on Knowledge-based: