1 code implementation • ACL 2022 • Ruixi Lin, Hwee Tou Ng
The success of a natural language processing (NLP) system on a task does not amount to fully understanding the complexity of the task, a limitation typified by many deep learning models.
1 code implementation • NAACL 2022 • Muhammad Reza Qorib, Seung-Hoon Na, Hwee Tou Ng
In this paper, we formulate system combination for grammatical error correction (GEC) as a simple machine learning task: binary classification.
Ranked #3 on Grammatical Error Correction on BEA-2019 (test)
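The binary-classification framing above can be sketched in a few lines. Everything in this snippet is a hypothetical illustration, not the paper's actual model: each candidate edit proposed by the component GEC systems becomes one example, its features here are simply per-system "proposed this edit" indicators, and the hand-set weights and threshold stand in for learned parameters.

```python
# Hedged sketch of GEC system combination as binary classification.
# Feature names, weights, and threshold are illustrative assumptions.

def edit_features(edit, proposals):
    """Feature vector for a candidate edit: one indicator per component
    system marking whether that system proposed the edit."""
    return [1.0 if edit in proposals[s] else 0.0 for s in sorted(proposals)]

def keep_edit(features, weights, bias=-1.5):
    """Binary decision: keep the edit if its weighted score clears zero."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return score > 0.0

# Toy example: three component GEC systems and their proposed edits.
proposals = {
    "sys_a": {("has", "have"), ("a", "an")},
    "sys_b": {("has", "have")},
    "sys_c": {("has", "have"), ("is", "are")},
}
weights = [1.0, 1.0, 1.0]  # hand-set for illustration; learned in practice

print(keep_edit(edit_features(("has", "have"), proposals), weights))  # all three systems agree
print(keep_edit(edit_features(("is", "are"), proposals), weights))    # only one system proposes it
```

In practice a learned classifier would use richer features (edit type, system confidence), but the decision it makes per edit is exactly this keep-or-discard binary choice.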
1 code implementation • COLING 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation.
no code implementations • COLING 2022 • Muhammad Reza Qorib, Hwee Tou Ng
There has been much recent progress in natural language processing, and grammatical error correction (GEC) is no exception.
1 code implementation • Findings (EMNLP) 2021 • Hannan Cao, Wenmian Yang, Hwee Tou Ng
Although grammatical error correction (GEC) has achieved good performance on texts written by learners of English as a second language, performance on low error density domains where texts are written by English speakers of varying levels of proficiency can still be improved.
1 code implementation • Findings (EMNLP) 2021 • Yang Song, Xin Cai Ong, Hwee Tou Ng, Qian Lin
Current state-of-the-art supervised word sense disambiguation (WSD) systems (such as GlossBERT and bi-encoder model) yield surprisingly good results by purely leveraging pre-trained language models and short dictionary definitions (or glosses) of the different word senses.
Ranked #2 on Word Sense Disambiguation on Supervised:
1 code implementation • 30 Oct 2024 • Muhammad Reza Qorib, Alham Fikri Aji, Hwee Tou Ng
Error type information has been widely used to improve the performance of grammatical error correction (GEC) models, whether for generating corrections, re-ranking them, or combining GEC models.
1 code implementation • 2 Sep 2024 • Hai Ye, Hwee Tou Ng
To enhance the reliability of LLMs in following instructions, we propose the study of selective instruction following, whereby the system declines to execute instructions if the anticipated response quality is low.
1 code implementation • 22 Aug 2024 • Hai Ye, Hwee Tou Ng
The process of data sampling is crucial, as it significantly influences the success of policy improvement.
1 code implementation • EMNLP 2023 • Hannan Cao, Liping Yuan, Yuchen Zhang, Hwee Tou Ng
Empirical results show that our GEC system outperforms previous unsupervised GEC systems, and achieves performance comparable to supervised GEC systems without ensemble.
1 code implementation • 16 Nov 2023 • Qingyu Tan, Hwee Tou Ng, Lidong Bing
Therefore, it is crucial for LLMs to understand the concept of temporal knowledge.
1 code implementation • ACL 2022 • Hai Ye, Hwee Tou Ng, Wenjuan Han
In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer.
1 code implementation • 23 Oct 2023 • Muhammad Reza Qorib, Hwee Tou Ng
However, we found that existing GEC quality estimation models are not good enough at differentiating good corrections from bad ones, resulting in a low F0.5 score when used for system combination.
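The F0.5 score used above is the standard GEC evaluation measure: an F-beta score with beta = 0.5, which weights precision twice as heavily as recall. A minimal sketch of the computation, with toy edit counts that are not from the paper:

```python
# Hedged sketch of the F-beta computation behind the F0.5 GEC metric.

def f_beta(tp, fp, fn, beta=0.5):
    """F-beta from true-positive, false-positive, and false-negative edit
    counts; beta=0.5 weights precision twice as heavily as recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy counts for illustration: 6 correct edits, 2 spurious, 4 missed.
print(round(f_beta(6, 2, 4), 3))  # precision 0.75, recall 0.6 -> F0.5 of about 0.714
```

Because beta < 1 favors precision, a combination system that admits even a few bad corrections is penalized sharply, which is why weak quality estimation drags the combined F0.5 down.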
1 code implementation • 16 Jun 2023 • Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng
We conducted experiments on document-level and biomedical relation extraction datasets, and the results showed that our proposed self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated.
1 code implementation • 15 Jun 2023 • Qingyu Tan, Hwee Tou Ng, Lidong Bing
In this paper, we introduce a comprehensive probing dataset TempReason to evaluate the temporal reasoning capability of large language models.
1 code implementation • 11 Jun 2023 • Hai Ye, Qizhe Xie, Hwee Tou Ng
In this work, we study multi-source test-time model adaptation from user feedback, where K distinct models are established for adaptation.
1 code implementation • 24 May 2023 • Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing
The temporal aspect is a significant dimension of our reality.
1 code implementation • 9 Feb 2023 • Hai Ye, Yuyang Ding, Juntao Li, Hwee Tou Ng
To answer this question, we evaluate test-time adaptation (TTA) to improve a model after deployment.
no code implementations • 9 Nov 2022 • Christopher Bryant, Zheng Yuan, Muhammad Reza Qorib, Hannan Cao, Hwee Tou Ng, Ted Briscoe
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text.
3 code implementations • 25 May 2022 • Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng, Sharifah Mahani Aljunied
We analyze the causes and effects of the overwhelming false negative problem in the DocRED dataset.
1 code implementation • Findings (ACL) 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
Our model consistently outperforms strong baselines and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 score on the DocRED leaderboard.
Ranked #2 on Relation Extraction on DocRED
1 code implementation • 22 Feb 2022 • Qian Lin, Hwee Tou Ng
We leverage unlabeled data to improve classification in student training, employing two teachers to refine the labeling of unlabeled data through teacher-student learning in a bootstrapping manner.
1 code implementation • RANLP 2021 • Ruixi Lin, Hwee Tou Ng
In this paper, we propose a system combination method for grammatical error correction (GEC), based on nonlinear integer programming (IP).
no code implementations • 28 Oct 2021 • Wenjuan Han, Hwee Tou Ng
However, most existing state-of-the-art GEC approaches are based on similar sequence-to-sequence neural networks, so the gains from combining the outputs of component systems that are similar to one another are limited.
no code implementations • 27 Sep 2021 • Preslav Nakov, Hwee Tou Ng
We propose a novel approach to translating from a morphologically complex language.
1 code implementation • RANLP 2021 • Tapas Nayak, Hwee Tou Ng
Distantly supervised datasets for relation extraction mostly focus on sentence-level extraction, and they cover very few relations.
1 code implementation • COLING 2020 • Qian Lin, Souvik Kundu, Hwee Tou Ng
One of the major challenges is that a dialogue system may generate an undesired utterance leading to a dialogue breakdown, which degrades the overall interaction quality.
1 code implementation • 23 Nov 2020 • Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan
Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.
no code implementations • COLING 2020 • Wenjuan Han, Yong Jiang, Hwee Tou Ng, Kewei Tu
Syntactic dependency parsing is an important task in natural language processing.
2 code implementations • EMNLP 2020 • Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered.
no code implementations • ACL 2020 • Souvik Kundu, Qian Lin, Hwee Tou Ng
Despite recent progress in conversational question answering, most prior work does not focus on follow-up questions.
no code implementations • EACL 2021 • Yixuan Tang, Hwee Tou Ng, Anthony K. H. Tung
Multi-hop question answering (QA) requires a model to retrieve and integrate information from different parts of a long text to answer a question.
1 code implementation • CONLL 2019 • Tapas Nayak, Hwee Tou Ng
Relation extraction is the task of determining the relation between two entities in a sentence.
1 code implementation • 22 Nov 2019 • Tapas Nayak, Hwee Tou Ng
A relation tuple consists of two entities and the relation between them, and often such tuples are found in unstructured text.
Ranked #1 on Relation Extraction on NYT24
1 code implementation • IJCNLP 2019 • Christian Hadiwinoto, Hwee Tou Ng, Wee Chung Gan
Contextualized word representations are able to give different representations for the same word in different contexts, and they have been shown to be effective in downstream natural language processing tasks, such as question answering, named entity recognition, and sentiment analysis.
Ranked #14 on Word Sense Disambiguation on Supervised:
no code implementations • WS 2019 • Steven Kester Yuwono, Hwee Tou Ng, Kee Yuan Ngiam
The objective of this work is to develop an automated diagnosis system that is able to predict the probability of appendicitis given a free-text emergency department (ED) note and additional structured information (e.g., lab test results).
1 code implementation • ACL 2019 • Shamil Chollampatt, Weiqi Wang, Hwee Tou Ng
Automatic grammatical error correction (GEC) research has made remarkable progress in the past decade.
1 code implementation • ACL 2019 • Wee Chung Gan, Hwee Tou Ng
Despite the advancement of question answering (QA) systems and rapid improvements on held-out test sets, their generalizability is a topic of concern.
3 code implementations • ACL 2019 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence.
1 code implementation • EMNLP 2018 • Shamil Chollampatt, Hwee Tou Ng
We also show that a state-of-the-art GEC system can be improved when quality scores are used as features for re-ranking the N-best candidates.
Ranked #2 on Grammatical Error Correction on Restricted
1 code implementation • EMNLP 2018 • Souvik Kundu, Hwee Tou Ng
However, current approaches suffer from an impractical assumption that every question has a valid answer in the associated passage.
1 code implementation • EMNLP 2018 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
We consider the cross-domain sentiment classification problem, where a sentiment classifier is to be learned from a source domain and to be generalized to a target domain.
1 code implementation • COLING 2018 • Shamil Chollampatt, Hwee Tou Ng
Previous studies of the correlation of these metrics with human quality judgments were inconclusive, due to the lack of appropriate significance tests, discrepancies in the methods, and choice of datasets used.
no code implementations • COLING 2018 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
First, we propose a method for target representation that better captures the semantic meaning of the opinion target.
1 code implementation • ACL 2018 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
Attention-based long short-term memory (LSTM) networks have proven to be useful in aspect-level sentiment classification.
1 code implementation • LREC 2018 • Christian Hadiwinoto, Hwee Tou Ng
Our goal in this paper is to propose a benchmark evaluation setup for Chinese-to-English machine translation, such that the effectiveness of a newly proposed MT approach can be directly compared to previous approaches.
3 code implementations • 26 Jan 2018 • Shamil Chollampatt, Hwee Tou Ng
We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network.
Ranked #1 on Grammatical Error Correction on Restricted
1 code implementation • 25 Jan 2018 • Souvik Kundu, Hwee Tou Ng
Neural network models recently proposed for question answering (QA) primarily focus on capturing the passage-question relation.
Ranked #4 on Question Answering on NewsQA
no code implementations • WS 2017 • Shamil Chollampatt, Hwee Tou Ng
We build a grammatical error correction (GEC) system primarily based on the state-of-the-art statistical machine translation (SMT) approach, using task-specific features and tuning, and further enhance it with the modeling power of neural network joint models.
3 code implementations • ACL 2017 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space.
no code implementations • 15 Feb 2017 • Christian Hadiwinoto, Hwee Tou Ng
In machine translation (MT) that involves translating between two languages with significant differences in word order, determining the correct word order of translated words is a major challenge.
no code implementations • WS 2016 • Steven Kester Yuwono, Hwee Tou Ng, Kee Yuan Ngiam
Personal health information (PHI) (such as name and identification number) needs to be removed so that patients cannot be identified.
no code implementations • 3 Aug 2016 • Christian Hadiwinoto, Yang Liu, Hwee Tou Ng
Reordering poses a major challenge in machine translation (MT) between two languages with significant differences in word order.
2 code implementations • 1 Jun 2016 • Shamil Chollampatt, Kaveh Taghipour, Hwee Tou Ng
Phrase-based statistical machine translation (SMT) systems have previously been used for the task of grammatical error correction (GEC) to achieve state-of-the-art accuracy.
no code implementations • 1 Jun 2016 • Duc Tam Hoang, Shamil Chollampatt, Hwee Tou Ng
Grammatical error correction (GEC) is the task of detecting and correcting grammatical errors in texts written by second language learners.
no code implementations • 23 Jan 2014 • Preslav Ivanov Nakov, Hwee Tou Ng
We propose a novel language-independent approach for improving machine translation for resource-poor languages by exploiting their similarity to resource-rich ones.