1 code implementation • COLING 2022 • Yucheng Li, Chenghua Lin, Frank Guerin
The metaphor identification module is able to perform a self-training procedure, which discovers novel metaphors from a large-scale unlabeled corpus for NM generation.
1 code implementation • 1 Feb 2024 • Yucheng Li, Yunhao Guo, Frank Guerin, Chenghua Lin
We measure: 1) the compression performance on the testing period as a measure of generalization on unseen data; and 2) the performance gap between the training and testing period as a measure of robustness.
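The two measures above can be sketched in a few lines. This is a minimal illustration only: it uses `gzip` as a stand-in compressor and made-up placeholder strings for the training and testing periods, whereas the paper evaluates language models themselves as compressors. Lower bits-per-character means better compression, and a small train/test gap indicates robustness.

```python
import gzip

def bits_per_char(text: str) -> float:
    """Compression rate in bits per character (lower is better)."""
    raw = text.encode("utf-8")
    compressed = gzip.compress(raw)
    return 8 * len(compressed) / max(len(text), 1)

# Placeholder texts standing in for the training and testing periods.
train_text = "the cat sat on the mat " * 200
test_text = "a completely different topic appears here " * 200

train_bpc = bits_per_char(train_text)   # performance on seen-period data
test_bpc = bits_per_char(test_text)     # generalization to the unseen period
gap = test_bpc - train_bpc              # robustness: smaller gap = more robust
```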
no code implementations • 29 Jan 2024 • Yucheng Li, Frank Guerin, Chenghua Lin
In this paper, we test various NLP models on the VUA metaphor dataset and quantify to what extent metaphors affect models' performance on various downstream tasks.
1 code implementation • 19 Dec 2023 • Yucheng Li, Frank Guerin, Chenghua Lin
LatestEval avoids data contamination by only using texts published within a recent time window, ensuring no overlap with the training corpora of pre-trained language models.
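The core filtering idea, selecting only texts published inside a recent time window, can be sketched as follows. The document dictionaries, field names, and cutoff value here are illustrative assumptions, not the benchmark's actual data format.

```python
from datetime import date, timedelta

def recent_documents(docs, cutoff_days=90, today=None):
    """Keep only documents published within the last `cutoff_days`,
    so they post-date a fixed pre-training corpus snapshot."""
    today = today or date.today()
    window_start = today - timedelta(days=cutoff_days)
    return [d for d in docs if d["published"] >= window_start]

docs = [
    {"id": "old", "published": date(2020, 1, 1)},
    {"id": "new", "published": date(2023, 12, 1)},
]
fresh = recent_documents(docs, cutoff_days=90, today=date(2023, 12, 19))
# only the "new" document falls inside the window
```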
no code implementations • 31 Oct 2023 • Chen Tang, Frank Guerin, Chenghua Lin
This paper presents a tool called "ACL Anthology Helper".
1 code implementation • 26 Oct 2023 • Yucheng Li, Frank Guerin, Chenghua Lin
We also introduce an open-source pipeline that enables the community to perform contamination analysis on customised data and models.
1 code implementation • 9 Oct 2023 • Yucheng Li, Bo Dong, Chenghua Lin, Frank Guerin
This paper proposes a method called Selective Context that enhances the inference efficiency of LLMs by identifying and pruning redundancy in the input context, making the input more compact.
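A toy version of this pruning idea can be sketched by scoring each word with its self-information and dropping the least informative fraction. Note this is a frequency-based stand-in: the actual method computes self-information with a small causal language model and operates over tokens and phrases, not whitespace words.

```python
import math
from collections import Counter

def prune_context(text: str, keep_ratio: float = 0.5) -> str:
    """Toy selective pruning: score each word by its self-information
    -log2 p(word), estimated from word frequencies in the text itself,
    and keep only the most informative fraction of words."""
    words = text.split()
    counts = Counter(words)
    total = len(words)
    # Frequent, predictable words carry little information.
    info = [-math.log2(counts[w] / total) for w in words]
    threshold = sorted(info)[int(len(words) * (1 - keep_ratio))]
    kept = [w for w, i in zip(words, info) if i >= threshold]
    return " ".join(kept)
```

Here repeated filler words score low and are pruned, while rarer content words survive, making the context more compact before it is passed to the model.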
no code implementations • 24 Aug 2023 • Rui Mao, Guanyi Chen, Xulang Zhang, Frank Guerin, Erik Cambria
The emergence of ChatGPT has generated much speculation in the press about its potential to disrupt social and economic systems.
1 code implementation • 23 Aug 2023 • Mona Ahmadian, Frank Guerin, Andrew Gilbert
Despite the importance of motion in supervised learning techniques for action recognition, SSL methods often do not explicitly consider motion information in videos.
1 code implementation • 28 Jun 2023 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
Further analysis also shows that our representation learning framework can fill the semantic gap by coagulating representations of both text and graph knowledge.
1 code implementation • 11 Feb 2023 • Shun Wang, Yucheng Li, Chenghua Lin, Loïc Barrault, Frank Guerin
We propose a novel RoBERTa-based model, RoPPT, which introduces a target-oriented parse tree structure in metaphor detection.
1 code implementation • 9 Feb 2023 • Yucheng Li, Shun Wang, Chenghua Lin, Frank Guerin, Loïc Barrault
In this paper, we propose FrameBERT, a RoBERTa-based model that can explicitly learn and incorporate FrameNet Embeddings for concept-level metaphor detection.
1 code implementation • 30 Jan 2023 • Yucheng Li, Frank Guerin, Chenghua Lin
Metaphors have been shown to have a stronger emotional impact than literal expressions.
1 code implementation • 27 Oct 2022 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
In this paper, we propose a novel framework to improve medical dialogue generation by considering features centered on domain-specific terminology.
1 code implementation • 22 Oct 2022 • Chen Tang, Chenghua Lin, Henglin Huang, Frank Guerin, Zhihao Zhang
One of the key challenges of automatic story generation is how to generate a long narrative that can maintain fluency, relevance, and coherence.
1 code implementation • 19 Oct 2022 • Chen Tang, Zhihao Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
To improve the performance of long text generation, recent studies have leveraged automatically planned event structures (i.e., storylines) to guide story generation.
1 code implementation • 19 Oct 2022 • Henglin Huang, Chen Tang, Tyler Loakman, Frank Guerin, Chenghua Lin
Despite the success of prior work applying pre-trained models, current neural models for Chinese stories still struggle to generate high-quality long text narratives.
1 code implementation • 6 Mar 2022 • Chen Tang, Frank Guerin, Chenghua Lin
In recent years, considerable research has been dedicated to the application of neural models in the field of natural language generation (NLG).
no code implementations • 12 Jul 2021 • Joseph Chrol-Cannon, Andrew Gilbert, Ranko Lazic, Adithya Madhusoodanan, Frank Guerin
We apply the method to a challenging subset of the something-something dataset and achieve more robust performance than neural network baselines on challenging activities.
no code implementations • 7 Apr 2021 • Rui Mao, Chenghua Lin, Frank Guerin
Metaphorical expressions are difficult linguistic phenomena that challenge diverse Natural Language Processing tasks.
no code implementations • 7 Apr 2021 • Rui Mao, Chenghua Lin, Frank Guerin
The pre-trained word embeddings GloVe, ELMo and BERT have individually shown good performance on sequential metaphor identification.
no code implementations • 24 Mar 2021 • Frank Guerin
Artificial Intelligence systems cannot yet match the human ability to apply knowledge to situations that differ from those they were programmed or trained for.
no code implementations • 3 Dec 2020 • Jing Su, Qingyun Dai, Frank Guerin, Mian Zhou
Visual storytelling is a creative and challenging task, aiming to automatically generate a story-like description for a sequence of images.
Ranked #21 on Visual Storytelling on VIST (CIDEr metric)
2 code implementations • ICML 2020 • Xiao Li, Chenghua Lin, Ruizhe Li, Chaozheng Wang, Frank Guerin
We demonstrate the utility of our method for attribute manipulation in autoencoders trained across varied domains, using both human evaluation and automated methods.
Ranked #7 on Image Generation on CelebA 256x256 (FID metric)
1 code implementation • ACL 2019 • Rui Mao, Chenghua Lin, Frank Guerin
End-to-end training with Deep Neural Networks (DNNs) is currently a popular method for metaphor identification.
no code implementations • ACL 2018 • Rui Mao, Chenghua Lin, Frank Guerin
Metaphoric expressions are widespread in natural language, posing a significant challenge for various natural language processing tasks such as Machine Translation.
no code implementations • WS 2017 • Noor Fazilla Abd Yusof, Chenghua Lin, Frank Guerin
We develop a computational model to discover the potential causes of depression by analysing the topics in user-generated text.