no code implementations • NAACL (CLPsych) 2022 • Prasadith Kirinde Gamaarachchige, Ahmed Husseini Orabi, Mahmoud Husseini Orabi, Diana Inkpen
This paper investigates the impact of using Multi-Task Learning (MTL) to predict mood changes over time for each individual (social media user).
1 code implementation • 28 Oct 2024 • Nima Meghdadi, Diana Inkpen
Our results were 86.3% in the L-NER subtask and 88.25% in the L-NLI subtask.
no code implementations • 11 Aug 2024 • Xuanyu Su, Yansong Li, Diana Inkpen, Nathalie Japkowicz
Amidst the rise of Large Multimodal Models (LMMs) and their widespread application in generating and interpreting complex content, the risk of propagating biased and harmful memes remains significant.
no code implementations • 30 Jan 2022 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Multi-domain text classification (MDTC) aims to leverage all available resources from multiple domains to learn a predictive model that can generalize well on these domains.
no code implementations • 29 Jan 2022 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Multi-domain text classification (MDTC) has obtained remarkable achievements due to the advent of deep learning.
no code implementations • 14 Aug 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Adversarial domain adaptation has made impressive advances in transferring knowledge from the source domain to the target domain by aligning feature distributions of both domains.
no code implementations • 25 May 2021 • Andrew Dunn, Diana Inkpen, Răzvan Andonie
The modified texts that produce the largest difference in the target classification output neuron are selected, and the combination of removed words is then considered the most influential on the model's output.
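The deletion-and-rescoring idea described above can be sketched in a few lines. The keyword-based `toy_classifier` below is a hypothetical stand-in for the authors' neural model, used only to make the occlusion loop concrete: each word is removed in turn, the text is re-scored, and words whose removal most shifts the output are ranked as most influential.

```python
def toy_classifier(text: str) -> float:
    """Return a pseudo-probability for the positive class (illustration only)."""
    positive = {"good", "great", "excellent"}
    words = text.lower().split()
    return sum(w in positive for w in words) / (len(words) or 1)

def word_influence(text: str, predict=toy_classifier):
    """Score each word by how much its removal shifts the model output."""
    base = predict(text)
    words = text.split()
    scores = []
    for i, w in enumerate(words):
        # Re-score the text with word i occluded (removed).
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((w, abs(base - predict(reduced))))
    # Largest output difference first = most influential word first.
    return sorted(scores, key=lambda p: p[1], reverse=True)

ranking = word_influence("the movie was great fun")
```

With a real classifier, `predict` would wrap a forward pass of the trained network and return the target output neuron's activation.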
no code implementations • EACL (AdaptNLP) 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
We provide theoretical analysis for the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions.
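As an illustration (not necessarily the paper's exact objective), the total divergence among M domain-specific joint distributions of shared features f(x) and label predictions ŷ can be written as a sum of pairwise divergences:

```latex
\mathcal{D}_{\text{total}}
  = \sum_{1 \le i < j \le M}
    D\!\bigl(P_i(f(x), \hat{y}) \,\|\, P_j(f(x), \hat{y})\bigr)
```

Driving this quantity toward zero aligns the domains' joint distributions, which is the sense in which the shared features become domain-invariant.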
no code implementations • 31 Jan 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Using the shared-private paradigm and adversarial training has significantly improved the performances of multi-domain text classification (MDTC) models.
no code implementations • 1 Jan 2021 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Domain adaptation sets out to address this problem, aiming to leverage labeled data in the source domain to learn a good predictive model for the target domain whose labels are scarce or unavailable.
no code implementations • ECCV 2020 • Yuan Wu, Diana Inkpen, Ahmed El-Roby
Second, samples from the source and target domains alone are not sufficient for domain-invariant feature extraction in the latent space.
no code implementations • WS 2019 • Prasadith Kirinde Gamaarachchige, Diana Inkpen
We illustrate the effectiveness of multi-task learning with a multi-channel convolutional neural network as the shared representation, and we use additional inputs identified by researchers as indicative of mental disorders to improve the model's predictive power.
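The shared-representation idea behind multi-task learning can be sketched as a minimal forward pass. The single linear layer `W_shared` below is a hypothetical stand-in for the multi-channel CNN encoder, and the two heads stand in for separate per-task (e.g., per-disorder) classifiers that all read the same shared features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: one linear layer standing in for the multi-channel CNN.
W_shared = rng.standard_normal((8, 4))

# Task-specific heads: each task gets its own output layer.
W_task_a = rng.standard_normal((4, 2))
W_task_b = rng.standard_normal((4, 2))

def forward(x: np.ndarray):
    """Encode once with the shared layer, then branch into both task heads."""
    h = np.tanh(x @ W_shared)          # shared representation
    return h @ W_task_a, h @ W_task_b  # per-task logits

x = rng.standard_normal(8)
logits_a, logits_b = forward(x)
```

In training, the losses from both heads would be summed and backpropagated, so gradients from every task shape the shared encoder.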
no code implementations • WS 2019 • Arya Rahgozar, Diana Inkpen
Our labels are the only semantic-clustering alternative to the previously existing hand-labeled, gold-standard classification of Hafez's poems for use in literary research.
no code implementations • CL 2018 • Farah Benamara, Diana Inkpen, Maite Taboada
This special issue contributes to a deeper understanding of the role of these interactions in processing social media data, offering a new perspective on discourse interpretation.
no code implementations • COLING 2018 • Qianjia Huang, Diana Inkpen, Jianhong Zhang, David Van Bruwaene
This paper describes the process of building a cyberbullying intervention interface driven by a machine-learning-based text-classification service.
no code implementations • COLING 2018 • Ahmed Husseini Orabi, Mahmoud Husseini Orabi, Qianjia Huang, Diana Inkpen, David Van Bruwaene
In this paper, we propose a novel deep-learning architecture for text classification, named cross segment-and-concatenate multi-task learning (CSC-MTL).
no code implementations • COLING 2018 • Haifa Alharthi, Diana Inkpen, Stan Szpakowicz
Book recommender systems can help promote the practice of reading for pleasure, which has been declining in recent years.
no code implementations • SEMEVAL 2018 • Ahmed Husseini Orabi, Mahmoud Husseini Orabi, Diana Inkpen, David Van Bruwaene
We propose a novel attentive hybrid GRU-based network (SAHGN), which we used at SemEval-2018 Task 1: Affect in Tweets.
no code implementations • WS 2018 • Ahmed Husseini Orabi, Prasadith Buddhitha, Mahmoud Husseini Orabi, Diana Inkpen
Mental illness detection in social media can be considered a complex task, mainly due to the complicated nature of mental disorders.
no code implementations • ICLR 2018 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen
Modeling informal inference in natural language is very challenging.
2 code implementations • ACL 2018 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei
With the availability of large annotated datasets, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance.
Ranked #20 on Natural Language Inference on SNLI
2 code implementations • WS 2017 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task.
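A minimal sketch of the fixed-length sentence representation being evaluated: here, mean pooling over word vectors stands in for the neural sentence encoders used in the shared task (the vocabulary and embedding values are invented for illustration).

```python
import numpy as np

# Toy word embeddings; real systems learn these with neural networks.
emb = {"cats": np.array([1.0, 0.0]),
       "sit":  np.array([0.0, 1.0]),
       "here": np.array([1.0, 1.0])}

def encode(sentence: str) -> np.ndarray:
    """Mean-pool word vectors into one fixed-length sentence vector."""
    vecs = [emb[w] for w in sentence.split() if w in emb]
    return np.mean(vecs, axis=0)

v = encode("cats sit here")
```

Whatever the encoder, the output has the same dimensionality for every sentence, which is what lets a downstream NLI classifier consume premise and hypothesis vectors of fixed size.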
Ranked #69 on Natural Language Inference on SNLI
no code implementations • WS 2017 • Vaibhav Kesarwani, Diana Inkpen, Stan Szpakowicz, Chris Tanasescu
Our method focuses on metaphor detection in a poetry corpus.
no code implementations • WS 2017 • Zunaira Jamil, Diana Inkpen, Prasadith Buddhitha, Kenton White
To achieve our goal, we trained a user-level classifier that detects at-risk users with reasonable precision and recall.
no code implementations • EACL 2017 • Parinaz Sobhani, Diana Inkpen, Xiaodan Zhu
Current models for stance classification often treat each target independently, but in many applications, there exist natural dependencies among targets, e.g., stance towards two or more politicians in an election or towards several brands of the same product.
no code implementations • WS 2016 • Ehsan Amjadian, Diana Inkpen, Tahereh Paribakht, Farahnaz Faez
The present paper explores a novel method that integrates efficient distributed representations with terminology extraction.
11 code implementations • ACL 2017 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
Reasoning and inference are central to human and artificial intelligence.
Ranked #30 on Natural Language Inference on SNLI