Search Results for author: Diana Inkpen

Found 38 papers, 5 papers with code

HateSieve: A Contrastive Learning Framework for Detecting and Segmenting Hateful Content in Multimodal Memes

no code implementations 11 Aug 2024 Xuanyu Su, Yansong Li, Diana Inkpen, Nathalie Japkowicz

Amidst the rise of Large Multimodal Models (LMMs) and their widespread application in generating and interpreting complex content, the risk of propagating biased and harmful memes remains significant.

Contrastive Learning, Triplet

Co-Regularized Adversarial Learning for Multi-Domain Text Classification

no code implementations 30 Jan 2022 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Multi-domain text classification (MDTC) aims to leverage all available resources from multiple domains to learn a predictive model that can generalize well on these domains.

Text Classification

Maximum Batch Frobenius Norm for Multi-Domain Text Classification

no code implementations 29 Jan 2022 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Multi-domain text classification (MDTC) has obtained remarkable achievements due to the advent of deep learning.

Text Classification

Towards Category and Domain Alignment: Category-Invariant Feature Enhancement for Adversarial Domain Adaptation

no code implementations 14 Aug 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Adversarial domain adaptation has made impressive advances in transferring knowledge from the source domain to the target domain by aligning feature distributions of both domains.

Domain Adaptation

Context-Sensitive Visualization of Deep Learning Natural Language Processing Models

no code implementations 25 May 2021 Andrew Dunn, Diana Inkpen, Răzvan Andonie

The modified texts that produce the largest difference in the target classification output neuron are selected, and the combination of removed words are then considered to be the most influential on the model's output.

Deep Learning, Sentence
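The snippet above describes an occlusion-style attribution procedure: delete words, re-score the text, and treat the words whose removal most changes the target output neuron as the most influential. A minimal sketch of that idea, using a toy keyword scorer as a stand-in for the paper's deep model (the scorer and its weights are hypothetical, for illustration only):

```python
def toy_score(text):
    """Toy stand-in for a target-class output neuron (hypothetical weights)."""
    weights = {"terrible": -2.0, "great": 2.0, "plot": 0.5}
    return sum(weights.get(w, 0.0) for w in text.split())

def word_influence(text, score_fn):
    """Occlude each word in turn and rank words by the change in the score."""
    words = text.split()
    base = score_fn(text)
    influence = []
    for i in range(len(words)):
        modified = " ".join(words[:i] + words[i + 1:])  # text with word i removed
        influence.append((words[i], base - score_fn(modified)))
    # Largest absolute score change = most influential word
    return sorted(influence, key=lambda p: abs(p[1]), reverse=True)

ranked = word_influence("a great plot overall", toy_score)
# ranked[0] is ("great", 2.0): removing "great" changes the score the most
```

The paper additionally considers combinations of removed words; the single-word loop here is the simplest version of the same search.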

Conditional Adversarial Networks for Multi-Domain Text Classification

no code implementations EACL (AdaptNLP) 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

We provide theoretical analysis for the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions.

General Classification, Text Classification +1
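The divergence claim above can be written schematically (notation assumed here, not taken from the paper): with $M$ domains, shared features $f$, and label predictions $\hat{y}$, let $P_i(f, \hat{y})$ be the joint distribution in domain $i$. The stated objective then amounts to

```latex
\min \; \sum_{1 \le i < j \le M} D\!\left( P_i(f, \hat{y}) \,\middle\|\, P_j(f, \hat{y}) \right),
```

where $D$ is a divergence between distributions; i.e., adversarial training drives all $M$ joint feature–prediction distributions toward one another.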

Mixup Regularized Adversarial Networks for Multi-Domain Text Classification

no code implementations 31 Jan 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Using the shared-private paradigm and adversarial training has significantly improved the performances of multi-domain text classification (MDTC) models.

General Classification, Text Classification +1

Dual Adversarial Training for Unsupervised Domain Adaptation

no code implementations 1 Jan 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Domain adaptation sets out to address this problem, aiming to leverage labeled data in the source domain to learn a good predictive model for the target domain whose labels are scarce or unavailable.

Unsupervised Domain Adaptation

Dual Mixup Regularized Learning for Adversarial Domain Adaptation

no code implementations ECCV 2020 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Second, samples from the source and target domains alone are not sufficient for domain-invariant feature extracting in the latent space.

Unsupervised Domain Adaptation
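The snippet above motivates mixup regularization: source and target samples alone leave the region between the two domains unpopulated, so interpolated samples are generated to fill it. A sketch of the standard mixup interpolation (the Beta(α, α) mixing prior and names follow the generic mixup recipe, not necessarily this paper's exact formulation):

```python
import random

def mixup_pair(x_src, x_tgt, alpha=0.2, rng=random):
    """Convex combination of a source and a target sample.

    Returns a point lying between the two domains, plus the mixing
    coefficient lam drawn from Beta(alpha, alpha).
    """
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient in (0, 1)
    mixed = [lam * a + (1.0 - lam) * b for a, b in zip(x_src, x_tgt)]
    return mixed, lam

mixed, lam = mixup_pair([1.0, 0.0], [0.0, 1.0])
# mixed == [lam, 1 - lam]: a point on the segment between the two samples
```

Training on such interpolated samples (with correspondingly mixed labels or domain indicators) encourages the feature extractor to behave linearly between domains.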

Multi-Task, Multi-Channel, Multi-Input Learning for Mental Illness Detection using Social Media Text

no code implementations WS 2019 Prasadith Kirinde Gamaarachchige, Diana Inkpen

We illustrate the effectiveness of multi-task learning with a multi-channel convolutional neural network as the shared representation, and use additional inputs that researchers have identified as indicative of mental disorders to enhance the model's predictive power.

3D Object Classification, Multi-class Classification +1

Semantics and Homothetic Clustering of Hafez Poetry

no code implementations WS 2019 Arya Rahgozar, Diana Inkpen

Our labels are the only semantic-clustering alternative to the existing hand-labeled, gold-standard classification of Hafez poems, for use in literary research.

Clustering +1

Introduction to the Special Issue on Language in Social Media: Exploiting Discourse and Other Contextual Information

no code implementations CL 2018 Farah Benamara, Diana Inkpen, Maite Taboada

This special issue contributes to a deeper understanding of the role of these interactions to process social media data from a new perspective in discourse interpretation.

Cyberbullying Intervention Based on Convolutional Neural Networks

no code implementations COLING 2018 Qianjia Huang, Diana Inkpen, Jianhong Zhang, David Van Bruwaene

This paper describes the process of building a cyberbullying intervention interface driven by a machine-learning based text-classification service.

BIG-bench Machine Learning, General Classification +2

Deep Learning for Depression Detection of Twitter Users

no code implementations WS 2018 Ahmed Husseini Orabi, Prasadith Buddhitha, Mahmoud Husseini Orabi, Diana Inkpen

Mental illness detection in social media can be considered a complex task, mainly due to the complicated nature of mental disorders.

Deep Learning, Depression Detection

Neural Natural Language Inference Models Enhanced with External Knowledge

2 code implementations ACL 2018 Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei

With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance.

Natural Language Inference

Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference

2 code implementations WS 2017 Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen

The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks, and the quality of the representation is tested with a natural language inference task.

Natural Language Inference, Natural Language Understanding +1
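The evaluation setup above can be sketched end to end: each sentence is encoded to a fixed-length vector, and the pair is fed to an NLI classifier via the standard [u; v; |u − v|; u ∗ v] matching features. The mean-of-embeddings encoder and the tiny vocabulary below are stand-ins for the paper's gated-attention BiLSTM encoder (hypothetical, for illustration only):

```python
# Toy word embeddings standing in for learned ones (hypothetical values).
EMB = {"cats": [1.0, 0.0], "sleep": [0.0, 1.0], "animals": [0.9, 0.1]}

def encode(sentence):
    """Fixed-length sentence vector: mean of word embeddings."""
    vecs = [EMB[w] for w in sentence.split()]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def match_features(u, v):
    """[u; v; |u - v|; u * v] concatenation fed to the NLI classifier."""
    return (u + v
            + [abs(a - b) for a, b in zip(u, v)]
            + [a * b for a, b in zip(u, v)])

feats = match_features(encode("cats sleep"), encode("animals sleep"))
# With 2-d sentence vectors this yields an 8-d feature vector.
```

In the shared task only the sentence encoder is under evaluation; the matching features and downstream classifier are fixed across systems, which is why the representation must carry all sentence-level information.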

Monitoring Tweets for Depression to Detect At-risk Users

no code implementations WS 2017 Zunaira Jamil, Diana Inkpen, Prasadith Buddhitha, Kenton White

To achieve our goal, we trained a user-level classifier that detects at-risk users with reasonable precision and recall.

A Dataset for Multi-Target Stance Detection

no code implementations EACL 2017 Parinaz Sobhani, Diana Inkpen, Xiaodan Zhu

Current models for stance classification often treat each target independently, but in many applications there exist natural dependencies among targets, e.g., stance towards two or more politicians in an election, or towards several brands of the same product.

Classification, General Classification +3

Local-Global Vectors to Improve Unigram Terminology Extraction

no code implementations WS 2016 Ehsan Amjadian, Diana Inkpen, Tahereh Paribakht, Farahnaz Faez

The present paper explores a novel method that integrates efficient distributed representations with terminology extraction.

Term Extraction, Word Embeddings
