Search Results for author: Aria Haghighi

Found 7 papers, 3 papers with code

CTM - A Model for Large-Scale Multi-View Tweet Topic Classification

no code implementations NAACL (ACL) 2022 Vivek Kulkarni, Kenny Leung, Aria Haghighi

In contrast to most prior work which only focuses on post-classification into a small number of topics (10-20), we consider the task of large-scale topic classification in the context of Twitter where the topic space is 10 times larger with potentially multiple topic associations per Tweet.

Classification Topic Classification
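
The abstract above contrasts single-label topic classification with a multi-label setting where each Tweet may carry several topics. A minimal sketch of that multi-label prediction step (not the CTM implementation; topic names, logits, and the threshold are hypothetical) uses an independent sigmoid per topic instead of a softmax:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_topics(logits, topics, threshold=0.5):
    """Multi-label prediction: unlike softmax (exactly one topic per tweet),
    an independent sigmoid per topic lets a tweet carry several labels."""
    return [t for t, s in zip(topics, logits) if sigmoid(s) >= threshold]

topics = ["sports", "music", "politics", "tech"]
logits = [2.1, 0.4, -1.3, 1.7]  # hypothetical per-topic scores for one tweet
print(predict_topics(logits, topics))  # → ['sports', 'music', 'tech']
```

Thresholding each topic independently is what allows "potentially multiple topic associations per Tweet", and it scales to the larger topic space the paper describes.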

TweetNERD -- End to End Entity Linking Benchmark for Tweets

no code implementations 14 Oct 2022 Shubhanshu Mishra, Aman Saini, Raheleh Makki, Sneha Mehta, Aria Haghighi, Ali Mollahosseini

Named Entity Recognition and Disambiguation (NERD) systems are foundational for information retrieval, question answering, event detection, and other natural language processing (NLP) applications.

Benchmarking Entity Linking +7

CTM -- A Model for Large-Scale Multi-View Tweet Topic Classification

no code implementations 3 May 2022 Vivek Kulkarni, Kenny Leung, Aria Haghighi

In contrast to most prior work which only focuses on post-classification into a small number of topics (10-20), we consider the task of large-scale topic classification in the context of Twitter where the topic space is 10 times larger with potentially multiple topic associations per Tweet.

Classification Topic Classification

Learning Stance Embeddings from Signed Social Graphs

1 code implementation27 Jan 2022 John Pougué-Biyong, Akshay Gupta, Aria Haghighi, Ahmed El-Kishky

We propose the Stance Embeddings Model (SEM), which jointly learns embeddings for each user and topic in signed social graphs with distinct edge types for each topic.

Misinformation Stance Detection
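
One simplistic reading of the idea above (an illustrative toy, not SEM itself; the scoring function, users, topics, and edges are all made up) is to train user and topic embeddings so that topic-conditioned user pairs with positive edges score high and those with negative edges score low:

```python
import math
import random

random.seed(0)
DIM = 4
users = ["alice", "bob", "carol"]
topics = ["climate", "tax"]
# Signed, topic-typed edges: (user_a, user_b, topic, sign)
edges = [
    ("alice", "bob", "climate", +1),
    ("alice", "carol", "climate", -1),
    ("bob", "carol", "tax", -1),
]

emb = {k: [random.uniform(-0.1, 0.1) for _ in range(DIM)]
       for k in users + topics}

def score(a, b, t):
    # Topic-conditioned agreement score between two users (illustrative).
    return sum(emb[a][d] * (emb[b][d] + emb[t][d]) for d in range(DIM))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

lr = 0.5
for _ in range(200):
    for a, b, t, sign in edges:
        label = 1.0 if sign > 0 else 0.0
        g = sigmoid(score(a, b, t)) - label  # gradient of logistic loss
        for d in range(DIM):
            ga = g * (emb[b][d] + emb[t][d])
            gb = g * emb[a][d]
            emb[a][d] -= lr * ga
            emb[b][d] -= lr * gb
            emb[t][d] -= lr * gb

# After training, the positive edge should score higher than the negative one.
print(score("alice", "bob", "climate") > score("alice", "carol", "climate"))  # should print True
```

Because the topic vector enters the score, the same pair of users can agree on one topic and disagree on another, which is the point of having distinct edge types per topic.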

LMSOC: An Approach for Socially Sensitive Pretraining

1 code implementation Findings (EMNLP) 2021 Vivek Kulkarni, Shubhanshu Mishra, Aria Haghighi

Although language depends heavily on the geographical, temporal, and other social contexts of the speaker, these elements have not been incorporated into modern transformer-based language models.

Cloze Test Graph Representation Learning +1

Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction

1 code implementation WNUT (ACL) 2021 Shubhanshu Mishra, Aria Haghighi

We evaluate a simple approach to improving zero-shot multilingual transfer of mBERT on a social media corpus by adding a pretraining task called translation pair prediction (TPP), which predicts whether a pair of cross-lingual texts is a valid translation.

Benchmarking Language Modelling +8
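
The TPP objective described above is a binary classification task over text pairs. A minimal sketch of how such training examples could be constructed from a parallel corpus (illustrative only; the sentences and negative-sampling scheme are hypothetical, not the paper's recipe) pairs each source sentence with its true translation (label 1) and with a mismatched target (label 0):

```python
import random

def make_tpp_examples(parallel_pairs, seed=0):
    """Build translation-pair-prediction examples: label 1 for aligned
    (src, tgt) pairs, label 0 for deliberately mismatched ones."""
    rng = random.Random(seed)
    examples = []
    for i, (src, tgt) in enumerate(parallel_pairs):
        examples.append((src, tgt, 1))  # true translation pair
        # Negative: pair src with a target sentence from a different pair.
        j = rng.choice([k for k in range(len(parallel_pairs)) if k != i])
        examples.append((src, parallel_pairs[j][1], 0))
    return examples

pairs = [("good morning", "buenos días"),
         ("thank you", "gracias"),
         ("see you soon", "hasta pronto")]
for ex in make_tpp_examples(pairs):
    print(ex)
```

A model pretrained on such pairs must align representations across languages to tell valid translations from mismatches, which is what supports zero-shot transfer.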
