Search Results for author: Hassan Sajjad

Found 59 papers, 14 papers with code

Implicit representations of event properties within contextual language models: Searching for “causativity neurons”

1 code implementation IWCS (ACL) 2021 Esther Seyffarth, Younes Samih, Laura Kallmeyer, Hassan Sajjad

This paper addresses the question of the extent to which neural contextual language models such as BERT implicitly represent complex semantic properties.

Discovering Latent Concepts Learned in BERT

no code implementations ICLR 2022 Fahim Dalvi, Abdul Rafae Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad

We address this limitation by discovering and analyzing latent concepts learned in neural network models in an unsupervised fashion and provide interpretations from the model's perspective.

POS
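To make the idea concrete, here is a minimal sketch of the general recipe behind such concept discovery: extract contextualized token embeddings from a pretrained model and cluster them, so that each cluster becomes a candidate latent concept. The model name, layer choice, and clustering setup below are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: cluster contextualized token embeddings to surface latent "concepts".
# Assumes the transformers and scikit-learn packages; all hyperparameters
# (model, layer, number of clusters) are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import AgglomerativeClustering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

sentences = ["The bank raised interest rates.", "She sat on the river bank."]
tokens, vectors = [], []
for sent in sentences:
    enc = tokenizer(sent, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[9][0]  # one middle layer, illustrative
    for tok, vec in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), hidden):
        if tok not in ("[CLS]", "[SEP]"):
            tokens.append(tok)
            vectors.append(vec.numpy())

# Group tokens whose contextual vectors are close; each cluster is a
# candidate concept to be inspected and interpreted.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(vectors)
for cluster in sorted(set(labels)):
    print(cluster, [t for t, l in zip(tokens, labels) if l == cluster])
```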

Probing for Constituency Structure in Neural Language Models

1 code implementation 13 Apr 2022 David Arps, Younes Samih, Laura Kallmeyer, Hassan Sajjad

We find that 4 pretrained transformer LMs obtain high performance on our probing tasks even on manipulated data, suggesting that semantic and syntactic knowledge in their representations can be separated and that constituency information is in fact learned by the LM.

Neuron-level Interpretation of Deep NLP Models: A Survey

no code implementations 30 Aug 2021 Hassan Sajjad, Nadir Durrani, Fahim Dalvi

The proliferation of deep neural networks across various domains has brought an increased need for the interpretability of these models.

Domain Adaptation

How transfer learning impacts linguistic knowledge in deep NLP models?

no code implementations Findings (ACL) 2021 Nadir Durrani, Hassan Sajjad, Fahim Dalvi

The pattern varies across architectures, with BERT retaining linguistic information relatively deeper in the network compared to RoBERTa and XLNet, where it is predominantly delegated to the lower layers.

Transfer Learning

Fine-grained Interpretation and Causation Analysis in Deep NLP Models

no code implementations NAACL 2021 Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani

This paper is a write-up for the tutorial on "Fine-grained Interpretation and Causation Analysis in Deep NLP Models" that we are presenting at NAACL 2021.

Domain Adaptation

Effect of Post-processing on Contextualized Word Representations

no code implementations 15 Apr 2021 Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani

However, post-processing for contextualized embeddings is an under-studied problem.

Word Similarity
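For a concrete picture of what "post-processing" means here, the sketch below applies one standard recipe, mean-centering followed by removal of the top principal components (in the spirit of "all-but-the-top"); whether this matches the paper's exact variants is an assumption.

```python
# Sketch of a common embedding post-processing step: mean-center the vectors
# and project out their top principal components. Illustrative only; the paper
# evaluates several post-processing methods for contextualized embeddings.
import numpy as np

def postprocess(embeddings: np.ndarray, n_components: int = 3) -> np.ndarray:
    """embeddings: (n_tokens, dim) matrix of contextualized vectors."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    # Top principal directions via SVD; these often encode corpus-wide artifacts.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = vt[:n_components]                   # (n_components, dim)
    return centered - centered @ top.T @ top  # remove the top directions

vectors = np.random.randn(1000, 768).astype(np.float32)  # stand-in for real embeddings
cleaned = postprocess(vectors)
print(cleaned.shape)  # (1000, 768)
```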

Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets

no code implementations COLING 2020 Reem Suwaileh, Muhammad Imran, Tamer Elsayed, Hassan Sajjad

For example, results show that, for training a location mention recognition model, Twitter-based data is preferred over general-purpose data; and crisis-related data is preferred over general-purpose Twitter data.

Analyzing Individual Neurons in Pre-trained Language Models

1 code implementation EMNLP 2020 Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov

We found small subsets of neurons that predict linguistic tasks, with lower-level tasks (such as morphology) localized in fewer neurons than the higher-level task of predicting syntax.
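A toy sketch of the probing idea behind such neuron-level analysis: fit a sparse linear probe on activations for a linguistic property and rank neurons by the learned weight magnitudes. The data and regularization settings below are illustrative, not the paper's exact method.

```python
# Sketch: train a linear probe on neuron activations for a linguistic label
# (e.g. a POS tag) and rank neurons by the magnitude of their learned weights.
# Toy random data stands in for real activations; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_tokens, n_neurons = 2000, 768
activations = rng.standard_normal((n_tokens, n_neurons))
labels = (activations[:, 42] > 0).astype(int)  # pretend neuron 42 encodes the property

# L1 regularization encourages the probe to rely on few neurons.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(activations, labels)

ranking = np.argsort(-np.abs(probe.coef_[0]))
print("top neurons for this property:", ranking[:5])  # neuron 42 should rank first
```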

Fighting the COVID-19 Infodemic in Social Media: A Holistic Perspective and a Call to Arms

1 code implementation 15 Jul 2020 Firoj Alam, Fahim Dalvi, Shaden Shaar, Nadir Durrani, Hamdy Mubarak, Alex Nikolov, Giovanni Da San Martino, Ahmed Abdelali, Hassan Sajjad, Kareem Darwish, Preslav Nakov

With the outbreak of the COVID-19 pandemic, people turned to social media to read and to share timely information including statistics, warnings, advice, and inspirational stories.

Misinformation

Similarity Analysis of Contextual Word Representation Models

1 code implementation ACL 2020 John M. Wu, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass

We use existing and novel similarity measures that aim to gauge the level of localization of information in the deep models, and facilitate the investigation of which design factors affect model similarity, without requiring any external linguistic annotation.
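As one concrete example of a representation-similarity measure of this kind, the sketch below computes linear CKA between two layers' activations on the same tokens; treating CKA as representative of the paper's measures is an assumption.

```python
# Sketch: linear Centered Kernel Alignment (CKA), a common way to compare two
# sets of representations of the same inputs. Illustrative; the paper studies
# several existing and novel similarity measures.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """x: (n, d1), y: (n, d2) -- two views of the same n tokens."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    num = np.linalg.norm(x.T @ y, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return float(num / den)

rng = np.random.default_rng(0)
layer_a = rng.standard_normal((500, 64))
q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
layer_b = layer_a @ q                 # an orthogonal "rotation" of layer_a
print(linear_cka(layer_a, layer_b))   # ~1.0: CKA is invariant to rotations
```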

CrisisBench: Benchmarking Crisis-related Social Media Datasets for Humanitarian Information Processing

no code implementations 14 Apr 2020 Firoj Alam, Hassan Sajjad, Muhammad Imran, Ferda Ofli

Time-critical analysis of social media streams is important for humanitarian organizations for planning rapid response during disasters.

General Classification Humanitarian +1

Analyzing Redundancy in Pretrained Transformer Models

1 code implementation EMNLP 2020 Fahim Dalvi, Hassan Sajjad, Nadir Durrani, Yonatan Belinkov

Transformer-based deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments.

Transfer Learning
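A minimal sketch of one way such redundancy can be detected: compute pairwise correlations between neuron activations and flag highly correlated pairs. The toy data below is illustrative; the paper combines correlation-based and task-based analyses.

```python
# Sketch: flag redundant neuron pairs via activation correlation.
# Toy data; a planted near-duplicate neuron should surface as redundant.
import numpy as np

rng = np.random.default_rng(0)
acts = rng.standard_normal((5000, 100))  # (tokens, neurons)
acts[:, 1] = acts[:, 0] + 0.01 * rng.standard_normal(5000)  # plant a redundant pair

corr = np.corrcoef(acts, rowvar=False)   # (neurons, neurons) correlation matrix
np.fill_diagonal(corr, 0.0)              # ignore self-correlation
i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print(f"most redundant pair: neurons {i} and {j}, |r| = {abs(corr[i, j]):.3f}")
```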

On the Effect of Dropping Layers of Pre-trained Transformer Models

4 code implementations 8 Apr 2020 Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov

Transformer-based NLP models are trained using hundreds of millions or even billions of parameters, limiting their applicability in computationally constrained environments.

Knowledge Distillation Sentence Similarity
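The simplest variant of this idea, dropping the top k encoder layers of a pretrained model before fine-tuning, can be sketched with HuggingFace transformers as follows; the paper compares several dropping strategies, and this shows only top-layer dropping with an illustrative model and k.

```python
# Sketch: drop the top k encoder layers of a pretrained BERT before fine-tuning.
# Model name and k are illustrative; only the simplest strategy is shown.
import torch.nn as nn
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
k = 4  # number of top layers to remove

# The encoder's layers live in a ModuleList; truncating it shrinks the model.
model.encoder.layer = nn.ModuleList(model.encoder.layer[:-k])
model.config.num_hidden_layers -= k

print(f"remaining layers: {len(model.encoder.layer)}")  # 8 for bert-base
# The truncated model can now be fine-tuned on a downstream task as usual.
```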

A Clustering Framework for Lexical Normalization of Roman Urdu

1 code implementation 31 Mar 2020 Abdul Rafae Khan, Asim Karim, Hassan Sajjad, Faisal Kamiran, Jia Xu

Roman Urdu is an informal form of the Urdu language written in Roman script, which is widely used in South Asia for online textual content.

Lexical Normalization

Compressing Large-Scale Transformer-Based Models: A Case Study on BERT

no code implementations 27 Feb 2020 Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Hassan Sajjad, Preslav Nakov, Deming Chen, Marianne Winslett

Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks.

Model Compression

A System for Diacritizing Four Varieties of Arabic

no code implementations IJCNLP 2019 Hamdy Mubarak, Ahmed Abdelali, Kareem Darwish, Mohamed Eldesouki, Younes Samih, Hassan Sajjad

Short vowels, aka diacritics, are more often omitted when writing different varieties of Arabic including Modern Standard Arabic (MSA), Classical Arabic (CA), and Dialectal Arabic (DA).

Feature Engineering

One Size Does Not Fit All: Comparing NMT Representations of Different Granularities

no code implementations NAACL 2019 Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, Preslav Nakov

Recent work has shown that contextualized word representations derived from neural machine translation are a viable alternative to those derived from simple word-prediction tasks.

Machine Translation Translation

NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks

2 code implementations 21 Dec 2018 Fahim Dalvi, Avery Nortonsmith, D. Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, James Glass

We present a toolkit to facilitate the interpretation and understanding of neural network models.

What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models

1 code implementation 21 Dec 2018 Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, James Glass

We further present a comprehensive analysis of neurons with the aim of addressing the following questions: i) how localized or distributed are different linguistic properties in the models?

Language Modelling Machine Translation

Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation

no code implementations NAACL 2018 Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Stephan Vogel

We address the problem of simultaneous translation by modifying the Neural MT decoder to operate with dynamically built encoder and attention.

Machine Translation Translation
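A sketch of the control flow this describes: as each source token arrives, the encoder states are rebuilt over the prefix seen so far and the decoder extends the target. The stand-in encode/decode functions and the wait-style latency rule are illustrative assumptions, not the paper's exact decoder.

```python
# Sketch of a simultaneous-translation loop: as each source token arrives,
# re-encode the source prefix and let the decoder extend the target. The
# encoder/decoder here are trivial stand-ins; only the control flow matters.
from typing import List

def encode(source_prefix: List[str]) -> List[str]:
    # Stand-in for an NMT encoder run over the tokens seen so far.
    return source_prefix

def decode_step(encoder_states: List[str], target_prefix: List[str]) -> str:
    # Stand-in for one decoder step attending over the current encoder states.
    i = len(target_prefix)
    return encoder_states[i].upper() if i < len(encoder_states) else "<eos>"

def simultaneous_translate(source_stream: List[str], wait: int = 2) -> List[str]:
    """Emit one target token per incoming source token after an initial wait."""
    target: List[str] = []
    for t, _ in enumerate(source_stream, start=1):
        states = encode(source_stream[:t])  # dynamically rebuilt encoder
        if t >= wait:                       # simple latency control, illustrative
            target.append(decode_step(states, target))
    return target

print(simultaneous_translate(["hallo", "welt", "heute"]))  # ['HALLO', 'WELT']
```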

Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder

no code implementations IJCNLP 2017 Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Stephan Vogel

End-to-end training makes the neural machine translation (NMT) architecture simpler, yet elegant, compared to traditional statistical machine translation (SMT).

Machine Translation Multi-Task Learning +1

Neural Machine Translation Training in a Multi-Domain Scenario

no code implementations IWSLT 2017 Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, Stephan Vogel

Model stacking works best when training begins with the furthest out-of-domain data and the model is incrementally fine-tuned with the next furthest domain and so on.

Machine Translation Translation
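The model-stacking recipe in the abstract maps directly onto a fine-tuning loop; a minimal sketch follows, with a hypothetical train function and illustrative corpus names.

```python
# Sketch of model stacking: begin training on the furthest out-of-domain
# corpus, then incrementally fine-tune on each successively closer domain.
# `train` is a hypothetical stand-in for a real NMT training run.
from typing import List

def train(model: List[str], corpus: str) -> List[str]:
    # Stand-in: a real implementation would continue optimizing the same
    # weights on the new corpus (i.e. fine-tune, not restart).
    model.append(corpus)
    return model

# Corpora ordered from furthest out-of-domain to in-domain (names illustrative).
stack = ["commoncrawl", "un", "news", "ted-talks"]

model: List[str] = []
for corpus in stack:
    model = train(model, corpus)  # each stage fine-tunes the previous model

print("training order:", " -> ".join(model))
```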

QCRI Machine Translation Systems for IWSLT 16

no code implementations 14 Jan 2017 Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Stephan Vogel

This paper describes QCRI's machine translation systems for the IWSLT 2016 evaluation campaign.

Domain Adaptation Language Modelling +2

Applications of Online Deep Learning for Crisis Response Using Social Media Information

no code implementations 4 Oct 2016 Dat Tien Nguyen, Shafiq Joty, Muhammad Imran, Hassan Sajjad, Prasenjit Mitra

During natural or man-made disasters, humanitarian response organizations look for useful information to support their decision-making processes.

Decision Making Disaster Response +2

Rapid Classification of Crisis-Related Data on Social Networks using Convolutional Neural Networks

no code implementations 12 Aug 2016 Dat Tien Nguyen, Kamela Ali Al Mannai, Shafiq Joty, Hassan Sajjad, Muhammad Imran, Prasenjit Mitra

The current state-of-the-art classification methods require a significant amount of labeled data specific to a particular event for training, plus substantial feature engineering, to achieve the best results.

Classification Feature Engineering +1

The AMARA Corpus: Building Parallel Language Resources for the Educational Domain

no code implementations LREC 2014 Ahmed Abdelali, Francisco Guzman, Hassan Sajjad, Stephan Vogel

This paper presents the AMARA corpus of online educational content: a new parallel corpus of educational video subtitles, multilingually aligned for 20 languages, i.e., 20 monolingual corpora and 190 parallel corpora.

Machine Translation Translation
