Search Results for author: Fahim Dalvi

Found 30 papers, 10 papers with code

Neuron-level Interpretation of Deep NLP Models: A Survey

no code implementations 30 Aug 2021 Hassan Sajjad, Nadir Durrani, Fahim Dalvi

The proliferation of deep neural networks in various domains has led to an increased need for interpretability of these models.

Domain Adaptation

How transfer learning impacts linguistic knowledge in deep NLP models?

no code implementations Findings (ACL) 2021 Nadir Durrani, Hassan Sajjad, Fahim Dalvi

The pattern varies across architectures, with BERT retaining linguistic information relatively deeper in the network compared to RoBERTa and XLNet, where it is predominantly delegated to the lower layers.

Fine-tuning Transfer Learning
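
Layer-wise analyses of this kind are typically run by extracting per-layer representations from each pre-trained model and training a probing classifier on every layer. The following is a minimal, hypothetical extraction sketch using the Hugging Face transformers library (the model names and example sentence are placeholders, and the probing classifiers themselves are omitted); it is not the paper's actual pipeline:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Extract per-layer representations from two pre-trained models; a probe
# trained on each layer (not shown) would then reveal how deep in the
# network a given kind of linguistic information is retained.
for name in ("bert-base-cased", "roberta-base"):
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name, output_hidden_states=True)
    inputs = tokenizer("The old man the boat .", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states holds the embedding layer plus one tensor per encoder layer,
    # each of shape [1, seq_len, hidden_dim].
    print(name, "layers:", len(outputs.hidden_states) - 1,
          "dim:", outputs.hidden_states[-1].shape[-1])
```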

Fine-grained Interpretation and Causation Analysis in Deep NLP Models

no code implementations NAACL 2021 Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani

This paper is a write-up for the tutorial on "Fine-grained Interpretation and Causation Analysis in Deep NLP Models" that we are presenting at NAACL 2021.

Domain Adaptation

Effect of Post-processing on Contextualized Word Representations

no code implementations 15 Apr 2021 Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani

However, post-processing for contextualized embeddings is an under-studied problem.

Word Similarity

Analyzing Individual Neurons in Pre-trained Language Models

1 code implementation EMNLP 2020 Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov

We found small subsets of neurons to predict linguistic tasks, with lower-level tasks (such as morphology) localized in fewer neurons, compared to the higher-level task of predicting syntax.
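
Neuron-level analyses of this kind are often approximated by training a sparse linear probe on the model's activations and ranking neurons by the weight the probe assigns them. The sketch below is a hypothetical illustration with scikit-learn; the activation matrix and labels are random placeholders rather than data or code from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: activations of a pre-trained LM for N tokens
# (N tokens x D neurons) and one linguistic label per token (e.g. a POS tag id).
rng = np.random.default_rng(0)
activations = rng.standard_normal((5000, 768))   # placeholder activations
labels = rng.integers(0, 17, size=5000)          # placeholder POS tag ids

# Train an L1-regularized linear probe; the sparsity penalty pushes the probe
# to rely on a small subset of neurons.
probe = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=2000)
probe.fit(activations, labels)

# Rank neurons by the total absolute weight assigned to them across classes.
importance = np.abs(probe.coef_).sum(axis=0)
top_neurons = np.argsort(importance)[::-1][:50]
print("Most informative neurons:", top_neurons[:10])
```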

Fighting the COVID-19 Infodemic in Social Media: A Holistic Perspective and a Call to Arms

1 code implementation 15 Jul 2020 Firoj Alam, Fahim Dalvi, Shaden Shaar, Nadir Durrani, Hamdy Mubarak, Alex Nikolov, Giovanni Da San Martino, Ahmed Abdelali, Hassan Sajjad, Kareem Darwish, Preslav Nakov

With the outbreak of the COVID-19 pandemic, people turned to social media to read and to share timely information including statistics, warnings, advice, and inspirational stories.

Misinformation

FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN

no code implementations WS 2020 Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured this year six challenge tracks: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.

Translation

Similarity Analysis of Contextual Word Representation Models

1 code implementation ACL 2020 John M. Wu, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass

We use existing and novel similarity measures that aim to gauge the level of localization of information in the deep models, and facilitate the investigation of which design factors affect model similarity, without requiring any external linguistic annotation.

Fine-tuning
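
A commonly used annotation-free similarity measure in this line of work is linear centered kernel alignment (CKA); whether it matches the paper's exact measures is an assumption, but it illustrates how two models or layers can be compared directly from their representations. A minimal sketch:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices for the same n tokens."""
    # Center each representation along the token axis.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style numerator and normalizers based on cross-covariance norms.
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return numerator / denominator

# Placeholder activations: the same 1000 tokens encoded by two models/layers.
rng = np.random.default_rng(0)
layer_a = rng.standard_normal((1000, 768))
layer_b = rng.standard_normal((1000, 1024))
print(f"CKA similarity: {linear_cka(layer_a, layer_b):.3f}")
```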

Analyzing Redundancy in Pretrained Transformer Models

1 code implementation EMNLP 2020 Fahim Dalvi, Hassan Sajjad, Nadir Durrani, Yonatan Belinkov

Transformer-based deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments.

Feature Selection Transfer Learning

On the Effect of Dropping Layers of Pre-trained Transformer Models

4 code implementations 8 Apr 2020 Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov

Transformer-based NLP models are trained using hundreds of millions or even billions of parameters, limiting their applicability in computationally constrained environments.

Knowledge Distillation Sentence Similarity
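
The layer-dropping idea can be sketched with the Hugging Face transformers library by truncating the encoder's layer list before fine-tuning. Which layers to drop (top, bottom, alternating) is one of the strategies compared in the paper; the snippet below simply keeps the bottom six layers of a 12-layer BERT as an illustrative example:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Keep only the bottom 6 of BERT's 12 encoder layers (i.e. drop the top half),
# then fine-tune the smaller model on the downstream task as usual (not shown).
keep = 6
model.bert.encoder.layer = model.bert.encoder.layer[:keep]
model.config.num_hidden_layers = keep

print(sum(p.numel() for p in model.parameters()), "parameters remain")
```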

One Size Does Not Fit All: Comparing NMT Representations of Different Granularities

no code implementations NAACL 2019 Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, Preslav Nakov

Recent work has shown that contextualized word representations derived from neural machine translation are a viable alternative to those derived from simple word prediction tasks.

Machine Translation Translation

What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models

1 code implementation 21 Dec 2018 Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, James Glass

We further present a comprehensive analysis of neurons with the aim of addressing the following questions: i) how localized or distributed are different linguistic properties in the models?

Language Modelling Machine Translation

Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation

no code implementations NAACL 2018 Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Stephan Vogel

We address the problem of simultaneous translation by modifying the Neural MT decoder to operate with dynamically built encoder and attention.

Machine Translation Translation
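
The general read/write pattern behind such incremental decoding can be sketched as a loop that re-encodes the growing source prefix and commits target tokens as input arrives. The code below uses a wait-k-style schedule with placeholder encode/decode functions purely for illustration; it is not the paper's decoder modification:

```python
from typing import List

def encode(prefix: List[str]) -> List[str]:
    # Placeholder encoder: a real NMT system would return hidden states
    # for the source prefix; here we just pass the tokens through.
    return prefix

def decode_step(states: List[str], target: List[str]) -> str:
    # Placeholder decoder step: emits a dummy translation of the next source token.
    return f"tgt({states[len(target)]})"

def simultaneous_translate(source_stream: List[str], wait_k: int = 2) -> List[str]:
    """Read/write loop: after reading k source tokens, alternate between reading
    one more token and writing one target token, rebuilding the encoder states
    for the (growing) source prefix at every step."""
    prefix: List[str] = []
    target: List[str] = []
    for token in source_stream:
        prefix.append(token)                          # READ a new source token
        if len(prefix) >= wait_k:                     # enough context is available
            states = encode(prefix)                   # re-encode the prefix
            target.append(decode_step(states, target))  # WRITE one target token
    return target

print(simultaneous_translate(["ich", "sehe", "den", "hund", "."]))
```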

Continuous Space Reordering Models for Phrase-based MT

no code implementations 25 Jan 2018 Nadir Durrani, Fahim Dalvi

We also observed improvements compared to the systems that used POS tags and word clusters to train these models.

POS

Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder

no code implementations IJCNLP 2017 Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Stephan Vogel

End-to-end training makes the neural machine translation (NMT) architecture simpler and more elegant than traditional statistical machine translation (SMT).

Machine Translation Multi-Task Learning +1

Neural Machine Translation Training in a Multi-Domain Scenario

no code implementations 29 Aug 2017 Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, Stephan Vogel

Model stacking works best when training begins with the furthest out-of-domain data and the model is incrementally fine-tuned with the next furthest domain and so on.

Fine-tuning Machine Translation +1
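
The model-stacking schedule described above amounts to sequential fine-tuning that starts from the furthest out-of-domain corpus and ends on the in-domain data. The outline below is schematic, with a placeholder train_on function and invented domain names, not the paper's training code:

```python
def train_on(model_state: dict, domain: str, corpus: list) -> dict:
    # Placeholder for one fine-tuning pass; a real system would continue NMT
    # training from `model_state` on `corpus` and return the updated weights.
    return {**model_state,
            "last_domain": domain,
            "steps": model_state.get("steps", 0) + len(corpus)}

# Hypothetical domains ordered from furthest out-of-domain to in-domain.
domains_far_to_near = [
    ("web-crawl", ["..."] * 1000),   # furthest from the target domain
    ("news",      ["..."] * 500),
    ("subtitles", ["..."] * 200),
    ("in-domain", ["..."] * 50),     # target-domain data, used last
]

model = {}  # stands in for randomly initialized NMT parameters
for name, corpus in domains_far_to_near:
    # Each stage continues from the previous stage's parameters, so the model
    # is incrementally adapted toward the target domain.
    model = train_on(model, name, corpus)

print(model["last_domain"], model["steps"])
```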

QCRI Machine Translation Systems for IWSLT 16

no code implementations 14 Jan 2017 Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Stephan Vogel

This paper describes QCRI's machine translation systems for the IWSLT 2016 evaluation campaign.

Domain Adaptation Fine-tuning +3

DeepFace: Face Generation using Deep Learning

no code implementations 7 Jan 2017 Hardie Cate, Fahim Dalvi, Zeshan Hussain

We use CNNs to build a system that both classifies images of faces based on a variety of different facial attributes and generates new faces given a set of desired facial characteristics.

Face Generation General Classification

Sign Language Recognition Using Temporal Classification

no code implementations 7 Jan 2017 Hardie Cate, Fahim Dalvi, Zeshan Hussain

Devices like the Myo armband available in the market today enable us to collect data about the position of a user's hands and fingers over time.

Classification General Classification +4
