Search Results for author: Farhad Nooralahzadeh

Found 10 papers, 3 papers with code

Improving the Cross-Lingual Generalisation in Visual Question Answering

1 code implementation 7 Sep 2022 Farhad Nooralahzadeh, Rico Sennrich

While multilingual vision-language pretrained models offer several benefits, recent benchmarks across various tasks and languages show poor cross-lingual generalisation when these models are applied to non-English data, with a large gap between (supervised) English performance and (zero-shot) cross-lingual transfer.

Question Answering Visual Question Answering +3

Progressive Transformer-Based Generation of Radiology Reports

1 code implementation Findings (EMNLP) 2021 Farhad Nooralahzadeh, Nicolas Perez Gonzalez, Thomas Frauenfelder, Koji Fujimoto, Michael Krauthammer

Inspired by Curriculum Learning, we propose a consecutive (i.e., image-to-text-to-text) generation framework where we divide the problem of radiology report generation into two steps.

Text Generation

Zero-Shot Cross-Lingual Transfer with Meta Learning

1 code implementation EMNLP 2020 Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, Isabelle Augenstein

We show that this challenging setup can be approached using meta-learning, where, in addition to training a source language model, another model learns to select which training instances are the most beneficial to the first.

Few-Shot NLI Language Modelling +5
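The instance-selection idea in the entry above can be illustrated with a toy greedy proxy. This is not the authors' meta-learning method (which trains a second model jointly with the source-language model); the "model", the data, and all names below are invented for illustration. Here the selector simply adds whichever training instance most reduces loss on held-out target data:

```python
def dev_loss(train_ys, dev_ys):
    """Loss on held-out data for a trivial 'model' that predicts
    the mean of the selected training targets."""
    if not train_ys:
        return float("inf")
    pred = sum(train_ys) / len(train_ys)
    return sum((y - pred) ** 2 for y in dev_ys) / len(dev_ys)

def greedy_select(train_ys, dev_ys, k):
    """Greedy proxy for a learned instance selector: repeatedly add the
    training instance whose inclusion most reduces the dev loss."""
    chosen, remaining = [], list(range(len(train_ys)))
    for _ in range(k):
        best = min(
            remaining,
            key=lambda i: dev_loss([train_ys[j] for j in chosen] + [train_ys[i]], dev_ys),
        )
        chosen.append(best)
        remaining.remove(best)
    return chosen

# the instance closest to the dev distribution is picked first
print(greedy_select([0.0, 10.0, 1.0], dev_ys=[1.0, 1.0], k=2))  # → [2, 0]
```

In the paper the selection policy is itself learned via meta-learning rather than computed by this greedy loop; the sketch only mimics the objective of preferring training instances that help the target task.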

SIRIUS-LTG: An Entity Linking Approach to Fact Extraction and Verification

no code implementations WS 2018 Farhad Nooralahzadeh, Lilja Øvrelid

The experiments show that the pipeline combining simple cosine similarity over TF-IDF vectors for sentence selection with the DA model as the labelling model achieves the best results on the development set (evidence F1: 32.17, label accuracy: 59.61, FEVER score: 0.3778).

Entity Linking Information Retrieval +2
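The sentence-selection step described in the entry above (cosine similarity over TF-IDF vectors) can be sketched with a minimal, dependency-free implementation. The tokenization, the exact IDF weighting, and the function names are assumptions for illustration, not the authors' code:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Sparse TF-IDF vectors (dicts) for a list of raw-text documents,
    using whitespace tokenization and idf = log(N / df)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: (c / len(toks)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_sentences(claim, sentences, k=2):
    """Rank candidate evidence sentences by TF-IDF cosine similarity
    to the claim and return the top k."""
    vecs = tfidf_vectors([claim] + sentences)
    claim_vec, sent_vecs = vecs[0], vecs[1:]
    ranked = sorted(enumerate(sent_vecs), key=lambda p: -cosine(claim_vec, p[1]))
    return [sentences[i] for i, _ in ranked[:k]]

print(select_sentences(
    "the eiffel tower is in paris",
    ["the eiffel tower is located in paris france", "bananas are yellow fruit"],
    k=1,
))
```

Fitting the claim and the candidate sentences in one shared vocabulary keeps the vectors comparable; a production setup would typically add stemming, stop-word handling, and IDF smoothing.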

Syntactic Dependency Representations in Neural Relation Classification

no code implementations WS 2018 Farhad Nooralahzadeh, Lilja Øvrelid

We investigate the use of different syntactic dependency representations in a neural relation classification task and compare the CoNLL, Stanford Basic and Universal Dependencies schemes.

Classification General Classification +1
