Search Results for author: Mahsa Yarmohammadi

Found 13 papers, 6 papers with code

MultiMUC: Multilingual Template Filling on MUC-4

1 code implementation29 Jan 2024 William Gantt, Shabnam Behzad, Hannah Youngeun An, Yunmo Chen, Aaron Steven White, Benjamin Van Durme, Mahsa Yarmohammadi

We introduce MultiMUC, the first multilingual parallel corpus for template filling, comprising translations of the classic MUC-4 template filling benchmark into five languages: Arabic, Chinese, Farsi, Korean, and Russian.

Machine Translation

MegaWika: Millions of reports and their sources across 50 diverse languages

no code implementations13 Jul 2023 Samuel Barham, Orion Weller, Michelle Yuan, Kenton Murray, Mahsa Yarmohammadi, Zhengping Jiang, Siddharth Vashishtha, Alexander Martin, Anqi Liu, Aaron Steven White, Jordan Boyd-Graber, Benjamin Van Durme

To foster the development of new models for collaborative AI-assisted report generation, we introduce MegaWika, consisting of 13 million Wikipedia articles in 50 diverse languages, along with their 71 million referenced source materials.

Cross-Lingual Question Answering, Retrieval, +1

Multilingual Coreference Resolution in Multiparty Dialogue

1 code implementation2 Aug 2022 Boyuan Zheng, Patrick Xia, Mahsa Yarmohammadi, Benjamin Van Durme

Existing multiparty dialogue datasets for entity coreference resolution are nascent, and many challenges are still unaddressed.

Coreference Resolution, Data Augmentation

Gradual Fine-Tuning for Low-Resource Domain Adaptation

2 code implementations EACL (AdaptNLP) 2021 Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme, Kenton Murray

Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain.

Domain Adaptation
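The gradual fine-tuning idea behind this paper can be pictured as a staged data schedule that begins with the full mixed corpus and progressively discards out-of-domain examples until only target-domain data remains. The sketch below is illustrative only, assuming a simple linear decay of the out-of-domain portion; the function name and schedule are not taken from the paper's released code.

```python
def gradual_data_schedule(out_domain, in_domain, num_stages):
    """Yield one training set per stage, linearly shrinking the
    out-of-domain portion so the mix converges on the target domain.

    Illustrative sketch only (assumes num_stages >= 2); the paper's
    exact schedule may differ.
    """
    for stage in range(num_stages):
        # Fraction of out-of-domain data kept at this stage: 100% -> 0%.
        keep = len(out_domain) * (num_stages - 1 - stage) // (num_stages - 1)
        yield out_domain[:keep] + in_domain

stages = list(gradual_data_schedule(list(range(100)), ["t1", "t2"], num_stages=5))
```

The final stage is plain fine-tuning on target-domain data alone; the intermediate stages are what distinguish the gradual variant from standard fine-tuning.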

Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks

1 code implementation Interspeech 2018 Daniel Povey, Gaofeng Cheng, Yiming Wang, Ke Li, Hainan Xu, Mahsa Yarmohammadi, Sanjeev Khudanpur

Time Delay Neural Networks (TDNNs), also known as one-dimensional Convolutional Neural Networks (1-d CNNs), are an efficient and well-performing neural network architecture for speech recognition.

Speech Recognition
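The factorization this paper studies replaces a weight matrix W with a product A @ B in which one factor is kept semi-orthogonal (its columns orthonormal). A minimal NumPy sketch of that constraint, assuming an SVD-based factorization for illustration; the paper instead maintains the constraint during training via periodic updates, and all names here are illustrative.

```python
import numpy as np

def semi_orthogonal_factorize(W, rank):
    """Factor W (d_out x d_in) into A @ B with A semi-orthogonal,
    i.e. A.T @ A = I_rank.

    Illustrative sketch via truncated SVD; not the paper's training-time
    update rule.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank]                  # semi-orthogonal factor: A.T @ A = I
    B = s[:rank, None] * Vt[:rank]   # absorbs the singular values
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
A, B = semi_orthogonal_factorize(W, rank=16)
```

The product A @ B is the best rank-16 approximation of W, and the semi-orthogonality of A keeps the factorized layer well-conditioned, which is the property the paper exploits for deeper TDNN stacks.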
