no code implementations • 15 Aug 2024 • Shachar Don-Yehiya, Ben Burtenshaw, Ramon Fernandez Astudillo, Cailean Osborne, Mimansa Jaiswal, Tzu-Sheng Kuo, Wenting Zhao, Idan Shenfeld, Andi Peng, Mikhail Yurochkin, Atoosa Kasirzadeh, Yangsibo Huang, Tatsunori Hashimoto, Yacine Jernite, Daniel Vila-Suero, Omri Abend, Jennifer Ding, Sara Hooker, Hannah Rose Kirk, Leshem Choshen
In this work, we bring together interdisciplinary experts to assess the opportunities and challenges of realizing an open ecosystem of human feedback for AI.
no code implementations • 26 May 2023 • Sadhana Kumaravel, Tahira Naseem, Ramon Fernandez Astudillo, Radu Florian, Salim Roukos
We evaluate our oracle and parser using the Abstract Meaning Representation (AMR) 3.0 corpus.
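As a concrete illustration of the evaluation data, here is a minimal sketch of reading AMR graphs from a corpus file with the open-source `penman` library; the filename is a placeholder for a local copy of the LDC AMR 3.0 release (LDC2020T02), which is not redistributable.

```python
import penman

# Placeholder path to a local copy of an AMR 3.0 split.
graphs = penman.load("amr-release-3.0-amrs-test.txt")

# Each graph carries its '# ::' metadata, including the source sentence.
print(len(graphs), graphs[0].metadata.get("snt"))
```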
1 code implementation • NAACL 2022 • Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo
These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints.
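For intuition, the sketch below shows the flavor of rule-based node-to-token alignment this entry describes (the prior-pipeline approach, not the paper's learned aligner); the sense-stripping and prefix-matching heuristics are toy assumptions.

```python
import re

def align_nodes_to_tokens(concepts, tokens):
    """Map each AMR node concept to the first matching token index, if any."""
    alignments = {}
    lowered = [t.lower() for t in tokens]
    for node, concept in concepts.items():
        # Strip the PropBank sense tag, e.g. want-01 -> want.
        lemma = re.sub(r"-\d+$", "", concept)
        for i, tok in enumerate(lowered):
            if tok.startswith(lemma[:4]):  # crude prefix heuristic
                alignments[node] = i
                break
    return alignments

print(align_nodes_to_tokens(
    {"w": "want-01", "b": "boy", "g": "go-02"},
    "The boy wants to go".split(),
))
# {'w': 2, 'b': 1, 'g': 4}
```

Real pipelines layer many more rules (named entities, dates, re-entrancies) plus pre- and post-processing on top of this core matching step, which is exactly the complexity the paper's learned alignments aim to remove.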
no code implementations • 15 Dec 2021 • Mihaela Bornea, Ramon Fernandez Astudillo, Tahira Naseem, Nandana Mihindukulasooriya, Ibrahim Abdelaziz, Pavan Kapanipathi, Radu Florian, Salim Roukos
We propose a transition-based system to transpile Abstract Meaning Representation (AMR) into SPARQL for Knowledge Base Question Answering (KBQA).

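To make the AMR-to-SPARQL idea concrete, here is a hedged sketch that rewrites an AMR relation into a SPARQL triple pattern through a predicate lexicon. The lexicon entry and the one-shot rewrite are illustrative only; the paper uses a transition-based transducer rather than this direct mapping.

```python
# Hypothetical lexicon from AMR relation paths to KB predicates.
PREDICATE_LEXICON = {":ARG0-of-direct-01": "dbo:director"}

def amr_to_sparql(entity_label, amr_relation, unknown_var="?x"):
    """Emit a SPARQL query where the amr-unknown node becomes the variable."""
    predicate = PREDICATE_LEXICON[amr_relation]
    return (
        'SELECT %s WHERE { ?m rdfs:label "%s"@en . ?m %s %s . }'
        % (unknown_var, entity_label, predicate, unknown_var)
    )

# "Who directed Titanic?"
print(amr_to_sparql("Titanic", ":ARG0-of-direct-01"))
```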
2 code implementations • NAACL 2022 • Young-suk Lee, Ramon Fernandez Astudillo, Thanh Lam Hoang, Tahira Naseem, Radu Florian, Salim Roukos
AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects, including architecture improvements and transfer learning.
Ranked #1 on AMR Parsing on LDC2020T02 (using extra training data)
1 code implementation • NeurIPS 2021 • Hoang Thanh Lam, Gabriele Picco, Yufang Hou, Young-suk Lee, Lam M. Nguyen, Dzung T. Phan, Vanessa López, Ramon Fernandez Astudillo
In many machine learning tasks, models are trained to predict structure data such as graphs.
Ranked #2 on AMR Parsing on LDC2020T02 (using extra training data)
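One simple way to combine several models' structured outputs, sketched below, is majority voting over predicted edges; this illustrates the general idea of ensembling graph predictions and is not necessarily the paper's algorithm.

```python
from collections import Counter

def ensemble_graphs(candidate_graphs, threshold=0.5):
    """Keep every edge predicted by at least `threshold` of the candidates."""
    votes = Counter(edge for g in candidate_graphs for edge in g)
    quorum = threshold * len(candidate_graphs)
    return {edge for edge, n in votes.items() if n >= quorum}

g1 = {("want-01", "ARG0", "boy"), ("want-01", "ARG1", "go-02")}
g2 = {("want-01", "ARG0", "boy"), ("want-01", "ARG1", "go-02"),
      ("go-02", "ARG0", "girl")}
g3 = {("want-01", "ARG0", "boy"), ("go-02", "ARG0", "boy")}

print(ensemble_graphs([g1, g2, g3]))
# Only the two edges with >= 2 of 3 votes survive.
```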
1 code implementation • 13 Apr 2021 • Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Ramon Fernandez Astudillo, Jan "Honza" Černocký
Self-supervised ASR-TTS models suffer in out-of-domain data conditions.
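For readers unfamiliar with the setup, the sketch below outlines the ASR-TTS self-supervision cycle this line refers to, with placeholder callables for the two models; the assumption that ASR output length matches the text length is a toy simplification, and the paper's architectures and losses are substantially richer.

```python
import torch.nn.functional as F

def asr_tts_cycle_step(asr, tts, unpaired_audio, unpaired_text_ids):
    # Audio cycle: transcribe unlabeled speech, resynthesize features from
    # the hypothesis, and penalize reconstruction error against the input.
    hyp_ids = asr(unpaired_audio).argmax(dim=-1)   # greedy transcript
    recon = tts(hyp_ids)
    audio_loss = F.mse_loss(recon, unpaired_audio)

    # Text cycle: synthesize speech for unlabeled text, then train the ASR
    # model to recover the text it was synthesized from.
    logits = asr(tts(unpaired_text_ids))           # (batch, time, vocab)
    text_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), unpaired_text_ids.reshape(-1)
    )
    return audio_loss + text_loss
```

When the unpaired audio or text comes from a different domain than the paired seed data, both cycle losses degrade, which is the failure mode the entry points at.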
no code implementations • EACL 2021 • Janaki Sheth, Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Radu Florian, Salim Roukos, Todd Ward
We develop high-performance multilingual Abstract Meaning Representation (AMR) systems by projecting English AMR annotations to other languages with weak supervision.
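The core projection step can be sketched in a few lines: copy each English AMR node's token alignment through a word-alignment table into the target sentence. The word alignments and AMR alignments below are toy inputs, not the paper's.

```python
def project_alignments(en_node_to_tok, en_to_tgt_word_alignment):
    """Return AMR node -> target-token index where a projection exists."""
    projected = {}
    for node, en_idx in en_node_to_tok.items():
        if en_idx in en_to_tgt_word_alignment:
            projected[node] = en_to_tgt_word_alignment[en_idx]
    return projected

# "The boy wants to go" -> "El niño quiere ir"
word_align = {0: 0, 1: 1, 2: 2, 4: 3}    # English idx -> Spanish idx
amr_align = {"w": 2, "b": 1, "g": 4}     # AMR node -> English token idx
print(project_alignments(amr_align, word_align))
# {'w': 2, 'b': 1, 'g': 3}
```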
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Revanth Gangi Reddy, Radu Florian, Salim Roukos
Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR.
Ranked #2 on AMR Parsing on LDC2014T12
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ramon Fernandez Astudillo, Miguel Ballesteros, Tahira Naseem, Austin Blodgett, Radu Florian
Modeling the parser state is key to good performance in transition-based parsing.
Ranked #19 on AMR Parsing on LDC2017T10
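The "parser state" in question is the stack/buffer/action-history configuration that a transition system manipulates; the minimal sketch below uses an illustrative shift/reduce action set, not the paper's AMR action set.

```python
class ParserState:
    """Toy transition-based parser state: buffer, stack, action history."""

    def __init__(self, tokens):
        self.buffer = list(tokens)   # tokens not yet consumed
        self.stack = []              # partially processed tokens/nodes
        self.actions = []            # history of applied transitions

    def shift(self):
        self.stack.append(self.buffer.pop(0))
        self.actions.append("SHIFT")

    def reduce(self):
        self.stack.pop()
        self.actions.append("REDUCE")

state = ParserState("The boy wants to go".split())
state.shift(); state.shift()
print(state.stack, state.buffer[:2], state.actions)
# ['The', 'boy'] ['wants', 'to'] ['SHIFT', 'SHIFT']
```

Modeling this state well, e.g. by giving the network explicit views of the stack and buffer, is what the entry identifies as key to performance.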
1 code implementation • ACL 2020 • Manuel Mager, Ramon Fernandez Astudillo, Tahira Naseem, Md. Arafat Sultan, Young-suk Lee, Radu Florian, Salim Roukos
Abstract Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs.
Ranked #10 on AMR-to-Text Generation on LDC2017T10
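For a concrete picture of such a graph, here is a standard textbook AMR (not drawn from the paper) decoded with the open-source `penman` library:

```python
import penman

# "The boy wants to go." -- note the re-entrant variable b (the boy is
# both the wanter and the goer).
amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
graph = penman.decode(amr)
print(graph.triples)
# [('w', ':instance', 'want-01'), ('w', ':ARG0', 'b'),
#  ('b', ':instance', 'boy'), ('w', ':ARG1', 'g'),
#  ('g', ':instance', 'go-02'), ('g', ':ARG0', 'b')]
```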