Search Results for author: Young-suk Lee

Found 19 papers, 10 papers with code

Language Independent Dependency to Constituent Tree Conversion

no code implementations COLING 2016 Young-suk Lee, Zhiguo Wang

We present a dependency to constituent tree conversion technique that aims to improve constituent parsing accuracies by leveraging dependency treebanks available in a wide variety of languages.
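As a rough illustration of the general idea behind such a conversion (not the paper's algorithm), the sketch below projects each dependency head together with its dependents into a flat constituent labeled after the head's part of speech; the `DepNode` structure and the bracketing strategy are illustrative assumptions.

```python
# Illustrative sketch only: turn a dependency tree into a nested
# constituent bracketing by letting every head project a phrase over
# itself and its dependents. The paper's language-independent conversion
# is far more refined; this shows only the core intuition.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DepNode:
    index: int          # surface position of the word in the sentence
    word: str
    pos: str            # part-of-speech tag of the word
    children: List["DepNode"] = field(default_factory=list)


def to_constituent(node: DepNode) -> str:
    """Project a constituent over the head and all of its dependents."""
    if not node.children:
        return f"({node.pos} {node.word})"
    # Order head and dependents by surface position, then wrap them in a
    # phrase labeled after the head's POS tag (e.g. V -> VP).
    parts = sorted([node] + node.children, key=lambda n: n.index)
    inner = " ".join(
        f"({n.pos} {n.word})" if n is node else to_constituent(n)
        for n in parts
    )
    return f"({node.pos}P {inner})"


# "John saw Mary" with 'saw' as the root head
tree = DepNode(2, "saw", "V", [DepNode(1, "John", "N"), DepNode(3, "Mary", "N")])
print(to_constituent(tree))   # (VP (N John) (V saw) (N Mary))
```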

Nonparametric Deconvolution Models

1 code implementation 17 Mar 2020 Allison J. B. Chaney, Archit Verma, Young-suk Lee, Barbara E. Engelhardt

This uniquely allows NDMs both to deconvolve each observation into its constituent factors, and also to describe how the factor distributions specific to each observation vary across observations and deviate from the corresponding global factors.

Variational Inference
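The toy sketch below illustrates the deconvolution idea behind Nonparametric Deconvolution Models using plain non-negative least squares rather than the paper's Bayesian, nonparametric inference; the factor matrices and noise model are invented for the example.

```python
# Toy sketch of the deconvolution idea: each observation is explained as a
# mixture of shared global factors, and the residual hints at how the
# observation's local factors deviate from those global factors.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_features, n_factors = 6, 3
global_factors = rng.random((n_factors, n_features))   # shared across observations

# One observation built from hidden proportions plus observation-specific noise
true_props = np.array([0.6, 0.3, 0.1])
observation = true_props @ global_factors + 0.05 * rng.standard_normal(n_features)

# Deconvolve: recover this observation's factor proportions
weights, _ = nnls(global_factors.T, observation)
proportions = weights / weights.sum()

# Observation-specific deviation from the purely global reconstruction
deviation = observation - proportions @ global_factors

print("recovered proportions:", proportions.round(2))
print("deviation from global factors:", deviation.round(3))
```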

Pushing the Limits of AMR Parsing with Self-Learning

1 code implementation Findings of the Association for Computational Linguistics 2020 Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Revanth Gangi Reddy, Radu Florian, Salim Roukos

Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR.

AMR Parsing Machine Translation +4

Bootstrapping Multilingual AMR with Contextual Word Alignments

no code implementations EACL 2021 Janaki Sheth, Young-suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Radu Florian, Salim Roukos, Todd Ward

We develop high performance multilingual Abstract Meaning Representation (AMR) systems by projecting English AMR annotations to other languages with weak supervision.

Multilingual Word Embeddings Word Alignment +1
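The sketch below shows the annotation-projection step from the abstract above in its simplest form: composing English AMR node-to-token alignments with English-to-target word alignments. The paper derives the word alignments from contextual embeddings; here they are simply given as input, and all names are illustrative.

```python
# Simplified sketch of annotation projection: compose English AMR
# node-to-token alignments with English-to-target word alignments so
# each AMR node gets anchored to target-language tokens.
from typing import Dict, List, Tuple


def project_amr_alignments(
    node_to_en: Dict[str, List[int]],      # AMR node -> English token indices
    en_to_tgt: List[Tuple[int, int]],      # (English idx, target idx) word alignments
) -> Dict[str, List[int]]:
    en2tgt: Dict[int, List[int]] = {}
    for en_idx, tgt_idx in en_to_tgt:
        en2tgt.setdefault(en_idx, []).append(tgt_idx)

    projected: Dict[str, List[int]] = {}
    for node, en_indices in node_to_en.items():
        tgt = sorted({t for e in en_indices for t in en2tgt.get(e, [])})
        if tgt:                            # drop nodes with no target anchor
            projected[node] = tgt
    return projected


# "The boy wants to go" -> "El niño quiere ir"
node_to_en = {"want-01": [2], "boy": [1], "go-02": [4]}
en_to_tgt = [(0, 0), (1, 1), (2, 2), (4, 3)]
print(project_amr_alignments(node_to_en, en_to_tgt))
# {'boy': [1], 'go-02': [3], 'want-01': [2]}
```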

Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

1 code implementation EMNLP 2021 Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Young-suk Lee, Radu Florian, Salim Roukos

We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new parsing state of the art for AMR 2.0, without the need for graph re-categorization.

Ranked #9 on AMR Parsing on LDC2017T10 (using extra training data)

AMR Parsing Sentence

Maximum Bayes Smatch Ensemble Distillation for AMR Parsing

2 code implementations NAACL 2022 Young-suk Lee, Ramon Fernandez Astudillo, Thanh Lam Hoang, Tahira Naseem, Radu Florian, Salim Roukos

AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning.

Ranked #1 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing Data Augmentation +3
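The sketch below shows a minimal Smatch-based Bayes-risk selection over an ensemble of candidate parses, the selection idea behind this kind of ensemble distillation; `smatch_f1` is a placeholder for a real Smatch scorer, and the distillation step (training a single model on the selected graphs) is not shown.

```python
# Minimal sketch of Smatch-based Bayes-risk selection over an ensemble of
# candidate AMR parses: keep the candidate whose average Smatch agreement
# with the other candidates is highest.
from typing import Callable, List


def mbr_select(candidates: List[str],
               smatch_f1: Callable[[str, str], float]) -> str:
    if len(candidates) == 1:
        return candidates[0]

    def expected_agreement(cand: str) -> float:
        others = [c for c in candidates if c is not cand]
        return sum(smatch_f1(cand, other) for other in others) / len(others)

    return max(candidates, key=expected_agreement)


# Usage: candidates are AMR graphs (e.g. in PENMAN notation) produced by
# different ensemble members for the same sentence.
# best = mbr_select(parses_for_sentence, smatch_f1=my_smatch_scorer)
```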

DocAMR: Multi-Sentence AMR Representation and Evaluation

1 code implementation NAACL 2022 Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O'Gorman, Young-suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian, Salim Roukos, Nathan Schneider

Despite extensive research on parsing of English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation.

coreference-resolution Sentence

Learning Cross-Lingual IR from an English Retriever

1 code implementation NAACL 2022 Yulong Li, Martin Franz, Md Arafat Sultan, Bhavani Iyer, Young-suk Lee, Avirup Sil

We present DR. DECR (Dense Retrieval with Distillation-Enhanced Cross-Lingual Representation), a new cross-lingual information retrieval (CLIR) system trained using multi-stage knowledge distillation (KD).

Cross-Lingual Information Retrieval Knowledge Distillation +3
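As a rough sketch of one distillation stage for cross-lingual retrieval (a generic KD objective, not necessarily DR. DECR's exact multi-stage recipe), a cross-lingual student's in-batch query-passage score distribution can be pushed toward that of a frozen English teacher:

```python
# Sketch of a knowledge-distillation objective for cross-lingual retrieval:
# the student encodes non-English queries and its in-batch score
# distribution is matched (via KL divergence) to an English teacher
# scoring parallel English queries against the same passages.
import torch
import torch.nn.functional as F


def kd_retrieval_loss(student_q: torch.Tensor,   # [batch, dim] non-English queries
                      teacher_q: torch.Tensor,   # [batch, dim] parallel English queries
                      passages: torch.Tensor,    # [batch, dim] passage embeddings
                      temperature: float = 1.0) -> torch.Tensor:
    student_scores = student_q @ passages.T / temperature   # [batch, batch]
    teacher_scores = teacher_q @ passages.T / temperature

    return F.kl_div(
        F.log_softmax(student_scores, dim=-1),
        F.softmax(teacher_scores.detach(), dim=-1),          # teacher is frozen
        reduction="batchmean",
    )


# Example with random embeddings standing in for encoder outputs
q_s, q_t, p = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
print(kd_retrieval_loss(q_s, q_t, p))
```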

A Benchmark for Generalizable and Interpretable Temporal Question Answering over Knowledge Bases

no code implementations 15 Jan 2022 Sumit Neelam, Udit Sharma, Hima Karanam, Shajith Ikbal, Pavan Kapanipathi, Ibrahim Abdelaziz, Nandana Mihindukulasooriya, Young-suk Lee, Santosh Srivastava, Cezar Pendus, Saswati Dana, Dinesh Garg, Achille Fokoue, G P Shrivatsa Bhargav, Dinesh Khandelwal, Srinivas Ravishankar, Sairam Gurajada, Maria Chang, Rosario Uceda-Sosa, Salim Roukos, Alexander Gray, Guilherme Lima, Ryan Riegel, Francois Luus, L Venkata Subramaniam

Specifically, our benchmark is a temporal question answering dataset with the following advantages: (a) it is based on Wikidata, which is the most frequently curated, openly available knowledge base, (b) it includes intermediate SPARQL queries to facilitate the evaluation of semantic parsing based approaches for KBQA, and (c) it generalizes to multiple knowledge bases: Freebase and Wikidata.

Knowledge Base Question Answering Semantic Parsing
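For illustration only (this example is not taken from the benchmark), the kind of intermediate SPARQL query that can be paired with a temporal question such as "When was Barack Obama born?" looks like the following, executed here against the public Wikidata endpoint:

```python
# Illustrative example of an intermediate SPARQL query for a temporal
# question over Wikidata, run through the public query endpoint.
import requests

QUERY = """
SELECT ?dob WHERE {
  wd:Q76 wdt:P569 ?dob .   # Q76 = Barack Obama, P569 = date of birth
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "temporal-kbqa-example/0.1"},
    timeout=60,
)
for row in resp.json()["results"]["bindings"]:
    print(row["dob"]["value"])
```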

AMR Parsing with Instruction Fine-tuned Pre-trained Language Models

no code implementations 24 Apr 2023 Young-suk Lee, Ramón Fernandez Astudillo, Radu Florian, Tahira Naseem, Salim Roukos

Language models instruction fine-tuned on a collection of instruction-annotated datasets (FLAN) have proven highly effective at improving model performance and generalization to unseen tasks.

AMR Parsing Semantic Role Labeling

Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs

1 code implementation 21 Oct 2023 Young-suk Lee, Md Arafat Sultan, Yousef El-Kurdi, Tahira Naseem, Asim Munawar, Radu Florian, Salim Roukos, Ramón Fernandez Astudillo

Using in-context learning (ICL) for data generation, techniques such as Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023) can train strong conversational agents with only a small amount of human supervision.

In-Context Learning
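The sketch below gives a bare-bones view of Self-Instruct-style generation with a mixture of models: seed tasks become a few-shot prompt, each model proposes new instruction/output pairs, and near-duplicates are filtered. The `generate` call is a placeholder for an actual LM API, and the real Ensemble-Instruct pipeline includes additional categorization and ranking steps not shown here.

```python
# Bare-bones sketch of Self-Instruct-style data generation with a
# heterogeneous mixture of LMs: seed tasks become a few-shot prompt, each
# model proposes new instruction/output pairs, and exact duplicates are
# filtered out. `generate(model, prompt)` is a placeholder for an LM call.
import random
from typing import Callable, Dict, List


def build_prompt(seed_tasks: List[Dict[str, str]], k: int = 3) -> str:
    shots = random.sample(seed_tasks, k)
    demo = "\n\n".join(
        f"Instruction: {t['instruction']}\nOutput: {t['output']}" for t in shots
    )
    return demo + "\n\nInstruction:"        # the LM continues with a new task


def ensemble_instruct(seed_tasks: List[Dict[str, str]],
                      models: List[str],
                      generate: Callable[[str, str], str],
                      rounds: int = 10) -> List[str]:
    generated, seen = [], {t["instruction"] for t in seed_tasks}
    for _ in range(rounds):
        for model in models:                # heterogeneous mixture of LMs
            text = generate(model, build_prompt(seed_tasks)).strip()
            instruction = text.split("\n")[0]
            if instruction and instruction not in seen:   # crude dedup filter
                seen.add(instruction)
                generated.append(text)
    return generated
```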
