Search Results for author: Ajay Nagesh

Found 21 papers, 2 papers with code

MEEP: An Open-Source Platform for Human-Human Dialog Collection and End-to-End Agent Training

1 code implementation 9 Oct 2020 Arkady Arkhangorodsky, Amittai Axelrod, Christopher Chu, Scot Fang, Yiqi Huang, Ajay Nagesh, Xing Shi, Boliang Zhang, Kevin Knight

We create a new task-oriented dialog platform (MEEP) where agents are given considerable freedom in terms of utterances and API calls, but are constrained to work within a push-button environment.

Lightly-supervised Representation Learning with Global Interpretability

no code implementations WS 2019 Marco A. Valenzuela-Escárcega, Ajay Nagesh, Mihai Surdeanu

We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning.

Representation Learning

Learning Discriminative Relational Features for Sequence Labeling

no code implementations 7 May 2017 Naveen Nair, Ajay Nagesh, Ganesh Ramakrishnan

For learning features derived from inputs at a particular sequence position, we propose a Hierarchical Kernels-based approach (referred to as Hierarchical Kernel Learning for Structured Output Spaces - StructHKL).

Visual Supervision in Bootstrapped Information Extraction

no code implementations EMNLP 2018 Matthew Berger, Ajay Nagesh, Joshua Levine, Mihai Surdeanu, Helen Zhang

We challenge a common assumption in active learning, that a list-based interface populated by informative samples provides for efficient and effective data annotation.

Active Learning • General Classification

An Exploration of Three Lightly-supervised Representation Learning Approaches for Named Entity Classification

no code implementations COLING 2018 Ajay Nagesh, Mihai Surdeanu

Several semi-supervised representation learning methods have been proposed recently that mitigate the drawbacks of traditional bootstrapping: some reduce the amount of semantic drift introduced by iterative approaches through one-shot learning; others address the sparsity of data by learning custom, dense representations for the information modeled.

General Classification • One-Shot Learning +1

Exploration of Noise Strategies in Semi-supervised Named Entity Classification

no code implementations SEMEVAL 2019 Pooja Lakshmi Narayan, Ajay Nagesh, Mihai Surdeanu

Our work aims to address this gap by exploring different noise strategies for the semi-supervised named entity classification task, including statistical methods such as adding Gaussian noise to input embeddings, and linguistically-inspired ones such as dropping words and replacing words with their synonyms.

Classification • General Classification +1
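The noise strategies enumerated in the abstract above (Gaussian noise on input embeddings, word dropout, synonym replacement) can be illustrated with a minimal sketch. The Python below is a hypothetical rendering for illustration only; the function names and the toy synonym lexicon are assumptions, not the authors' implementation.

import random
import numpy as np

def add_gaussian_noise(embeddings, sigma=0.1):
    # Statistical noise: perturb input embeddings with zero-mean Gaussian noise.
    return embeddings + np.random.normal(0.0, sigma, size=embeddings.shape)

def drop_words(tokens, p_drop=0.1):
    # Linguistic noise: randomly drop words from the input sentence.
    kept = [t for t in tokens if random.random() >= p_drop]
    return kept or tokens  # fall back to the original sentence if everything was dropped

def replace_with_synonyms(tokens, synonyms, p_replace=0.1):
    # Linguistic noise: replace words with a randomly chosen synonym from a (hypothetical) lexicon.
    return [random.choice(synonyms[t]) if t in synonyms and random.random() < p_replace else t
            for t in tokens]

# Toy usage
sentence = ["paris", "is", "the", "capital", "of", "france"]
toy_synonyms = {"capital": ["capital city"], "france": ["the french republic"]}
print(drop_words(sentence))
print(replace_with_synonyms(sentence, toy_synonyms))
print(add_gaussian_noise(np.zeros((len(sentence), 4))))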

Semi-Supervised Teacher-Student Architecture for Relation Extraction

no code implementations WS 2019 Fan Luo, Ajay Nagesh, Rebecca Sharp, Mihai Surdeanu

Generating a large amount of training data for information extraction (IE) is either costly (if annotations are created manually), or runs the risk of introducing noisy instances (if distant supervision is used).

Binary Relation Extraction • Denoising +1

FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN

no code implementations WS 2020 Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured six challenge tracks this year: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.

Translation

Findings of the 2021 Conference on Machine Translation (WMT21)

no code implementations WMT (EMNLP) 2021 Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondřej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-Jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, Marcos Zampieri

This paper presents the results of the news translation task, the multilingual low-resource translation for Indo-European languages, the triangular translation task, and the automatic post-editing task organised as part of the Conference on Machine Translation (WMT) 2021. In the news task, participants were asked to build machine translation systems for any of 10 language pairs, to be evaluated on test sets consisting mainly of news stories.

Machine Translation • Translation
