Search Results for author: Jaime Carbonell

Found 53 papers, 17 papers with code

Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?

1 code implementation NeurIPS 2021 Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime Carbonell, Kun Zhang

We provide reasoning for why, when the supports of the source and target data do not overlap, any map of $X$ that is fixed across domains may not be suitable for domain adaptation via invariant features.

Representation Learning Transfer Learning +1
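
For context, invariant-feature domain adaptation seeks a single transformation $\phi$ whose outputs match across domains while remaining predictive of the label:

$$p_{\mathrm{src}}(\phi(X)) \approx p_{\mathrm{tgt}}(\phi(X))$$

The snippet's point is that when the supports of $X$ differ across domains, a map $\phi$ that is fixed across domains may be unable to satisfy this without discarding label-relevant information.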

Efficient Meta Lifelong-Learning with Limited Memory

no code implementations EMNLP 2020 Zirui Wang, Sanket Vaibhav Mehta, Barnabás Póczos, Jaime Carbonell

State-of-the-art lifelong language learning methods store past examples in episodic memory and replay them at both training and inference time.

Multi-Task Learning Question Answering +2

Soft Gazetteers for Low-Resource Named Entity Recognition

1 code implementation ACL 2020 Shruti Rijhwani, Shuyan Zhou, Graham Neubig, Jaime Carbonell

However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages.

Cross-Lingual Entity Linking Entity Linking +4

Improving Candidate Generation for Low-resource Cross-lingual Entity Linking

1 code implementation TACL 2020 Shuyan Zhou, Shruti Rijhwani, John Wieting, Jaime Carbonell, Graham Neubig

Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts.

Cross-Lingual Entity Linking Entity Linking +1

StructSum: Summarization via Structured Representations

1 code implementation EACL 2021 Vidhisha Balachandran, Artidoro Pagnoni, Jay Yoon Lee, Dheeraj Rajagopal, Jaime Carbonell, Yulia Tsvetkov

To this end, we propose incorporating latent and explicit dependencies across sentences in the source document into end-to-end single-document summarization models.

Abstractive Text Summarization Decoder +2

Optimizing Data Usage via Differentiable Rewards

1 code implementation ICML 2020 Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, Graham Neubig

To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems.

Image Classification Machine Translation +1
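
The underlying idea can be sketched as rewarding a training example by how well its gradient aligns with the gradient of a dev-set loss. The toy code below is a hypothetical sketch of that general idea, not the paper's exact algorithm; the model and data are illustrative:

import torch

# Toy model and loss (illustrative only).
model = torch.nn.Linear(4, 1)
loss_fn = torch.nn.MSELoss()

def flat_grad(loss):
    # Flatten gradients of `loss` w.r.t. all model parameters into one vector.
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.cat([g.reshape(-1) for g in grads])

x_train, y_train = torch.randn(1, 4), torch.randn(1, 1)
x_dev, y_dev = torch.randn(8, 4), torch.randn(8, 1)

g_train = flat_grad(loss_fn(model(x_train), y_train))
g_dev = flat_grad(loss_fn(model(x_dev), y_dev))

# Reward: gradient alignment with the dev loss; a data scorer would be
# trained to upweight training examples with high reward.
reward = torch.nn.functional.cosine_similarity(g_train, g_dev, dim=0)
print(reward.item())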

Learning Rhyming Constraints using Structured Adversaries

1 code implementation IJCNLP 2019 Harsh Jhamtani, Sanket Vaibhav Mehta, Jaime Carbonell, Taylor Berg-Kirkpatrick

Existing recurrent neural language models often fail to capture higher-level structure present in text: for example, rhyming patterns present in poetry.

XLNet: Generalized Autoregressive Pretraining for Language Understanding

26 code implementations NeurIPS 2019 Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

Audio Question Answering Chinese Reading Comprehension +9
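
The paper's fix is permutation language modeling: an autoregressive objective that, in expectation over factorization orders $\mathbf{z}$, conditions each token on both left and right context:

$$\max_\theta \; \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T} \left[ \sum_{t=1}^{T} \log p_\theta\left(x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}}\right) \right]$$

where $\mathcal{Z}_T$ is the set of all permutations of the sequence indices $[1, \dots, T]$.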

The ARIEL-CMU Systems for LoReHLT18

no code implementations 24 Feb 2019 Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown

This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).

Machine Translation Translation

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

37 code implementations ACL 2019 Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov

Transformers have the potential to learn longer-term dependencies, but are limited by a fixed-length context in the setting of language modeling.

Language Modelling
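
The paper's remedy is segment-level recurrence: hidden states from the previous segment $\tau$ are cached and reused, with stopped gradients ($\mathrm{SG}$), as extended context for segment $\tau+1$ at each layer $n$:

$$\tilde{\mathbf{h}}_{\tau+1}^{n-1} = \left[\mathrm{SG}\left(\mathbf{h}_{\tau}^{n-1}\right) \circ \mathbf{h}_{\tau+1}^{n-1}\right]$$

where $\circ$ denotes concatenation along the length dimension; keys and values are computed from $\tilde{\mathbf{h}}$, paired with relative positional encodings.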

Characterizing and Avoiding Negative Transfer

no code implementations CVPR 2019 Zirui Wang, Zihang Dai, Barnabás Póczos, Jaime Carbonell

When labeled data is scarce for a specific target task, transfer learning often offers an effective solution by utilizing data from a related source task.

Transfer Learning

Zero-shot Neural Transfer for Cross-lingual Entity Linking

1 code implementation 9 Nov 2018 Shruti Rijhwani, Jiateng Xie, Graham Neubig, Jaime Carbonell

To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language.

Cross-Lingual Entity Linking Entity Linking

DeepCx: A transition-based approach for shallow semantic parsing with complex constructional triggers

no code implementations EMNLP 2018 Jesse Dunietz, Jaime Carbonell, Lori Levin

This paper introduces the surface construction labeling (SCL) task, which expands the coverage of Shallow Semantic Parsing (SSP) to include frames triggered by complex constructions.

Semantic Parsing

Neural Cross-Lingual Named Entity Recognition with Minimal Resources

1 code implementation EMNLP 2018 Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, Jaime Carbonell

To improve robustness to word order differences, we propose to use self-attention, which allows for a degree of flexibility with respect to word order.

named-entity-recognition Named Entity Recognition +2
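
For reference, here is a generic scaled dot-product self-attention sketch (the standard mechanism; the paper's exact parameterization may differ). Every position attends to every other position, so the model depends less on absolute word order:

import torch

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d). Project to queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / K.shape[-1] ** 0.5    # pairwise similarities
    weights = torch.softmax(scores, dim=-1)  # each row sums to 1
    return weights @ V                       # weighted sum over all positions

X = torch.randn(5, 8)                        # 5 tokens, dimension 8
Wq, Wk, Wv = (torch.randn(8, 8) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # torch.Size([5, 8])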

Towards Semi-Supervised Learning for Deep Semantic Role Labeling

no code implementations EMNLP 2018 Sanket Vaibhav Mehta, Jay Yoon Lee, Jaime Carbonell

This paper proposes a semi-supervised semantic role labeling method that outperforms the state of the art when SRL training corpora are limited.

Semantic Role Labeling

Towards more Reliable Transfer Learning

no code implementations 6 Jul 2018 Zirui Wang, Jaime Carbonell

Multi-source transfer learning has been proven effective when within-target labeled data is scarce.

Active Learning Transfer Learning

Lifelong Learning with Output Kernels

no code implementations ICLR 2018 Keerthiram Murugesan, Jaime Carbonell

Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance.

Active Learning from Peers

no code implementations NeurIPS 2017 Keerthiram Murugesan, Jaime Carbonell

This paper addresses the challenge of learning from peers in an online multitask setting.

Active Learning

Asymmetric Variational Autoencoders

1 code implementation 20 Nov 2017 Guoqing Zheng, Yiming Yang, Jaime Carbonell

However, freely enriching the family of variational distributions is challenging, since the ELBO requires variational likelihood evaluations of the latent variables.

Density Estimation Variational Inference
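
The constraint the snippet refers to: the evidence lower bound (ELBO) requires evaluating the variational density $q$ at sampled latents,

$$\log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\left[\log p(x, z) - \log q(z \mid x)\right],$$

so any enrichment of the variational family must keep $\log q(z \mid x)$ tractable.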

Convolutional Normalizing Flows

1 code implementation ICLR 2018 Guoqing Zheng, Yiming Yang, Jaime Carbonell

Variational inference provides one way to approximate the posterior distribution; however, its expressive power is limited, and so is the accuracy of the resulting approximation.

Variational Inference
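
Normalizing flows enrich the variational family by composing invertible maps $z_k = f_k(z_{k-1})$ and tracking the density via the change-of-variables formula:

$$\log q_K(z_K) = \log q_0(z_0) - \sum_{k=1}^{K} \log \left|\det \frac{\partial f_k}{\partial z_{k-1}}\right|$$

The paper's contribution is, roughly, choosing convolutional transformations $f_k$ whose Jacobian determinants remain cheap to compute.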

Gradient-based Inference for Networks with Output Constraints

no code implementations 26 Jul 2017 Jay Yoon Lee, Sanket Vaibhav Mehta, Michael Wick, Jean-Baptiste Tristan, Jaime Carbonell

Practitioners apply neural networks to increasingly complex problems in natural language processing, such as syntactic parsing and semantic role labeling that have rich output structures.

Constituency Parsing Semantic Role Labeling +2

Block-Normalized Gradient Method: An Empirical Study for Training Deep Neural Network

2 code implementations ICLR 2018 Adams Wei Yu, Lei Huang, Qihang Lin, Ruslan Salakhutdinov, Jaime Carbonell

In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization.
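
A minimal sketch of the idea, assuming the recipe is to normalize each parameter block's (e.g., each layer's) gradient before the update; the paper's adaptive step-size details are omitted:

import torch

def block_normalized_sgd_step(params, lr=0.01, eps=1e-8):
    # Each parameter tensor is treated as one block; only the gradient's
    # direction within the block is kept, its magnitude is discarded.
    for p in params:
        if p.grad is not None:
            p.data -= lr * p.grad / (p.grad.norm() + eps)

model = torch.nn.Linear(4, 2)
loss = torch.nn.functional.mse_loss(model(torch.randn(3, 4)), torch.randn(3, 2))
loss.backward()
block_normalized_sgd_step(model.parameters())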

Co-Clustering for Multitask Learning

no code implementations 3 Mar 2017 Keerthiram Murugesan, Jaime Carbonell, Yiming Yang

This paper presents a new multitask learning framework that learns a shared representation among the tasks, incorporating both task and feature clusters.

Clustering

Self-Paced Multitask Learning with Shared Knowledge

no code implementations 2 Mar 2017 Keerthiram Murugesan, Jaime Carbonell

This paper introduces self-paced task selection for multitask learning: instances from more closely related tasks are selected in a progression from easier to harder tasks, emulating an effective human education strategy in a machine learning setting.

Automatically Tagging Constructions of Causation and Their Slot-Fillers

no code implementations TACL 2017 Jesse Dunietz, Lori Levin, Jaime Carbonell

Semantic parsing becomes difficult in the face of the wide variety of linguistic realizations that causation can take on.

Semantic Parsing

Adaptive Smoothed Online Multi-Task Learning

no code implementations NeurIPS 2016 Keerthiram Murugesan, Hanxiao Liu, Jaime Carbonell, Yiming Yang

This paper addresses the challenge of jointly learning both the per-task model parameters and the inter-task relationships in a multi-task online learning setting.

Multi-Task Learning

Leveraging Multilingual Training for Limited Resource Event Extraction

no code implementations COLING 2016 Andrew Hsi, Yiming Yang, Jaime Carbonell, Ruochen Xu

Event extraction has become one of the most important topics in information extraction, but to date, there is very limited work on leveraging cross-lingual training to boost performance.

Dependency Parsing Event Argument Extraction +4

Multi-Task Multiple Kernel Relationship Learning

no code implementations 10 Nov 2016 Keerthiram Murugesan, Jaime Carbonell

The problem is formulated as a regularization-based approach called Multi-Task Multiple Kernel Relationship Learning (MK-MTRL), which models the task relationship matrix from the weights learned from latent feature spaces of task-specific base kernels.

Privacy-Preserving Multi-Document Summarization

no code implementations 6 Aug 2015 Luís Marujo, José Portêlo, Wang Ling, David Martins de Matos, João P. Neto, Anatole Gershman, Jaime Carbonell, Isabel Trancoso, Bhiksha Raj

State-of-the-art extractive multi-document summarization systems are usually designed without any concern about privacy issues, meaning that all documents are open to third parties.

Document Summarization Multi-Document Summarization +1

Bounds on the Minimax Rate for Estimating a Prior over a VC Class from Independent Learning Tasks

no code implementations 20 May 2015 Liu Yang, Steve Hanneke, Jaime Carbonell

We study the optimal rates of convergence for estimating a prior distribution over a VC class from a sequence of independent data sets respectively labeled by independent target functions sampled from the prior.

Transfer Learning

Efficient Structured Matrix Rank Minimization

no code implementations NeurIPS 2014 Adams Wei Yu, Wanli Ma, Yaoliang Yu, Jaime Carbonell, Suvrit Sra

We study the problem of finding structured low-rank matrices using nuclear norm regularization where the structure is encoded by a linear map.
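
The generic problem takes the form

$$\min_{x} \; f(x) + \lambda \left\| \mathcal{A}(x) \right\|_*$$

where $\mathcal{A}$ is a linear map encoding the structure (for example, mapping a vector to a Hankel matrix) and $\|\cdot\|_*$ is the nuclear norm, the standard convex surrogate for rank.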

Resources for the Detection of Conventionalized Metaphors in Four Languages

no code implementations LREC 2014 Lori Levin, Teruko Mitamura, Brian MacWhinney, Davida Fromm, Jaime Carbonell, Weston Feely, Robert Frederking, Anatole Gershman, Carlos Ramirez

The extraction rules operate on the output of a dependency parser and identify the grammatical configurations (such as a verb with a prepositional phrase complement) that are likely to contain conventional metaphors.

Ensemble Detection of Single & Multiple Events at Sentence-Level

no code implementations 24 Mar 2014 Luís Marujo, Anatole Gershman, Jaime Carbonell, João P. Neto, David Martins de Matos

Event classification at sentence level is an important Information Extraction task with applications in several NLP, IR, and personalization systems.

Classification General Classification +2

Co-Multistage of Multiple Classifiers for Imbalanced Multiclass Learning

no code implementations 23 Dec 2013 Luís Marujo, Anatole Gershman, Jaime Carbonell, David Martins de Matos, João P. Neto

In this work, we propose two stochastic architectural models (CMC and CMC-M) with two layers of classifiers applicable to datasets with one and multiple skewed classes.

Event Detection General Classification +2

Buy-in-Bulk Active Learning

no code implementations NeurIPS 2013 Liu Yang, Jaime Carbonell

We additionally study the total cost sufficient for learning, for an abstract notion of the cost of requesting the labels of a given number of examples at once.

Active Learning

Recognition of Named-Event Passages in News Articles

no code implementations COLING 2012 Luís Marujo, Wang Ling, Anatole Gershman, Jaime Carbonell, João P. Neto, David Martins de Matos

We extend the concept of Named Entities to Named Events: commonly occurring events such as battles and earthquakes.
