Search Results for author: Miguel Ballesteros

Found 65 papers, 15 papers with code

Novel Chapter Abstractive Summarization using Spinal Tree Aware Sub-Sentential Content Selection

no code implementations 9 Nov 2022 Hardy Hardy, Miguel Ballesteros, Faisal Ladhak, Muhammad Khalifa, Vittorio Castelli, Kathleen McKeown

Summarizing novel chapters is a difficult task due to the input length and the fact that sentences that appear in the desired summaries draw content from multiple places throughout the chapter.

Abstractive Text Summarization Extractive Summarization

Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis

no code implementations 12 Oct 2022 Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Anna John, Rishita Anubhai, Smaranda Muresan, Dan Roth

Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity.

Aspect-Based Sentiment Analysis (ABSA) Few-Shot Learning +1
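The four ABSA elements named in the abstract can be pictured as a simple annotation structure. This is an illustrative sketch only — the class, field names, and serialization format below are assumptions, not the paper's actual scheme:

```python
from dataclasses import dataclass

# One ABSA annotation: the four elements named in the abstract.
@dataclass
class AspectAnnotation:
    aspect_term: str       # span in the text, e.g. "battery"
    aspect_category: str   # coarse label, e.g. "LAPTOP#BATTERY"
    opinion_term: str      # span expressing the opinion, e.g. "dies fast"
    polarity: str          # "positive" | "negative" | "neutral"

review = "The battery dies fast but the screen is gorgeous."
annotations = [
    AspectAnnotation("battery", "LAPTOP#BATTERY", "dies fast", "negative"),
    AspectAnnotation("screen", "LAPTOP#DISPLAY", "gorgeous", "positive"),
]

# Instruction tuning would serialize such tuples into text targets; the
# delimiter format here is purely illustrative.
def to_target(a: AspectAnnotation) -> str:
    return f"{a.aspect_term} | {a.aspect_category} | {a.opinion_term} | {a.polarity}"

for a in annotations:
    print(to_target(a))
```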

Contrastive Training Improves Zero-Shot Classification of Semi-structured Documents

no code implementations 11 Oct 2022 Muhammad Khalifa, Yogarshi Vyas, Shuai Wang, Graham Horwood, Sunil Mallya, Miguel Ballesteros

The standard classification setting where categories are fixed during both training and testing falls short in dynamic environments where new document categories could potentially emerge.

Classification Document Classification +1
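The zero-shot setting the abstract describes — handling categories that emerge after training — is typically handled by scoring a document embedding against label embeddings rather than through a fixed softmax. A minimal sketch of that idea, with stand-in vectors (a real system would use a contrastively trained encoder):

```python
import numpy as np

# Score a document against label embeddings by cosine similarity, so unseen
# categories can be added at test time simply by embedding their names.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(doc_vec, label_vecs):
    scores = {name: cosine(doc_vec, v) for name, v in label_vecs.items()}
    return max(scores, key=scores.get)

# Stand-in embeddings; the label set can grow without retraining.
labels = {"invoice": np.array([1.0, 0.0]), "resume": np.array([0.0, 1.0])}
doc = np.array([0.9, 0.1])
print(classify(doc, labels))  # → invoice
```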

Exploring the Role of Task Transferability in Large-Scale Multi-Task Learning

no code implementations NAACL 2022 Vishakh Padmakumar, Leonard Lausen, Miguel Ballesteros, Sheng Zha, He He, George Karypis

Recent work has found that multi-task training with a large number of diverse tasks can uniformly improve downstream performance on unseen target tasks.

Multi-Task Learning Representation Learning

A Bag of Tricks for Dialogue Summarization

no code implementations EMNLP 2021 Muhammad Khalifa, Miguel Ballesteros, Kathleen McKeown

Dialogue summarization comes with its own peculiar challenges as opposed to news or scientific article summarization.

Language Modelling Multi-Task Learning

On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations

no code implementations EACL 2021 Laura Pérez-Mayos, Roberto Carlini, Miguel Ballesteros, Leo Wanner

The adaptation of pretrained language models to solve supervised tasks has become a baseline in NLP, and many recent works have focused on studying how linguistic information is encoded in the pretrained sentence representations.

Constituency Parsing POS

To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging

no code implementations EMNLP 2020 Kasturi Bhattacharjee, Miguel Ballesteros, Rishita Anubhai, Smaranda Muresan, Jie Ma, Faisal Ladhak, Yaser Al-Onaizan

Leveraging large amounts of unlabeled data using Transformer-like architectures, like BERT, has gained popularity in recent times owing to their effectiveness in learning general representations that can then be further fine-tuned for downstream tasks with much success.

Linking Entities to Unseen Knowledge Bases with Arbitrary Schemas

no code implementations NAACL 2021 Yogarshi Vyas, Miguel Ballesteros

In entity linking, mentions of named entities in raw text are disambiguated against a knowledge base (KB).

Entity Linking

Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models

no code implementations EMNLP 2020 Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, Miguel Ballesteros

Humans can learn structural properties about a word from minimal experience, and deploy their learned syntactic representations uniformly in different grammatical contexts.

Few-Shot Learning

The appearance of particle tracks in detectors

no code implementations 1 Jul 2020 Miguel Ballesteros, Tristan Benoist, Martin Fraas, Jürg Fröhlich

The phenomenon that a quantum particle propagating in a detector, such as a Wilson cloud chamber, leaves a track close to a classical trajectory is analyzed.

Mathematical Physics Quantum Physics

Transition-Based Dependency Parsing using Perceptron Learner

no code implementations 22 Jan 2020 Rahul Radhakrishnan Iyer, Miguel Ballesteros, Chris Dyer, Robert Frederking

Syntactic parsing using dependency structures has become a standard technique in natural language processing with many different parsing models, in particular data-driven models that can be trained on syntactically annotated corpora.

Transition-Based Dependency Parsing
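The transition-based parsing the abstract refers to builds a dependency tree through a sequence of shift/reduce actions. A minimal arc-standard transition system, shown as an illustrative sketch (not the paper's perceptron learner, which scores and chooses actions rather than receiving them):

```python
# Arc-standard transitions: SHIFT moves the next word onto the stack;
# LEFT-ARC / RIGHT-ARC attach the two topmost stack items and pop the dependent.
def parse(words, actions):
    stack, buffer, arcs = [], list(range(len(words))), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT-ARC":            # stack[-2] becomes dependent of stack[-1]
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))  # (head, dependent) as word indices
        elif act == "RIGHT-ARC":           # stack[-1] becomes dependent of stack[-2]
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# "She eats fish": "eats" is the head of both "She" and "fish".
words = ["She", "eats", "fish"]
acts = ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"]
print(parse(words, acts))  # [(1, 0), (1, 2)]
```

A data-driven parser learns a classifier (here, a perceptron) that predicts the next action from features of the current stack and buffer configuration.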

Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning

no code implementations ACL 2019 Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, Miguel Ballesteros

Our work involves enriching the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning and rewarding the Smatch score of sampled graphs.

AMR Parsing reinforcement-learning +1

Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State

2 code implementations NAACL 2019 Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy

We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.
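Controlled psycholinguistic evaluation of language models typically measures per-word surprisal, the negative log probability of a word given its context. A toy sketch of the measurement, with hand-set probabilities standing in for a real language model:

```python
import math

# Surprisal: -log2 P(word | context), in bits.
def surprisal(p):
    return -math.log2(p)

# A model that tracks syntactic state should assign higher probability
# (lower surprisal) to a grammatically licensed continuation than to an
# unlicensed one in the same context.
p_licensed, p_unlicensed = 0.20, 0.01
print(surprisal(p_licensed), surprisal(p_unlicensed))
print(surprisal(p_licensed) < surprisal(p_unlicensed))  # True
```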

Structural Supervision Improves Learning of Non-Local Grammatical Dependencies

no code implementations NAACL 2019 Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy

State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.

Language Modelling

Recursive Subtree Composition in LSTM-Based Dependency Parsing

1 code implementation NAACL 2019 Miryam de Lhoneux, Miguel Ballesteros, Joakim Nivre

When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information.

Dependency Parsing

Scheduled Multi-Task Learning: From Syntax to Translation

no code implementations TACL 2018 Eliyahu Kiperwasser, Miguel Ballesteros

Neural encoder-decoder models of machine translation have achieved impressive results, while learning linguistic knowledge of both the source and target languages in an implicit end-to-end manner.

Machine Translation Multi-Task Learning +1

Pieces of Eight: 8-bit Neural Machine Translation

no code implementations NAACL 2018 Jerry Quinn, Miguel Ballesteros

Neural machine translation has achieved levels of fluency and adequacy that would have been surprising a short time ago.

Machine Translation Quantization +1
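The general idea behind 8-bit inference is to map floating-point weights onto int8 with a per-tensor scale. A sketch of symmetric quantization, illustrating the idea rather than the paper's exact scheme:

```python
import numpy as np

# Symmetric 8-bit quantization: store int8 values plus one float scale.
def quantize(w):
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([[0.5, -1.27], [0.02, 1.0]], dtype=np.float32)
q, s = quantize(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.dtype, err < s)  # reconstruction error stays under one quantization step
```

The payoff is a 4x reduction in weight memory versus float32, at the cost of bounded rounding error per weight.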

Multimodal Emoji Prediction

1 code implementation NAACL 2018 Francesco Barbieri, Miguel Ballesteros, Francesco Ronzano, Horacio Saggion

Emojis are small images that are commonly included in social media text messages.

Arc-Standard Spinal Parsing with Stack-LSTMs

no code implementations WS 2017 Miguel Ballesteros, Xavier Carreras

We present a neural transition-based parser for spinal trees, a dependency representation of constituent trees.

AMR Parsing using Stack-LSTMs

no code implementations EMNLP 2017 Miguel Ballesteros, Yaser Al-Onaizan

We present a transition-based AMR parser that directly generates AMR parses from plain text.

AMR Parsing POS

Greedy Transition-Based Dependency Parsing with Stack LSTMs

no code implementations CL 2017 Miguel Ballesteros, Chris Dyer, Yoav Goldberg, Noah A. Smith

During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time.

Transition-Based Dependency Parsing

Are Emojis Predictable?

3 code implementations EACL 2017 Francesco Barbieri, Miguel Ballesteros, Horacio Saggion

Emojis are ideograms which are naturally combined with plain text to visually complement or condense the meaning of a message.

DyNet: The Dynamic Neural Network Toolkit

4 code implementations 15 Jan 2017 Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, Pengcheng Yin

In the static declaration strategy that is used in toolkits like Theano, CNTK, and TensorFlow, the user first defines a computation graph (a symbolic representation of the computation), and then examples are fed into an engine that executes this computation and computes its derivatives.

graph construction
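In the dynamic declaration strategy the abstract contrasts with Theano/CNTK/TensorFlow, a fresh computation graph is built for every example, so ordinary Python control flow (loops, recursion over a parse tree) shapes the graph. A minimal pure-Python sketch of the idea — scalar nodes with reverse-mode autodiff; the class and method names are illustrative, not DyNet's API:

```python
# Each operation appends a node to the graph as it executes ("define-by-run").
class Node:
    def __init__(self, value, parents=(), grad_fn=None):
        self.value, self.parents, self.grad_fn, self.grad = value, parents, grad_fn, 0.0
    def __add__(self, other):
        return Node(self.value + other.value, (self, other), lambda g: (g, g))
    def __mul__(self, other):
        return Node(self.value * other.value, (self, other),
                    lambda g: (g * other.value, g * self.value))

def backward(out):
    # Reverse-topological order so each node's gradient is complete
    # before it is propagated to its parents.
    order, seen = [], set()
    def topo(n):
        if id(n) in seen:
            return
        seen.add(id(n))
        for p in n.parents:
            topo(p)
        order.append(n)
    topo(out)
    out.grad = 1.0
    for n in reversed(order):
        if n.grad_fn:
            for parent, g in zip(n.parents, n.grad_fn(n.grad)):
                parent.grad += g

# The graph is rebuilt per input, so its shape can vary example to example.
x, w = Node(3.0), Node(2.0)
y = x * w + x           # y = 3*2 + 3 = 9
backward(y)
print(y.value, x.grad)  # 9.0, dy/dx = w + 1 = 3.0
```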

What Do Recurrent Neural Network Grammars Learn About Syntax?

1 code implementation EACL 2017 Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, Noah A. Smith

We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection.

Constituency Parsing Dependency Parsing +1

Static and Dynamic Feature Selection in Morphosyntactic Analyzers

no code implementations 21 Mar 2016 Bernd Bohnet, Miguel Ballesteros, Ryan McDonald, Joakim Nivre

Experiments on five languages show that feature selection can result in more compact models as well as higher accuracy under all conditions, but also that a dynamic ordering works better than a static ordering and that joint systems benefit more than standalone taggers.

Training with Exploration Improves a Greedy Stack-LSTM Parser

no code implementations 11 Mar 2016 Miguel Ballesteros, Yoav Goldberg, Chris Dyer, Noah A. Smith

We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of cross-entropy minimization.

Chinese Dependency Parsing Dependency Parsing

Neural Architectures for Named Entity Recognition

42 code implementations NAACL 2016 Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available.

Named Entity Recognition
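Neural NER architectures in this line of work pair a sentence encoder with a CRF output layer, and decoding is Viterbi search over emission and tag-transition scores. A toy sketch with hand-set scores (a trained model would produce them):

```python
import numpy as np

# Viterbi decoding: find the highest-scoring tag sequence under
# emission scores (n_words x n_tags) and transition scores (n_tags x n_tags).
def viterbi(emissions, transitions):
    n, k = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

tags = ["O", "B-PER", "I-PER"]
# Transitions discourage O -> I-PER: an entity cannot start with an inside tag.
trans = np.array([[0.0, 0.0, -10.0],
                  [0.0, 0.0,   1.0],
                  [0.0, 0.0,   1.0]])
emit = np.array([[2.0, 0.0, 0.0],    # "Yesterday"
                 [0.0, 2.0, 1.5],    # "Miguel"
                 [0.0, 0.0, 2.0]])   # "Ballesteros"
print([tags[i] for i in viterbi(emit, trans)])  # ['O', 'B-PER', 'I-PER']
```

The transition matrix is what lets the CRF enforce label consistency (e.g. no I-PER without a preceding B-PER) that a per-token classifier cannot.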

Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs

1 code implementation EMNLP 2015 Miguel Ballesteros, Chris Dyer, Noah A. Smith

We present extensions to a continuous-state dependency parsing method that makes it applicable to morphologically rich languages.

Dependency Parsing

MaltOptimizer: A System for MaltParser Optimization

no code implementations LREC 2012 Miguel Ballesteros, Joakim Nivre

Freely available statistical parsers often require careful optimization to produce state-of-the-art results, which can be a non-trivial task especially for application developers who are not interested in parsing research for its own sake.

Dependency Parsing
