Search Results for author: Ramón Fernandez Astudillo

Found 4 papers, 3 with code

Structural Guidance for Transformer Language Models

1 code implementation • ACL 2021 • Peng Qian, Tahira Naseem, Roger Levy, Ramón Fernandez Astudillo

Here we study whether structural guidance leads to more human-like systematic linguistic generalization in Transformer language models without resorting to pre-training on very large amounts of data.

Language Modelling

AMR Parsing with Action-Pointer Transformer

no code implementations • NAACL 2021 • Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Radu Florian

In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.
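The target-side pointer can be pictured with a small sketch. The following is a hypothetical illustration only (the dot-product scoring, names, and shapes are assumptions, not the authors' implementation): the current decoder state is scored against the stored decoder states of earlier node-creating actions, and the edge attaches to the best-scoring one.

```python
import numpy as np

def point_to_node(dec_state, action_states, node_positions):
    """Hypothetical sketch of a target-side action pointer: score the
    current decoder state against the stored decoder states of earlier
    node-creating actions and point at the best-scoring one. The
    dot-product scoring and all names here are assumptions."""
    candidates = action_states[node_positions]  # states of past node-creating actions
    scores = candidates @ dec_state             # unnormalized pointer scores
    best = int(np.argmax(scores))
    return node_positions[best]                 # index of the pointed-to action

# Toy usage: 5 past actions, of which actions 1 and 3 created nodes
rng = np.random.default_rng(0)
action_states = rng.standard_normal((5, 8))
node_positions = np.array([1, 3])
print(point_to_node(rng.standard_normal(8), action_states, node_positions))
```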

Ranked #4 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing

From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification

8 code implementations • 5 Feb 2016 • André F. T. Martins, Ramón Fernandez Astudillo

We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities.
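As a concrete illustration, here is a minimal NumPy sketch of the sparsemax projection the paper defines (sort the scores, find the support size, threshold so the positive part sums to 1); the variable names are my own:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the
    score vector z onto the probability simplex. Unlike softmax, it can
    assign exactly zero probability to low-scoring entries."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]           # scores in descending order
    cumsum = np.cumsum(z_sorted)          # running sums of the sorted scores
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cumsum   # which top-k entries stay positive
    k_z = k[support][-1]                  # size of the support
    tau = (cumsum[k_z - 1] - 1) / k_z     # threshold so the output sums to 1
    return np.maximum(z - tau, 0.0)

print(sparsemax([1.0, 2.0, 0.1]))  # -> [0. 1. 0.]      (fully sparse)
print(sparsemax([1.0, 1.1, 0.1]))  # -> [0.45 0.55 0.]  (mass on the top two)
```

As the examples show, scores below the learned threshold get exactly zero probability, which is what distinguishes sparsemax from softmax.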

Classification • General Classification • +2
