Ensembling Graph Predictions for AMR Parsing

In many machine learning tasks, models are trained to predict structured data such as graphs. For example, in natural language processing it is common to parse text into dependency trees or abstract meaning representation (AMR) graphs. Ensemble methods, in turn, combine predictions from multiple models to produce one that is more robust and accurate than any individual prediction. Although many ensembling techniques have been proposed for classification and regression problems, ensemble graph prediction has not been studied thoroughly. In this work, we formalize this problem as mining the largest graph that is most supported by a collection of graph predictions. As the problem is NP-hard, we propose an efficient heuristic algorithm to approximate the optimal solution. To validate the approach, we carried out experiments on AMR parsing. The results demonstrate that the proposed approach can combine the strengths of state-of-the-art AMR parsers to create predictions that are more accurate than those of any individual model on five standard benchmark datasets.
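The core idea of "mining the graph most supported by a collection of predictions" can be illustrated with a deliberately simplified sketch: represent each parser's output as a set of (source, relation, target) triples and keep every triple that enough input graphs agree on. This is only an illustration of support-based voting, not the paper's actual algorithm, which additionally aligns node labels across graphs before counting support; the function name and toy triples below are invented for the example.

```python
from collections import Counter

def ensemble_graphs(predictions, min_support=2):
    """Majority-vote ensembling over edge triples.

    `predictions` is a list of graphs, each a set of
    (source, relation, target) triples. A triple is kept when at
    least `min_support` of the input graphs contain it.
    Simplification: real AMR graphs need node alignment across
    parsers before triples can be compared directly.
    """
    votes = Counter()
    for graph in predictions:
        votes.update(graph)
    return {triple for triple, count in votes.items() if count >= min_support}

# Three toy AMR-like predictions for "the boy wants to go";
# the parsers disagree on the agent of go-02.
g1 = {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02"),
      ("go-02", ":ARG0", "boy")}
g2 = {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02")}
g3 = {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02"),
      ("go-02", ":ARG0", "girl")}

# Only triples supported by at least 2 of the 3 parsers survive.
ensemble = ensemble_graphs([g1, g2, g3], min_support=2)
```

Here the two triples all parsers agree on are kept, while the conflicting `go-02` agent edges (one vote each) are dropped, which is the sense in which the ensemble is "more supported" than any single prediction.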

PDF Abstract (NeurIPS 2021)

Results from the Paper


Ranked #2 on AMR Parsing on LDC2020T02 (using extra training data)
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| AMR Parsing | Bio | Graphene Smatch | Smatch | 62.8 | #3 |
| AMR Parsing | LDC2017T10 | Graphene Support (IBM) | Smatch | 85.85 | #5 |
| AMR Parsing | LDC2017T10 | Graphene Smatch (IBM) | Smatch | 86.26 | #2 |
| AMR Parsing | LDC2020T02 | Graphene Support (IBM) | Smatch | 84.41 | #4 |
| AMR Parsing | LDC2020T02 | Graphene Smatch (IBM) | Smatch | 84.87 | #2 |
| AMR Parsing | New3 | Graphene Smatch | Smatch | 76.32 | #2 |
| AMR Parsing | The Little Prince | Graphene Smatch | Smatch | 79.52 | #2 |

Methods


No methods listed for this paper.