Search Results for author: Aleksandre Maskharashvili

Found 8 papers, 5 papers with code

Neural Methodius Revisited: Do Discourse Relations Help with Pre-Trained Models Too?

1 code implementation • INLG (ACL) 2021 • Aleksandre Maskharashvili, Symon Stevens-Guille, Xintong Li, Michael White

Recent developments in natural language generation (NLG) have bolstered arguments in favor of re-introducing explicit coding of discourse relations in the input to neural models.

Relation • Text Generation

Self-Training for Compositional Neural NLG in Task-Oriented Dialogue

2 code implementations • INLG (ACL) 2021 • Xintong Li, Symon Stevens-Guille, Aleksandre Maskharashvili, Michael White

Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs.

Text Generation

Generating Discourse Connectives with Pre-trained Language Models: Conditioning on Discourse Relations Helps Reconstruct the PDTB

1 code implementation • SIGDIAL (ACL) 2022 • Symon Stevens-Guille, Aleksandre Maskharashvili, Xintong Li, Michael White

Our results suggest that including discourse relation information in the input of the model significantly improves the consistency with which it produces a correctly realized discourse relation in the output.

Relation • Text Generation

Two experiments for embedding Wordnet hierarchy into vector spaces

no code implementations • GWC 2019 • Jean-Philippe Bernardy, Aleksandre Maskharashvili

The first one leverages an existing mapping of words to feature vectors (fastText), and attempts to classify such vectors as within or outside of each class.


Leveraging Large Pretrained Models for WebNLG 2020

1 code implementation • ACL (WebNLG, INLG) 2020 • Xintong Li, Aleksandre Maskharashvili, Symon Jory Stevens-Guille, Michael White

In this paper, we report experiments on fine-tuning large pretrained models to realize Resource Description Framework (RDF) triples as natural language.

Neural NLG for Methodius: From RST Meaning Representations to Texts

1 code implementation • INLG (ACL) 2020 • Symon Stevens-Guille, Aleksandre Maskharashvili, Amy Isard, Xintong Li, Michael White

While classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, more recent neural approaches have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly.

Sentence

Mapping the hyponymy relation of WordNet onto vector spaces

no code implementations • ICLR 2019 • Jean-Philippe Bernardy, Aleksandre Maskharashvili

The first one leverages an existing mapping of words to feature vectors (fastText), and attempts to classify such vectors as within or outside of each class.

Relation
