Search Results for author: Christoph Teichmann

Found 11 papers, 0 papers with code

Generic Oracles for Structured Prediction

no code implementations ACL (IWPT) 2021 Christoph Teichmann, Antoine Venant

When learned without exploration, local models for structured prediction tasks are subject to exposure bias and cannot be trained without detailed guidance.

Imitation Learning, Structured Prediction

Uncertainty over Uncertainty: Investigating the Assumptions, Annotations, and Text Measurements of Economic Policy Uncertainty

no code implementations EMNLP (NLP+CSS) 2020 Katherine A. Keith, Christoph Teichmann, Brendan O'Connor, Edgar Meij

For this application we find that (1) some annotator disagreements about economic policy uncertainty can be attributed to ambiguity in language, and (2) switching measurements from keyword matching to supervised machine learning classifiers results in low correlation, a concerning implication for the validity of the index.

Grammatical Sequence Prediction for Real-Time Neural Semantic Parsing

no code implementations WS 2019 Chunyang Xiao, Christoph Teichmann, Konstantine Arkoudas

While sequence-to-sequence (seq2seq) models achieve state-of-the-art performance in many natural language processing tasks, they can be too slow for real-time applications.

Semantic Parsing

Discovering User Groups for Natural Language Generation

no code implementations WS 2018 Nikos Engonopoulos, Christoph Teichmann, Alexander Koller

We present a model which predicts how individual users of a dialog system understand and produce utterances based on user groups.

Referring Expression, Text Generation

Coarse-To-Fine Parsing for Expressive Grammar Formalisms

no code implementations WS 2017 Christoph Teichmann, Alexander Koller, Jonas Groschwitz

We generalize coarse-to-fine parsing to grammar formalisms that are more expressive than PCFGs and/or describe languages of trees or graphs.

Generating Contrastive Referring Expressions

no code implementations ACL 2017 Martín Villalba, Christoph Teichmann, Alexander Koller

The referring expressions (REs) produced by a natural language generation (NLG) system can be misunderstood by the hearer, even when they are semantically correct.

Text Generation
