Search Results for author: Ioannis Konstas

Found 48 papers, 25 papers with code

Learning to Read Maps: Understanding Natural Language Instructions from Unseen Maps

1 code implementation ACL (SpLU-RoboNLP) 2021 Miltiadis Marios Katsakioris, Ioannis Konstas, Pierre Yves Mignotte, Helen Hastie

Robust situated dialog requires the ability to process instructions based on spatial information, which may or may not be available.

Visually Grounded Language Learning: a review of language games, datasets, tasks, and models

no code implementations5 Dec 2023 Alessandro Suglia, Ioannis Konstas, Oliver Lemon

Our analysis of the literature provides evidence that future work should focus on interactive games where communication in Natural Language is important to resolve ambiguities about object referents and action plans, and that physical embodiment is essential to understand the semantics of situations and events.

Grounded language learning, Language Modelling +1

Multitask Multimodal Prompted Training for Interactive Embodied Task Completion

no code implementations7 Nov 2023 Georgios Pantazopoulos, Malvina Nikandrou, Amit Parekh, Bhathiya Hemanthage, Arash Eshghi, Ioannis Konstas, Verena Rieser, Oliver Lemon, Alessandro Suglia

Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models, including 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation.

Text Generation

Neuron to Graph: Interpreting Language Model Neurons at Scale

1 code implementation31 May 2023 Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Shay Cohen, Fazl Barez

Conventional methods require examination of examples with strong neuron activation and manual identification of patterns to decipher the concepts a neuron responds to.

Language Modelling

Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark

1 code implementation27 May 2023 Jason Hoelscher-Obermaier, Julia Persson, Esben Kran, Ioannis Konstas, Fazl Barez

We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity.

Model Editing, Specificity

The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python

1 code implementation24 May 2023 Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, Shay B. Cohen

Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming.

Code Generation
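The identifier-swap evaluation named in the title can be illustrated with a toy transformation (a sketch under my own assumptions, not the paper's actual benchmark code): swap two builtin names in a Python snippet so the code stays syntactically valid while its meaning is inverted, then ask whether a model notices.

```python
# Toy illustration of an identifier swap: exchange two builtins so the
# code remains syntactically valid but semantically broken. The naive
# string replacement is illustrative only; it ignores substring clashes.
def swap_identifiers(source: str, a: str, b: str) -> str:
    """Swap all occurrences of identifiers `a` and `b` in `source`."""
    placeholder = "__swap_tmp__"
    return (source
            .replace(a, placeholder)
            .replace(b, a)
            .replace(placeholder, b))

original = "print(len(items))"
swapped = swap_identifiers(original, "print", "len")
# swapped is "len(print(items))": valid syntax, inverted semantics,
# which is exactly the kind of input the paper probes LLMs with.
```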

iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or Modelling Perspectives?

no code implementations10 May 2023 Nikolas Vitsakis, Amit Parekh, Tanvi Dinkar, Gavin Abercrombie, Ioannis Konstas, Verena Rieser

There are two competing approaches for modelling annotator disagreement: distributional soft-labelling approaches (which aim to capture the level of disagreement) or modelling perspectives of individual annotators or groups thereof.

Going for GOAL: A Resource for Grounded Football Commentaries

1 code implementation8 Nov 2022 Alessandro Suglia, José Lopes, Emanuele Bastianelli, Andrea Vanzo, Shubham Agarwal, Malvina Nikandrou, Lu Yu, Ioannis Konstas, Verena Rieser

As the course of a game is unpredictable, so are commentaries, which makes them a unique resource to investigate dynamic language grounding.

Moment Retrieval, Retrieval

Mind the Labels: Describing Relations in Knowledge Graphs With Pretrained Models

1 code implementation13 Oct 2022 Zdeněk Kasner, Ioannis Konstas, Ondřej Dušek

Pretrained language models (PLMs) for data-to-text (D2T) generation can use human-readable data labels such as column headings, keys, or relation names to generalize to out-of-domain examples.

Knowledge Graphs, Relation

MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization

1 code implementation Findings (EMNLP) 2021 Xinnuo Xu, Ondřej Dušek, Shashi Narayan, Verena Rieser, Ioannis Konstas

We show via data analysis that it's not only the models which are to blame: more than 27% of facts mentioned in the gold summaries of MiRANews are better grounded on assisting documents than in the main source articles.

Document Summarization, Multi-Document Summarization +2

AGGGEN: Ordering and Aggregating while Generating

1 code implementation ACL 2021 Xinnuo Xu, Ondřej Dušek, Verena Rieser, Ioannis Konstas

We present AGGGEN (pronounced 'again'), a data-to-text model which re-introduces two explicit sentence planning stages into neural data-to-text systems: input ordering and input aggregation.

Sentence
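The two planning stages AGGGEN re-introduces can be sketched symbolically; the facts, ordering rule, and grouping rule below are hypothetical stand-ins (AGGGEN learns ordering and aggregation jointly with generation rather than applying fixed rules):

```python
# Toy sketch of the two sentence-planning stages: (1) order input facts,
# (2) aggregate adjacent facts into per-sentence plans. The predicate
# names and both rules are illustrative assumptions, not the model's.
facts = [
    ("founded", "1851"),
    ("name", "Alimentum"),
    ("area", "city centre"),
]

# Stage 1: input ordering -- arrange facts in a plan-like order.
ORDER = {"name": 0, "founded": 1, "area": 2}
ordered = sorted(facts, key=lambda f: ORDER[f[0]])

# Stage 2: input aggregation -- group adjacent facts into sentence plans
# (here a deliberately naive rule: at most two facts per sentence).
plans = [ordered[i:i + 2] for i in range(0, len(ordered), 2)]
```

Each inner list in `plans` would then be verbalised as one sentence by the generator.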

Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games

no code implementations COLING 2020 Alessandro Suglia, Antonio Vergari, Ioannis Konstas, Yonatan Bisk, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon

However, as shown by Suglia et al. (2020), existing models fail to learn truly multi-modal representations, relying instead on gold category labels for objects in the scene both at training and inference time.

Object

Findings of the Fourth Workshop on Neural Generation and Translation

no code implementations WS 2020 Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew Finch, Graham Neubig, Xian Li, Alexandra Birch

We describe the finding of the Fourth Workshop on Neural Generation and Translation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2020).

Machine Translation, NMT +1

Fact-based Content Weighting for Evaluating Abstractive Summarisation

no code implementations ACL 2020 Xinnuo Xu, Ondřej Dušek, Jingyi Li, Verena Rieser, Ioannis Konstas

Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are insufficient.

CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning

no code implementations ACL 2020 Alessandro Suglia, Ioannis Konstas, Andrea Vanzo, Emanuele Bastianelli, Desmond Elliott, Stella Frank, Oliver Lemon

To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation.

Attribute, Grounded language learning

A Scientific Information Extraction Dataset for Nature Inspired Engineering

1 code implementation LREC 2020 Ruben Kruiper, Julian F. V. Vincent, Jessica Chen-Burger, Marc P. Y. Desmulliez, Ioannis Konstas

Nature has inspired various ground-breaking technological developments in applications ranging from robotics to aerospace engineering and the manufacturing of medical devices.

Relation Extraction

In Layman's Terms: Semi-Open Relation Extraction from Scientific Texts

1 code implementation ACL 2020 Ruben Kruiper, Julian F. V. Vincent, Jessica Chen-Burger, Marc P. Y. Desmulliez, Ioannis Konstas

First, we present the Focused Open Biological Information Extraction (FOBIE) dataset and use FOBIE to train a state-of-the-art narrow scientific IE system to extract trade-off relations and arguments that are central to biology texts.

Relation, Relation Extraction

History for Visual Dialog: Do we really need it?

2 code implementations ACL 2020 Shubham Agarwal, Trung Bui, Joon-Young Lee, Ioannis Konstas, Verena Rieser

Visual Dialog involves "understanding" the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response.

Visual Dialog

Findings of the Third Workshop on Neural Generation and Translation

no code implementations WS 2019 Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh

This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the annual conference of the Empirical Methods in Natural Language Processing (EMNLP 2019).

Machine Translation, NMT +1

Automatic Quality Estimation for Natural Language Generation: Ranting (Jointly Rating and Ranking)

1 code implementation WS 2019 Ondřej Dušek, Karin Sevegnani, Ioannis Konstas, Verena Rieser

We present a recurrent neural network based system for automatic quality estimation of natural language generation (NLG) outputs, which jointly learns to assign numerical ratings to individual outputs and to provide pairwise rankings of two different outputs.

Learning-To-Rank, Text Generation

Corpus of Multimodal Interaction for Collaborative Planning

no code implementations WS 2019 Miltiadis Marios Katsakioris, Helen Hastie, Ioannis Konstas, Atanas Laskov

As autonomous systems become more commonplace, we need a way to easily and naturally communicate to them our goals and collaboratively come up with a plan on how to achieve these goals.

SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression

1 code implementation7 Apr 2019 Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, Alexandros Potamianos

The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.

Language Modelling, Sentence +1

A Knowledge-Grounded Multimodal Search-Based Conversational Agent

1 code implementation WS 2018 Shubham Agarwal, Ondřej Dušek, Ioannis Konstas, Verena Rieser

Multimodal search-based dialogue is a challenging new task: It extends visually grounded question answering systems into multi-turn conversations with access to an external database.

Question Answering, Response Generation

Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity

1 code implementation EMNLP 2018 Xinnuo Xu, Ond{\v{r}}ej Du{\v{s}}ek, Ioannis Konstas, Verena Rieser

We present three enhancements to existing encoder-decoder models for open-domain conversational agents, aimed at effectively modeling coherence and promoting output diversity: (1) we introduce a measure of coherence as the GloVe embedding similarity between the dialogue context and the generated response; (2) we filter our training corpora based on the measure of coherence to obtain topically coherent and lexically diverse context-response pairs; (3) we then train a response generator using a conditional variational autoencoder model that incorporates the measure of coherence as a latent variable and uses a context gate to guarantee topical consistency with the context and promote lexical diversity.

Dialogue Generation
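The coherence measure described above, embedding similarity between dialogue context and response, can be sketched as cosine similarity between averaged word vectors. The tiny embedding table is made up for the example; the paper uses pretrained GloVe embeddings.

```python
import math

# Sketch of a coherence score: cosine similarity between the averaged
# word embeddings of a dialogue context and a candidate response.
# These 2-d vectors are invented for illustration, not GloVe vectors.
EMB = {
    "cats": [0.9, 0.1],
    "purr": [0.8, 0.2],
    "dogs": [0.1, 0.9],
    "bark": [0.2, 0.8],
}

def avg_embedding(tokens):
    dims = len(next(iter(EMB.values())))
    sums = [0.0] * dims
    for t in tokens:
        for i, v in enumerate(EMB[t]):
            sums[i] += v
    return [s / len(tokens) for s in sums]

def coherence(context, response):
    a, b = avg_embedding(context), avg_embedding(response)
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(x * x for x in b)))
    return dot / norm

# A response about purring scores as more coherent with a context about
# cats than a response about barking does.
on_topic = coherence(["cats"], ["purr"])
off_topic = coherence(["cats"], ["bark"])
```

In the paper this kind of score is used both to filter training pairs and as a latent variable during generation; the toy version only shows the similarity computation itself.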

Mapping Language to Code in Programmatic Context

1 code implementation EMNLP 2018 Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Luke Zettlemoyer

To study this phenomenon, we introduce the task of generating class member functions given English documentation and the programmatic context provided by the rest of the class.

Learning a Neural Semantic Parser from User Feedback

no code implementations ACL 2017 Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, Luke Zettlemoyer

We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention.

SQL Parsing

Story Cloze Task: UW NLP System

no code implementations WS 2017 Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, Noah A. Smith

This paper describes University of Washington NLP's submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task: the Story Cloze Task.

Language Modelling
