Search Results for author: Thomas Kober

Found 14 papers, 7 papers with code

Zero-shot Cross-Linguistic Learning of Event Semantics

no code implementations 5 Jul 2022 Malihe Alikhani, Thomas Kober, Bashar Alhafni, Yue Chen, Mert Inan, Elizabeth Nielsen, Shahab Raji, Mark Steedman, Matthew Stone

Typologically diverse languages offer systems of lexical and grammatical aspect that allow speakers to focus on facets of event structure in ways that comport with the specific communicative setting and discourse constraints they face.

Aspectuality Across Genre: A Distributional Semantics Approach

no code implementations COLING 2020 Thomas Kober, Malihe Alikhani, Matthew Stone, Mark Steedman

The interpretation of the lexical aspect of verbs in English plays a crucial role for recognizing textual entailment and learning discourse-level inferences.

Natural Language Inference

STAR: A Schema-Guided Dialog Dataset for Transfer Learning

1 code implementation 22 Oct 2020 Johannes E. M. Mosig, Shikib Mehri, Thomas Kober

We present STAR, a schema-guided task-oriented dialog dataset consisting of 127,833 utterances and knowledge base queries across 5,820 task-oriented dialogs in 13 domains that is especially designed to facilitate task and domain transfer learning in task-oriented dialog.

Transfer Learning, Zero-shot Generalization

Going Beyond T-SNE: Exposing whatlies in Text Embeddings

1 code implementation 4 Sep 2020 Vincent D. Warmerdam, Thomas Kober, Rachael Tatman

We introduce whatlies, an open source toolkit for visually inspecting word and sentence embeddings.

Dimensionality Reduction, Sentence +2
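A minimal sketch of the kind of inspection the whatlies toolkit above supports: load an embedding backend, build an embedding set for a handful of words, and plot them against two word-defined axes rather than a t-SNE projection. The snippet follows the usage pattern from the whatlies documentation, but the specific names (SpacyLanguage, plot_interactive, the en_core_web_md spaCy model) are assumptions that may differ between library versions.

```python
# Hedged sketch of inspecting word embeddings with whatlies; names follow
# the library's documented usage pattern and may vary between versions.
from whatlies.language import SpacyLanguage

# Use a spaCy model as the embedding backend (assumes "en_core_web_md" is installed).
lang = SpacyLanguage("en_core_web_md")

words = ["cat", "dog", "fish", "kitten", "man", "woman",
         "king", "queen", "doctor", "nurse"]

# Index the backend with a list of words to get an embedding set, then plot
# the words against two word-defined axes.
emb = lang[words]
emb.plot_interactive(x_axis="man", y_axis="woman")
```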

Data Augmentation for Hypernymy Detection

1 code implementation EACL 2021 Thomas Kober, Julie Weeds, Lorenzo Bertolini, David Weir

The automatic detection of hypernymy relationships represents a challenging problem in NLP.

Data Augmentation

Temporal and Aspectual Entailment

1 code implementation WS 2019 Thomas Kober, Sander Bijl de Vroe, Mark Steedman

Inferences regarding "Jane's arrival in London" from predications such as "Jane is going to London" or "Jane has gone to London" depend on tense and aspect of the predications.

Natural Language Inference

Improving Semantic Composition with Offset Inference

1 code implementation ACL 2017 Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir

Count-based distributional semantic models suffer from sparsity due to unobserved but plausible co-occurrences in any text collection.

Semantic Composition
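As a toy illustration of the sparsity problem this line of work addresses (a generic sketch in plain Python, not the offset-inference method from the paper), the snippet below builds count-based co-occurrence vectors from a three-sentence corpus and composes a phrase by adding its word vectors; most plausible co-occurrences are never observed and stay at zero.

```python
from collections import Counter, defaultdict

# Toy corpus: each entry is a tokenised sentence.
corpus = [
    "the black cat chased the mouse".split(),
    "the dog chased the black cat".split(),
    "a white cat slept on the mat".split(),
]

# Count-based distributional vectors: for each target word, count the words
# co-occurring within a +/-2 token window.
window = 2
vectors = defaultdict(Counter)
for sent in corpus:
    for i, target in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                vectors[target][sent[j]] += 1

# Naive additive composition of "black cat": sum the two count vectors.
black_cat = vectors["black"] + vectors["cat"]
print(black_cat.most_common(5))

# Sparsity: a plausible but unobserved co-occurrence has a zero count.
print(vectors["black"]["slept"])  # -> 0
```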

When a Red Herring is Not a Red Herring: Using Compositional Methods to Detect Non-Compositional Phrases

no code implementations EACL 2017 Julie Weeds, Thomas Kober, Jeremy Reffin, David Weir

Non-compositional phrases such as "red herring" and weakly compositional phrases such as "spelling bee" are an integral part of natural language (Sag, 2002).

One Representation per Word - Does it make Sense for Composition?

1 code implementation WS 2017 Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir

In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone.

Aligning Packed Dependency Trees: a theory of composition for distributional semantics

no code implementations CL 2016 David Weir, Julie Weeds, Jeremy Reffin, Thomas Kober

We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees.

Improving Sparse Word Representations with Distributional Inference for Semantic Composition

1 code implementation EMNLP 2016 Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir

Distributional models are derived from co-occurrences in a corpus, where only a small proportion of all possible plausible co-occurrences will be observed.

Semantic Composition, Word Similarity
