no code implementations • 7 Feb 2023 • Sean Tull, Razin A. Shaikh, Sara Sabrina Zemljic, Stephen Clark
Our approach builds upon Gärdenfors' classical framework of conceptual spaces, in which cognition is modelled geometrically through the use of convex spaces, which in turn factorise in terms of simpler spaces called domains.
no code implementations • 21 Mar 2022 • Razin A. Shaikh, Sara Sabrina Zemljic, Sean Tull, Stephen Clark
In this report we present a new model of concepts, based on the framework of variational autoencoders, which is designed to have attractive properties such as factored conceptual domains, and at the same time be learnable from data.
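As a rough illustration of the kind of architecture this suggests (a minimal sketch under our own assumptions, not the model from the report), a VAE's latent space can be split into named blocks, one per conceptual domain:

```python
# Minimal sketch (PyTorch) of a VAE whose latent space is partitioned into
# conceptual domains. All names, domains, and dimensions are illustrative
# assumptions, not the architecture from the report.
import torch
import torch.nn as nn

DOMAINS = {"colour": 2, "shape": 2, "size": 1}  # hypothetical domains
LATENT = sum(DOMAINS.values())

class ConceptVAE(nn.Module):
    def __init__(self, input_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, LATENT)
        self.to_logvar = nn.Linear(128, LATENT)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, input_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.decoder(z), mu, logvar

    def domain_factors(self, z):
        # Read the latent vector back out as one factor per domain.
        out, i = {}, 0
        for name, dim in DOMAINS.items():
            out[name] = z[..., i:i + dim]
            i += dim
        return out
```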
1 code implementation • 8 Oct 2021 • Dimitri Kartsaklis, Ian Fan, Richie Yeung, Anna Pearson, Robin Lorenz, Alexis Toumi, Giovanni De Felice, Konstantinos Meichanetzidis, Stephen Clark, Bob Coecke
We present lambeq, the first high-level Python library for Quantum Natural Language Processing (QNLP).
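A minimal usage sketch (class and method names follow the public lambeq documentation; treat them as assumptions if your version differs):

```python
# Convert a sentence to a DisCoCat string diagram with lambeq.
from lambeq import BobcatParser

parser = BobcatParser()                                # CCG-based parser
diagram = parser.sentence2diagram("Alice loves Bob")   # string diagram
diagram.draw()                                         # visualise it
```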
2 code implementations • 21 Sep 2021 • Stephen Clark
This report describes the parsing problem for Combinatory Categorial Grammar (CCG), showing how a combination of Transformer-based neural models and a symbolic CCG grammar can lead to substantial gains over existing approaches.
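For readers unfamiliar with CCG, a transitive sentence is derived with forward (>) and backward (<) application (a standard textbook example, not one from the report):

```latex
% Standard CCG derivation of "Alice saw Bob".
\[
\begin{array}{ccc}
\textit{Alice} & \textit{saw} & \textit{Bob} \\
NP & (S\backslash NP)/NP & NP \\[2pt]
   & \multicolumn{2}{c}{S\backslash NP \quad (>)} \\[2pt]
\multicolumn{3}{c}{S \quad (<)}
\end{array}
\]
```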
no code implementations • 13 Jan 2021 • Stephen Clark, Alexander Lerchner, Tamara von Glehn, Olivier Tieleman, Richard Tanburn, Misha Dashevskiy, Matko Bosnjak
The mathematics of partial orders and lattices is a standard tool for modelling conceptual spaces (Mitchell (1997), Ch. 2; Ganter and Obiedkov (2016)); however, there is no formal work that we are aware of which defines a conceptual lattice on top of a representation that is induced using unsupervised deep learning (Goodfellow et al., 2016).
no code implementations • 10 Dec 2020 • Josh Abramson, Arun Ahuja, Iain Barr, Arthur Brussee, Federico Carnevale, Mary Cassin, Rachita Chhaparia, Stephen Clark, Bogdan Damoc, Andrew Dudzik, Petko Georgiev, Aurelia Guy, Tim Harley, Felix Hill, Alden Hung, Zachary Kenton, Jessica Landon, Timothy Lillicrap, Kory Mathewson, Soňa Mokrá, Alistair Muldal, Adam Santoro, Nikolay Savinov, Vikrant Varma, Greg Wayne, Duncan Williams, Nathaniel Wong, Chen Yan, Rui Zhu
These evaluations convincingly demonstrate that interactive training and auxiliary losses improve agent behaviour beyond what is achieved by supervised learning of actions alone.
1 code implementation • CoNLL 2020 • Gijs Wijnholds, Mehrnoosh Sadrzadeh, Stephen Clark
This paper is about learning word representations using grammatical type information.
no code implementations • 17 Sep 2020 • Saad Aloteibi, Stephen Clark
In this paper, we formulate session search as a personalization task under the framework of learning to rank.
1 code implementation • ICLR 2021 • Felix Hill, Olivier Tieleman, Tamara von Glehn, Nathaniel Wong, Hamza Merzic, Stephen Clark
Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning.
no code implementations • ICML 2020 • Abhishek Das, Federico Carnevale, Hamza Merzic, Laura Rimell, Rosalia Schneider, Josh Abramson, Alden Hung, Arun Ahuja, Stephen Clark, Gregory Wayne, Felix Hill
Recent work has shown how predictive modeling can endow agents with rich knowledge of their surroundings, improving their ability to act in complex environments.
1 code implementation • ACL 2020 • Daniel Fried, Jean-Baptiste Alayrac, Phil Blunsom, Chris Dyer, Stephen Clark, Aida Nematzadeh
We apply a generative segmental model of task structure, guided by narration, to action segmentation in video.
no code implementations • ICLR 2020 • Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L. McClelland, Adam Santoro
The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI.
no code implementations • IJCNLP 2019 • Amandla Mabona, Laura Rimell, Stephen Clark, Andreas Vlachos
We show that, for our parser's traversal order, previous beam search algorithms for RNNGs have a left-branching bias which is ill-suited for RST parsing.
no code implementations • ACL 2019 • Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, Phil Blunsom
Prior work has shown that, on small amounts of training data, syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models.
no code implementations • ACL 2018 • Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, Phil Blunsom
Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies.
no code implementations • WS 2018 • Jean Maillard, Stephen Clark
Latent tree learning models represent sentences by composing their words according to an induced parse tree, all based on a downstream task.
no code implementations • NAACL 2019 • Kris Cao, Stephen Clark
Generating from Abstract Meaning Representation (AMR) is an underspecified problem, as many syntactic decisions are not constrained by the semantic graph.
1 code implementation • ICLR 2018 • Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z. Leibo, Karl Tuyls, Stephen Clark
We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.
1 code implementation • ICLR 2018 • Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, Stephen Clark
The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks.
no code implementations • ICLR 2018 • Felix Hill, Karl Moritz Hermann, Phil Blunsom, Stephen Clark
Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds.
no code implementations • ICLR 2018 • Felix Hill, Stephen Clark, Karl Moritz Hermann, Phil Blunsom
Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and execute symbolic instructions as first-person actors in partially-observable worlds.
no code implementations • EMNLP 2017 • Luana Bulat, Stephen Clark, Ekaterina Shutova
Research in computational semantics is increasingly guided by our understanding of human semantic processing.
no code implementations • ICLR 2018 • Jean Maillard, Stephen Clark, Dani Yogatama
The model can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees.
no code implementations • EACL 2017 • Luana Bulat, Stephen Clark, Ekaterina Shutova
One of the key problems in computational metaphor modelling is finding the optimal level of abstraction of semantic representations, such that these are able to capture and generalise metaphorical mechanisms.
1 code implementation • EACL 2017 • Kris Cao, Stephen Clark
We present a dialogue generation model that directly captures the variability in possible responses to a given input, which reduces the 'boring output' issue of deterministic dialogue models.
no code implementations • TACL 2017 • Andrew J. Anderson, Douwe Kiela, Stephen Clark, Massimo Poesio
Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically.
no code implementations • 24 Oct 2016 • Douwe Kiela, Luana Bulat, Anita L. Vero, Stephen Clark
Meaning has been called the "holy grail" of a variety of scientific disciplines, ranging from linguistics to philosophy, psychology and the neurosciences.
no code implementations • 28 Nov 2014 • Tamara Polajnar, Laura Rimell, Stephen Clark
The functional approach to compositional distributional semantics considers transitive verbs to be linear maps that transform the distributional vectors representing nouns into a vector representing a sentence.
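Concretely, with nouns as vectors and a transitive verb as a 3rd-order tensor, the sentence vector is obtained by contracting the verb with its subject and object. A schematic numpy sketch (dimensions and values are illustrative assumptions):

```python
# A transitive verb as a 3rd-order tensor V (subject x sentence x object);
# the sentence vector is its contraction with the two noun vectors.
import numpy as np

n, s = 4, 3                       # noun and sentence space dimensions
subj = np.random.rand(n)          # e.g. "dogs"
obj = np.random.rand(n)           # e.g. "cats"
verb = np.random.rand(n, s, n)    # e.g. "chase" as a linear map

# sentence_i = sum_{j,k} subj_j * verb_{j,i,k} * obj_k
sentence = np.einsum("j,jik,k->i", subj, verb, obj)
print(sentence.shape)             # (3,) -- a vector in the sentence space
```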
no code implementations • 18 Jun 2014 • Mehrnoosh Sadrzadeh, Stephen Clark, Bob Coecke
Within the categorical compositional distributional model of meaning, we provide semantic interpretations for the subject and object roles of the possessive relative pronoun 'whose'.
no code implementations • LREC 2014 • Tamara Polajnar, Laura Rimell, Stephen Clark
Distributional semantic models have been effective at representing linguistic semantics at the word level, and more recently research has moved on to the construction of distributional representations for larger segments of text.
no code implementations • 21 Apr 2014 • Mehrnoosh Sadrzadeh, Stephen Clark, Bob Coecke
This paper develops a compositional vector-based semantics of subject and object relative pronouns within a categorical framework.
no code implementations • TACL 2014 • Andreas Vlachos, Stephen Clark
Semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation.
no code implementations • 20 Dec 2013 • Tamara Polajnar, Luana Fagarasan, Stephen Clark
This paper investigates the learning of 3rd-order tensors representing the semantics of transitive verbs.
no code implementations • 2 May 2013 • Stephen Clark, Bob Coecke, Edward Grefenstette, Stephen Pulman, Mehrnoosh Sadrzadeh
We discuss an algorithm which produces the meaning of a sentence given meanings of its words, and its resemblance to quantum teleportation.
2 code implementations • 23 Mar 2010 • Bob Coecke, Mehrnoosh Sadrzadeh, Stephen Clark
We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek.
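In this framework, grammaticality corresponds to a type reduction in the pregroup algebra; for a transitive sentence, with noun type n, sentence type s, and verb type n^r s n^l, the reduction is (a standard example from the pregroup literature):

```latex
\[
n \cdot (n^r\, s\, n^l) \cdot n \;\le\;
(n\, n^r)\, s\, (n^l\, n) \;\le\; 1 \cdot s \cdot 1 \;=\; s
\]
```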