Search Results for author: Phil Blunsom

Found 108 papers, 39 papers with code

Learning to encode spatial relations from natural language

no code implementations ICLR 2019 Tiago Ramalho, Tomas Kocisky, Frederic Besse, S. M. Ali Eslami, Gabor Melis, Fabio Viola, Phil Blunsom, Karl Moritz Hermann

Natural language processing has made significant inroads into learning the semantics of words through distributional approaches; however, representations learnt via these methods fail to capture certain kinds of information implicit in the real world.

Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions

no code implementations 4 Oct 2023 Satwik Bhattamishra, Arkil Patel, Phil Blunsom, Varun Kanade

In this work, we take a step towards answering these questions by demonstrating the following: (a) On a test-bed with a variety of Boolean function classes, we find that Transformers can nearly match the optimal learning algorithm for 'simpler' tasks, while their performance deteriorates on more 'complex' tasks.

In-Context Learning

Human Feedback is not Gold Standard

1 code implementation 28 Sep 2023 Tom Hosking, Phil Blunsom, Max Bartolo

We critically analyse the use of human feedback for both training and evaluation, to verify whether it fully captures a range of crucial error criteria.

Structural Transfer Learning in NL-to-Bash Semantic Parsers

no code implementations 31 Jul 2023 Kyle Duffy, Satwik Bhattamishra, Phil Blunsom

Large-scale pre-training has made progress in many fields of natural language processing, though little is understood about the design of pre-training datasets.

Machine Translation · Semantic Parsing +2

Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions

1 code implementation 22 Nov 2022 Satwik Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom

(ii) When trained on Boolean functions, both Transformers and LSTMs prioritize learning functions of low sensitivity, with Transformers ultimately converging to functions of lower sensitivity.
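
For readers unfamiliar with the term, the sensitivity of a Boolean function is a standard complexity measure; in my notation (not necessarily the paper's), the sensitivity at an input and its average over inputs are:

    s(f, x) = \big|\{\, i \in [n] : f(x) \neq f(x^{\oplus i}) \,\}\big|, \qquad S(f) = \mathbb{E}_{x \sim \{0,1\}^n}\big[\, s(f, x) \,\big]

where x^{\oplus i} denotes x with its i-th bit flipped; low-sensitivity functions rarely change output under single-bit flips.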

Augmenting Multi-Turn Text-to-SQL Datasets with Self-Play

1 code implementation 21 Oct 2022 Qi Liu, Zihuiwen Ye, Tao Yu, Phil Blunsom, Linfeng Song

We first design a SQL-to-text model, conditioned on a sampled goal query that represents a user's intent, which then converses with a text-to-SQL semantic parser to generate new interactions.
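
A minimal sketch of that self-play loop, assuming two hypothetical model wrappers, sql_to_text (the goal-conditioned user simulator) and text_to_sql (the parser); the names and interfaces are illustrative, not the authors' released code:

    def self_play_episode(schema, goal_sql, sql_to_text, text_to_sql, max_turns=5):
        """Generate one synthetic multi-turn interaction for data augmentation."""
        interaction = []   # (utterance, predicted_sql) pairs
        history = []       # dialogue context shared by both models
        for _ in range(max_turns):
            # The user simulator verbalises the next question toward its goal.
            utterance = sql_to_text.generate(goal=goal_sql, history=history)
            # The parser maps the utterance (plus history) back to SQL.
            predicted_sql = text_to_sql.parse(utterance, history=history, schema=schema)
            interaction.append((utterance, predicted_sql))
            history.append(utterance)
            if predicted_sql == goal_sql:  # intent fulfilled, end the dialogue
                break
        return interaction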

Domain Generalization · SQL-to-Text +1

Reassessing Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization

no code implementations 24 May 2022 Aishwarya Agrawal, Ivana Kajić, Emanuele Bugliarello, Elnaz Davoodi, Anita Gergely, Phil Blunsom, Aida Nematzadeh

Vision-and-language (V&L) models pretrained on large-scale multimodal data have demonstrated strong performance on various tasks such as image captioning and visual question answering (VQA).

Image Captioning · Out-of-Distribution Generalization +3

Revisiting the Compositional Generalization Abilities of Neural Sequence Models

1 code implementation ACL 2022 Arkil Patel, Satwik Bhattamishra, Phil Blunsom, Navin Goyal

Compositional generalization is a fundamental trait in humans, allowing us to effortlessly combine known phrases to form novel sentences.

Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale

no code implementations 1 Mar 2022 Laurent Sartran, Samuel Barrett, Adhiguna Kuncoro, Miloš Stanojević, Phil Blunsom, Chris Dyer

We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics.

Inductive Bias · Language Modelling +1

Relational Memory Augmented Language Models

no code implementations 24 Jan 2022 Qi Liu, Dani Yogatama, Phil Blunsom

We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph.

Language Modelling · Text Generation

A Systematic Investigation of Commonsense Knowledge in Large Language Models

no code implementations 31 Oct 2021 Xiang Lorraine Li, Adhiguna Kuncoro, Jordan Hoffmann, Cyprien de Masson d'Autume, Phil Blunsom, Aida Nematzadeh

Language models (LMs) trained on large amounts of data have shown impressive performance on many NLP tasks under the zero-shot and few-shot setup.

Pretraining the Noisy Channel Model for Task-Oriented Dialogue

no code implementations 18 Mar 2021 Qi Liu, Lei Yu, Laura Rimell, Phil Blunsom

Direct decoding for task-oriented dialogue is known to suffer from the explaining-away effect, manifested in models that prefer short and generic responses.

End-To-End Dialogue Modelling

Mind the Gap: Assessing Temporal Generalization in Neural Language Models

1 code implementation NeurIPS 2021 Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, Phil Blunsom

Hence, given the compilation of ever-larger language modelling datasets, combined with the growing list of language-model-based NLP applications that require up-to-date factual knowledge about the world, we argue that now is the right time to rethink the static way in which we currently train and evaluate our language models, and develop adaptive language models that can remain up-to-date with respect to our ever-changing and non-stationary world.

Language Modelling

Mutual Information Constraints for Monte-Carlo Objectives

no code implementations 1 Dec 2020 Gábor Melis, András György, Phil Blunsom

A common failure mode of density models trained as variational autoencoders is to model the data without relying on their latent variables, rendering these variables useless.
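
This failure mode, commonly called posterior collapse, can be read off the evidence lower bound that such models optimise:

    \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

A sufficiently expressive decoder can model x on its own while setting q_\phi(z \mid x) \approx p(z), driving the KL term to zero and leaving the latent variables unused.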

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

1 code implementation 23 Sep 2020 Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions.

Decision Making · Fairness

Syntactic Structure Distillation Pretraining For Bidirectional Encoders

no code implementations 27 May 2020 Adhiguna Kuncoro, Lingpeng Kong, Daniel Fried, Dani Yogatama, Laura Rimell, Chris Dyer, Phil Blunsom

Textual representation learners trained on large amounts of data have achieved notable success on downstream tasks; intriguingly, they have also performed well on challenging tests of syntactic competence.

Knowledge Distillation · Language Modelling +3

A Survey on Contextual Embeddings

no code implementations 16 Mar 2020 Qi Liu, Matt J. Kusner, Phil Blunsom

Contextual embeddings, such as ELMo and BERT, move beyond global word representations like Word2Vec and achieve ground-breaking performance on a wide range of natural language processing tasks.

Model Compression

Visual Grounding in Video for Unsupervised Word Translation

1 code implementation CVPR 2020 Gunnar A. Sigurdsson, Jean-Baptiste Alayrac, Aida Nematzadeh, Lucas Smaira, Mateusz Malinowski, João Carreira, Phil Blunsom, Andrew Zisserman

Given this shared embedding we demonstrate that (i) we can map words between the languages, particularly the 'visual' words; (ii) that the shared embedding provides a good initialization for existing unsupervised text-based word translation techniques, forming the basis for our proposed hybrid visual-text mapping algorithm, MUVE; and (iii) our approach achieves superior performance by addressing the shortcomings of text-based methods: it is more robust, handles datasets with less commonality, and is applicable to low-resource languages.

Translation · Visual Grounding +1

Learning Robust and Multilingual Speech Representations

no code implementations Findings of the Association for Computational Linguistics 2020 Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, Aaron van den Oord

Unsupervised speech representation learning has shown remarkable success at finding representations that correlate with phonetic structures and improve downstream speech recognition performance.

Representation Learning · speech-recognition +1

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations

1 code implementation ACL 2020 Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom

To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.

Decision Making · Natural Language Inference

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods

2 code implementations 4 Oct 2019 Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.

feature selection

Better Document-Level Machine Translation with Bayes' Rule

no code implementations TACL 2020 Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, Chris Dyer

We show that Bayes' rule provides an effective mechanism for creating document translation models that can be learned from only parallel sentences and monolingual documents, a compelling benefit as parallel documents are not always available.
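
Concretely, the noisy-channel factorisation behind this claim decomposes the translation posterior as:

    \hat{y} \;=\; \arg\max_{y}\, p(y \mid x) \;=\; \arg\max_{y}\, \underbrace{p(x \mid y)}_{\text{channel model}}\;\underbrace{p(y)}_{\text{language model}}

so the channel model can be trained on parallel sentences, while the document-level language model p(y) needs only monolingual documents.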

Document Level Machine Translation · Document Translation +4

Putting Machine Translation in Context with the Noisy Channel Model

no code implementations 25 Sep 2019 Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, Chris Dyer

We show that Bayes' rule provides a compelling mechanism for controlling unconditional document language models, using the long-standing challenge of effectively leveraging document context in machine translation.

Document Translation · Language Modelling +3

Unsupervised Learning of Efficient and Robust Speech Representations

no code implementations 25 Sep 2019 Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, Aaron van den Oord

We present an unsupervised method for learning speech representations based on a bidirectional contrastive predictive coding that implicitly discovers phonetic structure from large-scale corpora of unlabelled raw audio signals.
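
Contrastive predictive coding of this kind is typically trained with the InfoNCE objective (van den Oord et al., 2018), which scores the encoding of the true future frame against negatives:

    \mathcal{L}_t = -\,\mathbb{E}\left[\log \frac{\exp\!\big(z_{t+k}^{\top} W_k c_t\big)}{\sum_{\tilde{z} \in \mathcal{Z}} \exp\!\big(\tilde{z}^{\top} W_k c_t\big)}\right]

where c_t is the context representation at time t, z_{t+k} the encoding of the true future frame, and \mathcal{Z} contains z_{t+k} plus negative samples; the bidirectional variant applies the same objective in both temporal directions.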

speech-recognition · Speech Recognition

A Critical Analysis of Biased Parsers in Unsupervised Parsing

1 code implementation 20 Sep 2019 Chris Dyer, Gábor Melis, Phil Blunsom

A series of recent papers has used a parsing algorithm due to Shen et al. (2018) to recover phrase-structure trees based on proxies for "syntactic depth."

Language Modelling

Mogrifier LSTM

3 code implementations ICLR 2020 Gábor Melis, Tomáš Kočiský, Phil Blunsom

Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur.

Language Modelling

Scalable Syntax-Aware Language Models Using Knowledge Distillation

no code implementations ACL 2019 Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, Phil Blunsom

Prior work has shown that, on small amounts of training data, syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models.

Knowledge Distillation · Language Modelling +1

Learning and Evaluating General Linguistic Intelligence

no code implementations 31 Jan 2019 Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, Phil Blunsom

We define general linguistic intelligence as the ability to reuse previously acquired knowledge about a language's lexicon, syntax, semantics, and pragmatic conventions to adapt to new tasks quickly.

Natural Language Understanding · Question Answering

e-SNLI: Natural Language Inference with Natural Language Explanations

2 code implementations NeurIPS 2018 Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom

In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time.

Natural Language Inference · Sentence

Learning with Stochastic Guidance for Navigation

1 code implementation 27 Nov 2018 Linhai Xie, Yishu Miao, Sen Wang, Phil Blunsom, Zhihua Wang, Changhao Chen, Andrew Markham, Niki Trigoni

Due to the sparse rewards and high degree of environment variation, reinforcement learning approaches such as Deep Deterministic Policy Gradient (DDPG) are plagued by issues of high variance when applied in complex real-world environments.

Robotics

Learning to Discover, Ground and Use Words with Segmental Neural Language Models

no code implementations ACL 2019 Kazuya Kawakami, Chris Dyer, Phil Blunsom

We propose a segmental neural language model that combines the generalization power of neural networks with the ability to discover word-like units that are latent in unsegmented character sequences.

Language Modelling · Segmentation

Transferring Physical Motion Between Domains for Neural Inertial Tracking

no code implementations 4 Oct 2018 Changhao Chen, Yishu Miao, Chris Xiaoxuan Lu, Phil Blunsom, Andrew Markham, Niki Trigoni

Inertial information processing plays a pivotal role in ego-motion awareness for mobile agents, as inertial measurements are entirely egocentric and not environment dependent.

Domain Adaptation

Unsupervised Word Discovery with Segmental Neural Language Models

no code implementations 27 Sep 2018 Kazuya Kawakami, Chris Dyer, Phil Blunsom

We propose a segmental neural language model that combines the representational power of neural networks and the structure learning mechanism of Bayesian nonparametrics, and show that it learns to discover semantically meaningful units (e.g., morphemes and words) from unsegmented character sequences.

Language Modelling

Neural Arithmetic Logic Units

21 code implementations NeurIPS 2018 Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, Phil Blunsom

Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training.
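
The proposed unit constrains its weights toward {-1, 0, 1} and gates between an additive and a log-space multiplicative path so that learned arithmetic extrapolates beyond the training range; a compact numpy sketch of a single NALU forward pass (parameter shapes and initialisation are assumed, training is omitted):

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def nalu(x, W_hat, M_hat, G, eps=1e-8):
        # Weights are biased toward {-1, 0, 1}, encouraging exact add/sub/mul/div.
        W = np.tanh(W_hat) * sigmoid(M_hat)
        a = x @ W                                # additive path (NAC)
        m = np.exp(np.log(np.abs(x) + eps) @ W)  # multiplicative path in log space
        g = sigmoid(x @ G)                       # learned gate between the two paths
        return g * a + (1 - g) * m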

Encoding Spatial Relations from Natural Language

1 code implementation 4 Jul 2018 Tiago Ramalho, Tomáš Kočiský, Frederic Besse, S. M. Ali Eslami, Gábor Melis, Fabio Viola, Phil Blunsom, Karl Moritz Hermann

Natural language processing has made significant inroads into learning the semantics of words through distributional approaches; however, representations learnt via these methods fail to capture certain kinds of information implicit in the real world.

LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better

no code implementations ACL 2018 Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, Phil Blunsom

Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax sensitive dependencies.

Language Modelling · Machine Translation +1

Neural Syntactic Generative Models with Exact Marginalization

no code implementations NAACL 2018 Jan Buys, Phil Blunsom

We present neural syntactic generative models with exact marginalization that support both dependency parsing and language modeling.

Language Modelling · Transition-Based Dependency Parsing

Pushing the bounds of dropout

1 code implementation ICLR 2019 Gábor Melis, Charles Blundell, Tomáš Kočiský, Karl Moritz Hermann, Chris Dyer, Phil Blunsom

We show that dropout training is best understood as performing MAP estimation concurrently for a family of conditional models whose objectives are themselves lower bounded by the original dropout objective.

Language Modelling

Memory Architectures in Recurrent Neural Network Language Models

no code implementations ICLR 2018 Dani Yogatama, Yishu Miao, Gabor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, Phil Blunsom

We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models.

Understanding Grounded Language Learning Agents

no code implementations ICLR 2018 Felix Hill, Karl Moritz Hermann, Phil Blunsom, Stephen Clark

Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds.

Grounded language learning · Policy Gradient Methods

The NarrativeQA Reading Comprehension Challenge

2 code implementations TACL 2018 Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, Edward Grefenstette

Reading comprehension (RC), in contrast to information retrieval, requires integrating information and reasoning about events, entities, and their relations across a full document.

Ranked #9 on Question Answering on NarrativeQA (BLEU-1 metric)

Information Retrieval · Question Answering +2

Understanding Early Word Learning in Situated Artificial Agents

no code implementations ICLR 2018 Felix Hill, Stephen Clark, Karl Moritz Hermann, Phil Blunsom

Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and execute symbolic instructions as first-person actors in partially-observable worlds.

Grounded language learning · Policy Gradient Methods

Oxford at SemEval-2017 Task 9: Neural AMR Parsing with Pointer-Augmented Attention

no code implementations SEMEVAL 2017 Jan Buys, Phil Blunsom

We present a neural encoder-decoder AMR parser that extends an attention-based model by predicting the alignment between graph nodes and sentence tokens explicitly with a pointer mechanism.

AMR Parsing · Lemmatization +1

On the State of the Art of Evaluation in Neural Language Models

1 code implementation ICLR 2018 Gábor Melis, Chris Dyer, Phil Blunsom

Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks.

Language Modelling

Grounded Language Learning in a Simulated 3D World

1 code implementation 20 Jun 2017 Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, Phil Blunsom

Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions.

Grounded language learning

Latent Intention Dialogue Models

1 code implementation ICML 2017 Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, Steve Young

Developing a dialogue agent that is capable of making autonomous decisions and communicating by natural language is one of the long-term goals of machine learning research.

reinforcement-learning · Reinforcement Learning (RL) +1

Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems

1 code implementation 11 May 2017 Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom

Solving algebraic word problems requires executing a series of arithmetic operations (a program) to obtain a final answer.

Program induction

Robust Incremental Neural Semantic Graph Parsing

1 code implementation ACL 2017 Jan Buys, Phil Blunsom

Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing.

AMR Parsing

Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling

no code implementations ACL 2017 Kazuya Kawakami, Chris Dyer, Phil Blunsom

Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types.

Language Modelling

Generative and Discriminative Text Classification with Recurrent Neural Networks

2 code implementations 6 Mar 2017 Dani Yogatama, Chris Dyer, Wang Ling, Phil Blunsom

We empirically characterize the performance of discriminative and generative LSTM models for text classification.

Continual Learning · General Classification +2

Learning to Compose Words into Sentences with Reinforcement Learning

no code implementations 28 Nov 2016 Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, Wang Ling

We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences.

reinforcement-learning · Reinforcement Learning (RL)

The Neural Noisy Channel

no code implementations 8 Nov 2016 Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, Tomas Kocisky

We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models.

Machine Translation · Morphological Inflection +2

Reference-Aware Language Models

no code implementations EMNLP 2017 Zichao Yang, Phil Blunsom, Chris Dyer, Wang Ling

We propose a general class of language models that treat reference as an explicit stochastic latent variable.

Dialogue Generation · Recipe Generation

Online Segment to Segment Neural Transduction

no code implementations EMNLP 2016 Lei Yu, Jan Buys, Phil Blunsom

We introduce an online neural sequence to sequence model that learns to alternate between encoding and decoding segments of the input as it is read.

Morphological Inflection · Sentence +1

Language as a Latent Variable: Discrete Generative Models for Sentence Compression

no code implementations EMNLP 2016 Yishu Miao, Phil Blunsom

In this work we explore deep generative models of text in which the latent representation of a document is itself drawn from a discrete language model distribution.

Language Modelling · Sentence +1

Optimizing Performance of Recurrent Neural Networks on GPUs

1 code implementation 7 Apr 2016 Jeremy Appleyard, Tomas Kocisky, Phil Blunsom

As recurrent neural networks become larger and deeper, training times for single networks are rising into weeks or even months.

Stochastic Collapsed Variational Inference for Hidden Markov Models

no code implementations 5 Dec 2015 Pengyu Wang, Phil Blunsom

In this paper, we propose a stochastic collapsed variational inference algorithm for hidden Markov models, in a sequential data setting.

Variational Inference

Stochastic Collapsed Variational Inference for Sequential Data

no code implementations 5 Dec 2015 Pengyu Wang, Phil Blunsom

Stochastic variational inference for collapsed models has recently been successfully applied to large scale topic modelling.

Variational Inference

Neural Variational Inference for Text Processing

6 code implementations 19 Nov 2015 Yishu Miao, Lei Yu, Phil Blunsom

We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering.

Answer Selection · Topic Models +1

Reasoning about Entailment with Neural Attention

7 code implementations 22 Sep 2015 Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Phil Blunsom

We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases.

Natural Language Inference

A Bayesian Model for Generative Transition-based Dependency Parsing

no code implementations WS 2015 Jan Buys, Phil Blunsom

We propose a simple, scalable, fully generative model for transition-based dependency parsing with high accuracy.

Language Modelling · POS +2

Learning to Transduce with Unbounded Memory

4 code implementations NeurIPS 2015 Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, Phil Blunsom

Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems.

Natural Language Transduction · Translation

Bayesian Optimisation for Machine Translation

no code implementations 22 Dec 2014 Yishu Miao, Ziyu Wang, Phil Blunsom

This paper presents novel Bayesian optimisation algorithms for minimum error rate training of statistical machine translation systems.

Bayesian Optimisation · Machine Translation +1

Deep Learning for Answer Sentence Selection

2 code implementations 4 Dec 2014 Lei Yu, Karl Moritz Hermann, Phil Blunsom, Stephen Pulman

Answer sentence selection is the task of identifying sentences that contain the answer to a given question.

Feature Engineering · Open-Domain Question Answering +1

Deep Multi-Instance Transfer Learning

no code implementations 12 Nov 2014 Dimitrios Kotzias, Misha Denil, Phil Blunsom, Nando de Freitas

We present a new approach for transferring knowledge from groups to individuals that comprise them.

Transfer Learning

Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network

no code implementations 15 Jun 2014 Misha Denil, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, Nando de Freitas

Capturing the compositional process which maps the meaning of words to that of documents is a central challenge for researchers in Natural Language Processing and Information Retrieval.

Feature Engineering · Information Retrieval +2

Compositional Morphology for Word Representations and Language Modelling

1 code implementation 16 May 2014 Jan A. Botha, Phil Blunsom

This paper presents a scalable method for integrating compositional morphological representations into a vector-based probabilistic language model.
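
The compositional representation in question is additive; roughly (in my notation, not necessarily the paper's), a word's vector is its own embedding plus the embeddings of its morphemes:

    \vec{w} \;=\; \vec{r}_{w} + \sum_{m \in \mu(w)} \vec{r}_{m}

where \mu(w) is the word's morphological analysis, so a rare form like 'imperfection' shares parameters with 'perfect' and other words containing its affixes.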

Language Modelling · Machine Translation +2

Learning Bilingual Word Representations by Marginalizing Alignments

no code implementations ACL 2014 Tomáš Kočiský, Karl Moritz Hermann, Phil Blunsom

We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data.

General Classification

A Deep Architecture for Semantic Parsing

no code implementations WS 2014 Edward Grefenstette, Phil Blunsom, Nando de Freitas, Karl Moritz Hermann

Many successful approaches to semantic parsing build on top of the syntactic analysis of text, and make use of distributional representations or statistical models to match parses to ontology-specific queries.

Semantic Parsing

Modelling the Lexicon in Unsupervised Part of Speech Induction

no code implementations EACL 2014 Greg Dubbin, Phil Blunsom

Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics.

TAG · Vocal Bursts Type Prediction

"Not not bad" is not "bad": A distributional account of negation

no code implementations10 Jun 2013 Karl Moritz Hermann, Edward Grefenstette, Phil Blunsom

With the increasing empirical success of distributional models of compositional semantics, it is timely to consider the types of textual logic that such models are capable of capturing.

Negation

Bayesian Synchronous Grammar Induction

no code implementations NeurIPS 2008 Phil Blunsom, Trevor Cohn, Miles Osborne

We present a novel method for inducing synchronous context free grammars (SCFGs) from a corpus of parallel string pairs.

Machine Translation · Translation
