Search Results for author: John Terilla

Found 5 papers, 2 papers with code

An enriched category theory of language: from syntax to semantics

no code implementations • 15 Jun 2021 • Tai-Danae Bradley, John Terilla, Yiannis Vlassopoulos

In this paper, we propose a mathematical framework for passing from probability distributions on extensions of given texts, such as the ones learned by today's large language models, to an enriched category containing semantic information.
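
For this entry, here is a minimal Python sketch, assuming a made-up toy corpus and a simple extension-counting estimate (the `corpus` and `hom` names are mine, not the paper's construction), of the basic idea: conditional probabilities of text extensions can be read as hom-values of a category enriched over the unit interval, where composition of hom-values is multiplication.

```python
# Minimal sketch (my own illustration): hom-values from extension probabilities.
corpus = ["red ruby", "red ruby slippers", "red rose", "blue ruby"]  # toy corpus (made up)

def hom(x: str, y: str) -> float:
    """Hom-value from text x to text y: the conditional probability that a
    corpus text extending x also extends y (0 unless y extends x)."""
    if not y.startswith(x):
        return 0.0
    ext_x = [t for t in corpus if t.startswith(x)]
    ext_y = [t for t in ext_x if t.startswith(y)]
    return len(ext_y) / len(ext_x) if ext_x else 0.0

# In a [0,1]-enriched category, composition is multiplication; for nested
# extensions x <= y <= z the composite satisfies hom(x, y) * hom(y, z) <= hom(x, z).
x, y, z = "red", "red ruby", "red ruby slippers"
assert hom(x, y) * hom(y, z) <= hom(x, z) + 1e-12
print(hom(x, y), hom(y, z), hom(x, z))
```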

Tensor Networks for Probabilistic Sequence Modeling

1 code implementation • 2 Mar 2020 • Jacob Miller, Guillaume Rabusseau, John Terilla

Tensor networks are a powerful modeling framework developed for computational many-body physics that has only recently been applied within machine learning (a toy scoring sketch follows this entry).

Language Modelling • Tensor Networks
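
For "Tensor Networks for Probabilistic Sequence Modeling" above, a rough Python sketch follows, with randomly initialized tensors and invented names (`A`, `alpha`, `omega`); it only illustrates the general Born-machine idea of scoring a sequence by contracting a uniform matrix product state, not the paper's model or its algorithms.

```python
# Sketch: Born-rule sequence scores from a uniform matrix product state,
# amplitude(x1..xn) = alpha^T A[x1] ... A[xn] omega, score = |amplitude|^2.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, bond_dim = 4, 3
A = rng.normal(size=(vocab_size, bond_dim, bond_dim))   # one matrix per symbol
alpha = rng.normal(size=bond_dim)                        # left boundary vector
omega = rng.normal(size=bond_dim)                        # right boundary vector

def amplitude(seq):
    """Contract the MPS along the sequence: a chain of matrix products."""
    v = alpha
    for s in seq:
        v = v @ A[s]
    return v @ omega

def unnormalized_prob(seq):
    return amplitude(seq) ** 2   # Born rule; normalization omitted here

print(unnormalized_prob([0, 2, 1, 3]))
```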

Modeling Sequences with Quantum States: A Look Under the Hood

1 code implementation • 16 Oct 2019 • Tai-Danae Bradley, E. Miles Stoudenmire, John Terilla

Because the state is entangled, the reduced densities that describe its subsystems also carry information about the complementary subsystem.
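
A small numerical illustration of the quoted sentence, using a standard Bell-type state rather than anything from the paper: for an entangled pure state on two subsystems, the reduced density matrix obtained by tracing out one subsystem is mixed, and that mixedness records the correlation with the complementary subsystem.

```python
# Toy example: reduced density matrix of one half of an entangled pure state.
import numpy as np

# Bipartite state |psi> on (2-dim) x (2-dim): a Bell-like entangled state.
psi = np.zeros((2, 2))
psi[0, 0] = psi[1, 1] = 1 / np.sqrt(2)

# Partial trace over subsystem B: rho_A[a, c] = sum_b psi[a, b] * conj(psi[c, b]).
rho = np.einsum("ab,cb->ac", psi, psi.conj())
print(rho)                        # maximally mixed marginal
print(np.linalg.eigvalsh(rho))    # eigenvalues [0.5, 0.5]: mixedness reflects
                                  # the entanglement with subsystem B
```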

Probabilistic Modeling with Matrix Product States

no code implementations • 19 Feb 2019 • James Stokes, John Terilla

Inspired by the possibility that generative models based on quantum circuits can provide a useful inductive bias for sequence modeling tasks, we propose an efficient training algorithm for a subset of classically simulable quantum circuit models.

Inductive Bias
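
For "Probabilistic Modeling with Matrix Product States" above, the following toy sketch is not the authors' training algorithm; it only shows, with invented data and a brute-force normalizer over a tiny sample space, what gradient-based maximum-likelihood training of a Born-machine sequence model looks like in principle.

```python
# Toy sketch: minimize negative log-likelihood of |amplitude|^2 / Z
# for an MPS Born machine, over an exhaustively enumerable sample space.
import itertools
import numpy as np

rng = np.random.default_rng(1)
V, L, D = 3, 4, 2                      # vocab size, sequence length, bond dim
A = rng.normal(scale=0.5, size=(V, D, D))
alpha, omega = np.ones(D), np.ones(D)
data = [(0, 1, 2, 1), (0, 1, 1, 2)]    # made-up training sequences

def amp(A_, seq):
    v = alpha
    for s in seq:
        v = v @ A_[s]
    return v @ omega

def nll(x):
    A_ = x.reshape(V, D, D)
    Z = sum(amp(A_, s) ** 2 for s in itertools.product(range(V), repeat=L))
    return -sum(np.log(amp(A_, s) ** 2 / Z) for s in data)

# Crude finite-difference gradient descent, just to show the objective trains.
x = A.flatten()
for step in range(50):
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = 1e-5
        grad[i] = (nll(x + e) - nll(x - e)) / 2e-5
    x -= 0.05 * grad
print("final NLL:", nll(x))
```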

Language as a matrix product state

no code implementations • 4 Nov 2017 • Vasily Pestun, John Terilla, Yiannis Vlassopoulos

We propose a statistical model for natural language that begins by considering language as a monoid and then represents it in complex matrices with a compatible translation-invariant probability measure (a toy illustration follows this entry).

Translation
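
For "Language as a matrix product state" above, a toy illustration (my own analogy, not the paper's construction): assigning each word a matrix turns concatenation into matrix multiplication, i.e. a monoid homomorphism, and taking a trace yields a weight that is invariant under cyclic shifts, loosely mirroring the translation-invariant measure mentioned in the abstract.

```python
# Toy analogy: represent words by matrices so concatenation is multiplication,
# and use the cyclic invariance of the trace as a stand-in for translation
# invariance of the weight assigned to a text.
import numpy as np

rng = np.random.default_rng(2)
D = 3
M = {w: rng.normal(scale=0.5, size=(D, D)) for w in ["the", "cat", "sat"]}

def rep(words):
    """Monoid homomorphism: a sequence of words maps to a matrix product."""
    out = np.eye(D)
    for w in words:
        out = out @ M[w]
    return out

def weight(words):
    return np.trace(rep(words))

s = ["the", "cat", "sat"]
print(weight(s), weight(s[1:] + s[:1]))   # equal up to floating-point error
```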
