no code implementations • 15 Jun 2021 • Tai-Danae Bradley, John Terilla, Yiannis Vlassopoulos
In this paper, we propose a mathematical framework for passing from probability distributions on extensions of given texts, such as those learned by today's large language models, to an enriched category containing semantic information.
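As a toy illustration of the passage (my notation and corpus, not the paper's): estimate the probability that a text `x` extends to a longer text `xy`, and read these conditional probabilities as enrichment values between texts, so that composing extensions multiplies probabilities.

```python
from collections import Counter

# Toy corpus standing in for a distribution over texts (illustrative).
corpus = ["red firetruck", "red fire hydrant", "red rose", "blue rose"]
counts = Counter(corpus)
total = sum(counts.values())

def prob(text):
    """Probability that a sampled text begins with `text`."""
    return sum(c for t, c in counts.items() if t.startswith(text)) / total

def hom(x, xy):
    """Enrichment value: conditional probability that x extends to xy.
    Zero when xy is not an extension of x."""
    if not xy.startswith(x) or prob(x) == 0:
        return 0.0
    return prob(xy) / prob(x)

# Composition is compatible with multiplying probabilities along a chain:
print(hom("red", "red fire"))                                     # 2/3
print(hom("red", "red fire") * hom("red fire", "red firetruck"))  # 1/3
print(hom("red", "red firetruck"))                                # also 1/3
```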
1 code implementation • 2 Mar 2020 • Jacob Miller, Guillaume Rabusseau, John Terilla
Tensor networks, a powerful modeling framework developed for computational many-body physics, have only recently been applied within machine learning.
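A minimal sketch of the kind of sequence model involved, assuming a uniform matrix-product-state parameterization; the core matrices, boundary vectors, and finite-set Born-rule normalization below are illustrative choices, not the paper's implementation:

```python
import numpy as np

# Each symbol gets a D x D core matrix; a sequence's amplitude is the
# boundary-vector-bracketed product of its cores.
rng = np.random.default_rng(0)
D, alphabet = 4, "ab"
cores = {s: rng.standard_normal((D, D)) / np.sqrt(D) for s in alphabet}
alpha, omega = rng.standard_normal(D), rng.standard_normal(D)

def amplitude(seq):
    v = alpha
    for s in seq:
        v = v @ cores[s]          # contract one core per symbol
    return v @ omega

def prob(seq, seqs):
    """Born-rule probability, normalized over a finite set of sequences."""
    z = sum(amplitude(t) ** 2 for t in seqs)
    return amplitude(seq) ** 2 / z

seqs = ["aa", "ab", "ba", "bb"]
print({t: round(prob(t, seqs), 3) for t in seqs})  # sums to 1
```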
1 code implementation • 16 Oct 2019 • Tai-Danae Bradley, E. Miles Stoudenmire, John Terilla
Because the underlying quantum state is entangled, the reduced densities that describe its subsystems also carry information about the complementary subsystem.
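This claim can be checked directly on a small example: for an entangled pure state, the reduced density matrices obtained by tracing out either subsystem share the same nonzero spectrum, so each encodes the entanglement with its complement. A minimal numpy check with a Bell state:

```python
import numpy as np

# Bell state written as a matrix psi[i, j] = <ij|psi>.
psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2)

rho_A = psi @ psi.conj().T        # trace out subsystem B
rho_B = psi.T @ psi.conj()        # trace out subsystem A

print(np.linalg.eigvalsh(rho_A))  # [0.5, 0.5]
print(np.linalg.eigvalsh(rho_B))  # same spectrum, same entanglement entropy
```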
no code implementations • 19 Feb 2019 • James Stokes, John Terilla
Inspired by the possibility that generative models based on quantum circuits can provide a useful inductive bias for sequence modeling tasks, we propose an efficient training algorithm for a subset of classically simulable quantum circuit models.
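For orientation only, a hedged sketch of the generic Born-machine training idea such models build on; the real-amplitude parameterization and plain gradient descent below are my simplifications, not the paper's algorithm:

```python
import numpy as np

# A Born machine assigns p(x) = psi(x)^2 / sum_y psi(y)^2 and is trained
# by descending the negative log-likelihood of observed basis states.
rng = np.random.default_rng(0)
n_states = 4                            # e.g. all 2-bit strings
theta = rng.standard_normal(n_states)   # real amplitudes (illustrative)
data = np.array([0, 0, 1, 3])           # observed basis states

def probs(theta):
    return theta ** 2 / np.sum(theta ** 2)

counts = np.bincount(data, minlength=n_states) / len(data)
for step in range(200):
    # gradient of -mean log p(x): 2*theta_k/Z - 2*counts_k/theta_k
    grad = 2 * theta / np.sum(theta ** 2) - 2 * counts / theta
    theta -= 0.05 * grad

print(np.round(probs(theta), 3))   # concentrates on the observed states
```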
no code implementations • 4 Nov 2017 • Vasily Pestun, John Terilla, Yiannis Vlassopoulos
We propose a statistical model for natural language that begins by considering language as a monoid and then represents it in complex matrices with a compatible translation-invariant probability measure.
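A minimal sketch of the setup, using real nonnegative matrices so the trace yields a genuine probability (the paper works with complex matrices; the names below are mine): concatenation of words maps to matrix multiplication, and cyclicity of the trace gives translation invariance under cyclic shifts.

```python
import numpy as np

# Represent the free monoid on an alphabet via a homomorphism pi,
# so pi(uv) = pi(u) @ pi(v).
rng = np.random.default_rng(0)
alphabet = "ab"
A = {s: rng.random((3, 3)) for s in alphabet}   # nonnegative matrices

def pi(word):
    m = np.eye(3)
    for s in word:
        m = m @ A[s]        # concatenation -> matrix product
    return m

def p(word):
    """Trace-based probability, normalized over all words of this length."""
    n = len(word)
    Z = np.trace(np.linalg.matrix_power(sum(A.values()), n))
    return np.trace(pi(word)) / Z

print(np.allclose(pi("ab"), pi("a") @ pi("b")))  # True: homomorphism
print(np.isclose(p("aab"), p("aba")))            # True: tr(XY) = tr(YX)
```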