Search Results for author: Matt Thomson

Found 18 papers, 4 papers with code

Herd: Using multiple, smaller LLMs to match the performances of proprietary, large LLMs via an intelligent composer

no code implementations • 30 Oct 2023 • Surya Narayanan Hari, Matt Thomson

Currently, over a thousand multi-purpose LLMs exist that can perform real-world tasks such as Q&A, text summarization, and content generation.

Text Summarization

What's the Magic Word? A Control Theory of LLM Prompting

1 code implementation • 2 Oct 2023 • Aman Bhargava, Cameron Witkowski, Manav Shah, Matt Thomson

We investigate the reachable set of output token sequences $R_y(\mathbf x_0)$: the set of sequences $\mathbf y$ for which there exists a control input sequence $\mathbf u$ that steers the LLM to output $\mathbf y$ from the initial state sequence $\mathbf x_0$.

Causal Language Modeling, Language Modelling, +1
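
A compact restatement of the abstract's definition; the concatenation operator $\oplus$ is notation introduced here for illustration, not taken from the paper:

```latex
% y is reachable from initial state x_0 if some control prompt u,
% prepended to x_0 (concatenation denoted \oplus), steers the model to emit y.
\[
  R_y(\mathbf{x}_0) \;=\;
  \bigl\{\, \mathbf{y} \;\bigm|\; \exists\, \mathbf{u} :\;
  \mathrm{LLM}(\mathbf{u} \oplus \mathbf{x}_0) = \mathbf{y} \,\bigr\}
\]
```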

Tryage: Real-time, intelligent Routing of User Prompts to Large Language Models

no code implementations • 22 Aug 2023 • Surya Narayanan Hari, Matt Thomson

Here, we propose a context-aware routing system, Tryage, that leverages a language model router for optimal selection of expert models from a model library based on analysis of individual input prompts.

Language Modelling, Model Selection
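
A minimal sketch of the routing pattern the abstract describes, assuming a scoring function that predicts per-model performance from the prompt; the names and signatures here are hypothetical, not the authors' API:

```python
# Context-aware prompt routing in the spirit of Tryage (not the authors' code).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Expert:
    name: str
    run: Callable[[str], str]            # wraps a call to an actual model

def route(prompt: str,
          experts: Dict[str, Expert],
          score: Callable[[str, str], float]) -> str:
    """Dispatch the prompt to the expert the scorer ranks highest.

    `score(prompt, expert_name)` stands in for the learned router, which
    in the paper is itself a language model that predicts per-expert
    performance from the prompt alone.
    """
    best = max(experts.values(), key=lambda e: score(prompt, e.name))
    return best.run(prompt)
```

A real deployment would train `score` on routing data; it is left abstract here.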

Therapeutic algebra of immunomodulatory drug responses at single-cell resolution

no code implementations • 23 Aug 2022 • Jialong Jiang, Sisi Chen, Tiffany Tsou, Christopher S. McGinnis, Tahmineh Khazaei, Qin Zhu, Jong H. Park, Paul Rivaud, Inna-Marie Strazhnik, Eric D. Chow, David A. Sivak, Zev J. Gartner, Matt Thomson

We develop a unified mathematical model that quantitatively describes the transcriptome scale response of myeloid and lymphoid cell types to individual drugs and drug combinations through a single inferred regulatory network.

Engineering flexible machine learning systems by traversing functionally-invariant paths

1 code implementation • 30 Apr 2022 • Guruprasad Raghavan, Bahey Tharwat, Surya Narayanan Hari, Dhruvil Satani, Matt Thomson

We conceptualize the weight space of a neural network as a curved Riemannian manifold equipped with a metric tensor whose spectrum defines low-rank subspaces in weight space that accommodate network adaptation without loss of prior knowledge.

Adversarial Robustness, Continual Learning, +4
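
A toy numpy illustration of the geometric picture, assuming a linear "network" so the metric can be built explicitly; this sketches the idea of reading flat directions off the metric spectrum, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))            # weights of a tiny linear "network"
X = rng.normal(size=(5, 2))            # rank-deficient input batch (2 < 5)

def outputs(w_flat):
    """Network function: flattened weights -> flattened outputs on batch X."""
    return (w_flat.reshape(3, 5) @ X).ravel()

# Finite-difference Jacobian J of the outputs with respect to the weights.
w0, eps = W.ravel(), 1e-6
J = np.stack([(outputs(w0 + eps * e) - outputs(w0)) / eps
              for e in np.eye(w0.size)], axis=1)

g = J.T @ J                            # metric tensor on weight space
evals, evecs = np.linalg.eigh(g)
flat = evecs[:, evals < 1e-8 * evals.max()]  # near-zero-curvature directions
print(flat.shape)  # (15, 9): moving along these leaves the outputs unchanged
```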

Linear Transformations in Autoencoder Latent Space Predict Time Translations in Active Matter System

no code implementations • NeurIPS Workshop AI4Science 2021 • Enrique Amaya, Shahriar Shadkhoo, Dominik Schildknecht, Matt Thomson

ML approaches are relevant to active matter, a field spanning many scales that studies the dynamics of far-from-equilibrium systems, where predicting macroscopic behavior from the microscopic interactions of active particles remains a significant challenge.
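
The mechanism in the title, sketched under the assumption that a trained encoder has already produced latent codes for consecutive frames; a least-squares fit of a single linear operator then plays the role of a time translation (an illustration, not the authors' code):

```python
import numpy as np

def fit_time_translation(Z):
    """Z: (T, d) latent codes for T consecutive frames from a trained encoder.
    Returns the (d, d) operator A minimizing ||Z[1:] - Z[:-1] @ A.T||_F,
    i.e. the best linear map with z_{t+1} ~ A z_t."""
    A_T, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
    return A_T.T

# k-step prediction by repeated application: z_{t+k} ~ A^k z_t,
# e.g. np.linalg.matrix_power(A, k) @ z_t.
```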

Traversing Geodesics to Grow Biological Structures

no code implementations • NeurIPS Workshop AI4Science 2021 • Pranav Bhamidipati, Guruprasad Raghavan, Matt Thomson

Biological tissues reliably grow into precise, functional structures from simple starting states during development.

Total Energy

Cell density controls signal propagation waves in a multicellular synthetic gene circuit

no code implementations • 16 Jul 2021 • Marco Santorelli, Pranav Bhamidipati, Andriu Kavanagh, Victoria Fitts, Trusha Sondkar, Matt Thomson, Leonardo Morsut

Here, we exploit physical-chemical coupling observed in a synthetic patterning circuit in order to control the size and spatial distribution of patterned synthetic cell sheets.

Signaling receptor localization maximizes cellular information acquisition in spatially-structured, natural environments

no code implementations • 2 Jul 2021 • Zitong Jerry Wang, Matt Thomson

By viewing cell surface receptors as a sensor network, we develop an information theoretic framework for computing the optimal spatial organization of a sensing system for a given spatial signaling environment.
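
A toy 1-D version of the sensor-network framing, assuming a uniform prior over source locations and independent binary receptor readouts; the greedy mutual-information placement below illustrates the framing, not the paper's optimization method:

```python
import itertools
import numpy as np

M = 16                                   # possible ligand source positions
sites = np.arange(M)                     # candidate receptor positions
prior = np.full(M, 1.0 / M)              # uniform prior over sources

def p_on(receptor, source):
    """Binding probability decays with receptor-source distance (toy model)."""
    return 0.9 * np.exp(-np.abs(receptor - source) / 2.0) + 0.05

def mutual_info(chosen):
    """Exact I(source; readouts), feasible for small receptor sets."""
    I = 0.0
    for bits in itertools.product([0, 1], repeat=len(chosen)):
        lik = np.ones(M)                 # P(readout = bits | source)
        for r, b in zip(chosen, bits):
            q = p_on(r, sites)
            lik = lik * (q if b else 1.0 - q)
        p_y = float(prior @ lik)
        I += float(np.sum(prior * lik * np.log2(lik / p_y)))
    return I

# Greedily place 3 receptors, each time adding the site with the largest MI gain.
chosen = []
for _ in range(3):
    best = max((s for s in sites if s not in chosen),
               key=lambda s: mutual_info(chosen + [int(s)]))
    chosen.append(int(best))
print(chosen, round(mutual_info(chosen), 3))
```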

Active feature selection discovers minimal gene sets for classifying cell types and disease states with single-cell mRNA-seq data

no code implementations • 15 Jun 2021 • Xiaoqiao Chen, Sisi Chen, Matt Thomson

Here, we introduce an active learning method (ActiveSVM) that identifies minimal but highly informative gene sets, enabling the identification of cell types, physiological states, and genetic perturbations in single-cell data from a small number of genes.

Active Learning, Feature Selection
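
A simplified reconstruction of the active loop from the abstract: the acquisition score below (expression variance across misclassified cells) is a crude stand-in for ActiveSVM's margin-based criterion, and the function names are ours, not the released code's:

```python
import numpy as np
from sklearn.svm import LinearSVC

def active_gene_selection(X, y, n_genes=20, seed=0):
    """X: (cells, genes) expression matrix; y: cell-type labels.
    Grows a small gene set by retraining an SVM each round and adding
    one gene chosen using the currently misclassified cells."""
    rng = np.random.default_rng(seed)
    selected = list(rng.choice(X.shape[1], size=2, replace=False))
    while len(selected) < n_genes:
        clf = LinearSVC(dual=False).fit(X[:, selected], y)
        wrong = clf.predict(X[:, selected]) != y
        if not wrong.any():
            break                        # current genes already separate all cells
        # Crude acquisition score: variance across misclassified cells
        # (stand-in for the paper's margin-improvement criterion).
        candidates = [g for g in range(X.shape[1]) if g not in selected]
        scores = X[np.ix_(wrong, candidates)].var(axis=0)
        selected.append(candidates[int(np.argmax(scores))])
    return selected
```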

Solving hybrid machine learning tasks by traversing weight space geodesics

no code implementations • 5 Jun 2021 • Guruprasad Raghavan, Matt Thomson

Broadly, we introduce a geometric framework that unifies a range of machine learning objectives and that can be applied to multiple classes of neural network architectures.

BIG-bench Machine Learning

Reinforcement Learning reveals fundamental limits on the mixing of active particles

no code implementations • 28 May 2021 • Dominik Schildknecht, Anastasia N. Popova, Jack Stellwagen, Matt Thomson

The control of far-from-equilibrium physical systems, including active materials, has emerged as an important application area for reinforcement learning (RL) strategies that derive control policies.

Open-Ended Question Answering, Reinforcement Learning, +1

Programming Boundary Deformation Patterns in Active Networks

no code implementations • 21 Jan 2021 • Zijie Qu, Jialong Jiang, Heun Jin Lee, Rob Phillips, Shahriar Shadkhoo, Matt Thomson

Studying a large set of shapes, we observe that the active networks contract in a shape-preserving manner that persists over the course of contraction.

Soft Condensed Matter

Sparsifying networks by traversing Geodesics

no code implementations • NeurIPS Workshop DL-IG 2020 • Guruprasad Raghavan, Matt Thomson

The geometry of the weight spaces and functional manifolds of neural networks plays an important role in understanding the intricacies of ML.

Self-organization of multi-layer spiking neural networks

no code implementations • 12 Jun 2020 • Guruprasad Raghavan, Cong Lin, Matt Thomson

Inspired by this strategy, we attempt to efficiently self-organize large neural networks with an arbitrary number of layers into a wide variety of architectures.

Geometric algorithms for predicting resilience and recovering damage in neural networks

no code implementations • 23 May 2020 • Guruprasad Raghavan, Jiayi Li, Matt Thomson

Biological neural networks have evolved to maintain performance despite significant circuit damage.

Neural networks grown and self-organized by noise

2 code implementations • NeurIPS 2019 • Guruprasad Raghavan, Matt Thomson

The algorithm is adaptable to a wide range of input-layer geometries and robust to malfunctioning units in the first layer, and so can be used to successfully grow and self-organize pooling architectures of different pool sizes and shapes.

Active Learning of Spin Network Models

1 code implementation • 25 Mar 2019 • Jialong Jiang, David A. Sivak, Matt Thomson

We apply the framework to the inference of spin network models and find that designed perturbations can reduce the sampling complexity by $10^6$-fold across a variety of network architectures.

Active Learning
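
A sketch of the inference half of the pipeline, assuming $\pm 1$ spin samples and using standard node-wise pseudolikelihood (logistic regression); the paper's contribution, designing perturbations that yield the quoted $10^6$-fold reduction in sampling complexity, is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def infer_couplings(S):
    """S: (n_samples, n_spins) array of +/-1 spins; returns (n, n) couplings J.
    Uses P(s_i = +1 | s_rest) = sigmoid(2 * sum_j J_ij s_j), so each row of J
    is half the node-wise logistic-regression weight vector."""
    n = S.shape[1]
    J = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        clf = LogisticRegression(C=10.0).fit(S[:, others], S[:, i])
        J[i, others] = clf.coef_[0] / 2.0
    return (J + J.T) / 2.0               # enforce symmetric couplings
```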
