Search Results for author: Benjamin Hoover

Found 8 papers, 6 papers with code

ConceptEvo: Interpreting Concept Evolution in Deep Learning Training

no code implementations · 30 Mar 2022 · Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Judy Hoffman, Duen Horng Chau

Deep neural networks (DNNs) have been widely used for decision making, prompting a surge of interest in interpreting how these complex models work.

Decision Making

LMdiff: A Visual Diff Tool to Compare Language Models

1 code implementation · EMNLP (ACL) 2021 · Hendrik Strobelt, Benjamin Hoover, Arvind Satyanarayan, Sebastian Gehrmann

While language models are ubiquitous in NLP, it is difficult to contrast their outputs and identify which contexts one model handles better than another.

Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior

1 code implementation · 20 Jul 2021 · Angie Boggust, Benjamin Hoover, Arvind Satyanarayan, Hendrik Strobelt

Saliency methods -- techniques to identify the importance of input features on a model's output -- are a common step in understanding neural network behavior.

FairyTailor: A Multimodal Generative Framework for Storytelling

1 code implementation · 13 Jul 2021 · Eden Bensaid, Mauro Martino, Benjamin Hoover, Jacob Andreas, Hendrik Strobelt

Natural language generation (NLG) for storytelling is especially challenging because it requires the generated text to follow an overall theme while remaining creative and diverse to engage the reader.

Story Generation

Can a Fruit Fly Learn Word Embeddings?

2 code implementations · ICLR 2021 · Yuchen Liang, Chaitanya K. Ryali, Benjamin Hoover, Leopold Grinberg, Saket Navlakha, Mohammed J. Zaki, Dmitry Krotov

In this work we study a mathematical formalization of this network motif and apply it to learning the correlational structure between words and their context in a corpus of unstructured text, a common natural language processing (NLP) task.

Document Classification · Word Embeddings · +2

exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models

1 code implementation · ACL 2020 · Benjamin Hoover, Hendrik Strobelt, Sebastian Gehrmann

Large Transformer-based language models can route and reshape complex information via their multi-headed attention mechanism.

CogMol: Target-Specific and Selective Drug Design for COVID-19 Using Deep Generative Models

no code implementations · NeurIPS 2020 · Vijil Chenthamarakshan, Payel Das, Samuel C. Hoffman, Hendrik Strobelt, Inkit Padhi, Kar Wai Lim, Benjamin Hoover, Matteo Manica, Jannis Born, Teodoro Laino, Aleksandra Mojsilovic

CogMol also includes in silico screening that assesses the toxicity of parent molecules and their metabolites with a multi-task toxicity classifier, synthetic feasibility with a chemical retrosynthesis predictor, and target structure binding with docking simulations.

exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models

1 code implementation · 11 Oct 2019 · Benjamin Hoover, Hendrik Strobelt, Sebastian Gehrmann

We present exBERT, an interactive tool named after the popular BERT language model, that provides insights into the meaning of the contextual representations by matching a human-specified input to similar contexts in a large annotated dataset.

Language Modelling