Search Results for author: Noga Zaslavsky

Found 11 papers, 1 paper with code

Beyond linear regression: mapping models in cognitive neuroscience should align with research goals

no code implementations • 23 Aug 2022 • Anna A. Ivanova, Martin Schrimpf, Stefano Anzellotti, Noga Zaslavsky, Evelina Fedorenko, Leyla Isik

Moreover, we argue that, rather than categorically treating mapping models as linear or nonlinear, we should aim to estimate the complexity of these models.

Regression

Towards Human-Agent Communication via the Information Bottleneck Principle

no code implementations • 30 Jun 2022 • Mycal Tucker, Julie Shah, Roger Levy, Noga Zaslavsky

Emergent communication research often focuses on optimizing task-specific utility as a driver for communication.

Informativeness

Scalable pragmatic communication via self-supervision

no code implementations • 12 Aug 2021 • Jennifer Hu, Roger Levy, Noga Zaslavsky

Models of context-sensitive communication often use the Rational Speech Act framework (RSA; Frank & Goodman, 2012), which formulates listeners and speakers in a cooperative reasoning process.
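(For context, not part of the abstract: the standard RSA recursion from Frank & Goodman (2012) is usually sketched as below, with utterance u, meaning m, a literal listener L_0, a pragmatic speaker S_1, and a pragmatic listener L_1; the rationality parameter \alpha and the cost term are standard modeling choices, and the exact variant used in this paper may differ.)

L_0(m \mid u) \propto [\![u]\!](m)\, P(m)
S_1(u \mid m) \propto \exp\big(\alpha\,(\log L_0(m \mid u) - \mathrm{cost}(u))\big)
L_1(m \mid u) \propto S_1(u \mid m)\, P(m)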

Probing artificial neural networks: insights from neuroscience

no code implementations • 16 Apr 2021 • Anna A. Ivanova, John Hewitt, Noga Zaslavsky

A major challenge in both neuroscience and machine learning is the development of useful tools for understanding complex information processing systems.

BIG-bench Machine Learning

Cloze Distillation: Improving Neural Language Models with Human Next-Word Prediction

no code implementations • CoNLL 2020 • Tiwalayo Eisape, Noga Zaslavsky, Roger Levy

Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing.

A Rate-Distortion view of human pragmatic reasoning

no code implementations • 13 May 2020 • Noga Zaslavsky, Jennifer Hu, Roger P. Levy

What computational principles underlie human pragmatic reasoning?
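(Background, not from the abstract: the title refers to classical rate-distortion theory, whose central quantity is the rate-distortion function shown below; how the distortion measure d maps onto pragmatic reasoning is defined in the paper itself.)

R(D) = \min_{p(\hat{x} \mid x)\,:\, \mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X})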

Semantic categories of artifacts and animals reflect efficient coding

no code implementations • SCiL 2020 • Noga Zaslavsky, Terry Regier, Naftali Tishby, Charles Kemp

Recently, this idea has been cast in terms of a general information-theoretic principle of efficiency, the Information Bottleneck (IB) principle, and it has been shown that this principle accounts for the emergence and evolution of named color categories across languages, including soft structure and patterns of inconsistent naming.
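(As a reminder of the principle the abstract invokes, the IB objective for a naming system q(w \mid m) trades off the complexity of the lexicon against its informativeness; the notation below follows the general form used in this line of work, with meanings M, words W, listener reconstructions U, and tradeoff parameter \beta, though the exact formulation is given in the papers themselves.)

\min_{q(w \mid m)} \; F_\beta[q] = I_q(M; W) - \beta\, I_q(W; U)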

Efficient human-like semantic representations via the Information Bottleneck principle

no code implementations • 9 Aug 2018 • Noga Zaslavsky, Charles Kemp, Terry Regier, Naftali Tishby

This work thus identifies a computational principle that characterizes human semantic systems, and that could usefully inform semantic representations in machines.

Open-Ended Question Answering

Color naming reflects both perceptual structure and communicative need

no code implementations • 16 May 2018 • Noga Zaslavsky, Charles Kemp, Naftali Tishby, Terry Regier

We show that greater communicative precision for warm than for cool colors, and greater communicative need, may both be explained by perceptual structure.

Deep Learning and the Information Bottleneck Principle

1 code implementation • 9 Mar 2015 • Naftali Tishby, Noga Zaslavsky

Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle.
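(For reference, the IB Lagrangian applied to a DNN layer representation T of the input X with respect to the label Y is typically written as below; this is a sketch of the general objective, with \beta controlling the compression-prediction tradeoff, not the paper's full derivation.)

\min_{p(t \mid x)} \; I(X; T) - \beta\, I(T; Y)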

Generalization Bounds
