Search Results for author: Benjamin Spector

Found 8 papers, 2 papers with code

Accelerating LLM Inference with Staged Speculative Decoding

no code implementations • 8 Aug 2023 • Benjamin Spector, Chris Re

Recent advances with large language models (LLMs) illustrate their diverse capabilities.
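For context on the base technique the paper builds on, here is a minimal Python sketch of plain, single-stage speculative decoding with greedy acceptance. The `draft_next` and `target_greedy` callables and the toy demo models are illustrative assumptions, not the paper's implementation; the staged variant named in the title adds further restructuring of the drafting step, which is not shown here.

```python
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],           # cheap draft model: next token id
    target_greedy: Callable[[List[int]], List[int]],  # big model: greedy token after every prefix
    k: int = 4,
    max_new: int = 32,
) -> List[int]:
    # Minimal single-stage speculative decoding with greedy acceptance
    # (a sketch of the base technique, not the paper's staged variant).
    seq = list(prompt)
    while len(seq) < len(prompt) + max_new:
        # 1) Draft k tokens cheaply with the small model.
        draft = []
        for _ in range(k):
            draft.append(draft_next(seq + draft))
        # 2) One forward pass of the big model over seq + draft;
        #    preds[i] is its greedy token following (seq + draft)[: i + 1].
        preds = target_greedy(seq + draft)
        base = len(seq)
        # 3) Accept the longest draft prefix the big model agrees with.
        n_ok = 0
        while n_ok < k and draft[n_ok] == preds[base - 1 + n_ok]:
            n_ok += 1
        seq += draft[:n_ok]
        # 4) The big model's own next token comes free from the same pass.
        seq.append(preds[base - 1 + n_ok])
    return seq

# Toy demo: both "models" deterministically emit (last token + 1) mod 50,
# so every draft is accepted and decoding advances k + 1 tokens per pass.
toy_next = lambda s: (s[-1] + 1) % 50
toy_greedy = lambda s: [(t + 1) % 50 for t in s]
print(speculative_decode([1, 2, 3], toy_next, toy_greedy, k=4, max_new=8))
```

The win comes from step 2: the expensive model scores k drafted tokens in one forward pass instead of k sequential passes, and when the draft model is a good predictor most drafted tokens are accepted.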

Exhaustivity and anti-exhaustivity in the RSA framework: Testing the effect of prior beliefs

1 code implementation • 14 Feb 2022 • Alexandre Cremers, Ethan G. Wilcox, Benjamin Spector

During communication, the interpretation of utterances is sensitive to a listener's probabilistic prior beliefs, a sensitivity captured by one currently influential model of pragmatics, the Rational Speech Act (RSA) framework (a toy sketch follows below).

Tasks: Sentence
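As a toy illustration of the prior-sensitivity the snippet describes, the sketch below runs a standard RSA chain (literal listener → pragmatic speaker → pragmatic listener) on a two-state scalar-implicature example; the states, utterances, and rationality parameter `alpha` are assumptions chosen for illustration, not the paper's experimental setup.

```python
# Standard RSA recursion on a toy scalar-implicature domain (an assumed
# setup, not the paper's experiments). Note how the pragmatic listener's
# reading of "some" shifts with the prior placed on the "all" state.
STATES = ["some_not_all", "all"]
UTTS = ["some", "all"]
TRUE = {("some", "some_not_all"): 1, ("some", "all"): 1,
        ("all", "some_not_all"): 0, ("all", "all"): 1}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u, prior):
    # L0(state | utterance) ~ truth(utterance, state) * prior(state)
    return normalize({s: TRUE[(u, s)] * prior[s] for s in STATES})

def speaker(s, prior, alpha=4.0):
    # S1(utterance | state) ~ L0(state | utterance) ** alpha
    return normalize({u: literal_listener(u, prior)[s] ** alpha for u in UTTS})

def pragmatic_listener(u, prior, alpha=4.0):
    # L1(state | utterance) ~ S1(utterance | state) * prior(state)
    return normalize({s: speaker(s, prior, alpha)[u] * prior[s] for s in STATES})

for p_all in (0.5, 0.9):
    prior = {"some_not_all": 1.0 - p_all, "all": p_all}
    print(p_all, pragmatic_listener("some", prior))
```

With a flat prior, this listener hears "some" as "some but not all" with probability ≈ 0.94; raising the prior on "all" to 0.9 drags that inference down to ≈ 0.22, which is exactly the kind of prior-dependence the snippet mentions.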

Bounding the Last Mile: Efficient Learned String Indexing

no code implementations • 29 Nov 2021 • Benjamin Spector, Andreas Kipf, Kapil Vaidya, Chi Wang, Umar Farooq Minhas, Tim Kraska

RSS (RadixStringSpline) achieves this by indexing only the minimal string prefix sufficient to distinguish the data, unlike most learned approaches, which index the entire string.
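The prefix idea is easy to see in isolation: for keys in sorted order, one character past the longest prefix shared with either neighbor already separates a key from both. The sketch below (with hypothetical helper names) is a toy of that idea only, not the paper's RSS structure, which builds a learned index over such prefixes.

```python
def lcp(a: str, b: str) -> int:
    # Length of the longest common prefix of a and b.
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def distinguishing_prefixes(sorted_keys):
    # For each key, keep the shortest prefix that separates it from both
    # sorted neighbors (capped at the key's full length).
    out = []
    for i, key in enumerate(sorted_keys):
        left = lcp(key, sorted_keys[i - 1]) if i > 0 else 0
        right = lcp(key, sorted_keys[i + 1]) if i + 1 < len(sorted_keys) else 0
        out.append(key[: min(len(key), max(left, right) + 1)])
    return out

keys = ["compress", "compute", "computer", "index", "indexing"]
print(distinguishing_prefixes(keys))
# -> ['compr', 'compute', 'computer', 'index', 'indexi']
```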

On the Optimality of Vagueness: "Around", "Between", and the Gricean Maxims

no code implementations • 26 Aug 2020 • Paul Egré, Benjamin Spector, Adèle Mortier, Steven Verheyen

Focusing on expressions of approximation such as "around", which are semantically vague, we show that they allow the speaker to convey indirect probabilistic information, in a way that can give the listener a more accurate representation of the information available to the speaker than any more precise expression, such as intervals of the form "between", would (a toy numeric illustration follows below).

Tasks: Informativeness
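A crude numeric toy of this claim (entirely assumed distributions, not the paper's model): if the listener reads the vague "around 100" as a peaked distribution and the precise "between 90 and 110" as a flat one, the peaked reading can match a speaker whose credence is itself graded more closely, measured here by KL divergence.

```python
import math

xs = range(80, 121)  # discretized quantity scale

def norm(ws):
    z = sum(ws)
    return [w / z for w in ws]

def gauss(mu, sigma):
    return norm([math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) for x in xs])

def uniform(lo, hi):
    # Tiny floor outside [lo, hi] keeps the KL divergence finite.
    return norm([1.0 if lo <= x <= hi else 1e-12 for x in xs])

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

speaker = gauss(100, 5)     # speaker's actual graded credence (assumed)
around = gauss(100, 6)      # assumed reading of the vague "around 100"
between = uniform(90, 110)  # assumed reading of "between 90 and 110"

print("KL(speaker || 'around') :", round(kl(speaker, around), 3))
print("KL(speaker || 'between'):", round(kl(speaker, between), 3))
```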

Preventing Adversarial Use of Datasets through Fair Core-Set Construction

no code implementations • 24 Oct 2019 • Benjamin Spector, Ravi Kumar, Andrew Tomkins

We propose improving the privacy properties of a dataset by publishing only a strategically chosen "core-set" containing a subset of its instances.
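As a generic illustration of what a core-set is, here is standard greedy k-center selection (farthest-point sampling), a common core-set construction; the paper's fairness-aware, adversary-resistant selection objective is a different criterion and is not reproduced here.

```python
import random

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def greedy_coreset(points, k, seed=0):
    # Farthest-point sampling: repeatedly add the point farthest from the
    # current centers, so k points roughly cover the whole dataset.
    rng = random.Random(seed)
    chosen = [rng.randrange(len(points))]
    dist = [euclid(p, points[chosen[0]]) for p in points]  # to nearest center
    while len(chosen) < k:
        nxt = max(range(len(points)), key=dist.__getitem__)
        chosen.append(nxt)
        dist = [min(dist[i], euclid(points[i], points[nxt]))
                for i in range(len(points))]
    return [points[i] for i in chosen]

rng = random.Random(42)
data = [(rng.random(), rng.random()) for _ in range(200)]
print(greedy_coreset(data, k=10)[:3])
```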

Sample-Efficient Reinforcement Learning through Transfer and Architectural Priors

no code implementations • 7 Jan 2018 • Benjamin Spector, Serge Belongie

Recent work in deep reinforcement learning has allowed algorithms to learn complex tasks such as Atari 2600 games purely from the reward provided by the game. However, these algorithms presently require millions of training steps to learn, making them approximately five orders of magnitude slower than humans.

Tasks: Atari Games, reinforcement-learning +2
