1 code implementation • NeurIPS 2023 • Daniel Y. Fu, Simran Arora, Jessica Grogan, Isys Johnson, Sabri Eyuboglu, Armin W. Thomas, Benjamin Spector, Michael Poli, Atri Rudra, Christopher Ré
We ask: are there performant architectures that can scale sub-quadratically along sequence length and model dimension?
no code implementations • 8 Aug 2023 • Benjamin Spector, Chris Re
Recent advances with large language models (LLMs) illustrate their diverse capabilities.
1 code implementation • 14 Feb 2022 • Alexandre Cremers, Ethan G. Wilcox, Benjamin Spector
During communication, the interpretation of utterances is sensitive to a listener's probabilistic prior beliefs, a sensitivity captured by one currently influential model of pragmatics, the Rational Speech Act (RSA) framework.
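The RSA framework's core recursion (literal listener, pragmatic speaker, pragmatic listener) has a standard form; the following is a minimal sketch with a toy scalar-implicature lexicon. The lexicon, prior, and rationality parameter here are illustrative assumptions, not the model studied in the paper.

```python
import numpy as np

# Toy lexicon: rows are utterances, columns are world states.
# "some" is literally true in both states; "all" only in state 1.
lexicon = np.array([[1.0, 1.0],   # "some"
                    [0.0, 1.0]])  # "all"
prior = np.array([0.6, 0.4])      # listener's prior over states (assumed)
alpha = 1.0                       # speaker rationality (assumed)

# Literal listener L0: P(state | utterance) ∝ lexicon * prior
L0 = lexicon * prior
L0 = L0 / L0.sum(axis=1, keepdims=True)

# Pragmatic speaker S1: P(utterance | state) ∝ L0(state | utterance)^alpha
S1 = (L0.T + 1e-12) ** alpha      # rows: states, cols: utterances
S1 = S1 / S1.sum(axis=1, keepdims=True)

# Pragmatic listener L1: P(state | utterance) ∝ S1 * prior
L1 = S1.T * prior                 # rows: utterances, cols: states
L1 = L1 / L1.sum(axis=1, keepdims=True)
```

Hearing "some", the pragmatic listener `L1` shifts probability toward the state where "all" would have been false, reproducing the classic scalar implicature; this is exactly the kind of prior-sensitive interpretation the snippet above refers to.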
no code implementations • 29 Nov 2021 • Benjamin Spector, Andreas Kipf, Kapil Vaidya, Chi Wang, Umar Farooq Minhas, Tim Kraska
RSS achieves this by indexing only the minimal string prefix needed to distinguish the data, unlike most learned approaches, which index the entire string.
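The minimal-prefix idea can be sketched as follows: in a sorted set of string keys, each key only needs the shortest prefix that separates it from its sorted neighbors. This is an illustrative reconstruction of the general technique, not the RSS data structure itself; the helper name and the neighbor-based rule are assumptions.

```python
import os.path

def distinguishing_prefixes(keys):
    """For each key in sorted order, return the minimal prefix that
    separates it from its predecessor and successor (illustrative sketch)."""
    keys = sorted(keys)
    prefixes = []
    for i, k in enumerate(keys):
        need = 1  # at least one character
        for nbr in (keys[i - 1] if i > 0 else None,
                    keys[i + 1] if i + 1 < len(keys) else None):
            if nbr is None:
                continue
            # shared prefix with the neighbor, plus one char to differ
            common = len(os.path.commonprefix([k, nbr]))
            need = max(need, min(common + 1, len(k)))
        prefixes.append(k[:need])
    return prefixes

# Example: "banana" shares no prefix with its neighbor, so "b" suffices,
# while "application" needs "appli" to be told apart from "apple".
print(distinguishing_prefixes(["banana", "apple", "application"]))
# → ['apple', 'appli', 'b']
```

Storing only these truncated prefixes shrinks the index while preserving the ability to separate keys, which is the space saving the snippet above alludes to.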
no code implementations • 26 Aug 2020 • Paul Egré, Benjamin Spector, Adèle Mortier, Steven Verheyen
Focusing on expressions of approximation such as "around", which are semantically vague, we show that they allow the speaker to convey indirect probabilistic information, in a way that can give the listener a more accurate representation of the information available to the speaker than any more precise expression, such as intervals of the form "between", would.
no code implementations • 24 Oct 2019 • Benjamin Spector, Ravi Kumar, Andrew Tomkins
We propose improving the privacy properties of a dataset by publishing only a strategically chosen "core-set" of the data containing a subset of the instances.
no code implementations • 7 Jan 2018 • Benjamin Spector, Serge Belongie
Recent work in deep reinforcement learning has allowed algorithms to learn complex tasks such as Atari 2600 games purely from the reward provided by the game. However, these algorithms presently require millions of training steps to learn, making them roughly five orders of magnitude slower than humans.