1 code implementation • 9 Jul 2024 • Tim R. Davidson, Viacheslav Surkov, Veniamin Veselovsky, Giuseppe Russo, Robert West, Caglar Gulcehre
Instead, our results suggest that given a set of alternatives, LMs seek to pick the "best" answer, regardless of its origin.
1 code implementation • 9 Jan 2024 • Tim R. Davidson, Veniamin Veselovsky, Martin Josifoski, Maxime Peyrard, Antoine Bosselut, Michal Kosinski, Robert West
We introduce an approach to evaluate language model (LM) agency using negotiation games.
no code implementations • 7 Oct 2019 • Tim R. Davidson, Jakub M. Tomczak, Efstratios Gavves
Learning suitable latent representations for observed, high-dimensional data is an important research topic underlying many recent advances in machine learning.
1 code implementation • 7 Mar 2019 • Luca Falorsi, Pim de Haan, Tim R. Davidson, Patrick Forré
Unfortunately, this research has primarily focused on distributions defined in Euclidean space, ruling out the usage of one of the most influential classes of spaces with non-trivial topologies: Lie groups.
1 code implementation • 12 Jul 2018 • Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, Taco S. Cohen
Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserving the topological structure and learning a well-behaved latent space.
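To illustrate why matching the latent topology matters, consider a data manifold that is a circle (S^1): a scalar latent tears the circle apart at the 2π/0 seam, whereas embedding the angle as a point on S^1 keeps neighbors together. The helper below is a hypothetical sketch for this listing, not code from the paper.

```python
import math

def embed_angle(theta):
    """Represent an angle as a point on the unit circle S^1, so that
    theta = 0 and theta = 2*pi map to the same latent point. A raw
    scalar latent would place them far apart in R, breaking the
    circular topology. (Hypothetical illustration.)"""
    return (math.cos(theta), math.sin(theta))

a = embed_angle(0.0)
b = embed_angle(2 * math.pi)

# On S^1 the two encodings coincide (up to float error), while the
# scalar encodings 0.0 and 6.283... are distant in Euclidean space.
dist = math.dist(a, b)
```

The same reasoning extends to higher-dimensional manifolds such as SO(3), where a topology-mismatched Euclidean latent space necessarily introduces discontinuities.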
9 code implementations • 3 Apr 2018 • Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, Jakub M. Tomczak
Although the default choice of a Gaussian distribution for both the prior and the posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure.
Ranked #6 on Link Prediction on Cora
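A hyperspherical latent space replaces the Gaussian prior with a distribution supported on the unit sphere; the uniform case (a von Mises-Fisher distribution with concentration zero) can be sampled by normalizing isotropic Gaussian draws. The sketch below illustrates only this sampling step under that assumption; the function name is hypothetical and this is not the paper's implementation.

```python
import math
import random

def sample_hypersphere(dim, n, seed=0):
    """Draw n points uniformly on the unit (dim-1)-sphere by normalizing
    standard Gaussian samples: the isotropy of the Gaussian makes the
    resulting direction uniform. (Hypothetical helper; the vMF case with
    nonzero concentration needs a rejection-sampling scheme.)"""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        points.append([x / norm for x in v])
    return points

points = sample_hypersphere(dim=3, n=1000)

# Every sample lies on S^2: its Euclidean norm is 1 up to float error.
max_dev = max(abs(math.sqrt(sum(x * x for x in p)) - 1.0) for p in points)
```

Unlike a KL-regularized Gaussian latent, whose mass concentrates near the origin, every point here has unit norm by construction, matching a hyperspherical latent structure.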