5 code implementations • Nature 2021 • John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, Demis Hassabis
Accurate computational approaches are needed to address the gap between the vast number of known protein sequences and the far smaller number of experimentally determined structures, and to enable large-scale structural bioinformatics.
no code implementations • ACL 2020 • Angeliki Lazaridou, Anna Potapenko, Olivier Tieleman
We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language.
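A minimal sketch of how such a combination might look if reduced to its simplest form: a "structural" supervised loss on human utterances mixed with a "functional" REINFORCE-style loss driven by task reward from agent interaction. The toy speaker model, the single-token messages, and the mixing weight alpha are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: mix a supervised (data-driven) loss with a
# reward-driven (interactive) loss for a toy speaker model. All names,
# shapes and the weighting alpha are assumptions, not the paper's setup.
import torch
import torch.nn.functional as F

vocab_size, hidden = 100, 32
speaker = torch.nn.Linear(hidden, vocab_size)  # stand-in for a real speaker model

def structural_loss(state, human_tokens):
    """Cross-entropy against human utterances (supervised, data-driven)."""
    logits = speaker(state)
    return F.cross_entropy(logits, human_tokens)

def functional_loss(state, reward):
    """REINFORCE-style term: reinforce sampled messages by the task reward
    obtained when another agent acts on them."""
    logits = speaker(state)
    dist = torch.distributions.Categorical(logits=logits)
    message = dist.sample()
    return -(reward * dist.log_prob(message)).mean()

# Toy batch: combine both objectives with a mixing weight (an assumption).
state = torch.randn(8, hidden)
human_tokens = torch.randint(0, vocab_size, (8,))
reward = torch.rand(8)            # e.g. 1 if the listener solved the task
alpha = 0.5
loss = alpha * structural_loss(state, human_tokens) + (1 - alpha) * functional_loss(state, reward)
loss.backward()
```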
6 code implementations • ICLR 2020 • Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap
We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning.
Ranked #2 on Language Modelling on Hutter Prize
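The compression idea described above can be sketched as a memory-update rule (a simplification under assumed shapes and names, not the published implementation): activations that fall out of the ordinary FIFO memory are pooled at a fixed compression rate into a second, coarser memory rather than discarded.

```python
# Sketch of a Compressive Transformer memory update (illustrative assumptions):
# evicted activations are compressed by rate-c mean pooling into a secondary
# "compressed memory" instead of being thrown away.
import torch

def update_memories(memory, comp_memory, new_hidden, mem_len=512,
                    comp_mem_len=512, c=3):
    """memory, comp_memory, new_hidden: tensors of shape [time, batch, dim]."""
    memory = torch.cat([memory, new_hidden], dim=0)
    overflow = memory.shape[0] - mem_len
    if overflow > 0:
        evicted, memory = memory[:overflow], memory[overflow:]
        # Compression function: rate-c mean pooling over time (the paper also
        # considers alternatives such as max pooling and 1D convolutions).
        t = (evicted.shape[0] // c) * c   # drop leftover frames in this sketch
        if t > 0:
            pooled = evicted[:t].reshape(t // c, c, *evicted.shape[1:]).mean(dim=1)
            comp_memory = torch.cat([comp_memory, pooled], dim=0)[-comp_mem_len:]
    return memory, comp_memory

# Toy usage: batch of 2, hidden size 16, segments of length 128.
mem = torch.zeros(0, 2, 16)
cmem = torch.zeros(0, 2, 16)
for _ in range(10):
    segment = torch.randn(128, 2, 16)
    mem, cmem = update_memories(mem, cmem, segment, mem_len=256, c=4)
```

At each segment, attention is then computed over the concatenation of compressed memory, ordinary memory, and the current activations, which is how the model reaches further back in the sequence than a fixed-length memory alone would allow.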
no code implementations • WS 2018 • Valentin Trifonov, Octavian-Eugen Ganea, Anna Potapenko, Thomas Hofmann
Previous research on word embeddings has shown that sparse representations, whether learned on top of existing dense embeddings or imposed through model constraints at training time, are more interpretable: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data.
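As an illustration of the first of those two routes (the data, model, and hyperparameters below are assumptions, not the paper's procedure), sparse codes can be learned on top of pre-trained dense embeddings with off-the-shelf dictionary learning, after which each active dimension can be inspected for a recognizable feature.

```python
# Sketch: learn sparse, overcomplete codes on top of dense word embeddings
# via dictionary learning, so each active dimension can be inspected.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
dense = rng.standard_normal((200, 50))   # stand-in for 200 dense word embeddings

coder = DictionaryLearning(
    n_components=100,                    # overcomplete basis
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,         # at most 5 active dimensions per word
    max_iter=20,
    random_state=0,
)
sparse_codes = coder.fit_transform(dense)        # shape (200, 100), mostly zeros
print((sparse_codes != 0).sum(axis=1).mean())    # average active dims per word

# Interpretation step: for each sparse dimension, look at the words with the
# largest coefficients and check whether they share a recognizable feature.
top_words_dim0 = np.argsort(-np.abs(sparse_codes[:, 0]))[:10]
```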
no code implementations • 11 Nov 2017 • Anna Potapenko, Artem Popov, Konstantin Vorontsov
We consider probabilistic topic models and more recent word embedding techniques from the shared perspective of learning hidden semantic representations.
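One way to make that shared perspective concrete (a toy illustration under assumed data and models, not the experiments in the paper): both a topic model and a word embedding assign each word a hidden semantic vector, so word similarity can be computed the same way in either representation.

```python
# Toy sketch: a word's topic-space vector from an LDA model plays the same
# role as a dense embedding; similarity is cosine in either representation.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply today",
    "investors sold stocks and bonds",
]
vec = CountVectorizer()
counts = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
# Columns of components_ describe how strongly each word is associated with
# each topic; normalizing them gives a topic-space vector per word.
word_topic = lda.components_ / lda.components_.sum(axis=0, keepdims=True)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

vocab = vec.vocabulary_
print(cosine(word_topic[:, vocab["cats"]], word_topic[:, vocab["dogs"]]))
print(cosine(word_topic[:, vocab["cats"]], word_topic[:, vocab["stocks"]]))
# A word2vec-style embedding would be used identically: look up each word's
# dense vector and compare with the same cosine similarity.
```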