Search Results for author: Wojciech Marian Czarnecki

Found 28 papers, 11 papers with code

Exploring the Space of Key-Value-Query Models with Intention

no code implementations17 May 2023 Marta Garnelo, Wojciech Marian Czarnecki

Our goal is to determine whether there are any other stackable models in KVQ Space that Attention cannot efficiently approximate, that we can implement with our current deep learning toolbox, and that solve problems of interest to the community.

Few-Shot Learning
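For reference, the canonical point in KVQ Space is scaled dot-product attention. A minimal NumPy sketch (shapes and names are illustrative, not from the paper's code):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V                                    # convex mix of values

# toy usage: 4 queries attending over 6 key-value pairs of width 8
rng = np.random.default_rng(0)
out = attention(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)),
                rng.normal(size=(6, 8)))
print(out.shape)  # (4, 8)
```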

On the Limitations of Elo: Real-World Games are Transitive, not Additive

1 code implementation21 Jun 2022 Quentin Bertrand, Wojciech Marian Czarnecki, Gauthier Gidel

In this study, we investigate the challenge of identifying the strength of the transitive component in games.

Starcraft, Starcraft II
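For context, the classical Elo rule that the paper critiques models win probability as a purely additive (transitive) function of rating differences. A hedged sketch of the standard update (the K-factor is an illustrative choice):

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """One Elo update. score_a: 1 for a win, 0.5 for a draw, 0 for a loss."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))  # additive model
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

print(elo_update(1500.0, 1600.0, 1.0))  # an upset win moves both ratings
```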

Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity

no code implementations8 Oct 2021 Marta Garnelo, Wojciech Marian Czarnecki, SiQi Liu, Dhruva Tirumala, Junhyuk Oh, Gauthier Gidel, Hado van Hasselt, David Balduzzi

Strategic diversity is often essential in games: in multi-player games, for example, evaluating a player against a diverse set of strategies will yield a more accurate estimate of its performance.

Behavior Priors for Efficient Reinforcement Learning

no code implementations27 Oct 2020 Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, Jonathan Schwarz, Guillaume Desjardins, Wojciech Marian Czarnecki, Arun Ahuja, Yee Whye Teh, Nicolas Heess

In this work we consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors that capture the common movement and interaction patterns that are shared across a set of related tasks or contexts.

Continuous Control, Hierarchical Reinforcement Learning +3
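A common way to express such priors is a KL-regularized objective: maximize task return while penalizing divergence from a default policy. A minimal NumPy sketch of that objective, assuming categorical action distributions (the trade-off weight alpha is an illustrative choice, not the paper's):

```python
import numpy as np

def kl_regularized_objective(rewards, pi, pi0, alpha=0.1):
    """Trajectory return minus alpha * sum of per-step KL(pi || pi0).
    pi, pi0: per-step action probabilities under the policy and the prior."""
    kl = np.sum(pi * (np.log(pi) - np.log(pi0)), axis=-1)
    return rewards.sum() - alpha * kl.sum()

pi  = np.array([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]])   # learned policy
pi0 = np.array([[0.4, 0.3, 0.3], [0.4, 0.3, 0.3]])   # behavior prior
print(kl_regularized_objective(np.array([1.0, 0.5]), pi, pi0))
```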

A Limited-Capacity Minimax Theorem for Non-Convex Games or: How I Learned to Stop Worrying about Mixed-Nash and Love Neural Nets

no code implementations14 Feb 2020 Gauthier Gidel, David Balduzzi, Wojciech Marian Czarnecki, Marta Garnelo, Yoram Bachrach

Adversarial training, a special case of multi-objective optimization, is an increasingly prevalent machine learning technique: some of its most notable applications include GAN-based generative modeling and self-play techniques in reinforcement learning, which have been applied to complex games such as Go or Poker.

Starcraft, Starcraft II
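A toy illustration of why such minimax problems are delicate: plain simultaneous gradient descent-ascent fails to converge even on the simplest bilinear game (a generic sketch, not the paper's method):

```python
# min_x max_y x*y has its unique equilibrium at (0, 0), yet plain
# simultaneous gradient descent-ascent spirals away from it.
x, y, lr = 1.0, 1.0, 0.1
for _ in range(200):
    x, y = x - lr * y, y + lr * x   # gradients of f(x, y) = x * y are (y, x)
print(abs(x), abs(y))  # norm grows by sqrt(1 + lr**2) per step, ~2.7x here
```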

A Deep Neural Network's Loss Surface Contains Every Low-dimensional Pattern

no code implementations16 Dec 2019 Wojciech Marian Czarnecki, Simon Osindero, Razvan Pascanu, Max Jaderberg

The work "Loss Landscape Sightseeing with Multi-Point Optimization" (Skorokhodov and Burtsev, 2019) demonstrated that one can empirically find arbitrary 2D binary patterns inside loss surfaces of popular neural networks.
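The underlying probe is simple: evaluate the loss over a 2D slice of parameter space spanned by two direction vectors. A hedged NumPy sketch with a toy quadratic loss (all names are illustrative):

```python
import numpy as np

def loss_on_plane(loss_fn, theta0, d1, d2, grid=21, scale=1.0):
    """Loss values over the plane theta0 + a*d1 + b*d2 for a, b in a grid."""
    alphas = np.linspace(-scale, scale, grid)
    return np.array([[loss_fn(theta0 + a * d1 + b * d2) for b in alphas]
                     for a in alphas])

# toy quadratic "loss" in a 10-d parameter space
rng = np.random.default_rng(1)
theta0, d1, d2 = rng.normal(size=(3, 10))
surface = loss_on_plane(lambda t: float(t @ t), theta0, d1, d2)
print(surface.shape)  # (21, 21) grid of loss values on the 2D slice
```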

Distilling Policy Distillation

no code implementations6 Feb 2019 Wojciech Marian Czarnecki, Razvan Pascanu, Simon Osindero, Siddhant M. Jayakumar, Grzegorz Swirszcz, Max Jaderberg

The transfer of knowledge from one policy to another is an important tool in Deep Reinforcement Learning.
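The basic objective behind policy distillation is to minimize the divergence between teacher and student action distributions over visited states. A minimal NumPy sketch of the mean-KL form (a generic variant, not the specific estimators analyzed in the paper):

```python
import numpy as np

def distillation_loss(teacher_probs, student_probs):
    """Mean KL(teacher || student) across a batch of states."""
    kl = np.sum(teacher_probs * (np.log(teacher_probs) - np.log(student_probs)),
                axis=-1)
    return kl.mean()

teacher = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]])
student = np.array([[0.5, 0.3, 0.2], [0.3, 0.4, 0.3]])
print(distillation_loss(teacher, student))  # shrinks as the student matches
```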

Grounded Language Learning in a Simulated 3D World

1 code implementation20 Jun 2017 Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, Phil Blunsom

Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions.

Grounded language learning

Sobolev Training for Neural Networks

no code implementations NeurIPS 2017 Wojciech Marian Czarnecki, Simon Osindero, Max Jaderberg, Grzegorz Świrszcz, Razvan Pascanu

In many cases we only have access to input-output pairs from the ground truth; however, it is becoming more common to have access to derivatives of the target output with respect to the input, for example when the ground truth function is itself a neural network, as in network compression or distillation.
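In code, Sobolev training simply adds a derivative-matching term to the usual output-matching loss. A hedged PyTorch sketch on a toy 1-D regression task where the target derivative is known analytically (architecture and hyperparameters are illustrative assumptions):

```python
import torch

target   = lambda x: torch.sin(x)   # ground-truth function
target_d = lambda x: torch.cos(x)   # its derivative, assumed available

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(500):
    x = (torch.rand(64, 1) * 6 - 3).requires_grad_(True)
    y = net(x)
    # derivative of the student's output w.r.t. its input, via autograd
    dy_dx, = torch.autograd.grad(y.sum(), x, create_graph=True)
    loss = ((y - target(x)) ** 2).mean() \
         + ((dy_dx - target_d(x)) ** 2).mean()   # Sobolev (derivative) term
    opt.zero_grad()
    loss.backward()
    opt.step()
```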

Understanding Synthetic Gradients and Decoupled Neural Interfaces

1 code implementation ICML 2017 Wojciech Marian Czarnecki, Grzegorz Świrszcz, Max Jaderberg, Simon Osindero, Oriol Vinyals, Koray Kavukcuoglu

When training neural networks, the use of Synthetic Gradients (SG) allows layers or modules to be trained without update locking - without waiting for a true error gradient to be backpropagated - resulting in Decoupled Neural Interfaces (DNIs).
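A minimal sketch of the idea: an auxiliary model predicts the gradient at a layer's output so the layer can update immediately, and the predictor is later regressed onto the true gradient when it becomes available (a toy PyTorch illustration, not DeepMind's implementation):

```python
import torch

layer    = torch.nn.Linear(10, 20)
sg_model = torch.nn.Linear(20, 20)  # gradient predictor (illustrative choice)
opt = torch.optim.SGD(list(layer.parameters()) + list(sg_model.parameters()),
                      lr=0.01)

x = torch.randn(8, 10)
h = layer(x)
synth_grad = sg_model(h.detach())   # predicted dL/dh
h.backward(synth_grad.detach())     # update the layer now, without waiting

# Later, once the true gradient arrives, train the predictor to match it.
true_grad = torch.randn_like(h)     # stand-in for the real dL/dh
sg_loss = ((sg_model(h.detach()) - true_grad) ** 2).mean()
sg_loss.backward()
opt.step()
```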

Local minima in training of neural networks

1 code implementation19 Nov 2016 Grzegorz Swirszcz, Wojciech Marian Czarnecki, Razvan Pascanu

Given that deep networks are highly nonlinear systems optimized by local gradient methods, why do they not seem to be affected by bad local minima?

Reinforcement Learning with Unsupervised Auxiliary Tasks

3 code implementations16 Nov 2016 Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, Koray Kavukcuoglu

We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task.

reinforcement-learning, Reinforcement Learning (RL)
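Schematically, auxiliary tasks share the agent's representation and add extra prediction losses; the sketch below pairs a reward-prediction head with a policy head (architecture, weighting, and targets are illustrative stand-ins, not the paper's agent):

```python
import torch
import torch.nn.functional as F

encoder     = torch.nn.Linear(16, 32)  # shared representation
policy_head = torch.nn.Linear(32, 4)   # main task: 4 discrete actions
reward_head = torch.nn.Linear(32, 3)   # auxiliary: reward sign {-, 0, +}

obs = torch.randn(8, 16)
z = torch.relu(encoder(obs))
policy_loss = F.cross_entropy(policy_head(z), torch.randint(4, (8,)))
aux_loss    = F.cross_entropy(reward_head(z), torch.randint(3, (8,)))
(policy_loss + 0.5 * aux_loss).backward()  # aux weight is a free choice
```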

Decoupled Neural Interfaces using Synthetic Gradients

5 code implementations ICML 2017 Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, Koray Kavukcuoglu

Training directed neural networks typically requires forward-propagating data through a computation graph, followed by backpropagating an error signal, to produce weight updates.

Learning to SMILE(S)

no code implementations19 Feb 2016 Stanisław Jastrzębski, Damian Leśniak, Wojciech Marian Czarnecki

This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics.

Activity Prediction, General Classification
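The approach amounts to treating SMILES strings as ordinary text. A minimal scikit-learn sketch with character n-gram features and a linear classifier (the molecules and labels are toy placeholders, and the paper's exact featurization may differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC"]  # toy molecules
labels = [0, 0, 1, 1]                                  # toy activity labels

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # char n-grams
    LogisticRegression(max_iter=1000),
)
model.fit(smiles, labels)
print(model.predict(["c1ccccc1O"]))  # classify an unseen SMILES string
```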

On the consistency of Multithreshold Entropy Linear Classifier

no code implementations18 Apr 2015 Wojciech Marian Czarnecki

Multithreshold Entropy Linear Classifier (MELC) is a recently proposed classifier which employs information-theoretic concepts to create a multithreshold maximum-margin model.

Fast optimization of Multithreshold Entropy Linear Classifier

no code implementations18 Apr 2015 Rafal Jozefowicz, Wojciech Marian Czarnecki

Multithreshold Entropy Linear Classifier (MELC) is a density based model which searches for a linear projection maximizing the Cauchy-Schwarz Divergence of dataset kernel density estimation.

Density Estimation
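The objective MELC optimizes has a closed form for Gaussian KDEs, since the integral of a product of two same-bandwidth Gaussian mixtures is a mean of pairwise Gaussians with doubled variance. A hedged NumPy sketch for 1-D projected samples (bandwidth and divergence convention are illustrative choices):

```python
import numpy as np

def cs_divergence_1d(x, y, sigma=1.0):
    """Cauchy-Schwarz divergence between Gaussian KDEs of two 1-D samples:
    -log((int p q)^2 / (int p^2 * int q^2)), computed in closed form."""
    def cross(a, b):  # int of product of the two KDEs
        d = a[:, None] - b[None, :]
        return np.mean(np.exp(-d**2 / (4 * sigma**2))
                       / np.sqrt(4 * np.pi * sigma**2))
    return -np.log(cross(x, y) ** 2 / (cross(x, x) * cross(y, y)))

rng = np.random.default_rng(2)
print(cs_divergence_1d(rng.normal(0, 1, 50), rng.normal(3, 1, 50)))
```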

Maximum Entropy Linear Manifold for Learning Discriminative Low-dimensional Representation

no code implementations10 Apr 2015 Wojciech Marian Czarnecki, Rafał Józefowicz, Jacek Tabor

Representation learning is currently a very hot topic in modern machine learning, mostly due to the great success of the deep learning methods.

Data Visualization, Dimensionality Reduction +2

Extreme Entropy Machines: Robust information theoretic classification

no code implementations21 Jan 2015 Wojciech Marian Czarnecki, Jacek Tabor

The main contribution of this paper is a model based on information-theoretic concepts which, on the one hand, offers a new, entropic perspective on known linear classifiers and, on the other, leads to a very robust method competitive with state-of-the-art non-information-theoretic ones (including Support Vector Machines and Extreme Learning Machines).

Classification, General Classification

Cluster based RBF Kernel for Support Vector Machines

no code implementations12 Aug 2014 Wojciech Marian Czarnecki, Jacek Tabor

In the classical Gaussian SVM classification we use the feature space projection transforming points to normal distributions with fixed covariance matrices (identity in the standard RBF and the covariance of the whole dataset in Mahalanobis RBF).

General Classification
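A simplified sketch of the kernel construction: replacing the identity covariance of the standard RBF with the dataset covariance (the paper's cluster-based variant would use per-cluster covariances instead; everything below is illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 2)) @ np.array([[2.0, 0.8], [0.0, 0.5]])  # correlated
y = (X[:, 0] + X[:, 1] > 0).astype(int)

cov_inv = np.linalg.inv(np.cov(X.T) + 1e-6 * np.eye(2))  # dataset covariance

def mahalanobis_rbf(A, B):
    """Gram matrix exp(-0.5 * (a-b)^T cov_inv (a-b)) for all pairs."""
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.einsum("ijk,kl,ijl->ij", d, cov_inv, d))

clf = SVC(kernel=mahalanobis_rbf).fit(X, y)  # SVC accepts a kernel callable
print(clf.score(X, y))
```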

Multithreshold Entropy Linear Classifier

no code implementations4 Aug 2014 Wojciech Marian Czarnecki, Jacek Tabor

Then we prove that our method is a multithreshold large-margin classifier, which shows the analogy to the SVM while at the same time working with a much broader class of hypotheses.

Activity Prediction
