Search Results for author: James Bergstra

Found 12 papers, 8 papers with code

A Statistical Guarantee for Representation Transfer in Multitask Imitation Learning

no code implementations • 2 Nov 2023 • Bryan Chan, Karime Pereida, James Bergstra

Transferring representations for multitask imitation learning has the potential to improve sample efficiency when learning new tasks, compared to learning from scratch.

Imitation Learning

Active Perception and Representation for Robotic Manipulation

no code implementations • 15 Mar 2020 • Youssef Zaky, Gaurav Paruthi, Bryan Tripp, James Bergstra

It also enables them to view objects from different distances and viewpoints, providing a rich visual experience from which to learn abstract representations of the environment.

Q-Learning • Representation Learning

Autoregressive Policies for Continuous Control Deep Reinforcement Learning

1 code implementation • 27 Mar 2019 • Dmytro Korenkevych, A. Rupam Mahmood, Gautham Vasan, James Bergstra

We introduce a family of stationary autoregressive (AR) stochastic processes to facilitate exploration in continuous control domains.

Continuous Control • reinforcement-learning +1
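
As a rough sketch of the idea (not the paper's exact family of autoregressive policies), a stationary AR(1) process can stand in for i.i.d. Gaussian action noise so that exploration is temporally smooth; the alpha and sigma values below are illustrative assumptions:

```python
import numpy as np

def ar1_noise(n_steps, alpha=0.9, sigma=0.3, seed=None):
    """Sample a stationary AR(1) process: x_{t+1} = alpha * x_t + eps_t.

    Scaling the innovations by sqrt(1 - alpha^2) keeps the marginal
    variance at sigma^2 for any |alpha| < 1, so the process is stationary.
    """
    rng = np.random.default_rng(seed)
    innovation_scale = sigma * np.sqrt(1.0 - alpha ** 2)
    x = rng.normal(0.0, sigma)  # draw the start from the stationary distribution
    noise = np.empty(n_steps)
    for t in range(n_steps):
        noise[t] = x
        x = alpha * x + rng.normal(0.0, innovation_scale)
    return noise

# Larger alpha gives smoother, more persistent exploration noise, which is
# gentler on physical actuators than independent per-step Gaussian noise.
smooth_noise = ar1_noise(1000, alpha=0.95)
```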

Benchmarking Reinforcement Learning Algorithms on Real-World Robots

2 code implementations • 20 Sep 2018 • A. Rupam Mahmood, Dmytro Korenkevych, Gautham Vasan, William Ma, James Bergstra

The research community is now able to reproduce, analyze and build quickly on these results due to open source implementations of learning algorithms and simulated benchmark tasks.

Benchmarking • Continuous Control +2

Setting up a Reinforcement Learning Task with a Real-World Robot

2 code implementations • 19 Mar 2018 • A. Rupam Mahmood, Dmytro Korenkevych, Brent J. Komer, James Bergstra

Reinforcement learning is a promising approach to developing hard-to-engineer adaptive solutions for complex and diverse robotic tasks.

Reinforcement Learning (RL)

Theano: A Python framework for fast computation of mathematical expressions

1 code implementation • 9 May 2016 • The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang

Since its introduction, it has been one of the most used CPU and GPU mathematical compilers - especially in the machine learning community - and has shown steady performance improvements.

BIG-bench Machine Learning • Clustering +2

Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn

2 code implementations • SCIPY 2014 • Brent Komer, James Bergstra, Chris Eliasmith

Hyperopt-sklearn is a new software project that provides automatic algorithm configuration of the Scikit-learn machine learning library.

Benchmarking • Hyperparameter Optimization
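
For orientation, a minimal usage sketch in the style of the project's documented API (constructor arguments such as trial_timeout may differ across hpsklearn versions):

```python
# Sketch: search jointly over scikit-learn classifiers and their
# hyperparameters with hyperopt-sklearn. Names follow the project's
# documented API but may vary between releases.
from hpsklearn import HyperoptEstimator, any_classifier
from hyperopt import tpe
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

estim = HyperoptEstimator(
    classifier=any_classifier("clf"),  # any supported sklearn classifier
    algo=tpe.suggest,                  # TPE-based sequential search
    max_evals=50,
    trial_timeout=60,                  # seconds per candidate configuration
)
estim.fit(X_train, y_train)
print(estim.score(X_test, y_test))
print(estim.best_model())
```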

Hyperopt: A Python Library for Optimizing the Hyperparameters of Machine Learning Algorithms

1 code implementation • SCIPY 2013 • James Bergstra, Dan Yamins, David D. Cox

Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization.

Bayesian Optimization • BIG-bench Machine Learning +2
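
A minimal sketch of hyperopt's fmin interface, with TPE as the sequential model-based optimizer; the toy quadratic objective and search bounds are illustrative:

```python
from hyperopt import fmin, tpe, hp

# SMBO proposes each new point by modeling past (point, loss) pairs,
# which is what makes it efficient per function evaluation.
best = fmin(
    fn=lambda x: (x - 3.0) ** 2,         # objective to minimize
    space=hp.uniform("x", -10.0, 10.0),  # search space for x
    algo=tpe.suggest,                    # Tree-structured Parzen Estimator
    max_evals=100,
)
print(best)  # expected to be close to {'x': 3.0}
```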

Theano: new features and speed improvements

no code implementations • 23 Nov 2012 • Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, Yoshua Bengio

Theano is a linear algebra compiler that optimizes a user's symbolically-specified mathematical computations to produce efficient low-level implementations.

BIG-bench Machine Learning
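
A minimal sketch of the workflow the compiler supports: declare symbolic variables, compose an expression graph, and let theano.function optimize and compile it to an efficient callable (the expression itself is an arbitrary example):

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic inputs: double-precision matrices with no data attached yet.
x = T.dmatrix("x")
y = T.dmatrix("y")

# Build a symbolic expression graph; nothing is computed at this point.
z = T.dot(x, y) + T.exp(x)

# Compilation: Theano optimizes the graph and emits low-level code.
f = theano.function([x, y], z)

a = np.ones((2, 2))
print(f(a, a))  # evaluates the compiled expression on concrete arrays
```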
