Search Results for author: Audrey Durand

Found 15 papers, 7 papers with code

GrowSpace: Learning How to Shape Plants

no code implementations15 Oct 2021 Yasmeen Hitti, Ionelia Buzatu, Manuel Del Verme, Mark Lefsrud, Florian Golemo, Audrey Durand

We argue that plant responses to an environmental stimulus are a good example of a real-world problem that can be approached within a reinforcement learning (RL) framework.
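
As a hedged illustration of this RL framing, here is a minimal sketch with a *hypothetical* plant-growth environment; the class, dynamics, and reward below are invented for illustration and are not the actual GrowSpace API.

```python
# Toy RL framing of phototropism: the agent moves a light source and the
# plant grows toward it. Purely illustrative; not the GrowSpace environment.
import random

class ToyPlantEnv:
    def __init__(self):
        self.light_pos = 0.5
        self.plant_tip = 0.5

    def reset(self):
        self.light_pos, self.plant_tip = 0.5, 0.5
        return (self.light_pos, self.plant_tip)

    def step(self, action):  # action in {-1, 0, +1}: move light left/stay/right
        self.light_pos = min(1.0, max(0.0, self.light_pos + 0.1 * action))
        # Simplified phototropism: the tip drifts toward the light.
        self.plant_tip += 0.05 * (self.light_pos - self.plant_tip)
        reward = -abs(self.light_pos - self.plant_tip)  # shaping objective
        return (self.light_pos, self.plant_tip), reward, False, {}

env = ToyPlantEnv()
obs = env.reset()
for _ in range(10):
    obs, reward, done, info = env.step(random.choice([-1, 0, 1]))
```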

Fairness

Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments

no code implementations22 Mar 2021 Joseph Jay Williams, Jacob Nogas, Nina Deliu, Hammad Shaikh, Sofia S. Villar, Audrey Durand, Anna Rafferty

We therefore use our case study of the ubiquitous two-arm binary reward setting to empirically investigate the impact of using Thompson Sampling instead of uniform random assignment.
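
A minimal sketch of the two policies being compared in that two-arm binary-reward setting: uniform random assignment versus Beta-Bernoulli Thompson Sampling. The true arm means and horizon are illustrative assumptions, not values from the paper.

```python
# Compare uniform assignment vs. Thompson Sampling on a two-arm Bernoulli bandit.
import random

def run(assign, true_means=(0.5, 0.6), horizon=1000, seed=0):
    rng = random.Random(seed)
    succ, fail = [0, 0], [0, 0]
    total = 0
    for _ in range(horizon):
        arm = assign(rng, succ, fail)
        reward = 1 if rng.random() < true_means[arm] else 0
        succ[arm] += reward
        fail[arm] += 1 - reward
        total += reward
    return total

def uniform(rng, succ, fail):
    return rng.randrange(2)  # each arm with probability 1/2

def thompson(rng, succ, fail):
    # Beta(1 + successes, 1 + failures) posterior per arm; play the best draw.
    draws = [rng.betavariate(1 + succ[a], 1 + fail[a]) for a in range(2)]
    return max(range(2), key=lambda a: draws[a])

print("uniform :", run(uniform))
print("thompson:", run(thompson))
```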

Comparison of pharmacist evaluation of medication orders with predictions of a machine learning model

1 code implementation3 Nov 2020 Sophie-Camille Hogue, Flora Chen, Geneviève Brassard, Denis Lebel, Jean-François Bussières, Audrey Durand, Maxime Thibault

The objective of this work was to assess the clinical performance of an unsupervised machine learning model aimed at identifying unusual medication orders and pharmacological profiles.
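
For flavor, here is a generic unsupervised anomaly-detection sketch in the spirit of that task (flagging unusual orders). This is not the authors' model, and the features below are invented placeholders.

```python
# Flag unusual medication orders with a generic isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical numeric features per order (e.g. dose, frequency, patient age).
orders = rng.normal(size=(500, 3))
orders[:5] += 6.0  # a few injected outliers for illustration

model = IsolationForest(random_state=0).fit(orders)
flags = model.predict(orders)  # -1 = unusual, 1 = typical
print("flagged as unusual:", int((flags == -1).sum()))
```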

Deep interpretability for GWAS

no code implementations3 Jul 2020 Deepak Sharma, Audrey Durand, Marc-André Legault, Louis-Philippe Lemieux Perreault, Audrey Lemaçon, Marie-Pierre Dubé, Joelle Pineau

Genome-Wide Association Studies are typically conducted using linear models to find genetic variants associated with common diseases.
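
A minimal sketch of that standard linear-model baseline: regress the phenotype on each variant's genotype (coded 0/1/2) one at a time. The data here are simulated, and real analyses would also include covariates such as ancestry components.

```python
# Per-variant linear regression, the classical GWAS baseline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples, n_variants = 200, 50
genotypes = rng.integers(0, 3, size=(n_samples, n_variants)).astype(float)
phenotype = 0.5 * genotypes[:, 7] + rng.normal(size=n_samples)  # variant 7 causal

for j in range(n_variants):
    slope, intercept, r, p, se = stats.linregress(genotypes[:, j], phenotype)
    if p < 0.05 / n_variants:  # Bonferroni-corrected threshold
        print(f"variant {j}: beta={slope:.2f}, p={p:.1e}")
```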

Old Dog Learns New Tricks: Randomized UCB for Bandit Problems

1 code implementation11 Oct 2019 Sharan Vaswani, Abbas Mehrabian, Audrey Durand, Branislav Kveton

We propose $\tt RandUCB$, a bandit strategy that builds on theoretically derived confidence intervals similar to upper confidence bound (UCB) algorithms, but akin to Thompson sampling (TS), it uses randomization to trade off exploration and exploitation.
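
A hedged sketch of that idea: keep a UCB-style confidence width, but scale it by a random multiplier so that exploration is randomized as in Thompson sampling. The constants and the multiplier distribution below are illustrative, not the paper's exact choices.

```python
# UCB index with a randomized confidence-width multiplier (RandUCB-flavored).
import math
import random

def randucb(true_means, horizon=2000, alpha=3.0, seed=0):
    rng = random.Random(seed)
    k = len(true_means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # play each arm once to initialize
        else:
            z = rng.uniform(0.0, alpha)  # random multiplier, fresh each round

            def index(a):
                width = math.sqrt(2 * math.log(t) / counts[a])
                return sums[a] / counts[a] + z * width

            arm = max(range(k), key=index)
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts  # pull counts concentrate on the best arm

print(randucb([0.3, 0.5, 0.7]))
```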

Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning

no code implementations17 Sep 2019 Thang Doan, Bogdan Mazoure, Moloud Abdar, Audrey Durand, Joelle Pineau, R. Devon Hjelm

Continuous control tasks in reinforcement learning are important because they provide a framework for learning in high-dimensional state spaces with deceptive rewards, where the agent can easily become trapped in suboptimal solutions.

Continuous Control, reinforcement-learning

Leveraging exploration in off-policy algorithms via normalizing flows

1 code implementation16 May 2019 Bogdan Mazoure, Thang Doan, Audrey Durand, R. Devon Hjelm, Joelle Pineau

The ability to discover approximately optimal policies in domains with sparse rewards is crucial to applying reinforcement learning (RL) in many real-world scenarios.

Continuous Control

Temporal Regularization in Markov Decision Process

2 code implementations1 Nov 2018 Pierre Thodoroff, Audrey Durand, Joelle Pineau, Doina Precup

Several applications of Reinforcement Learning suffer from instability due to high variance.
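
One simple reading of the paper's recipe, sketched below with heavy hedging: reduce target variance by mixing the next state's value with the previous state's value in the TD(0) bootstrap. The chain MDP and constants are illustrative, and this may not match the paper's exact operator.

```python
# Temporally regularized TD(0) on a toy random-walk chain:
# target = r + gamma * ((1 - beta) * V[s_next] + beta * V[s_prev]).
import random

n_states, gamma, beta, lr = 5, 0.9, 0.2, 0.1
V = [0.0] * n_states
rng = random.Random(0)

for episode in range(200):
    s_prev, s = 0, 0
    for step in range(20):
        s_next = min(n_states - 1, max(0, s + rng.choice([-1, 1])))
        r = 1.0 if s_next == n_states - 1 else 0.0
        target = r + gamma * ((1 - beta) * V[s_next] + beta * V[s_prev])
        V[s] += lr * (target - V[s])
        s_prev, s = s, s_next

print([round(v, 2) for v in V])
```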

Atari Games, reinforcement-learning

On-line Adaptative Curriculum Learning for GANs

3 code implementations31 Jul 2018 Thang Doan, Joao Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, R. Devon Hjelm

We argue that less expressive discriminators are smoother and have a coarse-grained view of the mode map, which forces the generator to cover a wide portion of the data distribution's support.

Multi-Armed Bandits, Stochastic Optimization

Learning to Become an Expert: Deep Networks Applied To Super-Resolution Microscopy

no code implementations28 Mar 2018 Louis-Émile Robitaille, Audrey Durand, Marc-André Gardner, Christian Gagné, Paul De Koninck, Flavie Lavoie-Cardinal

More specifically, we propose a system based on a deep neural network that provides a quantitative quality measure for an input STED image of neuronal structures.
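
An illustrative sketch of such a system: a small convolutional network mapping an image to a scalar quality score in [0, 1]. The architecture and sizes here are invented and differ from the paper's network.

```python
# Tiny CNN regressor producing a scalar image-quality score.
import torch
import torch.nn as nn

class QualityScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

model = QualityScorer()
fake_sted_image = torch.rand(1, 1, 64, 64)  # placeholder grayscale image
print("quality score:", model(fake_sted_image).item())
```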

Super-Resolution

Estimating Quality in Multi-Objective Bandits Optimization

no code implementations4 Jan 2017 Audrey Durand, Christian Gagné

The question is: how good do estimations of these objectives have to be in order for the solution maximizing the preference function to remain unchanged?
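
A tiny worked example of that question, assuming a linear preference function over two objectives (all numbers illustrative): the tolerance to estimation error is governed by the score gap between the best and second-best solutions.

```python
# How much per-solution score error can the argmax absorb?
import numpy as np

weights = np.array([0.7, 0.3])             # assumed preference function
true_objectives = np.array([[0.8, 0.2],    # solution A
                            [0.6, 0.9]])   # solution B
scores = true_objectives @ weights         # [0.62, 0.69]
gap = scores.max() - np.partition(scores, -2)[-2]
print("best solution:", scores.argmax(), "score gap:", round(gap, 3))
# Any estimation error perturbing each score by less than gap/2 cannot flip
# the argmax, so the objective estimates only need to be that accurate.
```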
