Search Results for author: Chaochao Lu

Found 10 papers, 2 papers with code

Action-Sufficient State Representation Learning for Control with Structural Constraints

no code implementations12 Oct 2021 Biwei Huang, Chaochao Lu, Liu Leqi, José Miguel Hernández-Lobato, Clark Glymour, Bernhard Schölkopf, Kun Zhang

Perceived signals in real-world scenarios are usually high-dimensional and noisy. Finding and using a representation that contains the essential and sufficient information required by downstream decision-making tasks helps improve both computational efficiency and generalization ability in those tasks.

Decision Making · Representation Learning
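The notion of a low-dimensional state that keeps all decision-relevant information can be illustrated with a toy example (this is only a sketch of the general idea, not the paper's ASR method): a 2-D latent state drives the reward, the observation pads it with pure-noise dimensions, and the 2-D slice predicts reward as well as the full observation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Latent 2-D state that actually drives the reward.
s = rng.normal(size=(n, 2))
reward = s @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=n)

# High-dimensional noisy observation: the state plus 48 pure-noise dims.
obs = np.concatenate([s, rng.normal(size=(n, 48))], axis=1)

def r2(X, y):
    """In-sample R^2 of a least-squares fit of y on X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

# The 2-D "sufficient" representation predicts reward about as well
# as the full 50-D observation, at a fraction of the dimensionality.
print(r2(obs, reward), r2(obs[:, :2], reward))
```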

AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning

1 code implementation ICLR 2022 Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, Kun Zhang

We show that by explicitly leveraging this compact representation to encode changes, we can efficiently adapt the policy to the target domain, in which only a few samples are needed and further policy optimization is avoided.

Atari Games · reinforcement-learning +1

Nonlinear Invariant Risk Minimization: A Causal Approach

no code implementations 24 Feb 2021 Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf

Finally, in the discussion, we further explore the aforementioned assumption and propose a more general hypothesis, called the Agnostic Hypothesis: there exists a set of hidden causal factors affecting both inputs and outcomes.

Representation Learning
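The invariance idea behind IRM can be made concrete with a small numeric sketch of the IRMv1-style penalty (the linear variant of Arjovsky et al., not this paper's nonlinear estimator): for squared loss and a scalar dummy classifier w, the per-environment risk R(w) = E[(w·φ − y)²] has gradient 2·E[φ(φ − y)] at w = 1, and an invariant feature keeps that gradient near zero in every environment while a spurious one does not.

```python
import numpy as np

def irm_penalty(phi, y):
    """IRMv1-style penalty for squared loss: squared gradient of the
    per-environment risk R(w) = mean((w*phi - y)^2) with respect to a
    scalar dummy classifier w, evaluated at w = 1."""
    grad = np.mean(2.0 * phi * (phi - y))
    return grad ** 2

rng = np.random.default_rng(0)

# Two environments where y = phi + noise: phi predicts y the same way
# everywhere, so its penalty is ~0 in every environment.
envs = []
for scale in (1.0, 2.0):
    phi = scale * rng.normal(size=2000)
    y = phi + 0.1 * rng.normal(size=2000)
    envs.append((phi, y))

invariant = sum(irm_penalty(phi, y) for phi, y in envs)

# A spurious feature whose relationship to y flips sign across
# environments picks up a large penalty.
spurious = sum(irm_penalty(sign * phi, y)
               for sign, (phi, y) in zip((1.0, -1.0), envs))

print(invariant, spurious)
```

Minimizing risk plus a large multiple of this penalty pushes the model toward features whose optimal readout is the same in every environment.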

Invariant Causal Representation Learning

no code implementations1 Jan 2021 Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf

As an alternative, we propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers).

Out-of-Distribution Generalization · Representation Learning

Interpreting Spatially Infinite Generative Models

no code implementations24 Jul 2020 Chaochao Lu, Richard E. Turner, Yingzhen Li, Nate Kushman

In this paper we provide a firm theoretical interpretation for infinite spatial generation, by drawing connections to spatial stochastic processes.

Texture Synthesis

Deconfounding Reinforcement Learning in Observational Settings

1 code implementation26 Dec 2018 Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data.

OpenAI Gym · reinforcement-learning
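Why confounding breaks naive learning from observational data can be shown with a generic toy example (a standard backdoor-adjustment illustration, not the paper's deconfounding algorithm): a hidden variable influences both which action the behaviour policy takes and the reward, so the raw conditional contrast E[r|a=1] − E[r|a=0] has the wrong sign, while adjusting for the confounder recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder, e.g. patient severity.
u = rng.binomial(1, 0.5, size=n)

# In the observational data the behaviour policy prefers action 1
# exactly when u = 1, so the action is confounded by u.
a = rng.binomial(1, np.where(u == 1, 0.9, 0.1))

# True effect: action 1 adds +1 reward, but u = 1 subtracts 2.
r = 1.0 * a - 2.0 * u + rng.normal(size=n)

# Naive observational contrast: biased, because action 1 is mostly
# taken in the bad u = 1 states.
naive = r[a == 1].mean() - r[a == 0].mean()

# Interventional estimate via backdoor adjustment over u (possible
# here only because we simulated u and know P(u)).
adjusted = sum((r[(a == 1) & (u == k)].mean()
                - r[(a == 0) & (u == k)].mean()) * 0.5
               for k in (0, 1))

print(naive, adjusted)  # naive is negative; adjusted is near +1
```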
