1 code implementation • 26 Dec 2018 • Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato
Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data.
1 code implementation • 11 Oct 2023 • Bo Peng, Xinyuan Chen, Yaohui Wang, Chaochao Lu, Yu Qiao
In this work, we introduce ConditionVideo, a training-free approach to text-to-video generation based on the provided condition, video, and input text, by leveraging the power of off-the-shelf text-to-image generation methods (e.g., Stable Diffusion).
1 code implementation • ICLR 2022 • Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, Kun Zhang
We show that by explicitly leveraging this compact representation to encode changes, we can efficiently adapt the policy to the target domain, in which only a few samples are needed and further policy optimization is avoided.
no code implementations • 15 Apr 2014 • Chaochao Lu, Xiaoou Tang
For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.
no code implementations • CVPR 2017 • Chaochao Lu, Michael Hirsch, Bernhard Schölkopf
We describe a modular framework for video frame prediction.
no code implementations • 24 Jul 2020 • Chaochao Lu, Richard E. Turner, Yingzhen Li, Nate Kushman
In this paper we provide a firm theoretical interpretation for infinite spatial generation, by drawing connections to spatial stochastic processes.
no code implementations • 1 Jan 2021 • Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf
As an alternative, we propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers).
no code implementations • 16 Dec 2020 • Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, Bernhard Schölkopf
We propose counterfactual RL algorithms to learn both population-level and individual-level policies.
no code implementations • 24 Feb 2021 • Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf
Finally, in the discussion, we further explore the aforementioned assumption and propose a more general hypothesis, called the Agnostic Hypothesis: there exists a set of hidden causal factors affecting both inputs and outcomes.
no code implementations • ICLR 2022 • Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf
Extensive experiments on both synthetic and real-world datasets show that our approach outperforms a variety of baseline methods.
no code implementations • 12 Oct 2021 • Biwei Huang, Chaochao Lu, Liu Leqi, José Miguel Hernández-Lobato, Clark Glymour, Bernhard Schölkopf, Kun Zhang
Perceived signals in real-world scenarios are usually high-dimensional and noisy; finding and using a representation that contains the essential and sufficient information required by downstream decision-making tasks helps improve computational efficiency and generalization ability in those tasks.
no code implementations • 26 Jan 2024 • Chaochao Lu, Chen Qian, Guodong Zheng, Hongxing Fan, Hongzhi Gao, Jie Zhang, Jing Shao, Jingyi Deng, Jinlan Fu, Kexin Huang, Kunchang Li, Lijun Li, LiMin Wang, Lu Sheng, Meiqi Chen, Ming Zhang, Qibing Ren, Sirui Chen, Tao Gui, Wanli Ouyang, Yali Wang, Yan Teng, Yaru Wang, Yi Wang, Yinan He, Yingchun Wang, Yixu Wang, Yongting Zhang, Yu Qiao, Yujiong Shen, Yurong Mou, Yuxi Chen, Zaibin Zhang, Zhelun Shi, Zhenfei Yin, Zhipin Wang
Multi-modal Large Language Models (MLLMs) have shown impressive abilities in generating reasonable responses with respect to multi-modal content.
no code implementations • 29 Jan 2024 • Heyang Gong, Chaochao Lu, Yu Zhang
In the field of causal modeling, potential outcomes (PO) and structural causal models (SCMs) stand as the predominant frameworks.
no code implementations • 27 Mar 2024 • Meiqi Chen, Yixin Cao, Yan Zhang, Chaochao Lu
Within our framework, we devise a causal graph to elucidate the predictions of MLLMs on VQA problems, and assess the causal effect of biases through an in-depth causal analysis.