Search Results for author: Anji Liu

Found 20 papers, 8 papers with code

Smart Help: Strategic Opponent Modeling for Proactive and Adaptive Robot Assistance in Households

no code implementations • 13 Apr 2024 • Zhihao Cao, Zidong Wang, Siwen Xie, Anji Liu, Lifeng Fan

Our findings illustrate the potential of AI-imbued assistive robots in improving the well-being of vulnerable groups.

RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation

no code implementations • 8 Mar 2024 • ZiHao Wang, Anji Liu, Haowei Lin, Jiaqi Li, Xiaojian Ma, Yitao Liang

We explore how iteratively revising a chain of thoughts with the help of information retrieval significantly improves large language models' reasoning and generation ability in long-horizon generation tasks, while substantially mitigating hallucination.

Code Generation • Hallucination • +3 more
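
The abstract above describes an iterate-and-revise loop: draft a chain of thoughts, retrieve relevant context, and revise the draft. Below is a minimal sketch of that loop, assuming hypothetical `generate` and `retrieve` callables; it illustrates the pattern only, not the paper's implementation or prompts.

```python
from typing import Callable, List

def revise_with_retrieval(
    task: str,
    generate: Callable[[str], str],        # stand-in LLM call (hypothetical)
    retrieve: Callable[[str], List[str]],  # stand-in retriever (hypothetical)
    num_rounds: int = 3,
) -> str:
    """Iteratively revise a draft chain of thoughts using retrieved context."""
    # Initial zero-shot chain of thoughts.
    draft = generate(f"Think step by step and draft a plan for: {task}")
    for _ in range(num_rounds):
        # Retrieve documents conditioned on the current draft, not only the task.
        docs = retrieve(draft)
        context = "\n".join(docs)
        # Ask the model to revise its reasoning in light of the retrieved context.
        draft = generate(
            f"Task: {task}\nRetrieved context:\n{context}\n"
            f"Current reasoning:\n{draft}\n"
            "Revise the reasoning so it is consistent with the retrieved context."
        )
    return draft
```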

Image Inpainting via Tractable Steering of Diffusion Models

no code implementations • 28 Nov 2023 • Anji Liu, Mathias Niepert, Guy Van Den Broeck

In addition to proposing a new framework for constrained image generation, this paper highlights the benefit of more tractable models and motivates the development of expressive TPMs.

Denoising • Image Inpainting

JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models

no code implementations • 10 Nov 2023 • ZiHao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei Zhang, Haowei Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, Xiaojian Ma, Yitao Liang

Achieving human-like planning and control with multimodal observations in an open world is a key milestone for more functional generalist agents.

Expressive Modeling Is Insufficient for Offline RL: A Tractable Inference Perspective

no code implementations • 31 Oct 2023 • Xuejie Liu, Anji Liu, Guy Van Den Broeck, Yitao Liang

A popular paradigm for offline Reinforcement Learning (RL) tasks is to first fit the offline trajectories to a sequence model, and then prompt the model for actions that lead to high expected return.

Offline RL • Reinforcement Learning (RL)
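
The first sentence of the abstract describes the standard return-conditioned sequence-modeling paradigm for offline RL: fit a sequence model to offline trajectories, then prompt it with a high target return. The sketch below illustrates only that baseline paradigm, not the paper's proposed tractable-inference method; `SequenceModel` and the gym-style `env` interface are assumptions for illustration.

```python
import numpy as np

class SequenceModel:
    """Hypothetical stand-in for a model fit to offline trajectories of
    (return-to-go, state, action) tuples, e.g. a Decision Transformer-like model."""
    def predict_action(self, returns_to_go, states, actions):
        # A real model would condition on the full history; return a dummy action here.
        return np.zeros(4)

def rollout(env, model: SequenceModel, target_return: float, horizon: int = 100):
    """Prompt the sequence model with a high target return and roll it out."""
    state = env.reset()
    returns_to_go, states, actions = [target_return], [state], []
    for _ in range(horizon):
        action = model.predict_action(returns_to_go, states, actions)
        state, reward, done, _ = env.step(action)  # assumes a gym-style env
        actions.append(action)
        states.append(state)
        # Decrease the return prompt by the reward collected so far.
        returns_to_go.append(returns_to_go[-1] - reward)
        if done:
            break
    return states, actions
```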

GROOT: Learning to Follow Instructions by Watching Gameplay Videos

no code implementations • 12 Oct 2023 • Shaofei Cai, Bowei Zhang, ZiHao Wang, Xiaojian Ma, Anji Liu, Yitao Liang

We propose to follow reference videos as instructions, which offer expressive goal specifications while eliminating the need for expensive text-gameplay annotations.

Instruction Following

Understanding the Distillation Process from Deep Generative Models to Tractable Probabilistic Circuits

no code implementations • 16 Feb 2023 • Xuejie Liu, Anji Liu, Guy Van Den Broeck, Yitao Liang

In this paper, we theoretically and empirically discover that the performance of a PC can exceed that of its teacher model.

Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction

2 code implementations • CVPR 2023 • Shaofei Cai, ZiHao Wang, Xiaojian Ma, Anji Liu, Yitao Liang

We study the problem of learning goal-conditioned policies in Minecraft, a popular, widely accessible yet challenging open-ended environment for developing human-level multi-task agents.

Representation Learning • Zero-shot Generalization

Sparse Probabilistic Circuits via Pruning and Growing

1 code implementation • 22 Nov 2022 • Meihua Dang, Anji Liu, Guy Van Den Broeck

The growing operation increases model capacity by increasing the size of the latent space.

Model Compression

Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation

1 code implementation • 20 Nov 2022 • Zhizhou Ren, Anji Liu, Yitao Liang, Jian Peng, Jianzhu Ma

To bridge this gap, we study the problem of few-shot adaptation in the context of human-in-the-loop reinforcement learning.

Meta Reinforcement Learning • reinforcement-learning • +1 more

Scaling Up Probabilistic Circuits by Latent Variable Distillation

no code implementations • 10 Oct 2022 • Anji Liu, Honghua Zhang, Guy Van Den Broeck

We propose to overcome this bottleneck by latent variable distillation: we leverage the less tractable but more expressive deep generative models to provide extra supervision over the latent variables of PCs.

Language Modelling
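
The abstract describes latent variable distillation as using a more expressive deep generative model to supervise the latent variables of a PC. One plausible way to materialize such supervision, shown below as an assumption for illustration rather than the paper's exact pipeline, is to cluster per-example features from a pretrained deep model and treat the cluster ids as latent-variable assignments.

```python
import numpy as np
from sklearn.cluster import KMeans

def materialize_latent_supervision(embeddings: np.ndarray,
                                   num_latent_states: int = 16) -> np.ndarray:
    """Cluster deep-model embeddings of training examples; the cluster id serves
    as a supervised assignment for a PC latent variable.
    Sketch of one possible instantiation, not the paper's exact procedure."""
    kmeans = KMeans(n_clusters=num_latent_states, n_init=10, random_state=0)
    return kmeans.fit_predict(embeddings)

# Example with random stand-in embeddings; a real pipeline would use features
# extracted by a pretrained deep generative model.
rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(1000, 32))
latent_ids = materialize_latent_supervision(fake_embeddings, num_latent_states=16)
print(latent_ids[:10])
```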

Lossless Compression with Probabilistic Circuits

1 code implementation • ICLR 2022 • Anji Liu, Stephan Mandt, Guy Van Den Broeck

To overcome such problems, we establish a new class of tractable lossless compression models that permit efficient encoding and decoding: Probabilistic Circuits (PCs).

Data Compression • Image Generation
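
The entry above builds on a standard fact: any model that can efficiently compute the conditionals p(x_i | x_{<i}) can drive an entropy coder, with an ideal codelength of roughly -log2 p(x) bits. The toy sketch below shows that codelength computation with a fully factorized stand-in model; the paper's PC-based conditionals and actual encoder are not reproduced here.

```python
import numpy as np

def ideal_codelength_bits(conditionals: np.ndarray) -> float:
    """Ideal (entropy-coded) length of one sample: -sum_i log2 p(x_i | x_<i)."""
    return float(-np.sum(np.log2(conditionals)))

def toy_conditionals(x: np.ndarray, p1: np.ndarray) -> np.ndarray:
    """Toy stand-in for a tractable model: fully factorized Bernoulli over bits.
    A PC would instead compute p(x_i | x_<i) via efficient marginal queries."""
    return np.where(x == 1, p1, 1.0 - p1)

# Example: an 8-bit sample under the toy factorized model.
x = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p1 = np.full(8, 0.7)  # P(bit = 1) for each position (illustrative values)
cond = toy_conditionals(x, p1)
print(f"ideal codelength: {ideal_codelength_bits(cond):.2f} bits")
```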

Tractable Regularization of Probabilistic Circuits

no code implementations • NeurIPS 2021 • Anji Liu, Guy Van Den Broeck

Instead, we re-think regularization for PCs and propose two intuitive techniques, data softening and entropy regularization, that both take advantage of PCs' tractability and still have an efficient implementation as a computation graph.

Density Estimation
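
Data softening, as described in the abstract, replaces hard observations with soft evidence so that each observed value only holds with some probability β close to 1. The sketch below shows one way to apply that idea to binary data, under the assumption that softening keeps each observed value with probability β; it does not reproduce how the softened data enters PC training in the paper.

```python
import numpy as np

def soften_binary_data(X: np.ndarray, beta: float = 0.95) -> np.ndarray:
    """Turn hard 0/1 observations into soft evidence: the observed value is kept
    with probability beta and flipped with probability 1 - beta.
    Returns P(feature = 1) for every entry."""
    X = X.astype(float)
    return beta * X + (1.0 - beta) * (1.0 - X)

# Example: a 2 x 3 binary dataset.
X = np.array([[1, 0, 1],
              [0, 0, 1]])
print(soften_binary_data(X, beta=0.95))
# [[0.95 0.05 0.95]
#  [0.05 0.05 0.95]]
```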

A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference

1 code implementation • NeurIPS 2021 • Antonio Vergari, YooJung Choi, Anji Liu, Stefano Teso, Guy Van Den Broeck

Circuit representations are becoming the lingua franca to express and reason about tractable generative and discriminative models.

On Effective Parallelization of Monte Carlo Tree Search

no code implementations • 15 Jun 2020 • Anji Liu, Yitao Liang, Ji Liu, Guy Van Den Broeck, Jianshu Chen

Second, and more importantly, we demonstrate how the proposed necessary conditions can be adopted to design more effective parallel MCTS algorithms.

Atari Games

Off-Policy Deep Reinforcement Learning with Analogous Disentangled Exploration

1 code implementation • 25 Feb 2020 • Anji Liu, Yitao Liang, Guy Van Den Broeck

Off-policy reinforcement learning (RL) is concerned with learning a rewarding policy by executing another policy that gathers samples of experience.

Continuous Control • reinforcement-learning • +1 more

Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search

4 code implementations • ICLR 2020 • Anji Liu, Jianshu Chen, Mingze Yu, Yu Zhai, Xuewen Zhou, Ji Liu

Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go).
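
The title suggests tracking "unobserved" (in-flight) simulations when selecting nodes in parallel MCTS, so that concurrent workers do not all descend into the same subtree. Below is a minimal sketch of a UCB-style score that also counts ongoing visits; this is one reading of the idea and not necessarily the paper's exact statistics.

```python
import math

def ucb_score(total_reward: float, completed: int, ongoing: int,
              parent_completed: int, parent_ongoing: int, c: float = 1.414) -> float:
    """UCB-style score that counts in-flight ('unobserved') simulations in addition
    to completed ones, so parallel workers spread across children.
    Sketch only; the paper's exact formula may differ."""
    n = completed + ongoing
    parent_n = parent_completed + parent_ongoing
    if n == 0:
        return float("inf")  # always expand unvisited children first
    mean = total_reward / max(completed, 1)  # value estimate uses completed rollouts
    explore = c * math.sqrt(math.log(parent_n) / n)
    return mean + explore

# Example: two children with identical completed statistics; the one with an
# ongoing rollout already assigned gets a lower score, steering workers apart.
print(ucb_score(3.0, completed=5, ongoing=0, parent_completed=10, parent_ongoing=1))
print(ucb_score(3.0, completed=5, ongoing=1, parent_completed=10, parent_ongoing=1))
```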
