no code implementations • 18 Mar 2018 • Jinning Li, Si-Qi Liu, Mengyao Cao
Our scheme includes three models.
no code implementations • 17 Jan 2021 • Jinning Li, Liting Sun, Jianyu Chen, Masayoshi Tomizuka, Wei Zhan
To address this challenge, we propose a hierarchical behavior planning framework with a set of low-level safe controllers and a high-level reinforcement learning algorithm (H-CtRL) as a coordinator for the low-level controllers.
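The coordination pattern described above can be sketched in a few lines: a high-level policy selects among low-level safe controllers, each of which maps a state to a bounded action. This is a minimal illustration of the idea only; the controller names, the epsilon-greedy selection rule, and all parameters are assumptions, not the authors' H-CtRL implementation.

```python
import random

def brake_controller(state):
    # Trivially safe fallback: always decelerate at full authority.
    return -1.0

def cruise_controller(state):
    # Track an assumed 10 m/s target speed with a clipped proportional law.
    target = 10.0
    return max(-1.0, min(1.0, 0.5 * (target - state["speed"])))

# The set of low-level safe controllers the coordinator chooses from.
CONTROLLERS = [brake_controller, cruise_controller]

def high_level_policy(q_values, epsilon=0.1, rng=random):
    # Epsilon-greedy over controller indices: the RL "coordinator" picks
    # which safe controller to run, not the raw action itself.
    if rng.random() < epsilon:
        return rng.randrange(len(CONTROLLERS))
    return max(range(len(CONTROLLERS)), key=lambda i: q_values[i])

def act(state, q_values):
    # Deterministic (epsilon=0) for reproducible behavior in this sketch.
    idx = high_level_policy(q_values, epsilon=0.0)
    return CONTROLLERS[idx](state)
```

Because every low-level controller is safe by construction, any choice the high-level learner makes keeps the closed-loop system inside the safe set; the RL problem reduces to picking the best controller per situation.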
no code implementations • 18 Feb 2021 • Jiachen Li, Hengbo Ma, Zhihao Zhang, Jinning Li, Masayoshi Tomizuka
Because of frequent interactions and uncertainty in the scene evolution, the prediction system should perform relational reasoning over the different entities and provide a distribution of future trajectories for each agent, rather than a single point estimate.
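Predicting a distribution of future trajectories, as motivated above, can be illustrated by sampling from a mixture of behavior hypotheses (e.g., "yield" vs. "go" in an interactive scene). The two modes and their parameters below are illustrative assumptions, not the paper's model.

```python
import random

def sample_future(position, modes, rng):
    """Draw one future position from a mixture of behavior hypotheses.

    modes: list of (weight, (dx, dy), sigma) tuples; weights sum to 1.
    """
    r, acc = rng.random(), 0.0
    for weight, (dx, dy), sigma in modes:
        acc += weight
        if r <= acc:
            # Mean displacement of the chosen mode plus Gaussian noise.
            return (position[0] + dx + rng.gauss(0.0, sigma),
                    position[1] + dy + rng.gauss(0.0, sigma))
    # Numerical slack in the weights: fall back to the last mode's mean.
    return (position[0] + dx, position[1] + dy)

def predict_distribution(position, modes, n, seed=0):
    # A set of samples stands in for the predicted trajectory distribution.
    rng = random.Random(seed)
    return [sample_future(position, modes, rng) for _ in range(n)]
```

Downstream planners can then evaluate risk against the whole sample set instead of a single most-likely trajectory.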
no code implementations • 9 Nov 2021 • Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan
Reinforcement Learning (RL) has been shown effective in domains where the agent can learn policies by actively interacting with its operating environment.
no code implementations • 16 Oct 2022 • Ruijie Wang, Zheng Li, Dachun Sun, Shengzhong Liu, Jinning Li, Bing Yin, Tarek Abdelzaher
Second, the potential distribution shift from the initially observed facts to future facts calls for explicitly modeling the evolving characteristics of new entities.
no code implementations • COLING (WNUT) 2022 • Jinning Li, Shubhanshu Mishra, Ahmed El-Kishky, Sneha Mehta, Vivek Kulkarni
We refer to these annotations as Non-Textual Units (NTUs).
no code implementations • 13 Jun 2023 • Ruijie Wang, Baoyu Li, Yichen Lu, Dachun Sun, Jinning Li, Yuchen Yan, Shengzhong Liu, Hanghang Tong, Tarek F. Abdelzaher
State-of-the-art methods fall short in speculative reasoning ability: they assume that the correctness of a fact is determined solely by its presence in the KG, which makes them vulnerable to false-negative and false-positive issues.
no code implementations • 18 Sep 2023 • Jinning Li, Xinyi Liu, Banghua Zhu, Jiantao Jiao, Masayoshi Tomizuka, Chen Tang, Wei Zhan
GOLD distills an offline Decision Transformer (DT) policy into a lightweight policy network through guided online safe RL training, and outperforms both the offline DT policy and online safe RL algorithms.
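The distillation step underlying the entry above can be sketched as regressing a lightweight student policy toward a frozen teacher's actions. The linear student, the stand-in teacher, and plain per-sample SGD are illustrative assumptions; GOLD additionally guides the student with online safe RL, which this sketch omits.

```python
def teacher_policy(state):
    # Stand-in for the frozen offline (DT) teacher; assumed for illustration.
    return 2.0 * state - 1.0

def distill(states, lr=0.1, epochs=200):
    # Lightweight student: a(s) = w * s + b, trained to imitate the teacher.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for s in states:
            err = (w * s + b) - teacher_policy(s)
            w -= lr * err * s  # gradient of 0.5 * err**2 w.r.t. w
            b -= lr * err      # gradient of 0.5 * err**2 w.r.t. b
    return w, b
```

Since the teacher here is itself linear, the student recovers it almost exactly; in practice the point of distillation is that the student is far cheaper to evaluate at deployment time than the teacher.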
no code implementations • 11 Oct 2023 • Yuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka, Wei Zhan
We observe that, generally, a more diverse set of co-play agents during training enhances the generalization performance of the ego agent; however, this improvement varies across distinct scenarios and environments.
1 code implementation • 20 Oct 2023 • Chenkai Sun, Jinning Li, Yi R. Fung, Hou Pong Chan, Tarek Abdelzaher, ChengXiang Zhai, Heng Ji
Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury.
1 code implementation • 1 Oct 2021 • Jinning Li, Huajie Shao, Dachun Sun, Ruijie Wang, Yuchen Yan, Jinyang Li, Shengzhong Liu, Hanghang Tong, Tarek Abdelzaher
Inspired by total correlation in information theory, we propose the Information-Theoretic Variational Graph Auto-Encoder (InfoVGAE) that learns to project both users and content items (e.g., posts that represent user views) into an appropriate disentangled latent space.
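Total correlation, the quantity the InfoVGAE objective draws on, measures the statistical dependence among latent dimensions: TC(z) = sum_i H(z_i) - H(z), which is zero exactly when the dimensions are independent. For a Gaussian latent it has a closed form in the covariance; the 2-D restriction and the example covariances are assumptions chosen to keep the sketch dependency-free.

```python
import math

def total_correlation_2d(cov):
    """Total correlation of a 2-D Gaussian with covariance matrix `cov`.

    For a Gaussian, TC = 0.5 * (sum_i log var_i - log det cov): zero for
    a diagonal covariance, positive whenever dimensions are correlated.
    """
    (s11, s12), (s21, s22) = cov
    det = s11 * s22 - s12 * s21
    return 0.5 * math.log(s11 * s22 / det)
```

Penalizing this quantity during training pushes the encoder toward latent dimensions that vary independently, which is what makes the learned space "disentangled".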
1 code implementation • 25 May 2023 • Chenkai Sun, Jinning Li, Hou Pong Chan, ChengXiang Zhai, Heng Ji
Our analysis shows that the best-performing models are capable of predicting responses that are consistent with the personas, and as a byproduct, the task formulation also enables many interesting applications in the analysis of social network groups and their opinions, such as the discovery of extreme opinion groups.