no code implementations • 15 Apr 2025 • Quanyu Long, Jianda Chen, Zhengyuan Liu, Nancy F. Chen, Wenya Wang, Sinno Jialin Pan
Large Language Models (LLMs) have demonstrated remarkable capabilities across numerous tasks, yet they often rely on external context to handle complex tasks.
no code implementations • 24 Mar 2025 • Wen Zheng Terence Ng, Jianda Chen, Yuan Xu, Tianwei Zhang
This work tackles the challenge of personalizing trajectories generated by automated decision-making systems, introducing a resource-efficient approach that adapts rapidly to individual users' preferences.
1 code implementation • 7 Mar 2025 • Chengqi Zheng, Haiyan Yin, Jianda Chen, Terence Ng, Yew-Soon Ong, Ivor Tsang
In this paper, we introduce SSDE, a novel structure-based approach that enhances plasticity through a fine-grained allocation strategy with Structured Sparsity and Dormant-guided Exploration.
no code implementations • 21 Feb 2025 • Zichen Chen, Jiaao Chen, Jianda Chen, Misha Sra
Current financial LLM agent benchmarks are inadequate.
1 code implementation • 9 Nov 2024 • Jianda Chen, Wen Zheng Terence Ng, Zichen Chen, Sinno Jialin Pan, Tianwei Zhang
SCR augments state metric-based representations by incorporating extensive temporal information into the update step of bisimulation metric learning.
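The bisimulation metric that this line of work builds on can be illustrated in a toy setting. The sketch below is a hypothetical simplification (deterministic transitions, tabular states, fixed-point iteration), not the SCR algorithm itself: it computes d(i, j) = |r_i − r_j| + γ·d(s'_i, s'_j), so states with identical rewards and successors end up at distance zero.

```python
import numpy as np

def bisimulation_metric(rewards, next_state, gamma=0.9, iters=200):
    """Fixed-point iteration for the bisimulation metric on a
    deterministic MDP: d(i, j) = |r_i - r_j| + gamma * d(s'_i, s'_j).
    Deterministic transitions sidestep the Wasserstein term of the
    general definition; this is an illustration, not SCR."""
    n = len(rewards)
    d = np.zeros((n, n))
    for _ in range(iters):
        reward_diff = np.abs(rewards[:, None] - rewards[None, :])
        d = reward_diff + gamma * d[np.ix_(next_state, next_state)]
    return d

# Toy 3-state chain: states 0 and 1 have equal rewards and the same
# successor, so they are behaviorally identical.
rewards = np.array([1.0, 1.0, 0.0])
next_state = np.array([2, 2, 2])  # every state transitions to state 2
d = bisimulation_metric(rewards, next_state)
print(d[0, 1])  # -> 0.0: states 0 and 1 are indistinguishable
```

States that behave alike collapse to the same point under such a metric, which is why metric-based representations generalize across visually distinct but behaviorally equivalent observations.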
no code implementations • 16 Oct 2024 • Wen Zheng Terence Ng, Jianda Chen, Tianwei Zhang
Offline Reinforcement Learning (RL) offers an attractive alternative to interactive data acquisition by leveraging pre-existing datasets.
no code implementations • 16 Oct 2024 • Wen Zheng Terence Ng, Jianda Chen, Sinno Jialin Pan, Tianwei Zhang
Deploying a safe mobile robot policy in scenarios with human pedestrians is challenging due to their unpredictable movements.
no code implementations • 14 Aug 2024 • Quanyu Long, Jianda Chen, Wenya Wang, Sinno Jialin Pan
In-context learning (ICL) has emerged as a significant capability of Large Language Models (LLMs).
1 code implementation • 15 Nov 2023 • Zichen Chen, Jianda Chen, Ambuj Singh, Misha Sra
Large Language Models (LLMs) have achieved remarkable success in natural language tasks, yet understanding their reasoning processes remains a significant challenge.
no code implementations • 29 Mar 2023 • Zichen Chen, Jianda Chen, YuanYuan Chen, Han Yu, Ambuj K Singh, Misha Sra
By comparing the explanations generated by LMExplainer with those of other models, we show that our approach offers more comprehensive and clearer explanations of the reasoning process.
1 code implementation • ICLR 2022 • Jianda Chen, Sinno Jialin Pan
How to learn an effective reinforcement learning model for control tasks from high-dimensional visual observations is a practical and challenging problem.
no code implementations • NeurIPS 2020 • Jianda Chen, Shangyu Chen, Sinno Jialin Pan
In this paper, we propose a deep reinforcement learning (DRL) based framework to efficiently perform runtime channel pruning on convolutional neural networks (CNNs).
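The core idea of runtime channel pruning can be sketched without the DRL machinery. In the toy sketch below, a cheap per-input heuristic (mean channel activation) stands in for the learned pruning agent, and a 1x1 convolution stands in for a full conv layer; both substitutions are assumptions for illustration, not the paper's framework.

```python
import numpy as np

def runtime_prune_channels(x, weights, keep_ratio=0.5):
    """Runtime channel pruning for a single 1x1 conv layer.
    A real system would let a learned agent choose the mask per
    input; here per-channel input saliency is a stand-in policy.
    x: (C_in, H, W) input, weights: (C_out, C_in)."""
    c_in = x.shape[0]
    scores = np.abs(x).mean(axis=(1, 2))   # per-channel saliency
    k = max(1, int(c_in * keep_ratio))
    keep = np.argsort(scores)[-k:]         # channels worth computing
    # Only kept input channels are evaluated; the rest are skipped,
    # saving compute at inference time.
    y = np.tensordot(weights[:, keep], x[keep], axes=([1], [0]))
    return y, keep
```

Because the mask is chosen per input at inference time, easy inputs can be processed with fewer channels than hard ones, which is the motivation for learning the decision with RL rather than fixing a static pruned architecture.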
no code implementations • 25 Sep 2019 • Haiyan Yin, Jianda Chen, Sinno Jialin Pan
First, we propose a new reasoning paradigm to infer the novelty for the partially observable states, which is built upon forward dynamics prediction.
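A minimal version of novelty-from-forward-dynamics can be written in a few lines. The sketch below is an assumed simplification (a linear forward model updated by one SGD step per transition), not the paper's paradigm: the intrinsic bonus is the model's prediction error, so frequently seen transitions become predictable and their bonus decays.

```python
import numpy as np

class ForwardDynamicsNovelty:
    """Intrinsic reward = squared prediction error of a linear
    forward dynamics model f([s; a]) -> s', updated online by SGD.
    Often-visited transitions become predictable, so their bonus
    shrinks -- a toy sketch, not the paper's exact method."""

    def __init__(self, state_dim, action_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((state_dim, state_dim + action_dim))
        self.lr = lr

    def bonus(self, s, a, s_next):
        x = np.concatenate([s, a])
        err = self.W @ x - s_next             # prediction residual
        self.W -= self.lr * np.outer(err, x)  # one SGD step on the model
        return float(err @ err)               # novelty = prediction error
```

Repeating the same transition drives its bonus toward zero, while an unseen transition is mispredicted and rewarded, steering exploration toward novel parts of the state space.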
no code implementations • 3 Jul 2017 • Haiyan Yin, Jianda Chen, Sinno Jialin Pan
In deep reinforcement learning (RL) tasks, an efficient exploration mechanism should encourage an agent to take actions that lead to less frequent states, which may yield a higher cumulative future return.