Search Results for author: Jialong Wu

Found 12 papers, 8 papers with code

DINER: Debiasing Aspect-based Sentiment Analysis with Multi-variable Causal Inference

1 code implementation • 2 Mar 2024 • Jialong Wu, Linhai Zhang, Deyu Zhou, Guoqiang Xu

However, most existing debiasing methods focus on single-variable causal inference, which is unsuitable for ABSA with its two input variables (the target aspect and the review).

Aspect-Based Sentiment Analysis • Aspect-Based Sentiment Analysis (ABSA) +3

STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models

1 code implementation • 2 Mar 2024 • Linhai Zhang, Jialong Wu, Deyu Zhou, Guoqiang Xu

For poor model calibration, we incorporate the regularization method during LoRA training to keep the model from being over-confident, and the Monte-Carlo dropout mechanism is employed to enhance the uncertainty estimation.

Active Learning • Few-Shot Learning
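The abstract above mentions Monte-Carlo dropout for uncertainty estimation. A minimal sketch of the general technique, not STAR's actual implementation (the linear model, dropout rate, and sample count here are all illustrative): keep dropout active at inference, run several stochastic forward passes, and use the spread across passes as an uncertainty signal that active learning can prioritize.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, n_samples=100, p_drop=0.1):
    """Monte-Carlo dropout: keep dropout active at inference and
    average many stochastic forward passes; the standard deviation
    across passes serves as an uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        # Fresh dropout mask each pass (inverted dropout scaling).
        mask = rng.random(weights.shape) >= p_drop
        dropped = weights * mask / (1.0 - p_drop)
        preds.append(x @ dropped)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

Inputs with high predictive standard deviation are the ones a dynamic active-learning loop would select for labeling.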

Constituency Parsing using LLMs

no code implementations • 30 Oct 2023 • Xuefeng Bai, Jialong Wu, Yulong Chen, Zhongqing Wang, Yue Zhang

Constituency parsing is a fundamental yet unsolved natural language processing task.

Constituency Parsing

HarmonyDream: Task Harmonization Inside World Models

no code implementations • 30 Sep 2023 • Haoyu Ma, Jialong Wu, Ningya Feng, Chenjun Xiao, Dong Li, Jianye Hao, Jianmin Wang, Mingsheng Long

Model-based reinforcement learning (MBRL) holds the promise of sample-efficient learning by utilizing a world model, which models how the environment works and typically encompasses components for two tasks: observation modeling and reward modeling.

Atari Games 100k • Model-based Reinforcement Learning +1
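The abstract describes world model training as two tasks, observation modeling and reward modeling, that need to be balanced. A common recipe for balancing two losses with learnable scales can be sketched as follows; this is only in the spirit of loss harmonization, not HarmonyDream's exact formulation, and all names here are illustrative:

```python
import numpy as np

def harmonized_loss(obs_loss, reward_loss, log_sigma_obs, log_sigma_rew):
    """Illustrative two-task loss balancing: each task loss is scaled
    by a learnable sigma_i, with a regularizer that keeps sigma_i from
    growing unboundedly, so neither task dominates training."""
    total = 0.0
    for loss, log_sigma in [(obs_loss, log_sigma_obs),
                            (reward_loss, log_sigma_rew)]:
        sigma = np.exp(log_sigma)
        total += loss / sigma + np.log(1.0 + sigma)
    return total
```

During training the `log_sigma` parameters would be optimized jointly with the model, letting the harder task receive a larger effective weight automatically.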

Agents: An Open-source Framework for Autonomous Language Agents

1 code implementation • 14 Sep 2023 • Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, Shiding Zhu, Jiyu Chen, Wentao Zhang, Xiangru Tang, Ningyu Zhang, Huajun Chen, Peng Cui, Mrinmaya Sachan

Recent advances on large language models (LLMs) enable researchers and developers to build autonomous language agents that can automatically solve various tasks and interact with environments, humans, and other agents using natural language interfaces.

Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning

1 code implementation • NeurIPS 2023 • Jialong Wu, Haoyu Ma, Chaoyi Deng, Mingsheng Long

To tackle this issue, we introduce Contextualized World Models (ContextWM) that explicitly separate context and dynamics modeling to overcome the complexity and diversity of in-the-wild videos and facilitate knowledge transfer between distinct scenes.

Autonomous Driving • Model-based Reinforcement Learning +3
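The abstract's core idea, separating context from dynamics modeling, can be sketched at a high level: encode static scene context from a reference frame once, then condition the latent dynamics on it. This is a hypothetical toy illustration of the separation, not ContextWM's architecture; every name and shape here is an assumption.

```python
import numpy as np

def step_with_context(context_frame, latent, action, W_ctx, W_dyn):
    """Toy contextualized transition: a context encoder summarizes the
    static scene from a reference frame, and the dynamics model predicts
    the next latent from (latent, action, context), keeping the two
    modeling problems separate."""
    ctx = np.tanh(W_ctx @ context_frame)          # static scene context
    inp = np.concatenate([latent, action, ctx])   # dynamics conditioned on context
    return np.tanh(W_dyn @ inp)                   # next latent state
```

Because the context vector is computed once per scene, the dynamics model can focus on temporal structure shared across diverse in-the-wild videos.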

CLIPood: Generalizing CLIP to Out-of-Distributions

1 code implementation • 2 Feb 2023 • Yang Shu, Xingzhuo Guo, Jialong Wu, Ximei Wang, Jianmin Wang, Mingsheng Long

This paper aims at generalizing CLIP to out-of-distribution test data on downstream tasks.

Out-of-Dynamics Imitation Learning from Multimodal Demonstrations

1 code implementation • 13 Nov 2022 • Yiwen Qiu, Jialong Wu, Zhangjie Cao, Mingsheng Long

Existing imitation learning works mainly assume that the demonstrator who collects demonstrations shares the same dynamics as the imitator.

Imitation Learning

Real-Time And Robust 3D Object Detection with Roadside LiDARs

no code implementations • 11 Jul 2022 • Walter Zimmer, Jialong Wu, Xingcheng Zhou, Alois C. Knoll

This work aims to address the challenges in autonomous driving by focusing on the 3D perception of the environment using roadside LiDARs.

Autonomous Driving • Domain Adaptation +3

Flowformer: Linearizing Transformers with Conservation Flows

1 code implementation • 13 Feb 2022 • Haixu Wu, Jialong Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long

By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions without using specific inductive biases.

Ranked #4 on D4RL

D4RL • Offline RL +2
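The conservation idea in the abstract, fixing the incoming flow of sinks (queries) and the outgoing flow of sources (keys), can be illustrated with a heavily simplified linear-attention sketch. This is not the paper's exact Flow-Attention; the softplus feature map and the bare normalizations are stand-ins for its competition and allocation mechanisms.

```python
import numpy as np

def softplus(x):
    # Non-negative feature map so "flows" are well-defined.
    return np.log1p(np.exp(x))

def flow_attention_sketch(Q, K, V, eps=1e-6):
    """Simplified flow-conserved linear attention: each source's
    contribution is normalized by its total outgoing flow, and each
    sink's aggregation by its total incoming flow, avoiding a full
    n x m softmax."""
    phi_q, phi_k = softplus(Q), softplus(K)
    incoming = phi_q @ phi_k.sum(axis=0) + eps   # flow into each sink
    outgoing = phi_k @ phi_q.sum(axis=0) + eps   # flow out of each source
    # Linear complexity: aggregate (d x d_v) once, then project per sink.
    aggregated = (phi_k / outgoing[:, None]).T @ V
    return (phi_q / incoming[:, None]) @ aggregated
```

The key point is that the pairwise attention matrix is never materialized: cost scales with sequence length times feature dimension rather than with length squared.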

Supported Policy Optimization for Offline Reinforcement Learning

3 code implementations • 13 Feb 2022 • Jialong Wu, Haixu Wu, Zihan Qiu, Jianmin Wang, Mingsheng Long

Policy constraint methods to offline reinforcement learning (RL) typically utilize parameterization or regularization that constrains the policy to perform actions within the support set of the behavior policy.

Offline RL • reinforcement-learning +1
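The support constraint described above can be sketched as a density-threshold penalty on the actor objective: maximize the learned Q-value while penalizing actions whose estimated behavior-policy density falls below a threshold, i.e. actions outside the dataset's support. The paper estimates this density with a learned model; here a precomputed log-density stands in, and all names and values are illustrative.

```python
import numpy as np

def actor_loss_with_support_penalty(q_value, log_behavior_density,
                                    density_threshold=-4.0, lam=1.0):
    """Sketch of a density-based support constraint for offline RL:
    the hinge penalty is zero while the action stays inside the
    estimated support (log-density above the threshold) and grows
    linearly as the action moves out of support."""
    penalty = np.maximum(0.0, density_threshold - log_behavior_density)
    return -q_value + lam * penalty
```

Unlike parameterization- or divergence-based constraints, this keeps the policy free to pick any in-support action, including ones rarely taken by the behavior policy.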
