no code implementations • 7 Dec 2018 • Zhuo Xu, Chen Tang, Masayoshi Tomizuka
Although deep reinforcement learning (deep RL) methods offer many strengths for autonomous driving, real-world deep RL applications in autonomous driving have been slowed by the modeling gap between the source (training) domain and the target (deployment) domain.
2 code implementations • 26 Oct 2019 • Daniel Seita, David Chan, Roshan Rao, Chen Tang, Mandi Zhao, John Canny
Learning from demonstrations is a popular tool for accelerating and reducing the exploration requirements of reinforcement learning.
no code implementations • 11 Nov 2019 • Chen Tang, Jianyu Chen, Masayoshi Tomizuka
Current methods for long-term trajectory prediction cannot guarantee the physical feasibility of the predicted distributions.
no code implementations • 21 Oct 2020 • Chen Tang, Wenyu Sun, Zhuqing Yuan, Yongpan Liu
To accelerate deep CNN models, this paper proposes a novel spatially adaptive framework that can dynamically generate pixel-wise sparsity according to the input image.
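A minimal numpy sketch of the underlying idea (the saliency measure, `keep_ratio`, and function names are assumptions for illustration, not the paper's implementation): derive a pixel-wise binary mask from the input and gate the feature map so a sparse kernel could skip masked-out locations.

```python
import numpy as np

def pixelwise_sparsity_mask(feature_map, keep_ratio=0.5):
    """Build a binary mask keeping only the most salient pixels.

    feature_map: (C, H, W) activations; saliency here is the per-pixel
    L1 norm across channels. keep_ratio is an assumed hyperparameter.
    """
    saliency = np.abs(feature_map).sum(axis=0)           # (H, W)
    threshold = np.quantile(saliency, 1.0 - keep_ratio)  # keep top fraction
    return (saliency >= threshold).astype(feature_map.dtype)

def sparse_apply(feature_map, mask):
    """Gate the feature map: masked-out pixels contribute zero, so a
    sparse convolution kernel could skip computing them entirely."""
    return feature_map * mask[None, :, :]

x = np.random.randn(8, 16, 16)
m = pixelwise_sparsity_mask(x, keep_ratio=0.25)
y = sparse_apply(x, m)
```

In a real dynamic-inference setting the mask would be produced by a small learned predictor rather than a fixed saliency rule; this sketch only shows the gating mechanics.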
no code implementations • 23 Feb 2021 • Chen Tang, Nishan Srishankar, Sujitha Martin, Masayoshi Tomizuka
Explainability is essential for autonomous vehicles and other robotic systems that interact with humans and other objects during operation.
no code implementations • 9 Nov 2021 • Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan
Reinforcement Learning (RL) has been shown effective in domains where the agent can learn policies by actively interacting with its operating environment.
no code implementations • NeurIPS 2021 • Chen Tang, Wei Zhan, Masayoshi Tomizuka
In this work, we argue that one of the typical formulations of VAEs in multi-agent modeling suffers from an issue we refer to as social posterior collapse, i.e., the model is prone to ignoring historical social context when predicting the future trajectory of an agent.
1 code implementation • 6 Mar 2022 • Chen Tang, Frank Guerin, Chenghua Lin
In recent years, considerable research has been dedicated to the application of neural models in the field of natural language generation (NLG).
1 code implementation • 16 Mar 2022 • Chen Tang, Kai Ouyang, Zhi Wang, Yifei Zhu, YaoWei Wang, Wen Ji, Wenwu Zhu
For example, MPQ search on ResNet18 with our indicators takes only 0.06 s, which improves time efficiency exponentially compared to iterative search methods.
no code implementations • 28 Mar 2022 • Lingfeng Sun, Chen Tang, Yaru Niu, Enna Sachdeva, Chiho Choi, Teruhisa Misu, Masayoshi Tomizuka, Wei Zhan
To address these issues, we propose a novel approach to avoid KL vanishing and induce an interpretable interactive latent space with pseudo labels.
no code implementations • 19 Apr 2022 • Chen Tang, Wei Zhan, Masayoshi Tomizuka
Moreover, to properly evaluate an IBP model with offline datasets, we propose a Shapley-value-based metric to verify if the prediction model satisfies the inherent temporal independence of an interventional distribution.
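The paper's metric is built on Shapley values; as background, the generic computation attributes an outcome to individual inputs by averaging their marginal contributions over all orderings. The sketch below shows that generic computation only, with a toy characteristic function invented for the example (it is not the paper's metric).

```python
from itertools import permutations

def shapley_values(players, value_fn):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings (tractable for small player sets)."""
    players = list(players)
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value_fn(frozenset(coalition))
            coalition.add(p)
            phi[p] += value_fn(frozenset(coalition)) - before
    return {p: v / len(orders) for p, v in phi.items()}

# Toy characteristic function: each input contributes 1, and inputs
# "a" and "b" together add a bonus of 1 (purely illustrative).
v = lambda S: len(S) + (1.0 if {"a", "b"} <= S else 0.0)
phi = shapley_values(["a", "b", "c"], v)
```

For this toy game the bonus splits evenly between "a" and "b", so their Shapley values exceed that of "c", and the values sum to the grand-coalition payoff.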
no code implementations • 21 Apr 2022 • Chen Tang, Haoyu Zhai, Kai Ouyang, Zhi Wang, Yifei Zhu, Wenwu Zhu
We propose to feed different data samples with varying quantization schemes to achieve a data-dependent dynamic inference, at a fine-grained layer level.
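As a loose illustration of layer-level data-dependent quantization (the selection rule, thresholds, and bit-widths below are invented for the example, not the paper's policy), a layer can pick a bit-width per input sample and quantize its weights accordingly:

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    max_abs = np.abs(w).max()
    scale = max_abs / levels if max_abs > 0 else 1.0
    return np.round(w / scale) * scale

def choose_bits(x, easy_bits=4, hard_bits=8, var_threshold=1.0):
    """Toy data-dependent rule: low-variance ('easy') inputs get fewer
    bits. Threshold and bit choices are illustrative assumptions."""
    return easy_bits if x.var() < var_threshold else hard_bits

def dynamic_layer(x, w):
    bits = choose_bits(x)
    return x @ quantize(w, bits), bits

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 2))
easy = np.full((1, 3), 0.01)          # near-constant input -> low bits
hard = np.array([[-5.0, 0.0, 5.0]])   # high-variance input -> high bits
_, b_easy = dynamic_layer(easy, w)
_, b_hard = dynamic_layer(hard, w)
```

The point of the sketch is only the control flow: the quantization scheme applied to the layer varies with the sample, trading accuracy for compute on easy inputs.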
1 code implementation • 21 Apr 2022 • Chenfeng Xu, Tian Li, Chen Tang, Lingfeng Sun, Kurt Keutzer, Masayoshi Tomizuka, Alireza Fathi, Wei Zhan
It is hard to replicate these approaches in trajectory forecasting due to the lack of adequate trajectory data (e.g., 34K samples in the nuScenes dataset).
1 code implementation • 19 Oct 2022 • Henglin Huang, Chen Tang, Tyler Loakman, Frank Guerin, Chenghua Lin
Despite the success of prior work applying pre-trained models, current neural models for Chinese stories still struggle to generate high-quality long narratives.
1 code implementation • 19 Oct 2022 • Chen Tang, Zhihao Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
To improve the performance of long text generation, recent studies have leveraged automatically planned event structures (i.e., storylines) to guide story generation.
1 code implementation • 22 Oct 2022 • Chen Tang, Chenghua Lin, Henglin Huang, Frank Guerin, Zhihao Zhang
One of the key challenges of automatic story generation is how to generate a long narrative that can maintain fluency, relevance, and coherence.
1 code implementation • 27 Oct 2022 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
In this paper, we propose a novel framework to improve medical dialogue generation by considering features centered on domain-specific terminology.
no code implementations • 14 Feb 2023 • Chen Tang, Kai Ouyang, Zenghao Chai, Yunpeng Bai, Yuan Meng, Zhi Wang, Wenwu Zhu
This general, dataset-independent property allows us to search for the MPQ policy over a rather small-scale proxy dataset; the resulting policy can then be directly used to quantize the model trained on a large-scale dataset.
1 code implementation • ICCV 2023 • Chen Tang, Li Lyna Zhang, Huiqiang Jiang, Jiahang Xu, Ting Cao, Quanlu Zhang, Yuqing Yang, Zhi Wang, Mao Yang
However, prior supernet training methods that rely on uniform sampling suffer from the gradient conflict issue: the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G FLOPs), leading to different optimization directions and inferior performance.
no code implementations • 24 Mar 2023 • Wei-Jer Chang, Chen Tang, Chenran Li, Yeping Hu, Masayoshi Tomizuka, Wei Zhan
To ensure that autonomous vehicles take safe and efficient maneuvers in different interactive traffic scenarios, we should be able to evaluate autonomous vehicles against reactive agents with different social characteristics in the simulation environment.
no code implementations • 3 Apr 2023 • Kai Ouyang, Wenhao Zheng, Chen Tang, Xuanji Xiao, Hai-Tao Zheng
To tackle this issue, we argue that a trade-off should be achieved between the introduction of large amounts of auxiliary information and the protection of valuable information related to CVR.
no code implementations • 25 Apr 2023 • Hongyang Jiang, Jingqi Huang, Chen Tang, Xiaoqing Zhang, Mengdi Gao, Jiang Liu
Concretely, the HITL CAD system was built on multiple instance learning (MIL), where eye-tracking gaze maps help to select diagnosis-related instances.
1 code implementation • 10 May 2023 • Hongbo Zhang, Chen Tang, Tyler Loakman, Chenghua Lin, Stefan Goetze
In this paper, we propose a novel context-aware graph-attention model (Context-aware GAT), which can effectively incorporate global features of relevant knowledge graphs based on a context-enhanced knowledge aggregation process.
no code implementations • 12 May 2023 • Kai Ouyang, Chen Tang, Wenhao Zheng, Xiangjin Xie, Xuanji Xiao, Jian Dong, Hai-Tao Zheng, Zhi Wang
To address this issue, we propose using knowledge soft integration to balance the utilization of multimodal features and the curse of knowledge problem it brings about.
1 code implementation • 6 Jun 2023 • Tyler Loakman, Chen Tang, Chenghua Lin
Previous work in phonetically-grounded language generation has mainly focused on domains such as lyrics and poetry.
no code implementations • 14 Jun 2023 • Ce Hao, Catherine Weaver, Chen Tang, Kenta Kawamoto, Masayoshi Tomizuka, Wei Zhan
Hierarchical reinforcement learning (RL) can accelerate long-horizon decision-making by temporally abstracting a policy into multiple levels.
no code implementations • NeurIPS 2023 • Chenran Li, Chen Tang, Haruki Nishimura, Jean Mercat, Masayoshi Tomizuka, Wei Zhan
Specifically, we formulate the customization problem as a Markov Decision Process (MDP) with a reward function that combines 1) the inherent reward of the demonstration; and 2) the add-on reward specified by the downstream task.
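In spirit, the customization objective combines the two reward terms additively. The sketch below uses a placeholder weighting and toy reward functions (both are assumptions for illustration, not the paper's formulation):

```python
def combined_reward(state, action, r_demo, r_task, weight=1.0):
    """Customization reward: the inherent reward of the demonstration
    plus a weighted add-on reward for the downstream task (the weight
    is an assumed trade-off hyperparameter)."""
    return r_demo(state, action) + weight * r_task(state, action)

# Toy rewards, invented for the example: the demonstration reward
# prefers small actions; the task reward pays out on reaching state 5.
r_demo = lambda s, a: -abs(a)
r_task = lambda s, a: 1.0 if s + a == 5 else 0.0

r = combined_reward(3, 2, r_demo, r_task, weight=10.0)
```

With these toy functions, the action `a = 2` from state `s = 3` costs `-2` under the demonstration reward but earns the weighted add-on bonus `10.0` for reaching the goal, giving a combined reward of `8.0`.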
1 code implementation • 28 Jun 2023 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin
Further analysis also shows that our representation learning framework can fill the semantic gap by coagulating representations of both text and graph knowledge.
no code implementations • 18 Sep 2023 • Yiheng Li, Seth Z. Zhao, Chenfeng Xu, Chen Tang, Chenran Li, Mingyu Ding, Masayoshi Tomizuka, Wei Zhan
We propose to augment both HD maps and trajectories and apply pre-training strategies on top of them.
no code implementations • 18 Sep 2023 • Jinning Li, Xinyi Liu, Banghua Zhu, Jiantao Jiao, Masayoshi Tomizuka, Chen Tang, Wei Zhan
GOLD distills an offline DT policy into a lightweight policy network through guided online safe RL training, which outperforms both the offline DT policy and online safe RL algorithms.
1 code implementation • 19 Sep 2023 • Bohao Yang, Chen Tang, Chenghua Lin
In this paper, we propose a novel framework that models dialogues between patients and healthcare professionals using AMR graphs, where the neural networks incorporate textual and graphical knowledge with a dual attention mechanism.
no code implementations • 22 Sep 2023 • Bohao Yang, Chen Tang, Kun Zhao, Chenghao Xiao, Chenghua Lin
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of natural language processing tasks.
no code implementations • 11 Oct 2023 • Yuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka, Wei Zhan
We observe that, generally, a more diverse set of co-play agents during training enhances the generalization performance of the ego agent; however, this improvement varies across distinct scenarios and environments.
1 code implementation • 24 Oct 2023 • Chen Tang, Shun Wang, Tomas Goldsack, Chenghua Lin
Abstracts derived from biomedical literature possess distinct domain-specific characteristics, including specialised writing styles and biomedical terminologies, which necessitate a deep understanding of the related literature.
1 code implementation • 24 Oct 2023 • Tomas Goldsack, Zhihao Zhang, Chen Tang, Carolina Scarton, Chenghua Lin
Previous approaches for automatic lay summarisation are exclusively reliant on the source article that, given it is written for a technical audience (e.g., researchers), is unlikely to explicitly define all technical concepts or state all of the background information that is relevant for a lay audience.
no code implementations • 31 Oct 2023 • Chen Tang, Frank Guerin, Chenghua Lin
This paper presents a tool called "ACL Anthology Helper".
no code implementations • 3 Nov 2023 • Tommaso Benciolini, Chen Tang, Marion Leibold, Catherine Weaver, Masayoshi Tomizuka, Wei Zhan
In the exploration phase, an MPC collects diverse data by balancing the racing objectives and the exploration criterion; the GP is then re-trained.
no code implementations • 13 Nov 2023 • Rongwei Lu, Yutong Jiang, Yinan Mao, Chen Tang, Bin Chen, Laizhong Cui, Zhi Wang
Assigning varying compression ratios to workers with distinct data distributions and volumes is thus a promising solution.
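One plausible allocation rule in this spirit (a sketch under assumed semantics, not the paper's algorithm) gives workers holding more data a lighter compression ratio, subject to a total communication budget:

```python
def assign_compression_ratios(data_volumes, total_budget):
    """Allocate per-worker compression ratios proportional to data volume.

    data_volumes: number of samples held by each worker.
    total_budget: sum of ratios allowed across all workers, where a
    ratio is the fraction of gradient entries kept (both quantities
    are illustrative assumptions). Ratios are capped at 1.0.
    """
    total = sum(data_volumes)
    return [min(1.0, total_budget * v / total) for v in data_volumes]

ratios = assign_compression_ratios([100, 300, 600], total_budget=1.5)
```

Here the worker with the largest shard keeps the largest fraction of its gradient, so the heterogeneity of the data distribution is reflected in the communication schedule rather than ignored by a uniform ratio.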
no code implementations • 14 Nov 2023 • Hongyang Jiang, Mengdi Gao, Zirong Liu, Chen Tang, Xiaoqing Zhang, Shuai Jiang, Wu Yuan, Jiang Liu
In this work, we propose a human-in-the-loop, label-free early DR diagnosis framework called GlanceSeg, based on SAM.
1 code implementation • 19 Nov 2023 • Chen Tang, Tyler Loakman, Chenghua Lin
These results underscore the effectiveness of our model in leveraging context and event features to improve the quality of generated narratives.
no code implementations • 3 Jan 2024 • Chen Tang, Yuan Meng, Jiacheng Jiang, Shuzhao Xie, Rongwei Lu, Xinzhu Ma, Zhi Wang, Wenwu Zhu
Conversely, mixed-precision quantization (MPQ) is advocated to compress the model effectively by allocating heterogeneous bit-width for layers.
no code implementations • 22 Feb 2024 • Catherine Weaver, Chen Tang, Ce Hao, Kenta Kawamoto, Masayoshi Tomizuka, Wei Zhan
Thus, we propose BeTAIL: Behavior Transformer Adversarial Imitation Learning, which combines a Behavior Transformer (BeT) policy from human demonstrations with online AIL.
no code implementations • 20 Mar 2024 • Tyler Loakman, Chen Tang, Chenghua Lin
Previous work in phonologically and phonetically grounded language generation has mainly focused on domains such as puns and poetry.
no code implementations • 1 Apr 2024 • Bohao Yang, Kun Zhao, Chen Tang, Liang Zhan, Chenghua Lin
Trainable evaluation metrics are commonly trained with true positive and randomly selected negative responses, resulting in a tendency for them to assign a higher score to the responses that share higher content similarity with a given context.
no code implementations • 8 Apr 2024 • Qun Li, Yuan Meng, Chen Tang, Jiacheng Jiang, Zhi Wang
Quantization is a promising technique for reducing the bit-width of deep models to improve their runtime performance and storage efficiency, and thus becomes a fundamental step for deployment.
no code implementations • 15 Apr 2024 • Haojun Sun, Chen Tang, Zhi Wang, Yuan Meng, Jingyan Jiang, Xinzhu Ma, Wenwu Zhu
Diffusion models have emerged as preeminent contenders in the realm of generative models.
no code implementations • ECCV 2020 • Wenyu Sun, Chen Tang, Weigui Li, Zhuqing Yuan, Huazhong Yang, Yongpan Liu
This paper proposes a deep video compression method to simultaneously encode multiple frames with Frame-Conv3D and differential modulation.