Search Results for author: Chen Tang

Found 14 papers, 1 paper with code

Arbitrary Bit-width Network: A Joint Layer-Wise Quantization and Adaptive Inference Approach

no code implementations • 21 Apr 2022 • Chen Tang, Haoyu Zhai, Kai Ouyang, Zhi Wang, Yifei Zhu, Wenwu Zhu

We propose to feed different data samples through varying quantization schemes to achieve data-dependent dynamic inference at a fine-grained, layer-wise level.

Quantization
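The data-dependent, layer-wise idea above can be sketched as follows. This is a toy illustration with hypothetical names (`quantize`, `pick_bits`, a three-layer MLP) and a made-up input-norm policy, not the authors' implementation:

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of a tensor to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    m = np.max(np.abs(x))
    scale = m / qmax if m > 0 else 1.0
    return np.round(x / scale) * scale

def pick_bits(sample, layer_idx):
    """Toy policy: spend more bits on 'hard' (high-norm) inputs."""
    return 8 if np.linalg.norm(sample) > 1.0 else 4

def dynamic_inference(sample, layer_weights):
    """Run a toy ReLU MLP, choosing each layer's bit-width per input sample."""
    h = sample
    for i, w in enumerate(layer_weights):
        bits = pick_bits(sample, i)                  # data-dependent, per-layer choice
        h = np.maximum(quantize(w, bits) @ h, 0.0)   # quantized weights + ReLU
    return h

rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 16)) for _ in range(3)]
out = dynamic_inference(rng.standard_normal(16), weights)
```

In the paper's setting the per-sample scheme selection would itself be learned; here a fixed norm threshold stands in for that decision.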

PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map

no code implementations • 21 Apr 2022 • Chenfeng Xu, Tian Li, Chen Tang, Lingfeng Sun, Kurt Keutzer, Masayoshi Tomizuka, Alireza Fathi, Wei Zhan

It is hard to replicate these approaches in trajectory forecasting due to the lack of adequate trajectory data (e.g., 34K samples in the nuScenes dataset).

Contrastive Learning • Representation Learning • +1

Interventional Behavior Prediction: Avoiding Overly Confident Anticipation in Interactive Prediction

no code implementations • 19 Apr 2022 • Chen Tang, Wei Zhan, Masayoshi Tomizuka

Moreover, to properly evaluate an IBP model with offline datasets, we propose a Shapley-value-based metric to test whether the prediction model satisfies the inherent temporal independence of an interventional distribution.

Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance

no code implementations • 16 Mar 2022 • Chen Tang, Kai Ouyang, Zhi Wang, Yifei Zhu, YaoWei Wang, Wen Ji, Wenwu Zhu

The exponentially large discrete search space in mixed-precision quantization (MPQ) makes it hard to determine the optimal bit-width for each layer.

Quantization
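One common way to sidestep the exponential search space mentioned above is to score each layer's importance and allocate bit-widths greedily under a budget. A minimal sketch under that assumption, with a hypothetical `assign_bitwidths` helper (not the paper's learned method):

```python
def assign_bitwidths(importances, choices=(2, 4, 8), avg_budget=4.0):
    """Give more bits to more 'important' layers while keeping the
    average bit-width at or below avg_budget. Greedy stand-in for a
    full mixed-precision search."""
    n = len(importances)
    bits = {i: min(choices) for i in range(n)}       # start everyone at the floor
    budget = avg_budget * n - sum(bits.values())     # remaining bit budget
    for i in sorted(range(n), key=lambda i: -importances[i]):
        for b in sorted(choices, reverse=True):      # try the largest upgrade first
            extra = b - bits[i]
            if 0 < extra <= budget:
                bits[i] = b
                budget -= extra
                break
    return [bits[i] for i in range(n)]

# Most important layer (index 0) gets the most bits under a 4-bit average.
plan = assign_bitwidths([0.9, 0.1, 0.5])  # → [8, 2, 2]
```

In the paper the importance scores are learned end-to-end; here they are given as inputs, which is the simplifying assumption this sketch makes.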

Recent Advances in Neural Text Generation: A Task-Agnostic Survey

no code implementations • 6 Mar 2022 • Chen Tang, Frank Guerin, Yucheng Li, Chenghua Lin

In recent years, much effort has been devoted to applying neural models to the task of natural language generation.

Text Generation

Exploring Social Posterior Collapse in Variational Autoencoder for Interaction Modeling

no code implementations • NeurIPS 2021 • Chen Tang, Wei Zhan, Masayoshi Tomizuka

In this work, we argue that one of the typical formulations of VAEs in multi-agent modeling suffers from an issue we refer to as social posterior collapse, i.e., the model is prone to ignoring historical social context when predicting the future trajectory of an agent.

Graph Attention • Trajectory Forecasting

Dealing with the Unknown: Pessimistic Offline Reinforcement Learning

no code implementations • 9 Nov 2021 • Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan

Reinforcement Learning (RL) has been shown effective in domains where the agent can learn policies by actively interacting with its operating environment.

reinforcement-learning

Grounded Relational Inference: Domain Knowledge Driven Explainable Autonomous Driving

no code implementations • 23 Feb 2021 • Chen Tang, Nishan Srishankar, Sujitha Martin, Masayoshi Tomizuka

Explainability is essential for autonomous vehicles and other robotic systems interacting with humans and other objects during operation.

Autonomous Driving

Adaptive Pixel-wise Structured Sparse Network for Efficient CNNs

no code implementations • 21 Oct 2020 • Chen Tang, Wenyu Sun, Zhuqing Yuan, Yongpan Liu

To accelerate deep CNN models, this paper proposes a novel spatially adaptive framework that can dynamically generate pixel-wise sparsity according to the input image.

General Classification • Image Classification • +4

ZPD Teaching Strategies for Deep Reinforcement Learning from Demonstrations

2 code implementations • 26 Oct 2019 • Daniel Seita, David Chan, Roshan Rao, Chen Tang, Mandi Zhao, John Canny

Learning from demonstrations is a popular tool for accelerating and reducing the exploration requirements of reinforcement learning.

Atari Games • Q-Learning • +1

Zero-shot Deep Reinforcement Learning Driving Policy Transfer for Autonomous Vehicles based on Robust Control

no code implementations • 7 Dec 2018 • Zhuo Xu, Chen Tang, Masayoshi Tomizuka

Although deep reinforcement learning (deep RL) methods have many strengths favorable for autonomous driving, real deep RL applications in autonomous driving have been slowed by the modeling gap between the source (training) domain and the target (deployment) domain.

Autonomous Driving • reinforcement-learning
