Search Results for author: Chen Tang

Found 59 papers, 24 papers with code

PRANCE: Joint Token-Optimization and Structural Channel-Pruning for Adaptive ViT Inference

no code implementations • 6 Jul 2024 • Ye Li, Chen Tang, Yuan Meng, Jiajun Fan, Zenghao Chai, Xinzhu Ma, Zhi Wang, Wenwu Zhu

We introduce PRANCE, a Vision Transformer compression framework that jointly optimizes the activated channels and reduces tokens, based on the characteristics of inputs.

Combinatorial Optimization • Decision Making

HGNET: A Hierarchical Feature Guided Network for Occupancy Flow Field Prediction

no code implementations • 1 Jul 2024 • Zhan Chen, Chen Tang, Lu Xiong

Additionally, to enhance the temporal consistency and causal relationships of the predictions, we propose a Time Series Memory framework to learn the conditional distribution models of the prediction outputs at future time steps from multivariate time series.

Autonomous Driving • Time Series +1

BioMNER: A Dataset for Biomedical Method Entity Recognition

no code implementations • 28 Jun 2024 • Chen Tang, Bohao Yang, Kun Zhao, Bo Lv, Chenghao Xiao, Frank Guerin, Chenghua Lin

Named entity recognition (NER) stands as a fundamental and pivotal task within the realm of Natural Language Processing.

Information Retrieval • named-entity-recognition +2

Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers

1 code implementation • 25 Jun 2024 • Lei Chen, Yuan Meng, Chen Tang, Xinzhu Ma, Jingyan Jiang, Xin Wang, Zhi Wang, Wenwu Zhu

Specifically, when quantizing DiT-XL/2 to W8A8 on ImageNet 256x256, Q-DiT achieves a remarkable reduction in FID of 1.26 compared to the baseline.

Image Generation • Quantization

X-ray Made Simple: Radiology Report Generation and Evaluation with Layman's Terms

1 code implementation • 25 Jun 2024 • Kun Zhao, Chenghao Xiao, Chen Tang, Bohao Yang, Kai Ye, Noura Al Moubayed, Liang Zhan, Chenghua Lin

Last, we show that training on the layman's terms dataset encourages models to focus on the semantics of the reports, as opposed to overfitting to learning the report templates.

SimsChat: A Customisable Persona-Driven Role-Playing Agent

1 code implementation • 25 Jun 2024 • Bohao Yang, Dong Liu, Chen Tang, Chenghao Xiao, Kun Zhao, Chao Li, Lin Yuan, Guang Yang, Lanxiao Huang, Chenghua Lin

In this work, we introduce the Customisable Conversation Agent Framework, which employs LLMs to simulate real-world characters that can be freely customised according to different user preferences.

MEReQ: Max-Ent Residual-Q Inverse RL for Sample-Efficient Alignment from Intervention

no code implementations • 24 Jun 2024 • Yuxin Chen, Chen Tang, Chenran Li, Ran Tian, Peter Stone, Masayoshi Tomizuka, Wei Zhan

Instead of inferring the complete human behavior characteristics, MEReQ infers a residual reward function that captures the discrepancy between the human expert's and the prior policy's underlying reward functions.

Imitation Learning • Q-Learning

Evaluating the Generalization Ability of Quantized LLMs: Benchmark, Analysis, and Toolbox

1 code implementation • 15 Jun 2024 • Yijun Liu, Yuan Meng, Fang Wu, Shenhao Peng, Hang Yao, Chaoyu Guan, Chen Tang, Xinzhu Ma, Zhi Wang, Wenwu Zhu

Based on this benchmark, we conduct extensive experiments with two well-known LLMs (English and Chinese) and four quantization algorithms to investigate this topic in-depth, yielding several counter-intuitive and valuable findings, e.g., models quantized using a calibration set with the same distribution as the test data are not necessarily optimal.

Quantization

STAR: Skeleton-aware Text-based 4D Avatar Generation with In-Network Motion Retargeting

1 code implementation • 7 Jun 2024 • Zenghao Chai, Chen Tang, Yongkang Wong, Mohan Kankanhalli

The creation of 4D avatars (i.e., animated 3D avatars) from a text description typically uses text-to-image (T2I) diffusion models to synthesize 3D avatars in the canonical space and subsequently applies animation with target motions.

motion retargeting

Causal prompting model-based offline reinforcement learning

no code implementations • 3 Jun 2024 • Xuehui Yu, Yi Guan, Rujia Shen, Xin Li, Chen Tang, Jingchi Jiang

To tackle these issues, we introduce the Causal Prompting Reinforcement Learning (CPRL) framework, designed for highly suboptimal and resource-constrained online scenarios.

Offline RL • reinforcement-learning +1

SLIDE: A Framework Integrating Small and Large Language Models for Open-Domain Dialogues Evaluation

1 code implementation • 24 May 2024 • Kun Zhao, Bohao Yang, Chen Tang, Chenghua Lin, Liang Zhan

Our approach introduces several techniques: (1) contrastive learning to differentiate between robust and non-robust response embeddings; (2) a novel metric for semantic sensitivity that combines embedding cosine distances with similarity learned through neural networks; and (3) a strategy for incorporating the evaluation results from both the SLM and LLMs.

Contrastive Learning • Dialogue Evaluation

Investigating the Impact of Quantization on Adversarial Robustness

no code implementations • 8 Apr 2024 • Qun Li, Yuan Meng, Chen Tang, Jiacheng Jiang, Zhi Wang

Quantization is a promising technique for reducing the bit-width of deep models to improve their runtime performance and storage efficiency, and thus becomes a fundamental step for deployment.

Adversarial Robustness • Quantization

Structured Information Matters: Incorporating Abstract Meaning Representation into LLMs for Improved Open-Domain Dialogue Evaluation

no code implementations • 1 Apr 2024 • Bohao Yang, Kun Zhao, Chen Tang, Liang Zhan, Chenghua Lin

Trainable evaluation metrics are commonly trained with true positive and randomly selected negative responses, resulting in a tendency for them to assign a higher score to the responses that share higher content similarity with a given context.

Abstract Meaning Representation • Dialogue Evaluation +2

Train & Constrain: Phonologically Informed Tongue-Twister Generation from Topics and Paraphrases

no code implementations • 20 Mar 2024 • Tyler Loakman, Chen Tang, Chenghua Lin

Previous work in phonologically and phonetically grounded language generation has mainly focused on domains such as puns and poetry.

Language Modelling • Text Generation

BeTAIL: Behavior Transformer Adversarial Imitation Learning from Human Racing Gameplay

no code implementations • 22 Feb 2024 • Catherine Weaver, Chen Tang, Ce Hao, Kenta Kawamoto, Masayoshi Tomizuka, Wei Zhan

Thus, we propose BeTAIL: Behavior Transformer Adversarial Imitation Learning, which combines a Behavior Transformer (BeT) policy from human demonstrations with online AIL.

Imitation Learning

Retraining-free Model Quantization via One-Shot Weight-Coupling Learning

1 code implementation • CVPR 2024 • Chen Tang, Yuan Meng, Jiacheng Jiang, Shuzhao Xie, Rongwei Lu, Xinzhu Ma, Zhi Wang, Wenwu Zhu

Conversely, mixed-precision quantization (MPQ) is advocated to compress the model effectively by allocating heterogeneous bit-width for layers.

Model Compression • Quantization

A Cross-Attention Augmented Model for Event-Triggered Context-Aware Story Generation

1 code implementation • 19 Nov 2023 • Chen Tang, Tyler Loakman, Chenghua Lin

These results underscore the effectiveness of our model in leveraging context and event features to improve the quality of generated narratives.

Story Generation

DAGC: Data-Volume-Aware Adaptive Sparsification Gradient Compression for Distributed Machine Learning in Mobile Computing

no code implementations • 13 Nov 2023 • Rongwei Lu, Yutong Jiang, Yinan Mao, Chen Tang, Bin Chen, Laizhong Cui, Zhi Wang

Assigning varying compression ratios to workers with distinct data distributions and volumes is thus a promising solution.

Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers

1 code implementation • 24 Oct 2023 • Chen Tang, Shun Wang, Tomas Goldsack, Chenghua Lin

Abstracts derived from biomedical literature possess distinct domain-specific characteristics, including specialised writing styles and biomedical terminologies, which necessitate a deep understanding of the related literature.

Enhancing Biomedical Lay Summarisation with External Knowledge Graphs

1 code implementation • 24 Oct 2023 • Tomas Goldsack, Zhihao Zhang, Chen Tang, Carolina Scarton, Chenghua Lin

Previous approaches for automatic lay summarisation are exclusively reliant on the source article that, given it is written for a technical audience (e.g., researchers), is unlikely to explicitly define all technical concepts or state all of the background information that is relevant for a lay audience.

Decoder • Knowledge Graphs

Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization

no code implementations • 11 Oct 2023 • Yuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka, Wei Zhan

We observe that, generally, a more diverse set of co-play agents during training enhances the generalization performance of the ego agent; however, this improvement varies across distinct scenarios and environments.

Multi-agent Reinforcement Learning

Effective Distillation of Table-based Reasoning Ability from LLMs

1 code implementation • 22 Sep 2023 • Bohao Yang, Chen Tang, Kun Zhao, Chenghao Xiao, Chenghua Lin

Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of natural language processing tasks.

Table-to-Text Generation

Improving Medical Dialogue Generation with Abstract Meaning Representations

1 code implementation • 19 Sep 2023 • Bohao Yang, Chen Tang, Chenghua Lin

In this paper, we propose a novel framework that models dialogues between patients and healthcare professionals using AMR graphs, where the neural networks incorporate textual and graphical knowledge with a dual attention mechanism.

Dialogue Generation

Guided Online Distillation: Promoting Safe Reinforcement Learning by Offline Demonstration

no code implementations • 18 Sep 2023 • Jinning Li, Xinyi Liu, Banghua Zhu, Jiantao Jiao, Masayoshi Tomizuka, Chen Tang, Wei Zhan

GOLD distills an offline DT policy into a lightweight policy network through guided online safe RL training, which outperforms both the offline DT policy and online safe RL algorithms.

Autonomous Driving • Decision Making +3

Enhancing Dialogue Generation via Dynamic Graph Knowledge Aggregation

1 code implementation • 28 Jun 2023 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin

Further analysis also shows that our representation learning framework can fill the semantic gap by coagulating representations of both text and graph knowledge.

Dialogue Generation • Graph Attention +2

Residual Q-Learning: Offline and Online Policy Customization without Value

no code implementations • NeurIPS 2023 • Chenran Li, Chen Tang, Haruki Nishimura, Jean Mercat, Masayoshi Tomizuka, Wei Zhan

Specifically, we formulate the customization problem as a Markov Decision Process (MDP) with a reward function that combines 1) the inherent reward of the demonstration; and 2) the add-on reward specified by the downstream task.

Imitation Learning • Q-Learning

Skill-Critic: Refining Learned Skills for Hierarchical Reinforcement Learning

no code implementations • 14 Jun 2023 • Ce Hao, Catherine Weaver, Chen Tang, Kenta Kawamoto, Masayoshi Tomizuka, Wei Zhan

Our Skill-Critic algorithm optimizes both the low-level and high-level policies; these policies are initialized and regularized by the latent space learned from offline demonstrations to guide the parallel policy optimization.

Decision Making • Hierarchical Reinforcement Learning +2

TwistList: Resources and Baselines for Tongue Twister Generation

1 code implementation • 6 Jun 2023 • Tyler Loakman, Chen Tang, Chenghua Lin

Previous work in phonetically-grounded language generation has mainly focused on domains such as lyrics and poetry.

Text Generation

Knowledge Soft Integration for Multimodal Recommendation

no code implementations • 12 May 2023 • Kai Ouyang, Chen Tang, Wenhao Zheng, Xiangjin Xie, Xuanji Xiao, Jian Dong, Hai-Tao Zheng, Zhi Wang

To address this issue, we propose using knowledge soft integration to balance the utilization of multimodal features and the curse of knowledge problem it brings about.

Graph Neural Network • Multimodal Recommendation +1

CADGE: Context-Aware Dialogue Generation Enhanced with Graph-Structured Knowledge Aggregation

1 code implementation • 10 May 2023 • Hongbo Zhang, Chen Tang, Tyler Loakman, Chenghua Lin, Stefan Goetze

In this paper, we propose a novel context-aware graph-attention model (Context-aware GAT), which can effectively incorporate global features of relevant knowledge graphs based on a context-enhanced knowledge aggregation process.

Dialogue Generation • Graph Attention +2

Eye tracking guided deep multiple instance learning with dual cross-attention for fundus disease detection

no code implementations • 25 Apr 2023 • Hongyang Jiang, Jingqi Huang, Chen Tang, Xiaoqing Zhang, Mengdi Gao, Jiang Liu

Concretely, the HITL CAD system was implemented on multiple instance learning (MIL), where eye-tracking gaze maps were beneficial for cherry-picking diagnosis-related instances.

Multiple Instance Learning

Click-aware Structure Transfer with Sample Weight Assignment for Post-Click Conversion Rate Estimation

no code implementations • 3 Apr 2023 • Kai Ouyang, Wenhao Zheng, Chen Tang, Xuanji Xiao, Hai-Tao Zheng

To tackle this issue, we argue that a trade-off should be achieved between the introduction of large amounts of auxiliary information and the protection of valuable information related to CVR.

Multi-Task Learning

Editing Driver Character: Socially-Controllable Behavior Generation for Interactive Traffic Simulation

no code implementations • 24 Mar 2023 • Wei-Jer Chang, Chen Tang, Chenran Li, Yeping Hu, Masayoshi Tomizuka, Wei Zhan

To ensure that autonomous vehicles take safe and efficient maneuvers in different interactive traffic scenarios, we should be able to evaluate autonomous vehicles against reactive agents with different social characteristics in the simulation environment.

Autonomous Driving

ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices

1 code implementation • ICCV 2023 • Chen Tang, Li Lyna Zhang, Huiqiang Jiang, Jiahang Xu, Ting Cao, Quanlu Zhang, Yuqing Yang, Zhi Wang, Mao Yang

However, prior supernet training methods that rely on uniform sampling suffer from the gradient conflict issue: the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G FLOPs), leading to different optimization directions and inferior performance.

Neural Architecture Search

SEAM: Searching Transferable Mixed-Precision Quantization Policy through Large Margin Regularization

no code implementations • 14 Feb 2023 • Chen Tang, Kai Ouyang, Zenghao Chai, Yunpeng Bai, Yuan Meng, Zhi Wang, Wenwu Zhu

This general and dataset-independent property makes us search for the MPQ policy over a rather small-scale proxy dataset and then the policy can be directly used to quantize the model trained on a large-scale dataset.

Quantization

Terminology-aware Medical Dialogue Generation

1 code implementation • 27 Oct 2022 • Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin

In this paper, we propose a novel framework to improve medical dialogue generation by considering features centered on domain-specific terminology.

Dialogue Generation

EtriCA: Event-Triggered Context-Aware Story Generation Augmented by Cross Attention

1 code implementation • 22 Oct 2022 • Chen Tang, Chenghua Lin, Henglin Huang, Frank Guerin, Zhihao Zhang

One of the key challenges of automatic story generation is how to generate a long narrative that can maintain fluency, relevance, and coherence.

Story Generation

NGEP: A Graph-based Event Planning Framework for Story Generation

1 code implementation • 19 Oct 2022 • Chen Tang, Zhihao Zhang, Tyler Loakman, Chenghua Lin, Frank Guerin

To improve the performance of long text generation, recent studies have leveraged automatically planned event structures (i.e., storylines) to guide story generation.

Hallucination • Story Generation

Improving Chinese Story Generation via Awareness of Syntactic Dependencies and Semantics

1 code implementation • 19 Oct 2022 • Henglin Huang, Chen Tang, Tyler Loakman, Frank Guerin, Chenghua Lin

In spite of the success of prior works with the application of pre-trained models, current neural models for Chinese stories still struggle to generate high-quality long text narratives.

Denoising • Representation Learning +1

PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map

1 code implementation • 21 Apr 2022 • Chenfeng Xu, Tian Li, Chen Tang, Lingfeng Sun, Kurt Keutzer, Masayoshi Tomizuka, Alireza Fathi, Wei Zhan

It is hard to replicate these approaches in trajectory forecasting due to the lack of adequate trajectory data (e.g., 34K samples in the nuScenes dataset).

Contrastive Learning • Representation Learning +1

Arbitrary Bit-width Network: A Joint Layer-Wise Quantization and Adaptive Inference Approach

no code implementations • 21 Apr 2022 • Chen Tang, Haoyu Zhai, Kai Ouyang, Zhi Wang, Yifei Zhu, Wenwu Zhu

We propose to feed different data samples with varying quantization schemes to achieve a data-dependent dynamic inference, at a fine-grained layer level.

Quantization

Interventional Behavior Prediction: Avoiding Overly Confident Anticipation in Interactive Prediction

no code implementations • 19 Apr 2022 • Chen Tang, Wei Zhan, Masayoshi Tomizuka

Moreover, to properly evaluate an IBP model with offline datasets, we propose a Shapley-value-based metric to verify if the prediction model satisfies the inherent temporal independence of an interventional distribution.

Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance

1 code implementation • 16 Mar 2022 • Chen Tang, Kai Ouyang, Zhi Wang, Yifei Zhu, YaoWei Wang, Wen Ji, Wenwu Zhu

For example, MPQ search on ResNet18 with our indicators takes only 0.06 s, which improves time efficiency exponentially compared to iterative search methods.

Quantization

Recent Advances in Neural Text Generation: A Task-Agnostic Survey

1 code implementation • 6 Mar 2022 • Chen Tang, Frank Guerin, Chenghua Lin

In recent years, considerable research has been dedicated to the application of neural models in the field of natural language generation (NLG).

Text Generation

Exploring Social Posterior Collapse in Variational Autoencoder for Interaction Modeling

no code implementations • NeurIPS 2021 • Chen Tang, Wei Zhan, Masayoshi Tomizuka

In this work, we argue that one of the typical formulations of VAEs in multi-agent modeling suffers from an issue we refer to as social posterior collapse, i.e., the model is prone to ignoring historical social context when predicting the future trajectory of an agent.

Graph Attention • Trajectory Forecasting

Dealing with the Unknown: Pessimistic Offline Reinforcement Learning

no code implementations • 9 Nov 2021 • Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan

Reinforcement Learning (RL) has been shown effective in domains where the agent can learn policies by actively interacting with its operating environment.

reinforcement-learning • Reinforcement Learning (RL)

Grounded Relational Inference: Domain Knowledge Driven Explainable Autonomous Driving

no code implementations • 23 Feb 2021 • Chen Tang, Nishan Srishankar, Sujitha Martin, Masayoshi Tomizuka

Explainability is essential for autonomous vehicles and other robotics systems interacting with humans and other objects during operation.

Autonomous Driving

Adaptive Pixel-wise Structured Sparse Network for Efficient CNNs

no code implementations • 21 Oct 2020 • Chen Tang, Wenyu Sun, Zhuqing Yuan, Yongpan Liu

To accelerate deep CNN models, this paper proposes a novel spatially adaptive framework that can dynamically generate pixel-wise sparsity according to the input image.

General Classification • Image Classification +4

ZPD Teaching Strategies for Deep Reinforcement Learning from Demonstrations

2 code implementations • 26 Oct 2019 • Daniel Seita, David Chan, Roshan Rao, Chen Tang, Mandi Zhao, John Canny

Learning from demonstrations is a popular tool for accelerating and reducing the exploration requirements of reinforcement learning.

Atari Games • Q-Learning +2

Zero-shot Deep Reinforcement Learning Driving Policy Transfer for Autonomous Vehicles based on Robust Control

no code implementations • 7 Dec 2018 • Zhuo Xu, Chen Tang, Masayoshi Tomizuka

Although deep reinforcement learning (deep RL) methods have lots of strengths that are favorable if applied to autonomous driving, real deep RL applications in autonomous driving have been slowed down by the modeling gap between the source (training) domain and the target (deployment) domain.

Autonomous Driving
