Search Results for author: Jingkang Wang

Found 14 papers, 5 papers with code

Just Label What You Need: Fine-Grained Active Selection for Perception and Prediction through Partially Labeled Scenes

no code implementations • 8 Apr 2021 • Sean Segal, Nishanth Kumar, Sergio Casas, Wenyuan Zeng, Mengye Ren, Jingkang Wang, Raquel Urtasun

As data collection is often significantly cheaper than labeling in this domain, the decision of which subset of examples to label can have a profound impact on model performance.

Active Learning
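
For readers unfamiliar with the setting, the selection step the snippet alludes to can be sketched with a generic uncertainty-sampling heuristic. This is an illustrative baseline, not the paper's fine-grained, partially-labeled-scene criterion; `model.predict_proba` is an assumed scikit-learn-style interface.

```python
import numpy as np

def select_for_labeling(model, unlabeled_pool, budget):
    """Generic active-learning selection: label the examples the
    current model is least certain about (highest predictive entropy)."""
    probs = model.predict_proba(unlabeled_pool)            # (N, num_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:budget]                   # indices to label
```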

Adversarial Attacks On Multi-Agent Communication

no code implementations • ICCV 2021 • James Tu, Tsun-Hsuan Wang, Jingkang Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun

Growing at a fast pace, modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.

Domain Adaptation

Cost-Efficient Online Hyperparameter Optimization

no code implementations • 17 Jan 2021 • Jingkang Wang, Mengye Ren, Ilija Bogunovic, Yuwen Xiong, Raquel Urtasun

Recent work on hyperparameter optimization (HPO) has shown the possibility of training certain hyperparameters together with regular parameters.

Hyperparameter Optimization
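
As a concrete instance of "training hyperparameters together with regular parameters", here is a minimal sketch of hypergradient descent (Baydin et al., 2018), a related technique in which the learning rate is itself updated by gradient descent. It is not the method of the paper above, and the step sizes are illustrative.

```python
import torch

def hypergradient_sgd(params, loss_fn, lr=1e-3, hyper_lr=1e-8, steps=100):
    """SGD whose learning rate is trained alongside the parameters.

    `params` is a list of tensors with requires_grad=True. Uses the
    identity d(loss_t)/d(lr) = -<grad_t, grad_{t-1}>, which follows
    from theta_t = theta_{t-1} - lr * grad_{t-1}.
    """
    prev_grads = None
    for _ in range(steps):
        loss = loss_fn(params)
        grads = torch.autograd.grad(loss, params)
        if prev_grads is not None:
            hypergrad = -sum((g * pg).sum().item()
                             for g, pg in zip(grads, prev_grads))
            lr -= hyper_lr * hypergrad           # gradient step on lr itself
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= lr * g                      # regular parameter step
        prev_grads = [g.detach() for g in grads]
    return params
```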

AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles

no code implementations • CVPR 2021 • Jingkang Wang, Ava Pun, James Tu, Sivabalan Manivasagam, Abbas Sadat, Sergio Casas, Mengye Ren, Raquel Urtasun

Importantly, by simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.

Learning to Communicate and Correct Pose Errors

no code implementations • 10 Nov 2020 • Nicholas Vadivelu, Mengye Ren, James Tu, Jingkang Wang, Raquel Urtasun

Learned communication makes multi-agent systems more effective by aggregating distributed information.

Motion Forecasting • Object Detection +1

Policy Learning Using Weak Supervision

1 code implementation • NeurIPS 2021 • Jingkang Wang, Hongyi Guo, Zhaowei Zhu, Yang Liu

Most existing policy learning solutions require the learning agents to receive high-quality supervision signals such as well-designed rewards in reinforcement learning (RL) or high-quality expert demonstrations in behavioral cloning (BC).

BabyAI++: Towards Grounded-Language Learning beyond Memorization

1 code implementation • 15 Apr 2020 • Tianshi Cao, Jingkang Wang, Yining Zhang, Sivabalan Manivasagam

Although recent works have shown the benefits of instructive texts in goal-conditioned RL, few have studied whether descriptive texts help agents to generalize across dynamic environments.

Grounded Language Learning • Reinforcement Learning

Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness

no code implementations • 25 Sep 2019 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball-bounded input perturbations.

Adversarial Attack • Adversarial Robustness
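
The min-max principle described above is commonly instantiated with a projected-gradient inner attack. A minimal PyTorch-style sketch of one such training step follows; the L-infinity budget and step sizes are illustrative defaults, not values from the paper.

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y,
                              eps=8/255, alpha=2/255, pgd_steps=10):
    """One min-max (adversarial training) update.

    Inner maximization: PGD finds a perturbation delta in the L-inf
    ball of radius eps that maximizes the loss. Outer minimization:
    a standard optimizer step on the loss at x + delta.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(pgd_steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta.requires_grad_(True)
    optimizer.zero_grad()
    loss_fn(model(x + delta.detach()), y).backward()
    optimizer.step()
```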

Adversarial Attack Generation Empowered by Min-Max Optimization

1 code implementation • NeurIPS 2021 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks.

Adversarial Attack • Adversarial Robustness
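
One of the "multiple domains" settings the snippet refers to is attacking an ensemble of models. A hedged sketch of that case: the perturbation descends on a weighted attack loss while the domain weights ascend on it over the probability simplex, so the attack keeps pressure on the currently hardest model. `attack_loss` is an assumed per-model objective the attacker drives down (e.g. a margin loss), and the simplex projection is a simple normalization heuristic rather than an exact Euclidean projection.

```python
import torch

def minmax_ensemble_attack(models, attack_loss, x, y,
                           eps=8/255, alpha=2/255, beta=0.1, steps=20):
    """Min over the perturbation, max over weights across K models."""
    w = torch.full((len(models),), 1.0 / len(models))   # simplex weights
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        losses = torch.stack([attack_loss(m(x + delta), y) for m in models])
        grad, = torch.autograd.grad((w * losses).sum(), delta)
        # inner min: step delta to reduce the weighted attack loss
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach()
        delta.requires_grad_(True)
        # outer max: step w toward the hardest models, renormalize
        w = (w + beta * losses.detach()).clamp(min=0)
        w = w / w.sum()
    return delta.detach()
```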

One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy

no code implementations • 23 Oct 2018 • Jingkang Wang, Ruoxi Jia, Gerald Friedland, Bo Li, Costas Spanos

Despite the great success achieved in machine learning (ML), adversarial examples have raised concerns about its trustworthiness: a small perturbation of an input can cause an arbitrary failure of an otherwise seemingly well-trained ML model.

Decision Making
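
The "small perturbation, arbitrary failure" phenomenon in the snippet is classically demonstrated with the fast gradient sign method (FGSM, Goodfellow et al.), which is unrelated to this paper's redundancy analysis but makes the failure mode concrete:

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.03):
    """One-step adversarial example: nudge the input by eps in the
    direction that most increases the loss. Often enough to flip the
    prediction of an otherwise well-trained model."""
    x = x.detach().clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()
```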

Reinforcement Learning with Perturbed Rewards

1 code implementation • ICLR 2019 • Jingkang Wang, Yang Liu, Bo Li

For instance, the state-of-the-art PPO algorithm is able to obtain 84.6% and 80.8% improvements on average score for five Atari games, with error rates of 10% and 30%, respectively.

Atari Games • Reinforcement Learning
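
The "error rates" mentioned in the snippet refer to noisy rewards. A minimal gym-style wrapper that produces such a setting might look as follows; the symmetric sign-flip rule is a simplification for illustration, not the paper's full noise model.

```python
import random

class PerturbedRewardEnv:
    """Flip each reward's sign with probability `error_rate` (e.g. 0.1
    or 0.3, matching the 10% / 30% error rates in the snippet)."""

    def __init__(self, env, error_rate=0.1):
        self.env = env
        self.error_rate = error_rate

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if random.random() < self.error_rate:
            reward = -reward            # agent sees a perturbed reward
        return obs, reward, done, info
```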

Multiple Character Embeddings for Chinese Word Segmentation

no code implementations • ACL 2019 • Jingkang Wang, Jianing Zhou, Jie Zhou, Gongshen Liu

In most current work, Chinese word segmentation (CWS) is treated as a character-based sequence labeling task, an approach that has achieved great success with the help of powerful neural networks.

Chinese Word Segmentation
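
The "character-based sequence labeling" framing in the snippet is the standard B/M/E/S reduction: each character is tagged as the Beginning, Middle, or End of a multi-character word, or as a Single-character word. A small self-contained sketch:

```python
def words_to_bmes(words):
    """Convert a segmented sentence into per-character B/M/E/S labels."""
    chars, tags = [], []
    for w in words:
        chars.extend(w)
        if len(w) == 1:
            tags.append("S")                             # single-char word
        else:
            tags.extend("B" + "M" * (len(w) - 2) + "E")  # begin/middle/end
    return chars, tags

# words_to_bmes(["我", "喜欢", "自然语言"]) ->
# (['我','喜','欢','自','然','语','言'], ['S','B','E','B','M','M','E'])
```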

The Helmholtz Method: Using Perceptual Compression to Reduce Machine Learning Complexity

1 code implementation • 10 Jul 2018 • Gerald Friedland, Jingkang Wang, Ruoxi Jia, Bo Li

This paper proposes a fundamental answer to a frequently asked question in multimedia computing and machine learning: do artifacts from perceptual compression contribute to error in the machine learning process, and if so, how much?
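
The question above suggests a simple measurement: round-trip inputs through a perceptual codec and compare downstream accuracy with and without the artifacts. A sketch of the round-trip step (using JPEG via Pillow; the quality setting is illustrative, not a value from the paper):

```python
from io import BytesIO
from PIL import Image

def jpeg_roundtrip(img, quality=10):
    """Compress and decompress an image with JPEG so the effect of
    perceptual-compression artifacts on a model can be measured by
    comparing accuracy on original vs. round-tripped inputs."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)
```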
