no code implementations • 8 Apr 2021 • Sean Segal, Nishanth Kumar, Sergio Casas, Wenyuan Zeng, Mengye Ren, Jingkang Wang, Raquel Urtasun
As data collection is often significantly cheaper than labeling in this domain, the decision of which subset of examples to label can have a profound impact on model performance.
no code implementations • ICCV 2021 • James Tu, Tsun-Hsuan Wang, Jingkang Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
Growing at a fast pace, modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
no code implementations • 17 Jan 2021 • Jingkang Wang, Mengye Ren, Ilija Bogunovic, Yuwen Xiong, Raquel Urtasun
Recent work on hyperparameter optimization (HPO) has shown that certain hyperparameters can be trained jointly with the regular model parameters.
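A minimal sketch of the idea of training a hyperparameter alongside regular parameters (my own toy illustration, not this paper's method): a ridge-regression weight vector `w` descends the regularized training loss, while the regularization strength `lam` descends the validation loss through a one-step unrolled chain rule. All names and step sizes here are assumptions for illustration.

```python
import numpy as np

# Toy bilevel setup (illustrative, not the paper's algorithm):
#   inner: w   minimizes  ||X_tr w - y_tr||^2/n + lam * ||w||^2
#   outer: lam minimizes  ||X_va w - y_va||^2/m  (validation loss)
rng = np.random.default_rng(0)
X_tr, X_va = rng.normal(size=(40, 5)), rng.normal(size=(20, 5))
w_true = rng.normal(size=5)
y_tr = X_tr @ w_true + 0.1 * rng.normal(size=40)
y_va = X_va @ w_true + 0.1 * rng.normal(size=20)

eta = 0.05                       # inner-loop learning rate
w, lam = np.zeros(5), 0.5
for _ in range(200):
    # inner step: gradient descent on the regularized training loss
    g_w = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr) + 2 * lam * w
    w -= eta * g_w
    # outer step: dL_val/dlam ~= dL_val/dw . dw/dlam, where one unrolled
    # inner step gives dw/dlam ~= -2 * eta * w
    g_val_w = 2 * X_va.T @ (X_va @ w - y_va) / len(y_va)
    lam -= 0.01 * g_val_w @ (-2 * eta * w)
    lam = max(lam, 0.0)          # keep the hyperparameter non-negative

print(round(float(np.mean((X_va @ w - y_va) ** 2)), 4))
```

The outer update is the crudest possible hypergradient (a single unrolled inner step); real gradient-based HPO methods use longer unrolls or implicit differentiation.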
no code implementations • CVPR 2021 • Jingkang Wang, Ava Pun, James Tu, Sivabalan Manivasagam, Abbas Sadat, Sergio Casas, Mengye Ren, Raquel Urtasun
Importantly, by simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
no code implementations • 10 Nov 2020 • Nicholas Vadivelu, Mengye Ren, James Tu, Jingkang Wang, Raquel Urtasun
Learned communication makes multi-agent systems more effective by aggregating distributed information.
1 code implementation • NeurIPS 2021 • Jingkang Wang, Hongyi Guo, Zhaowei Zhu, Yang Liu
Most existing policy learning solutions require the learning agents to receive high-quality supervision signals such as well-designed rewards in reinforcement learning (RL) or high-quality expert demonstrations in behavioral cloning (BC).
1 code implementation • 15 Apr 2020 • Tianshi Cao, Jingkang Wang, Yining Zhang, Sivabalan Manivasagam
Although recent works have shown the benefits of instructive texts in goal-conditioned RL, few have studied whether descriptive texts help agents to generalize across dynamic environments.
no code implementations • 25 Sep 2019 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.
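The min-max principle above can be sketched in a few lines (an assumed toy setup, not this paper's code): for a linear logistic model the inner maximization over an l-infinity ball has a closed form, so each training step alternates an exact worst-case perturbation with a gradient step on the resulting adversarial loss.

```python
import numpy as np

# Adversarial training sketch:  min_w  max_{||d||_inf <= eps}  loss(w, x + d)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
w_true = np.array([1.0, -2.0, 0.5, 1.5])
y = (X @ w_true > 0).astype(float)          # labels in {0, 1}

def loss_grad_w(w, X, y):
    # gradient of the mean logistic loss w.r.t. the weights
    p = 1 / (1 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

eps, w = 0.1, np.zeros(4)
for _ in range(300):
    # inner max: for a linear model the worst-case l_inf perturbation is
    # d_i = -eps * (2*y_i - 1) * sign(w), i.e. FGSM is exact here
    X_adv = X - eps * np.outer(2 * y - 1, np.sign(w))
    # outer min: gradient step on the adversarial (worst-case) loss
    w -= 0.5 * loss_grad_w(w, X_adv, y)

# robust accuracy: evaluate on worst-case perturbed inputs
X_adv = X - eps * np.outer(2 * y - 1, np.sign(w))
acc = float(np.mean(((X_adv @ w) > 0) == (y > 0.5)))
print(acc)
```

For deep networks the inner maximization has no closed form and is approximated with multi-step PGD instead.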
1 code implementation • NeurIPS 2021 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li
In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks.
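One instance of min-max optimization over multiple domains, sketched here under assumed toy dynamics (this is my illustration, not the paper's implementation): craft a single perturbation that degrades several linear models at once by maximizing the worst-case (minimum) per-model loss, alternating gradient ascent on the perturbation with multiplicative-weight descent on the domain weights over the simplex.

```python
import numpy as np

# Solve  max_{||delta||_inf <= eps}  min_{p in simplex}  sum_k p_k * loss_k(delta)
# where loss_k is a margin loss for the k-th linear model (toy setup).
rng = np.random.default_rng(2)
K, d, eps = 3, 6, 0.5
W = rng.normal(size=(K, d))      # one weight vector per attacked model
x = rng.normal(size=d)           # the input being perturbed
y = 1.0                          # its true label (+1)

def losses(delta):
    # per-model loss: larger means the model is closer to misclassifying x
    return -y * (W @ (x + delta))

p = np.ones(K) / K               # domain weights, start uniform
delta = np.zeros(d)
for _ in range(200):
    # ascent on delta for the p-weighted loss, projected to the l_inf ball
    g = -y * (p @ W)             # gradient of sum_k p_k * loss_k w.r.t. delta
    delta = np.clip(delta + 0.1 * g, -eps, eps)
    # descent on p via multiplicative weights, staying on the simplex;
    # weight concentrates on the currently hardest-to-fool (lowest-loss) model
    p = p * np.exp(-losses(delta))
    p /= p.sum()

print(float(losses(delta).min()))   # worst-case loss across the K models
```

The inner minimum over the simplex equals the minimum per-model loss, so the outer ascent pushes up the loss of whichever model is currently most robust.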
no code implementations • 23 Oct 2018 • Jingkang Wang, Ruoxi Jia, Gerald Friedland, Bo Li, Costas Spanos
Despite the great success achieved in machine learning (ML), adversarial examples have caused concerns with regard to its trustworthiness: a small perturbation of an input results in an arbitrary failure of an otherwise seemingly well-trained ML model.
1 code implementation • ICLR 2019 • Jingkang Wang, Yang Liu, Bo Li
For instance, the state-of-the-art PPO algorithm is able to obtain 84.6% and 80.8% improvements on average score for five Atari games, with error rates of 10% and 30%, respectively.
no code implementations • ACL 2019 • Jingkang Wang, Jianing Zhou, Jie Zhou, Gongshen Liu
Chinese word segmentation (CWS) is regarded in most current work as a character-based sequence labeling task, an approach that has achieved great success with the help of powerful neural networks.
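The character-based sequence labeling view can be made concrete with the widely used BMES tag scheme (a toy example of the general formulation, not code from this paper): each character is labeled B (begin), M (middle), or E (end) of a multi-character word, or S for a single-character word.

```python
# Convert between word-level segmentation and BMES character tags.
def words_to_bmes(words):
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def bmes_to_words(chars, tags):
    words, cur = [], ""
    for ch, t in zip(chars, tags):
        cur += ch
        if t in ("S", "E"):      # a word boundary closes here
            words.append(cur)
            cur = ""
    if cur:                      # tolerate a dangling B/M at the end
        words.append(cur)
    return words

sent = ["我", "爱", "北京", "天安门"]        # a pre-segmented sentence
tags = words_to_bmes(sent)
print(tags)                                  # ['S', 'S', 'B', 'E', 'B', 'M', 'E']
print(bmes_to_words("".join(sent), tags))    # ['我', '爱', '北京', '天安门']
```

A sequence labeler then only has to predict one of four tags per character, and decoding the tags recovers the segmentation.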
1 code implementation • 10 Jul 2018 • Gerald Friedland, Jingkang Wang, Ruoxi Jia, Bo Li
This paper proposes a fundamental answer to a frequently asked question in multimedia computing and machine learning: do artifacts from perceptual compression contribute to error in the machine learning process, and if so, how much?
no code implementations • CVPR 2018 • Yiping Chen, Jingkang Wang, Jonathan Li, Cewu Lu, Zhipeng Luo, Han Xue, Cheng Wang
Learning autonomous-driving policies is one of the most challenging but promising tasks for computer vision.