no code implementations • 7 Jun 2022 • Tung Phan-Minh, Forbes Howington, Ting-Sheng Chu, Sang Uk Lee, Momchil S. Tomov, Nanxiang Li, Caglayan Dicle, Samuel Findler, Francisco Suarez-Ruiz, Robert Beaudoin, Bo Yang, Sammy Omari, Eric M. Wolff
In this paper, we introduce the first learning-based planner that uses Inverse Reinforcement Learning (IRL) to drive a car in dense urban traffic.
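As a loose illustration of the IRL idea behind this line (not the paper's planner, which is far more involved), one can recover a linear reward by feature matching: nudge reward weights so that expert trajectories accumulate higher reward than alternative trajectories. All names and the toy data below are hypothetical.

```python
import numpy as np

def traj_features(traj):
    # A trajectory is an array of per-step feature vectors; sum them.
    return np.sum(traj, axis=0)

def irl_feature_matching(expert_trajs, other_trajs, lr=0.1, steps=200):
    # Gradient ascent on the gap between expert and non-expert
    # feature expectations under a linear reward r(s) = w . phi(s).
    dim = expert_trajs[0].shape[1]
    w = np.zeros(dim)
    mu_e = np.mean([traj_features(t) for t in expert_trajs], axis=0)
    mu_o = np.mean([traj_features(t) for t in other_trajs], axis=0)
    for _ in range(steps):
        w += lr * (mu_e - mu_o)
    return w

# Toy data: the expert consistently exhibits feature 0
# (think "stays in lane"), alternatives exhibit feature 1.
expert = [np.array([[1.0, 0.0]] * 5)]
other = [np.array([[0.0, 1.0]] * 5)]
w = irl_feature_matching(expert, other)
# The learned reward weights favor the expert-preferred feature.
```

This is the simplest possible feature-matching update; real IRL planners additionally need a forward planner or policy to generate the comparison trajectories.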
no code implementations • 1 Jan 2021 • Nanxiang Li, Shabnam Ghaffarzadegan, Liu Ren
We show, both theoretically and experimentally, that the VAE ensemble objective encourages the linear transformations connecting the VAEs to be trivial, aligning the latent representations of the different models so that they are "alike".
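A hedged sketch of what "trivial linear transformation" means here (this is an illustration, not the paper's objective): if two models' latent codes are related by a linear map, one can fit that map by least squares and measure how far it is from the identity. Aligned latents yield a map close to identity.

```python
import numpy as np

def fit_linear_map(z1, z2):
    # Least-squares A such that z2 ~= z1 @ A.
    A, *_ = np.linalg.lstsq(z1, z2, rcond=None)
    return A

rng = np.random.default_rng(0)
z1 = rng.normal(size=(100, 4))   # latent codes from model 1 (toy data)
z2 = z1.copy()                   # model 2's latents, perfectly aligned

A = fit_linear_map(z1, z2)
deviation = np.linalg.norm(A - np.eye(4))
# deviation is ~0 when the connecting transformation is trivial (identity)
```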
no code implementations • 27 Sep 2020 • Liang Gou, Lincan Zou, Nanxiang Li, Michael Hofmann, Arvind Kumar Shekar, Axel Wendt, Liu Ren
In this work, we propose a visual analytics system, VATLD, equipped with disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications.
1 code implementation • 3 Jan 2020 • Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, Liu Ren
Unsupervised domain adaptation studies the problem of using a relevant source domain with abundant labels to build a predictive model for an unannotated target domain.
Ranked #35 on Domain Generalization on PACS
no code implementations • ICLR 2020 • Nanxiang Li, Shabnam Ghaffarzadegan, Liu Ren
Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction.
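The augmented objective referred to above is typically a beta-VAE-style loss: the reconstruction term plus a KL term weighted by a coefficient beta, where beta > 1 pressures disentanglement at the cost of reconstruction quality. A minimal numerical sketch (the function names and values are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(recon_err, mu, logvar, beta=4.0):
    # Reconstruction term plus beta-weighted KL regularizer.
    return recon_err + beta * gaussian_kl(mu, logvar)

# At the prior (mu = 0, logvar = 0) the KL term vanishes,
# so the loss reduces to the reconstruction error alone.
mu = np.zeros((1, 3))
logvar = np.zeros((1, 3))
loss = beta_vae_loss(0.5, mu, logvar, beta=4.0)
# loss == 0.5
```

Raising beta shrinks the encoder's posterior toward the prior, which is the disentanglement/reconstruction trade-off the abstract mentions.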