no code implementations • 30 Oct 2023 • Ruya Karagulle, Nikos Arechiga, Andrew Best, Jonathan DeCastro, Necmiye Ozay
Our approach incorporates a priority ordering of signal temporal logic (STL) formulas, which describe traffic rules, into a learning framework.
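As a rough illustration of the kind of quantity involved (not the paper's formulation), the sketch below computes toy STL robustness values for two traffic rules and combines them with hypothetical priority weights into a single training loss; the rules, weights, and helper names are assumptions made for the example.

```python
# Illustrative sketch only: toy quantitative semantics ("robustness") for two
# STL-style traffic rules, combined with hypothetical priority weights into a
# single loss. The rule set, weights, and combination are assumptions, not the
# formulation used in the paper.
import numpy as np

def robustness_always_leq(signal, bound):
    # rho( G (signal <= bound) ) = min_t (bound - signal_t)
    return np.min(bound - signal)

def robustness_always_geq(signal, bound):
    # rho( G (signal >= bound) ) = min_t (signal_t - bound)
    return np.min(signal - bound)

# Toy trajectory: speed (m/s) and distance headway (m) over time.
speed   = np.array([10.0, 12.0, 13.5, 12.5])
headway = np.array([25.0, 20.0, 18.0, 22.0])

rho_speed   = robustness_always_leq(speed,   bound=13.9)  # "obey speed limit"
rho_headway = robustness_always_geq(headway, bound=15.0)  # "keep safe distance"

# Hypothetical priority ordering: safety-critical headway outweighs speed limit.
weights = {"headway": 10.0, "speed": 1.0}
loss = -(weights["headway"] * rho_headway + weights["speed"] * rho_speed)
print(loss)  # lower loss rewards satisfying higher-priority rules more strongly
```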
no code implementations • 30 Aug 2023 • Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Matthew Lee, Scott Carter, Nikos Arechiga, Kate Glazko, Haiyi Zhu, Kenneth Holstein
A growing body of research has explored how to support humans in making better use of AI-based decision support, including via training and onboarding.
no code implementations • 16 Jun 2023 • Nikos Arechiga, Frank Permenter, Binyang Song, Chenyang Yuan
Denoising diffusion models trained at web-scale have revolutionized image generation.
no code implementations • 26 May 2023 • Binyang Song, Chenyang Yuan, Frank Permenter, Nikos Arechiga, Faez Ahmed
Generative AI models have made significant progress in automating the creation of 3D shapes, which has the potential to transform car design.
3 code implementations • 4 May 2022 • Robert Dyro, Edward Schmerling, Nikos Arechiga, Marco Pavone
Many existing approaches to bilevel optimization apply first-order sensitivity analysis, based on the implicit function theorem (IFT), to the lower-level problem in order to derive the gradient of its solution with respect to the parameters; this IFT gradient is then used in a first-order optimization method for the upper-level problem.
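For intuition, here is a minimal self-contained sketch (not the authors' implementation) of the IFT-based gradient on a quadratic lower-level problem, where the Hessians are available in closed form and the resulting total gradient is used directly in gradient descent on the upper problem.

```python
# Minimal sketch of IFT-based bilevel gradients. Lower problem:
#   x*(theta) = argmin_x 0.5 x^T A x - theta^T x
# Upper problem:
#   min_theta 0.5 ||x*(theta) - x_target||^2 + 0.5 * lam ||theta||^2
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = np.diag(rng.uniform(1.0, 2.0, n))        # lower-level Hessian (positive definite)
x_target = rng.standard_normal(n)            # used by the upper objective
lam = 0.1

def lower_solution(theta):
    # Stationarity of the lower problem: A x* = theta
    return np.linalg.solve(A, theta)

def upper_grad(theta):
    x_star = lower_solution(theta)
    # IFT: dx*/dtheta = -(d^2 f/dx^2)^{-1} d^2 f/(dx dtheta) = A^{-1}
    dx_dtheta = np.linalg.solve(A, np.eye(n))
    grad_x_F = x_star - x_target                  # dF/dx evaluated at x*
    grad_theta_F = lam * theta                    # explicit dF/dtheta
    return grad_theta_F + dx_dtheta.T @ grad_x_F  # total derivative via IFT

# First-order method on the upper problem using the IFT gradient.
theta = np.zeros(n)
for _ in range(200):
    theta -= 0.5 * upper_grad(theta)
print(lower_solution(theta))   # approaches x_target as lam -> 0
```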
no code implementations • 9 Feb 2022 • Nikos Arechiga, Francine Chen, Rumen Iliev, Emily Sumner, Scott Carter, Alex Filipowicz, Nayeli Bravo, Monica Van, Kate Glazko, Kalani Murakami, Laurent Denoue, Candice Hogan, Katharine Sieck, Charlene Wu, Kent Lyons
In this work, we focus on methods for personalizing interventions based on an individual's demographics in order to shift consumer preferences to be more positive towards Battery Electric Vehicles (BEVs).
no code implementations • 7 Dec 2021 • Nikos Arechiga, Francine Chen, Yan-Ying Chen, Yanxia Zhang, Rumen Iliev, Heishiro Toyoda, Kent Lyons
We develop a deep neural network (MACSYMA) that addresses symbolic regression as an end-to-end supervised learning problem.
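As a purely hypothetical illustration of this framing (not the MACSYMA architecture), the sketch below maps a table of (x, y) samples to the token sequence of the generating expression and trains with per-token cross-entropy; the vocabulary, model, and expression are invented for the example.

```python
# Hypothetical sketch: symbolic regression as end-to-end supervised learning,
# predicting the token sequence of an expression from numeric samples.
import torch
import torch.nn as nn

VOCAB = ["<pad>", "<sos>", "<eos>", "x", "sin", "cos", "+", "*", "1", "2"]
tok = {t: i for i, t in enumerate(VOCAB)}
MAX_LEN = 8

def encode(expr_tokens):
    ids = [tok["<sos>"]] + [tok[t] for t in expr_tokens] + [tok["<eos>"]]
    return torch.tensor(ids + [tok["<pad>"]] * (MAX_LEN - len(ids)))

# One training example: samples of y = sin(x) + x, labelled with its prefix form.
x = torch.linspace(-1, 1, 32)
y = torch.sin(x) + x
samples = torch.stack([x, y], dim=-1).reshape(1, -1)      # (1, 64)
target = encode(["+", "sin", "x", "x"]).unsqueeze(0)      # (1, MAX_LEN)

# Tiny encoder that emits one row of logits per output position.
model = nn.Sequential(
    nn.Linear(samples.shape[1], 64), nn.ReLU(),
    nn.Linear(64, MAX_LEN * len(VOCAB)),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(ignore_index=tok["<pad>"])

for _ in range(300):
    logits = model(samples).view(-1, MAX_LEN, len(VOCAB))  # (1, MAX_LEN, vocab)
    loss = loss_fn(logits.transpose(1, 2), target)         # token-level cross-entropy
    opt.zero_grad(); loss.backward(); opt.step()

pred = logits.argmax(-1)[0]
print([VOCAB[i] for i in pred])  # should recover the target token sequence
```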
1 code implementation • ICLR 2021 • Kaidi Cao, Yining Chen, Junwei Lu, Nikos Arechiga, Adrien Gaidon, Tengyu Ma
Real-world large-scale datasets are heteroskedastic and imbalanced: labels have varying levels of uncertainty, and label distributions are long-tailed.
Ranked #11 on Image Classification on WebVision-1000
no code implementations • 12 Sep 2019 • Nikos Arechiga, Jonathan DeCastro, Soonho Kong, Karen Leung
We describe the concept of logical scaffolds, which can be used to improve the quality of software that relies on AI components.
7 code implementations • NeurIPS 2019 • Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, Tengyu Ma
Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes.
Ranked #4 on Long-tail learning with class descriptors on CUB-LT