no code implementations • CVPR 2024 • Weiyao Wang, Pierre Gleize, Hao Tang, Xingyu Chen, Kevin J Liang, Matt Feiszli
Neural Radiance Fields (NeRF) exhibit remarkable performance for Novel View Synthesis (NVS) given a set of 2D images.
no code implementations • 22 Dec 2023 • Nikhil Mehta, Kevin J Liang, Jing Huang, Fu-Jen Chu, Li Yin, Tal Hassner
Out-of-distribution (OOD) detection is an important topic for real-world machine learning systems, but settings with limited in-distribution samples have been underexplored.
Out-of-Distribution Detection
no code implementations • 2 Dec 2023 • Vinay K Verma, Nikhil Mehta, Kevin J Liang, Aakansha Mishra, Lawrence Carin
Zero-shot learning (ZSL) is a promising approach to generalizing a model to categories unseen during training by leveraging class attributes, but challenges remain.
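For context, the attribute-leveraging idea can be sketched as a simple linear compatibility between image embeddings and per-class attribute vectors; this is a generic illustration assumed here, not the specific model proposed in the paper, and the names (`zero_shot_logits`, `class_attributes`, `W`) are hypothetical.

```python
import torch

def zero_shot_logits(image_features, class_attributes, W):
    """Score classes via a learned image-attribute compatibility.

    image_features: (B, d_img) embeddings from a visual backbone.
    class_attributes: (C, d_attr) attribute vectors, available even for
        classes never seen during training.
    W: (d_img, d_attr) compatibility matrix learned on seen classes only.
    Returns (B, C) scores; argmax over C predicts the (possibly unseen) class.
    """
    return image_features @ W @ class_attributes.t()

# Toy usage: 4 images, 10 candidate classes, 85 attributes, 512-d features.
scores = zero_shot_logits(torch.randn(4, 512), torch.randn(10, 85), torch.randn(512, 85))
predictions = scores.argmax(dim=1)
```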
2 code implementations • CVPR 2024 • Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Triantafyllos Afouras, Kumar Ashutosh, Vijay Baiyya, Siddhant Bansal, Bikram Boote, Eugene Byrne, Zach Chavis, Joya Chen, Feng Cheng, Fu-Jen Chu, Sean Crane, Avijit Dasgupta, Jing Dong, Maria Escobar, Cristhian Forigua, Abrham Gebreselasie, Sanjay Haresh, Jing Huang, Md Mohaiminul Islam, Suyog Jain, Rawal Khirodkar, Devansh Kukreja, Kevin J Liang, Jia-Wei Liu, Sagnik Majumder, Yongsen Mao, Miguel Martin, Effrosyni Mavroudi, Tushar Nagarajan, Francesco Ragusa, Santhosh Kumar Ramakrishnan, Luigi Seminara, Arjun Somayazulu, Yale Song, Shan Su, Zihui Xue, Edward Zhang, Jinxu Zhang, Angela Castillo, Changan Chen, Xinzhu Fu, Ryosuke Furuta, Cristina Gonzalez, Prince Gupta, Jiabo Hu, Yifei HUANG, Yiming Huang, Weslie Khoo, Anush Kumar, Robert Kuo, Sach Lakhavani, Miao Liu, Mi Luo, Zhengyi Luo, Brighid Meredith, Austin Miller, Oluwatumininu Oguntola, Xiaqing Pan, Penny Peng, Shraman Pramanick, Merey Ramazanova, Fiona Ryan, Wei Shan, Kiran Somasundaram, Chenan Song, Audrey Southerland, Masatoshi Tateno, Huiyu Wang, Yuchen Wang, Takuma Yagi, Mingfei Yan, Xitong Yang, Zecheng Yu, Shengxin Cindy Zha, Chen Zhao, Ziwei Zhao, Zhifan Zhu, Jeff Zhuo, Pablo Arbelaez, Gedas Bertasius, David Crandall, Dima Damen, Jakob Engel, Giovanni Maria Farinella, Antonino Furnari, Bernard Ghanem, Judy Hoffman, C. V. Jawahar, Richard Newcombe, Hyun Soo Park, James M. Rehg, Yoichi Sato, Manolis Savva, Jianbo Shi, Mike Zheng Shou, Michael Wray
We present Ego-Exo4D, a diverse, large-scale multimodal multiview video dataset and benchmark challenge.
no code implementations • ICCV 2023 • Peri Akiva, Jing Huang, Kevin J Liang, Rama Kovvuri, Xingyu Chen, Matt Feiszli, Kristin Dana, Tal Hassner
Understanding the visual world from the perspective of humans (egocentric) has been a long-standing challenge in computer vision.
1 code implementation • 24 Oct 2022 • Samrudhdhi B Rangrej, Kevin J Liang, Tal Hassner, James J Clark
Many online action prediction models observe complete frames to locate and attend to informative subregions of the frames, called glimpses, and then recognize an ongoing action based on both global and local information.
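As a rough illustration of the glimpse mechanism described above (a generic sketch with hypothetical names, not this paper's model), a glimpse is simply a local crop taken around a location proposed by an attention module:

```python
def extract_glimpse(frame, center, size=64):
    """Crop a square glimpse from a full frame around a predicted center.

    frame: (C, H, W) image array or tensor for the current time step.
    center: (x, y) location, e.g. output by an attention or policy module.
    The recognizer then combines this local patch with global frame context.
    """
    _, height, width = frame.shape
    x, y = center
    half = size // 2
    x0 = max(0, min(int(x) - half, width - size))
    y0 = max(0, min(int(y) - half, height - size))
    return frame[:, y0:y0 + size, x0:x0 + size]
```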
1 code implementation • 13 Oct 2022 • Jing Huang, Kevin J Liang, Rama Kovvuri, Tal Hassner
Most existing OCR methods focus on alphanumeric characters due to the popularity of English and numbers, as well as the availability of their corresponding datasets.
1 code implementation • CVPR 2022 • Kevin J Liang, Samrudhdhi B. Rangrej, Vladan Petrovic, Tal Hassner
Our results show that TraNFS is on par with leading FSL methods on clean support sets, yet far outperforms them in the presence of label noise.
1 code implementation • CVPR 2022 • Li Yin, Juan M Perez-Rua, Kevin J Liang
We study the challenging incremental few-shot object detection (iFSD) setting.
no code implementations • 7 Jan 2022 • Sachin Konan, Kevin J Liang, Li Yin
In many applications, such as autonomous driving, hand manipulation, or robot navigation, object detection methods must be able to detect objects unseen in the training set.
no code implementations • 29 Sep 2021 • Samrudhdhi Bharatkumar Rangrej, Kevin J Liang, Xi Yin, Guan Pang, Theofanis Karaletsos, Lior Wolf, Tal Hassner
Few-shot learning (FSL) methods aim to generalize a model to new unseen classes using only a small number of support examples.
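To make the support-set setup concrete, here is a generic sketch of sampling an N-way K-shot episode (the standard episodic protocol, assumed for illustration; `sample_episode` is a hypothetical name, not from the paper):

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15):
    """Sample one N-way K-shot episode from (example, label) pairs.

    Returns a small labeled support set and a query set drawn from the same
    N novel classes. Assumes each class has at least k_shot + q_queries examples.
    """
    by_class = defaultdict(list)
    for example, label in dataset:
        by_class[label].append(example)

    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for cls in classes:
        examples = random.sample(by_class[cls], k_shot + q_queries)
        support += [(x, cls) for x in examples[:k_shot]]
        query += [(x, cls) for x in examples[k_shot:]]
    return support, query
```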
no code implementations • 27 Apr 2021 • Weituo Hao, Mostafa El-Khamy, Jungwon Lee, Jianyi Zhang, Kevin J Liang, Changyou Chen, Lawrence Carin
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
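For reference, the aggregation step described above is often implemented as a dataset-size-weighted average of client parameters (a FedAvg-style sketch assumed for illustration, not necessarily this paper's approach; `client_states` and `client_sizes` are hypothetical names):

```python
def aggregate(client_states, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    client_states: list of state_dicts (parameter tensors) from clients.
    client_sizes: list of ints, number of local training examples per client.
    The server only ever sees these parameters, never the raw client data.
    """
    total = float(sum(client_sizes))
    return {
        key: sum(state[key].float() * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
```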
1 code implementation • CVPR 2021 • Jing Huang, Guan Pang, Rama Kovvuri, Mandy Toh, Kevin J Liang, Praveen Krishnan, Xi Yin, Tal Hassner
Recent advances in OCR have shown that an end-to-end (E2E) training pipeline that includes both detection and recognition leads to the best results.
1 code implementation • CVPR 2021 • Vinay Kumar Verma, Kevin J Liang, Nikhil Mehta, Piyush Rai, Lawrence Carin
However, for many methods of this type, the growth in the number of additional parameters can become computationally expensive at larger scales, at times prohibitively so.
no code implementations • 17 Mar 2021 • Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen
During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.
no code implementations • ICLR 2021 • Kevin J Liang, Weituo Hao, Dinghan Shen, Yufan Zhou, Weizhu Chen, Changyou Chen, Lawrence Carin
Large-scale language models have recently demonstrated impressive empirical performance.
no code implementations • 2 Oct 2020 • John B. Sigman, Gregory P. Spell, Kevin J Liang, Lawrence Carin
The data sources described earlier form two "domains": a hand-collected domain of images containing threats, and a real-world domain of images assumed to contain no threats.
no code implementations • 13 Aug 2020 • Weituo Hao, Nikhil Mehta, Kevin J Liang, Pengyu Cheng, Mostafa El-Khamy, Lawrence Carin
Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe's significant improvement to local test performance and fairness while simultaneously providing an extra layer of security.
no code implementations • NeurIPS 2020 • Nathan Inkawhich, Kevin J Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, Yiran Chen
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
no code implementations • ICLR 2020 • Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen
Almost all current adversarial attacks on CNN classifiers rely on information derived from the output layer of the network.
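To illustrate what is meant by relying on the output layer (the baseline setting, not the feature-space attack this paper develops), a one-step FGSM-style attack uses only the gradient of the classification loss computed on the output logits:

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step attack driven purely by the output-layer loss gradient.

    model: CNN classifier returning logits; x, y: input batch and labels.
    eps: L-infinity perturbation budget (for inputs scaled to [0, 1]).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss defined on output logits only
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # step in the direction that increases the loss
    return x_adv.clamp(0, 1).detach()
```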
no code implementations • 21 Apr 2020 • Nikhil Mehta, Kevin J Liang, Vinay K Verma, Lawrence Carin
Naively trained neural networks tend to experience catastrophic forgetting in sequential task settings, where data from previous tasks are unavailable.
no code implementations • 11 Feb 2020 • Yuewei Yang, Kevin J Liang, Lawrence Carin
These missing annotations can be problematic, as the standard cross-entropy loss employed to train object detection models treats classification as a positive-negative (PN) problem: unlabeled regions are implicitly assumed to be background.
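As a concrete illustration of the PN assumption (a generic sketch of a detector's classification objective, not the remedy proposed in the paper; names are hypothetical):

```python
import torch.nn.functional as F

def pn_classification_loss(scores, matched_to_gt):
    """Standard positive-negative objective for a detector's class head.

    scores: (num_regions,) predicted objectness logits.
    matched_to_gt: (num_regions,) bool tensor, True if a region was matched
        to an annotated ground-truth box.
    Every unmatched region is labeled background (0), so objects that were
    simply never annotated get pushed toward the negative class.
    """
    targets = matched_to_gt.float()  # unlabeled regions implicitly become 0
    return F.binary_cross_entropy_with_logits(scores, targets)
```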
no code implementations • 13 Dec 2019 • Kevin J Liang, John B. Sigman, Gregory P. Spell, Dan Strellis, William Chang, Felix Liu, Tejas Mehta, Lawrence Carin
We show performance of our models on held-out evaluation sets, analyze several design parameters, and demonstrate the potential of such systems for automated detection of threats that can be found in airports.
1 code implementation • NeurIPS 2019 • Kevin J Liang, Guoyin Wang, Yitong Li, Ricardo Henao, Lawrence Carin
We investigate time-dependent data analysis from the perspective of recurrent kernel machines, from which models with hidden units and gated memory cells arise naturally.
no code implementations • ICLR 2019 • Kevin J Liang, Chunyuan Li, Guoyin Wang, Lawrence Carin
We hypothesize that this is at least in part due to the evolution of the generator distribution and the catastrophic forgetting tendency of neural networks, which leads the discriminator to lose the ability to remember synthesized samples from previous instantiations of the generator.