no code implementations • ICLR 2019 • Lili Meng, Bo Zhao, Bo Chang, Gao Huang, Frederick Tung, Leonid Sigal
Our model is efficient because it uses a separable spatio-temporal attention mechanism, while still identifying the important parts of the video both spatially and temporally.
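As a rough illustration of what "separable" means here, the sketch below applies two small, independent attention heads: one over spatial locations within each frame, one over frames within the clip. It is a minimal PyTorch toy, not the paper's architecture; `SeparableSTAttention` and its layers are invented for illustration.

```python
import torch
import torch.nn as nn

class SeparableSTAttention(nn.Module):
    """Illustrative separable attention: spatial weights per frame, then temporal weights over frames."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)   # per-location score
        self.temporal = nn.Linear(channels, 1)                  # per-frame score

    def forward(self, x):
        # x: (batch, time, channels, height, width) frame features from a CNN backbone
        b, t, c, h, w = x.shape
        # spatial attention within each frame
        s = self.spatial(x.reshape(b * t, c, h, w))
        s = torch.softmax(s.reshape(b * t, -1), dim=-1).reshape(b, t, 1, h, w)
        pooled = (x * s).sum(dim=(-2, -1))                       # (b, t, c) spatially attended frames
        # temporal attention across frames
        a = torch.softmax(self.temporal(pooled).squeeze(-1), dim=-1)   # (b, t)
        video_feat = (pooled * a.unsqueeze(-1)).sum(dim=1)       # (b, c)
        return video_feat, s, a                                  # attended feature plus both attention maps

# usage: feats = torch.randn(2, 8, 256, 7, 7)
# out, spatial_attn, temporal_attn = SeparableSTAttention(256)(feats)
```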
no code implementations • 27 Jan 2023 • Wonho Bae, Mohamed Osama Ahmed, Frederick Tung, Gabriel L. Oliveira
In this work, we propose to train temporal point processes (TPPs) in a meta-learning framework, where each sequence is treated as a different task, via a novel framing of TPPs as neural processes (NPs).
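A minimal sketch of that framing, assuming a simple deterministic neural process: the observed events of one sequence are encoded into a permutation-invariant task embedding that conditions the intensity decoder. All module names are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class TPPAsNeuralProcess(nn.Module):
    """Illustrative NP-style encoder/decoder for one event sequence (one 'task')."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, context_gaps, query_times):
        # context_gaps: (n_context, 1) observed inter-event gaps of this sequence
        # query_times:  (n_query, 1) times at which to evaluate the intensity
        r = self.encoder(context_gaps).mean(dim=0)          # permutation-invariant task embedding
        r = r.expand(query_times.shape[0], -1)               # broadcast to each query point
        return self.decoder(torch.cat([r, query_times], dim=-1))   # (n_query, 1) log-intensity

# usage: TPPAsNeuralProcess()(torch.rand(12, 1), torch.rand(5, 1))
```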
no code implementations • 19 Nov 2022 • Mahmoud Salem, Mohamed Osama Ahmed, Frederick Tung, Gabriel Oliveira
This commonly encountered operational context calls for principled techniques for training ML models with the option to abstain from predicting when uncertain.
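One common baseline for prediction with abstention is simple confidence thresholding on the softmax output; the helper below sketches that baseline only, not the method proposed in the paper.

```python
import torch
import torch.nn.functional as F

def predict_with_abstention(logits, threshold=0.8):
    """Return the predicted class per sample, or -1 when max softmax confidence is below threshold."""
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    pred = pred.clone()
    pred[conf < threshold] = -1   # -1 marks "abstain"
    return pred, conf

# usage: preds, conf = predict_with_abstention(torch.randn(4, 10))
```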
1 code implementation • 30 May 2022 • Yu Gong, Greg Mori, Frederick Tung
Data imbalance, in which a plurality of the data samples come from a small proportion of labels, poses a challenge in training deep neural networks.
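A standard mitigation, shown purely for context and not the paper's contribution, is to reweight the loss inversely to label frequency:

```python
import torch
import torch.nn as nn

def inverse_frequency_weights(labels, num_classes):
    """Weight each class inversely to its frequency so rare labels contribute more to the loss."""
    counts = torch.bincount(labels, minlength=num_classes).float().clamp(min=1)
    return counts.sum() / (num_classes * counts)

labels = torch.randint(0, 10, (1000,))
criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(labels, num_classes=10))
```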
no code implementations • 29 Sep 2021 • Golara Javadi, Frederick Tung, Gabriel L. Oliveira
Parameter sharing approaches for deep multi-task learning share a common intuition: for a single network to perform multiple prediction tasks, the network needs to support multiple specialized execution paths.
1 code implementation • 20 Jun 2021 • Raquel Aoki, Frederick Tung, Gabriel L. Oliveira
In contrast to single-task learning, in which a separate model is trained for each target, multi-task learning (MTL) optimizes a single model to predict multiple related targets simultaneously.
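The usual starting point for MTL is hard parameter sharing: a shared trunk with one head per task. The sketch below shows that generic setup (names are illustrative), not the specific architecture studied in the paper.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared trunk with one head per task: the standard hard parameter-sharing setup."""
    def __init__(self, in_dim, hidden, task_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in task_dims])

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]   # one prediction per task

model = HardSharingMTL(in_dim=32, hidden=64, task_dims=[1, 1, 3])
outputs = model(torch.randn(8, 32))               # list of (8,1), (8,1), (8,3) tensors
```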
no code implementations • ECCV 2020 • Mengyao Zhai, Lei Chen, JiaWei He, Megha Nawhal, Frederick Tung, Greg Mori
In contrast, we propose a parameter-efficient framework, Piggyback GAN, which learns the current task by building a set of convolutional and deconvolutional filters that are factorized into filters of the models trained on previous tasks.
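Loosely sketching the factorization idea (a toy PyTorch layer, not the paper's exact formulation): the new task's filters are built as learned linear combinations of a frozen filter bank taken from earlier tasks, plus a small number of unconstrained filters.

```python
import torch
import torch.nn as nn

class FactorizedConv(nn.Module):
    """New-task filters formed as learned mixtures of frozen filters from earlier tasks."""
    def __init__(self, prev_weight, n_new_filters=4):
        super().__init__()
        out_c, in_c, k, _ = prev_weight.shape
        self.register_buffer("bank", prev_weight.reshape(out_c, -1))      # frozen filter bank
        self.coeff = nn.Parameter(torch.randn(out_c, out_c) * 0.01)       # learned mixing coefficients
        self.extra = nn.Parameter(torch.randn(n_new_filters, in_c, k, k) * 0.01)  # a few free filters
        self.in_c, self.k = in_c, k

    def forward(self, x):
        mixed = (self.coeff @ self.bank).reshape(-1, self.in_c, self.k, self.k)
        weight = torch.cat([mixed, self.extra], dim=0)
        return nn.functional.conv2d(x, weight, padding=self.k // 2)

prev = torch.randn(16, 3, 3, 3)         # filters from a model trained on an earlier task
out = FactorizedConv(prev)(torch.randn(2, 3, 32, 32))   # (2, 16 + 4, 32, 32)
```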
1 code implementation • CVPR 2021 • Xiaobin Chang, Frederick Tung, Greg Mori
We propose Discriminative Prototype DTW (DP-DTW), a novel method to learn class-specific discriminative prototypes for temporal recognition tasks.
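For intuition, a plain nearest-prototype DTW classifier is sketched below; note that DP-DTW learns the prototypes end-to-end, which requires a differentiable DTW formulation rather than the hard-min dynamic program shown here.

```python
import torch

def dtw_distance(x, y):
    """Classic dynamic-programming DTW between two sequences of feature vectors."""
    n, m = x.shape[0], y.shape[0]
    cost = torch.cdist(x.unsqueeze(0), y.unsqueeze(0)).squeeze(0)   # (n, m) pairwise distances
    acc = torch.full((n + 1, m + 1), float("inf"))
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + torch.min(
                torch.stack([acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]]))
    return acc[n, m]

# classify a sequence by its nearest class prototype under DTW
prototypes = [torch.randn(10, 8) for _ in range(3)]   # one prototype sequence per class
seq = torch.randn(25, 8)
pred = min(range(3), key=lambda c: dtw_distance(seq, prototypes[c]).item())
```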
1 code implementation • ICCV 2019 • Frederick Tung, Greg Mori
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network.
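The sentence describes distillation in general; as a reference point, the classic temperature-scaled formulation (not the similarity-preserving loss this paper proposes) looks like:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style KD: blend soft-target KL (at temperature T) with the usual cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```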
no code implementations • ECCV 2018 • Changan Chen, Frederick Tung, Naveen Vedula, Greg Mori
Deep neural network compression has the potential to bring modern resource-hungry deep networks to resource-limited devices.
no code implementations • CVPR 2018 • Frederick Tung, Greg Mori
This allows us to take advantage of the complementary nature of pruning and quantization and to recover from premature pruning errors, which is not possible with current two-stage approaches.
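For illustration only, a much-simplified single step that combines the two operations, magnitude pruning followed by uniform quantization of the surviving weights; the paper's approach makes these decisions jointly during training so it can recover from premature pruning, which this one-shot toy function does not capture.

```python
import torch

def prune_and_quantize(weight, keep_ratio=0.5, n_levels=16):
    """Zero out the smallest-magnitude weights, then snap the survivors to a small set of levels."""
    flat = weight.abs().flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.topk(k).values.min()
    mask = (weight.abs() >= threshold).float()
    kept = weight * mask
    lo, hi = kept.min(), kept.max()
    step = (hi - lo) / (n_levels - 1)
    quantized = torch.round((kept - lo) / step) * step + lo
    return quantized * mask   # pruned weights stay exactly zero

w_compressed = prune_and_quantize(torch.randn(64, 64))
```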
no code implementations • 28 Oct 2017 • Lili Meng, Frederick Tung, James J. Little, Julien Valentin, Clarence de Silva
Camera relocalization plays a vital role in many robotics and computer vision tasks, such as global localization, recovery from tracking failure, and loop closure detection.
no code implementations • ICLR 2018 • Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, David Begert
Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks.
1 code implementation • 22 Oct 2017 • Lili Meng, Jianhui Chen, Frederick Tung, James J. Little, Julien Valentin, Clarence W. de Silva
Camera relocalization plays a vital role in many robotics and computer vision tasks, such as global localization, recovery from tracking failure, and loop closure detection.
no code implementations • 28 Jul 2017 • Frederick Tung, Srikanth Muralidharan, Greg Mori
When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain.
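That generic recipe, for reference (standard torchvision usage and not this paper's contribution; the torchvision >= 0.13 `weights` API is assumed), looks like:

```python
import torch
import torch.nn as nn
from torchvision import models

# load an ImageNet-pretrained backbone
model = models.resnet18(weights="DEFAULT")

# freeze the backbone and replace the classifier head for the specialized domain
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 = number of classes in the target domain

# only the new head's parameters are optimized
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```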
no code implementations • 3 Jun 2017 • Nazanin Mehrasa, Yatao Zhong, Frederick Tung, Luke Bornn, Greg Mori
Activity analysis in which multiple people interact across a large space is challenging due to the interplay of individual actions and collective group dynamics.