Search Results for author: Frederick Tung

Found 24 papers, 7 papers with code

Similarity-Preserving Knowledge Distillation

1 code implementation ICCV 2019 Frederick Tung, Greg Mori

Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network.

Knowledge Distillation Neural Network Compression
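
For Similarity-Preserving Knowledge Distillation, a rough sketch of the core idea (matching pairwise activation similarities across a mini-batch between teacher and student) is given below; this is a PyTorch-style illustration, and function and variable names are ours, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def sp_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    """Similarity-preserving distillation loss for one layer pair.

    Activations are flattened per sample, turned into batch x batch
    similarity matrices, row-normalized, and compared with a mean
    squared Frobenius penalty.
    """
    b = student_feats.size(0)
    s = student_feats.reshape(b, -1)
    t = teacher_feats.reshape(b, -1)
    g_s = F.normalize(s @ s.t(), p=2, dim=1)  # student pairwise similarities
    g_t = F.normalize(t @ t.t(), p=2, dim=1)  # teacher pairwise similarities
    return ((g_s - g_t) ** 2).sum() / (b * b)
```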

RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression

1 code implementation 30 May 2022 Yu Gong, Greg Mori, Frederick Tung

Data imbalance, in which a plurality of the data samples come from a small proportion of labels, poses a challenge in training deep neural networks.

Inductive Bias regression +2

Learning Discriminative Prototypes with Dynamic Time Warping

1 code implementation CVPR 2021 Xiaobin Chang, Frederick Tung, Greg Mori

We propose Discriminative Prototype DTW (DP-DTW), a novel method to learn class-specific discriminative prototypes for temporal recognition tasks.

Action Segmentation Dynamic Time Warping +4
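
For background on the Dynamic Time Warping component of DP-DTW, here is a minimal NumPy sketch of the classic DTW distance between two 1-D sequences; the discriminative prototype learning itself is not reproduced.

```python
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```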

Heterogeneous Multi-task Learning with Expert Diversity

1 code implementation 20 Jun 2021 Raquel Aoki, Frederick Tung, Gabriel L. Oliveira

In contrast to single-task learning, in which a separate model is trained for each target, multi-task learning (MTL) optimizes a single model to predict multiple related targets simultaneously.

Multi-Task Learning
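
A minimal hard-parameter-sharing sketch of the single-model-many-targets setup described above; the paper itself studies mixture-of-experts style sharing with expert diversity, which this simplified example does not capture (names are illustrative).

```python
import torch
import torch.nn as nn

class SharedBottomMTL(nn.Module):
    """Minimal multi-task model: one shared trunk, one small head per task."""
    def __init__(self, in_dim: int, hidden: int, task_out_dims: list[int]):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_out_dims)

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        h = self.trunk(x)
        return [head(h) for head in self.heads]
```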

Backtracking Regression Forests for Accurate Camera Relocalization

1 code implementation 22 Oct 2017 Lili Meng, Jianhui Chen, Frederick Tung, James J. Little, Julien Valentin, Clarence W. de Silva

Camera relocalization plays a vital role in many robotics and computer vision tasks, such as global localization, recovery from tracking failure, and loop closure detection.

Camera Relocalization Loop Closure Detection +2

Tree Cross Attention

1 code implementation 29 Sep 2023 Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed

In this work, we propose Tree Cross Attention (TCA) - a module based on Cross Attention that only retrieves information from a logarithmic $\mathcal{O}(\log(N))$ number of tokens for performing inference.
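
A much-simplified, hypothetical illustration of Tree Cross Attention's premise of attending to only about log2(N) summaries: build a balanced binary tree of mean-pooled node summaries and cross-attend to the nodes on one root-to-leaf path. The actual TCA learns its tree retrieval, which this sketch does not reproduce; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def path_cross_attention(query: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Cross-attend to ~log2(N) node summaries along one greedily chosen
    root-to-leaf path of a mean-pooled binary tree.
    query: (d,), tokens: (N, d) with N a power of two."""
    # Build tree levels: level 0 = leaves, each higher level mean-pools pairs.
    levels = [tokens]
    while levels[-1].size(0) > 1:
        prev = levels[-1]
        levels.append(prev.reshape(-1, 2, prev.size(1)).mean(dim=1))
    # Walk from the root down, picking the child most similar to the query.
    path, idx = [], 0
    for level in reversed(levels[:-1]):
        left, right = level[2 * idx], level[2 * idx + 1]
        idx = 2 * idx if query @ left >= query @ right else 2 * idx + 1
        path.append(level[idx])
    keys = torch.stack(path)                           # (~log2 N, d)
    attn = F.softmax(keys @ query / keys.size(1) ** 0.5, dim=0)
    return attn @ keys                                 # attended summary vector
```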

Multi-level Residual Networks from Dynamical Systems View

no code implementations ICLR 2018 Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, David Begert

Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks.

General Classification Image Classification
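
The dynamical-systems reading behind the title: a residual update x_{t+1} = x_t + h·F(x_t) is a forward-Euler step of the ODE dx/dt = F(x), with a plain ResNet block corresponding to step size h = 1. A minimal sketch (illustrative, not the paper's multi-level architecture):

```python
import torch
import torch.nn as nn

class EulerResidualBlock(nn.Module):
    """Residual update x_{t+1} = x_t + h * F(x_t), i.e. one forward-Euler step
    of dx/dt = F(x). A standard ResNet block is the special case h = 1."""
    def __init__(self, dim: int, h: float = 1.0):
        super().__init__()
        self.h = h
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.h * self.f(x)
```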

Exploiting Points and Lines in Regression Forests for RGB-D Camera Relocalization

no code implementations 28 Oct 2017 Lili Meng, Frederick Tung, James J. Little, Julien Valentin, Clarence de Silva

Camera relocalization plays a vital role in many robotics and computer vision tasks, such as global localization, recovery from tracking failure and loop closure detection.

Camera Relocalization Loop Closure Detection +1

Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization

no code implementations 28 Jul 2017 Frederick Tung, Srikanth Muralidharan, Greg Mori

When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain.

Bayesian Optimization Network Pruning
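
A minimal sketch of the magnitude-pruning step that prune-and-fine-tune pipelines like this build on; the paper's contribution of choosing layer-wise compression levels with Bayesian optimization is not reproduced here (names are illustrative).

```python
import torch

def magnitude_prune(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Zero out the smallest-magnitude weights, keeping `keep_ratio` of them.
    Returns a 0/1 mask; fine-tuning then proceeds with the mask applied."""
    k = max(1, int(keep_ratio * weight.numel()))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).to(weight.dtype)
```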

Learning Person Trajectory Representations for Team Activity Analysis

no code implementations 3 Jun 2017 Nazanin Mehrasa, Yatao Zhong, Frederick Tung, Luke Bornn, Greg Mori

Activity analysis in which multiple people interact across a large space is challenging due to the interplay of individual actions and collective group dynamics.

CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization

no code implementations CVPR 2018 Frederick Tung, Greg Mori

This allows us to take advantage of the complementary nature of pruning and quantization and to recover from premature pruning errors, which is not possible with current two-stage approaches.

Image Classification Network Pruning +3
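
A hypothetical, simplified sketch of the complementary prune-and-quantize idea behind CLIP-Q: applying both operations to a fresh copy of the weights each step, so earlier pruning decisions can be revisited. This is not the paper's exact parametrization; names and the uniform quantizer are illustrative.

```python
import torch

def prune_and_quantize_step(weight: torch.Tensor, keep_ratio: float, n_levels: int) -> torch.Tensor:
    """One compression step: magnitude-prune, then uniformly quantize the
    surviving weights to `n_levels` values."""
    w = weight.detach().clone()
    k = max(1, int(keep_ratio * w.numel()))
    thresh = w.abs().flatten().kthvalue(w.numel() - k + 1).values
    mask = w.abs() >= thresh
    kept = w[mask]
    lo, hi = kept.min(), kept.max()
    step = ((hi - lo) / max(n_levels - 1, 1)).clamp_min(1e-8)
    w[mask] = torch.round((kept - lo) / step) * step + lo
    w[~mask] = 0.0
    return w
```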

Constraint-Aware Deep Neural Network Compression

no code implementations ECCV 2018 Changan Chen, Frederick Tung, Naveen Vedula, Greg Mori

Deep neural network compression has the potential to bring modern resource-hungry deep networks to resource-limited devices.

Bayesian Optimization Neural Network Compression +1

Where and when to look? Spatial-temporal attention for action recognition in videos

no code implementations ICLR 2019 Lili Meng, Bo Zhao, Bo Chang, Gao Huang, Frederick Tung, Leonid Sigal

Our model is efficient, as it proposes a separable spatio-temporal mechanism for video attention, while being able to identify important parts of the video both spatially and temporally.

Action Recognition In Videos Temporal Action Localization +1
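
A generic sketch of a separable spatial-then-temporal attention pooling over per-frame features, in the spirit of the "where and when to look" decomposition; it is illustrative only and not the paper's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableSTAttention(nn.Module):
    """Per-frame spatial map (where to look) followed by per-frame temporal
    weights (when to look), used to pool a video feature."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)   # (T,1,H,W) logits
        self.temporal = nn.Linear(channels, 1)                  # per-frame logit

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T, C, H, W) features of one video clip
        t, c, h, w = feats.shape
        s = F.softmax(self.spatial(feats).reshape(t, -1), dim=1).reshape(t, 1, h, w)
        frame = (feats * s).sum(dim=(2, 3))          # (T, C) spatially pooled
        a = F.softmax(self.temporal(frame), dim=0)   # (T, 1) temporal weights
        return (frame * a).sum(dim=0)                # (C,) clip descriptor
```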

Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation

no code implementations ECCV 2020 Mengyao Zhai, Lei Chen, JiaWei He, Megha Nawhal, Frederick Tung, Greg Mori

In contrast, we propose a parameter efficient framework, Piggyback GAN, which learns the current task by building a set of convolutional and deconvolutional filters that are factorized into filters of the models trained on previous tasks.
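
A hypothetical sketch of the filter factorization described above: a new task's convolution filters are learned linear combinations of a frozen bank of filters from previously trained models, plus a few unconstrained new filters. Shapes and names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PiggybackConv(nn.Module):
    """Conv layer whose filters mix a frozen bank of previous-task filters
    with a small number of new, unconstrained filters."""
    def __init__(self, bank: torch.Tensor, n_free: int, out_channels: int):
        super().__init__()
        # bank: (n_bank, in_ch, k, k) filters from previously trained models (frozen)
        self.register_buffer("bank", bank)
        n_bank, in_ch, k, _ = bank.shape
        self.mix = nn.Parameter(torch.randn(out_channels - n_free, n_bank) * 0.01)
        self.free = nn.Parameter(torch.randn(n_free, in_ch, k, k) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        combined = torch.einsum("ob,bikj->oikj", self.mix, self.bank)
        weight = torch.cat([combined, self.free], dim=0)
        return F.conv2d(x, weight, padding=weight.size(-1) // 2)
```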

DynaShare: Dynamic Neural Networks for Multi-Task Learning

1 code implementation 29 Sep 2021 Golara Javadi, Frederick Tung, Gabriel L. Oliveira

Parameter sharing approaches for deep multi-task learning share a common intuition: for a single network to perform multiple prediction tasks, the network needs to support multiple specialized execution paths.

Multi-Task Learning
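
A hypothetical sketch of task-specific execution paths through shared layers via per-task gates; this is a simplification of the idea rather than the paper's method, and all names are illustrative.

```python
import torch
import torch.nn as nn

class GatedSharedNet(nn.Module):
    """Shared layers plus a per-task gate over layers, so each task follows
    its own execution path through (a subset of) the shared network."""
    def __init__(self, dim: int, n_layers: int, n_tasks: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_layers)
        )
        self.gates = nn.Parameter(torch.ones(n_tasks, n_layers))  # task x layer logits

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        gate = torch.sigmoid(self.gates[task])
        for g, layer in zip(gate, self.layers):
            x = g * layer(x) + (1 - g) * x   # soft skip: execute or pass through
        return x
```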

Gumbel-Softmax Selective Networks

no code implementations 19 Nov 2022 Mahmoud Salem, Mohamed Osama Ahmed, Frederick Tung, Gabriel Oliveira

This commonly encountered operational context calls for principled techniques for training ML models with the option to abstain from predicting when uncertain.
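
A hypothetical sketch of training with a differentiable predict/abstain decision sampled via Gumbel-Softmax; the abstention cost and head design are illustrative and not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def selective_loss(task_loss: torch.Tensor, select_logits: torch.Tensor,
                   abstain_cost: float = 0.5, tau: float = 1.0) -> torch.Tensor:
    """Per-sample selective objective with a Gumbel-Softmax predict/abstain decision.
    task_loss: (B,) per-sample prediction losses
    select_logits: (B, 2) logits for [predict, abstain]"""
    decision = F.gumbel_softmax(select_logits, tau=tau, hard=True)  # (B, 2) one-hot
    predict, abstain = decision[:, 0], decision[:, 1]
    return (predict * task_loss + abstain * abstain_cost).mean()
```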

Meta Temporal Point Processes

no code implementations 27 Jan 2023 Wonho Bae, Mohamed Osama Ahmed, Frederick Tung, Gabriel L. Oliveira

In this work, we propose to train TPPs in a meta learning framework, where each sequence is treated as a different task, via a novel framing of TPPs as neural processes (NPs).

Meta-Learning Point Processes
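
For background on the point-process side, here is a minimal sketch of the standard temporal point process negative log-likelihood (sum of log-intensities at the observed events minus the integrated intensity); the paper's meta-learning, neural-process framing is not reproduced here, and the grid approximation is an assumption of this sketch.

```python
import torch

def tpp_nll(log_intensity_at_events: torch.Tensor,
            intensity_on_grid: torch.Tensor, horizon: float) -> torch.Tensor:
    """NLL of a temporal point process on [0, horizon]:
    NLL = -sum_i log lambda(t_i) + integral_0^T lambda(t) dt,
    with the integral approximated by a uniform-grid Riemann sum."""
    compensator = intensity_on_grid.mean() * horizon
    return -(log_intensity_at_events.sum() - compensator)
```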

Ranking Regularization for Critical Rare Classes: Minimizing False Positives at a High True Positive Rate

no code implementations CVPR 2023 Kiarash Mohammadi, He Zhao, Mengyao Zhai, Frederick Tung

In this paper, we present a novel approach to address the challenge of minimizing false positives for systems that need to operate at a high true positive rate.

Memory Efficient Neural Processes via Constant Memory Attention Block

no code implementations 23 May 2023 Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed

Neural Processes (NPs) are popular meta-learning methods for efficiently modelling predictive uncertainty.

Meta-Learning

Constant Memory Attention Block

no code implementations 21 Jun 2023 Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed

Modern foundation model architectures rely on attention mechanisms to effectively capture context.

Point Processes

Prompting-based Temporal Domain Generalization

no code implementations 3 Oct 2023 Sepidehsadat Hosseini, Mengyao Zhai, Hossein Hajimirsadeghi, Frederick Tung

Machine learning traditionally assumes that the training and testing data are distributed independently and identically.

Domain Generalization Time Series +1

AdaFlood: Adaptive Flood Regularization

no code implementations 6 Nov 2023 Wonho Bae, Yi Ren, Mohamed Osama Ahmed, Frederick Tung, Danica J. Sutherland, Gabriel L. Oliveira

Although neural networks are conventionally optimized towards zero training loss, it has recently been shown that targeting a non-zero training loss threshold, referred to as a flood level, often enables better test-time generalization.
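
A minimal sketch of the flooding objective referenced above, |loss − b| + b, which keeps training loss near a target level b rather than driving it to zero. Here b is passed as a per-sample tensor to reflect AdaFlood's adaptive flood level, though how AdaFlood actually sets it is not reproduced.

```python
import torch

def flooded_loss(per_sample_loss: torch.Tensor, flood_level: torch.Tensor) -> torch.Tensor:
    """Flood regularization: keep training loss near a target level b via |loss - b| + b.
    flood_level may be a scalar (classic flooding) or per-sample (adaptive)."""
    return ((per_sample_loss - flood_level).abs() + flood_level).mean()
```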

Pretext Training Algorithms for Event Sequence Data

no code implementations 16 Feb 2024 Yimu Wang, He Zhao, Ruizhi Deng, Frederick Tung, Greg Mori

Pretext training followed by task-specific fine-tuning has been a successful approach in vision and language domains.

Contrastive Learning
