Search Results for author: Mathieu Tuli

Found 6 papers, 4 papers with code

3D Face Tracking from 2D Video through Iterative Dense UV to Image Flow

no code implementations · 15 Apr 2024 · Felix Taubner, Prashant Raina, Mathieu Tuli, Eu Wern Teh, Chul Lee, Jinmiao Huang

Because such methods are expensive, and because 2D videos are widely available, recent methods have focused on performing monocular 3D face tracking.

Disentanglement · Face Model

Learning to Follow Instructions in Text-Based Games

1 code implementation · 8 Nov 2022 · Mathieu Tuli, Andrew C. Li, Pashootan Vaezipoor, Toryn Q. Klassen, Scott Sanner, Sheila A. McIlraith

Text-based games present a unique class of sequential decision-making problems in which agents interact with a partially observable, simulated environment via actions and observations conveyed through natural language.

Decision Making · Instruction Following · +2

Exploiting Explainable Metrics for Augmented SGD

2 code implementations · CVPR 2022 · Mahdi S. Hosseini, Mathieu Tuli, Konstantinos N. Plataniotis

In this paper, we address the following question: can we probe intermediate layers of a deep neural network to identify and quantify the learning quality of each layer?

Stochastic Optimization
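The idea of probing a layer's learning quality can be illustrated with a minimal sketch: one simple proxy (an assumption here, not necessarily the paper's exact metric) is the effective rank of a layer's weight matrix, computed from the spectral energy of its singular values.

```python
import numpy as np

def layer_quality(weight, energy_threshold=0.9):
    """Proxy for a layer's learning quality: the normalized effective
    rank of its weights, i.e. how many singular values are needed to
    capture `energy_threshold` of the spectral energy."""
    # Flatten a conv kernel (out_ch, in_ch, kH, kW) into a 2D matrix.
    mat = weight.reshape(weight.shape[0], -1)
    s = np.linalg.svd(mat, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    rank = int(np.searchsorted(energy, energy_threshold)) + 1
    return rank / min(mat.shape)  # in (0, 1]; lower => more compressible

# Probe a randomly initialized 3x3 conv layer with 64 output channels.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32, 3, 3))
print(layer_quality(w))
```

Tracking such a per-layer quantity over training epochs is one way a metric of this kind could feed back into the optimizer.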

Towards Robust and Automatic Hyper-Parameter Tuning

1 code implementation · 28 Nov 2021 · Mathieu Tuli, Mahdi S. Hosseini, Konstantinos N. Plataniotis

In this work, we introduce a new class of HPO methods and explore how the low-rank factorization of the convolutional weights of a network's intermediate layers can define an analytical response surface for optimizing hyper-parameters, using only training data.

Bayesian Optimization
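The response-surface idea can be sketched as follows: suppose a low-rank metric of the conv weights has been measured after short training probes at a few hyper-parameter settings; fitting an analytical surface to those measurements then yields a suggested optimum in closed form. All values and the quadratic form below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical measurements: for several learning rates, a per-layer
# low-rank metric of the conv weights after a short training probe
# (the numbers here are made up for illustration).
log_lrs = np.log10([1e-4, 1e-3, 1e-2, 1e-1])
metric = np.array([0.42, 0.61, 0.58, 0.30])

# Fit a quadratic response surface metric ~ a*x^2 + b*x + c over the
# log learning rate, then take its analytical maximizer as the
# suggested hyper-parameter value.
a, b, c = np.polyfit(log_lrs, metric, deg=2)
best_log_lr = -b / (2 * a)  # vertex of a concave parabola (a < 0)
print(10 ** best_log_lr)
```

The appeal of an analytical surface over, say, black-box Bayesian optimization is that the optimum is obtained in closed form from cheap training-only measurements.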

CONet: Channel Optimization for Convolutional Neural Networks

1 code implementation · 15 Aug 2021 · Mahdi S. Hosseini, Jia Shu Zhang, Zhe Liu, Andre Fu, Jingxuan Su, Mathieu Tuli, Sepehr Hosseini, Arsh Kadakia, Haoran Wang, Konstantinos N. Plataniotis

To solve this, we introduce an efficient dynamic scaling algorithm, CONet, that automatically optimizes channel sizes across network layers for a given CNN.

Neural Architecture Search
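One step of a dynamic channel-scaling loop might look like the sketch below. The grow/shrink rule and thresholds are hypothetical stand-ins for illustration, not CONet's actual update; the per-layer utilities would come from some analysis of each layer's learned weights.

```python
def scale_channels(channel_sizes, utilities, grow=1.25, shrink=0.8,
                   thresh=0.5, min_channels=8):
    """One step of a dynamic channel-scaling heuristic: layers whose
    weights look well-utilized (high utility) are grown, while
    under-utilized layers are shrunk, subject to a floor."""
    new_sizes = []
    for size, u in zip(channel_sizes, utilities):
        factor = grow if u >= thresh else shrink
        new_sizes.append(max(min_channels, int(round(size * factor))))
    return new_sizes

# Three conv layers; the middle one looks under-utilized.
print(scale_channels([64, 128, 256], [0.7, 0.4, 0.9]))
```

Iterating this step and retraining between iterations is the general shape of an automatic channel-size search.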

Response Modeling of Hyper-Parameters for Deep Convolutional Neural Networks

no code implementations · 1 Jan 2021 · Mathieu Tuli, Mahdi S. Hosseini, Konstantinos N. Plataniotis

Hyper-parameter optimization (HPO) is critical to training high-performing deep neural networks (DNNs).
