Search Results for author: Quynh Nguyen

Found 17 papers, 1 paper with code

TaDeR: A New Task Dependency Recommendation for Project Management Platform

no code implementations 12 May 2022 Quynh Nguyen, Dac H. Nguyen, Son T. Huynh, Hoa K. Dam, Binh T. Nguyen

This paper proposes an efficient task dependency recommendation algorithm to suggest tasks dependent on a given task that the user has just created.

Feature Engineering, Management
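
The excerpt gives no algorithmic detail, so the snippet below is only a hypothetical sketch of the general idea, not the TaDeR method: it ranks existing tasks as candidate dependencies of a newly created task by TF-IDF cosine similarity over their descriptions (the task texts and the recommend_dependencies helper are invented for illustration).

    # Hypothetical sketch (not the TaDeR algorithm): rank existing tasks as
    # candidate dependencies of a new task by TF-IDF cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def recommend_dependencies(existing_tasks, new_task, top_k=3):
        vectorizer = TfidfVectorizer()
        task_vecs = vectorizer.fit_transform(existing_tasks)
        new_vec = vectorizer.transform([new_task])
        scores = cosine_similarity(new_vec, task_vecs).ravel()
        ranked = scores.argsort()[::-1][:top_k]
        return [(existing_tasks[i], float(scores[i])) for i in ranked]

    existing = ["Design login page UI", "Implement login API endpoint",
                "Set up CI pipeline"]
    print(recommend_dependencies(existing, "Write tests for login API"))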

When Are Solutions Connected in Deep Networks?

1 code implementation NeurIPS 2021 Quynh Nguyen, Pierre Brechet, Marco Mondelli

More specifically, we show that: (i) under generic assumptions on the features of intermediate layers, it suffices that the last two hidden layers have on the order of $\sqrt{N}$ neurons ($N$ being the number of training samples), and (ii) if subsets of features at each layer are linearly separable, then no over-parameterization is needed to show connectivity.
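
One crude way to probe this kind of connectivity numerically (a sketch of the concept only, not the paper's construction) is to evaluate the training loss along a straight path between two low-loss parameter vectors; here a toy least-squares model stands in for a deep network.

    # Minimal sketch: loss along the linear path between two solutions.
    import numpy as np

    def loss(theta, X, y):
        # toy squared loss; a deep network's loss would replace this
        return float(np.mean((X @ theta - y) ** 2))

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(60, 10)), rng.normal(size=60)
    theta_a = np.linalg.lstsq(X, y, rcond=None)[0]      # one low-loss solution
    theta_b = theta_a + 0.05 * rng.normal(size=10)      # a nearby perturbation

    for t in np.linspace(0.0, 1.0, 5):
        theta_t = (1 - t) * theta_a + t * theta_b        # point on the path
        print(f"t={t:.2f}  loss={loss(theta_t, X, y):.4f}")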

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths

no code implementations 24 Jan 2021 Quynh Nguyen

Some highlights of our setting: (i) all the layers are trained with standard gradient descent, (ii) the network has standard parameterization as opposed to the NTK one, and (iii) the network has a single wide layer as opposed to having all wide hidden layers as in most of NTK-related results.

A Fully Rigorous Proof of the Derivation of Xavier and He's Initialization for Deep ReLU Networks

no code implementations 21 Jan 2021 Quynh Nguyen

A fully rigorous proof of the derivation of Xavier/He's initialization for ReLU nets is given.
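
For reference, the initializations in question are well known: He initialization draws ReLU-layer weights with variance 2/fan_in, while Xavier (Glorot) initialization uses variance 2/(fan_in + fan_out). A minimal NumPy sketch:

    # He and Xavier/Glorot initialization for a fully connected layer.
    import numpy as np

    rng = np.random.default_rng(0)

    def he_init(fan_in, fan_out):
        # He et al.: Var(W) = 2 / fan_in, matched to ReLU activations
        return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

    def xavier_init(fan_in, fan_out):
        # Glorot & Bengio: Var(W) = 2 / (fan_in + fan_out)
        return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_out, fan_in))

    W = he_init(fan_in=784, fan_out=256)
    print(W.std())   # close to sqrt(2/784), about 0.0505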

A Note on Connectivity of Sublevel Sets in Deep Learning

no code implementations 21 Jan 2021 Quynh Nguyen

It is shown that for deep neural networks, a single wide layer of width $N+1$ ($N$ being the number of training samples) suffices to prove the connectivity of sublevel sets of the training loss function.

Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks

no code implementations 21 Dec 2020 Quynh Nguyen, Marco Mondelli, Guido Montufar

In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices for deep ReLU nets, both in the limiting case of infinite widths and for finite widths.

Memorization
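
For context, the NTK Gram matrix of a finite network has entries K_ij = <grad_theta f(x_i), grad_theta f(x_j)>, and its smallest eigenvalue can be computed directly; the two-layer ReLU toy below is a sketch of that definition only, not of the paper's bounds.

    # Empirical NTK Gram matrix for a tiny two-layer ReLU net (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)
    N, d, m = 20, 5, 100                     # samples, input dim, hidden width
    X = rng.normal(size=(N, d))
    W = rng.normal(scale=np.sqrt(2.0 / d), size=(m, d))   # He-initialized hidden layer
    v = rng.normal(scale=1.0 / np.sqrt(m), size=m)        # output layer

    def grad_f(x):
        # gradient of f(x) = v^T relu(W x) w.r.t. (W, v), flattened
        pre = W @ x
        act = np.maximum(pre, 0.0)
        dW = np.outer(v * (pre > 0), x)      # d f / d W
        dv = act                             # d f / d v
        return np.concatenate([dW.ravel(), dv])

    J = np.stack([grad_f(x) for x in X])     # N x (#params) Jacobian
    K = J @ J.T                              # empirical NTK Gram matrix
    print("smallest eigenvalue:", np.linalg.eigvalsh(K)[0])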

Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology

no code implementations NeurIPS 2020 Quynh Nguyen, Marco Mondelli

Recent works have shown that gradient descent can find a global minimum for over-parameterized neural networks where the widths of all the hidden layers scale polynomially with $N$ ($N$ being the number of training samples).

On Connected Sublevel Sets in Deep Learning

no code implementations 22 Jan 2019 Quynh Nguyen

This paper shows that every sublevel set of the loss function of a class of deep over-parameterized neural nets with piecewise linear activation functions is connected and unbounded.

On the loss landscape of a class of deep neural networks with no bad local valleys

no code implementations ICLR 2019 Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein

We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and gets arbitrarily close to zero.

The loss surface and expressivity of deep convolutional neural networks

no code implementations ICLR 2018 Quynh Nguyen, Matthias Hein

We show that such CNNs produce linearly independent features at a “wide” layer which has more neurons than the number of training samples.
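
A quick numerical illustration of that statement (not the paper's argument): when a random ReLU layer has more neurons than training samples, the N x width feature matrix typically has full row rank N, i.e. the feature vectors are linearly independent.

    # Sketch: rank check of features at a "wide" random ReLU layer.
    import numpy as np

    rng = np.random.default_rng(0)
    N, d, width = 50, 10, 128                  # width > N, so the layer is "wide"
    X = rng.normal(size=(N, d))
    W = rng.normal(size=(width, d))
    features = np.maximum(X @ W.T, 0.0)        # ReLU features, shape N x width
    print(np.linalg.matrix_rank(features) == N)  # True => linearly independent rows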

Optimization Landscape and Expressivity of Deep CNNs

no code implementations ICML 2018 Quynh Nguyen, Matthias Hein

We show that such CNNs produce linearly independent features at a "wide" layer which has more neurons than the number of training samples.

The loss surface of deep and wide neural networks

no code implementations ICML 2017 Quynh Nguyen, Matthias Hein

While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that deep networks can be trained without getting stuck in suboptimal points.

Latent Embeddings for Zero-shot Classification

no code implementations CVPR 2016 Yongqin Xian, Zeynep Akata, Gaurav Sharma, Quynh Nguyen, Matthias Hein, Bernt Schiele

We train the model with a ranking-based objective function which penalizes incorrect rankings of the true class for a given image.

Classification, General Classification +1
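
The excerpt does not spell the objective out; a generic multiclass ranking (hinge) loss of the kind described, with a hypothetical per-class compatibility score, would look roughly like this sketch (not necessarily the paper's exact formulation):

    # Generic ranking-style loss sketch: penalize any wrong class whose score
    # exceeds the true class's score minus a margin.
    import numpy as np

    def ranking_loss(scores, true_idx, margin=1.0):
        # scores: compatibility of one image with every class, shape (num_classes,)
        violations = margin + scores - scores[true_idx]
        violations[true_idx] = 0.0
        return float(np.maximum(violations, 0.0).sum())

    scores = np.array([0.2, 1.5, 0.9])   # hypothetical class scores for one image
    print(ranking_loss(scores, true_idx=1))  # 0.4: class 2 ranks too close to class 1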

An Efficient Multilinear Optimization Framework for Hypergraph Matching

no code implementations 9 Nov 2015 Quynh Nguyen, Francesco Tudisco, Antoine Gautier, Matthias Hein

Hypergraph matching has recently become a popular approach for solving correspondence problems in computer vision, as it allows the integration of higher-order geometric information.

Hypergraph Matching
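
The higher-order information typically enters through an affinity tensor over triples of candidate correspondences; the snippet below sketches the third-order matching score such methods maximize, with a random tensor and a random permutation standing in for real data (illustration only, not the paper's algorithm).

    # Sketch of a third-order hypergraph matching score: a multilinear form of an
    # affinity tensor evaluated at a flattened assignment matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2 = 4, 4                              # points in the two point sets
    A = rng.random(size=(n1 * n2,) * 3)        # third-order affinity tensor
    X = np.eye(n1)[:, rng.permutation(n2)]     # a permutation = discrete assignment
    x = X.ravel()                              # flattened assignment vector

    # matching score F(x) = sum_{i,j,k} A[i,j,k] * x_i * x_j * x_k
    score = np.einsum('ijk,i,j,k->', A, x, x, x)
    print("matching score:", float(score))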

A Flexible Tensor Block Coordinate Ascent Scheme for Hypergraph Matching

no code implementations CVPR 2015 Quynh Nguyen, Antoine Gautier, Matthias Hein

We propose two algorithms, both of which come with a guarantee of monotonic ascent in the matching score on the set of discrete assignment matrices.

Graph Matching, Hypergraph Matching +1
