Search Results for author: Timothy Nguyen

Found 7 papers, 3 papers with code

Is forgetting less a good inductive bias for forward transfer?

no code implementations 14 Mar 2023 Jiefeng Chen, Timothy Nguyen, Dilan Gorur, Arslan Chaudhry

We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner in order to preserve knowledge of previous tasks.

Continual Learning Image Classification +1

Architecture Matters in Continual Learning

no code implementations 1 Feb 2022 Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar

However, in this work, we show that the choice of architecture can significantly impact the continual learning performance, and different architectures lead to different trade-offs between the ability to remember previous tasks and learning new ones.

Continual Learning

A Response to Economics as Gauge Theory

no code implementations 7 Dec 2021 Timothy Nguyen

We provide an analysis of the recent work by Malaney-Weinstein on "Economics as Gauge Theory", presented on November 10, 2021 at the Money and Banking Workshop hosted by the University of Chicago.

Dataset Distillation with Infinitely Wide Convolutional Networks

2 code implementations NeurIPS 2021 Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee

The effectiveness of machine learning algorithms arises from being able to extract useful features from large amounts of data.

Image Classification Meta-Learning

Dataset Meta-Learning from Kernel Ridge-Regression

no code implementations ICLR 2021 Timothy Nguyen, Zhourong Chen, Jaehoon Lee

One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm.

Meta-Learning regression

Dataset Meta-Learning from Kernel Ridge-Regression

1 code implementation 30 Oct 2020 Timothy Nguyen, Zhourong Chen, Jaehoon Lee

One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm.

Meta-Learning regression
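The two "Dataset Meta-Learning from Kernel Ridge-Regression" entries above describe learning a small synthetic dataset by differentiating through a closed-form kernel ridge-regression fit, an idea the NeurIPS 2021 dataset-distillation paper extends to infinite-width convolutional kernels. Below is a minimal sketch of that general idea; the RBF kernel, support-set size, learning rate, and plain gradient descent are illustrative assumptions, not the papers' actual configuration.

```python
# Minimal sketch: distill a dataset by optimizing a small synthetic support set
# (xs, ys) so that the kernel ridge-regression predictor fit on it does well on
# real target data. Kernel, sizes, and optimizer are illustrative assumptions.
import jax
import jax.numpy as jnp

def rbf_kernel(a, b, gamma=0.1):
    # k(x, x') = exp(-gamma * ||x - x'||^2)
    sq = jnp.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-gamma * sq)

def krr_loss(params, x_target, y_target, reg=1e-3):
    xs, ys = params
    k_ss = rbf_kernel(xs, xs)
    k_ts = rbf_kernel(x_target, xs)
    # Closed-form KRR predictions on the target set, fit on the synthetic set.
    alpha = jnp.linalg.solve(k_ss + reg * jnp.eye(xs.shape[0]), ys)
    preds = k_ts @ alpha
    return jnp.mean((preds - y_target) ** 2)

def distill(x_target, y_target, n_support=10, steps=200, lr=0.1, seed=0):
    kx, ky = jax.random.split(jax.random.PRNGKey(seed))
    xs = jax.random.normal(kx, (n_support, x_target.shape[1]))
    ys = jax.random.normal(ky, (n_support, y_target.shape[1]))
    grad_fn = jax.jit(jax.grad(krr_loss))
    for _ in range(steps):
        gx, gy = grad_fn((xs, ys), x_target, y_target)
        xs, ys = xs - lr * gx, ys - lr * gy  # plain gradient descent on the support set
    return xs, ys
```

For classification, y_target would typically be one-hot label vectors, so the learned ys become soft synthetic labels.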

Measuring Calibration in Deep Learning

2 code implementations 2 Apr 2019 Jeremy Nixon, Mike Dusenberry, Ghassen Jerfel, Timothy Nguyen, Jeremiah Liu, Linchuan Zhang, Dustin Tran

In this paper, we perform a comprehensive empirical study of choices in calibration measures, including measuring all probabilities rather than just the maximum prediction, thresholding probability values, class conditionality, number of bins, bins that are adaptive to the datapoint density, and the norm used to compare accuracies to confidences.
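Two of the choices listed above, equal-width versus adaptive (equal-mass) binning when estimating calibration error from the maximum predicted probability, are illustrated in the sketch below. The bin count and L1-style weighting are illustrative assumptions and not the paper's exact metric definitions or official implementations.

```python
# Minimal sketch of expected calibration error with either equal-width or
# adaptive (equal-mass) bins; parameters here are illustrative assumptions.
import numpy as np

def calibration_error(probs, labels, n_bins=15, adaptive=False):
    """probs: (n, k) predicted class probabilities; labels: (n,) integer labels."""
    confidences = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    if adaptive:
        # Adaptive bins: edges chosen so each bin holds roughly the same number of points.
        edges = np.quantile(confidences, np.linspace(0.0, 1.0, n_bins + 1))
    else:
        # Equal-width bins over [0, 1].
        edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(confidences, edges[1:-1])  # assign each point to a bin 0..n_bins-1
    error = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            error += mask.mean() * gap  # weight the gap by the bin's share of the data
    return error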
