no code implementations • 14 Mar 2023 • Jiefeng Chen, Timothy Nguyen, Dilan Gorur, Arslan Chaudhry
We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner to preserve knowledge of previous tasks.
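As a point of reference for this claim, forward transfer is commonly quantified by comparing performance on a new task when starting from the continual learner's state against a from-scratch baseline. The sketch below is illustrative only; the function name and the simple accuracy difference are assumptions, not the paper's exact metric.

```python
def forward_transfer(acc_from_learner: float, acc_from_scratch: float) -> float:
    """Illustrative forward-transfer score (an assumption, not the paper's metric).

    acc_from_learner: accuracy on the new task when initialized from the
        continual learner's state.
    acc_from_scratch: accuracy on the new task when trained from scratch.
    Positive values mean the learner's accumulated knowledge helped.
    """
    return acc_from_learner - acc_from_scratch
```

Under the paper's argument, this score should reflect the quality of the learner's transferred knowledge, not the stability constraints (e.g. regularization or replay) imposed to prevent forgetting.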
no code implementations • 1 Feb 2022 • Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar
However, in this work, we show that the choice of architecture can significantly impact continual learning performance, and that different architectures lead to different trade-offs between the ability to remember previous tasks and the ability to learn new ones.
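The trade-off described here is typically measured with two standard continual-learning metrics, average accuracy and average forgetting, computed from a task-by-task accuracy matrix. A minimal sketch, assuming the common definitions from the continual-learning literature (not code from this paper):

```python
import numpy as np

def average_accuracy(acc_matrix: np.ndarray) -> float:
    """Mean accuracy over all tasks after the final task is learned.

    acc_matrix[i, j] = accuracy on task j after training on task i
    (entries with j > i may be zero-filled).
    """
    T = acc_matrix.shape[0]
    return float(acc_matrix[T - 1].mean())

def average_forgetting(acc_matrix: np.ndarray) -> float:
    """Mean drop from each old task's best accuracy to its final accuracy."""
    T = acc_matrix.shape[0]
    drops = [acc_matrix[:T - 1, j].max() - acc_matrix[T - 1, j]
             for j in range(T - 1)]
    return float(np.mean(drops))
```

An architecture that scores well on `average_accuracy` but poorly on `average_forgetting` (or vice versa) illustrates the trade-off the abstract refers to.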
no code implementations • 7 Dec 2021 • Timothy Nguyen
We provide an analysis of the recent work by Malaney-Weinstein on "Economics as Gauge Theory", presented on November 10, 2021 at the Money and Banking Workshop hosted by the University of Chicago.
2 code implementations • NeurIPS 2021 • Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee
The effectiveness of machine learning algorithms arises from being able to extract useful features from large amounts of data.
no code implementations • ICLR 2021 • Timothy Nguyen, Zhourong Chen, Jaehoon Lee
One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm.
2 code implementations • 2 Apr 2019 • Jeremy Nixon, Mike Dusenberry, Ghassen Jerfel, Timothy Nguyen, Jeremiah Liu, Linchuan Zhang, Dustin Tran
In this paper, we perform a comprehensive empirical study of choices in calibration measures, including: measuring all probabilities rather than just the maximum prediction; thresholding probability values; class conditionality; the number of bins; bins adaptive to the datapoint density; and the norm used to compare accuracies to confidences.
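Several of the design choices listed above (number of bins, density-adaptive binning, and the norm comparing accuracy to confidence) can be made concrete in a small expected-calibration-error sketch. This is a generic ECE implementation under common definitions, not the paper's exact measure; the function name and defaults are assumptions.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15,
                               adaptive=False, norm=1):
    """Generic ECE sketch over the max-probability prediction (an assumption,
    not this paper's exact measure).

    probs: (N, C) predicted class probabilities; labels: (N,) true classes.
    adaptive=True places bin edges at confidence quantiles, so each bin
    holds roughly the same number of datapoints.
    norm is the L^p norm used to compare accuracies to confidences.
    """
    confidences = probs.max(axis=1)
    accuracies = (probs.argmax(axis=1) == labels).astype(float)
    if adaptive:
        # Density-adaptive bins: edges at quantiles of the confidences.
        edges = np.quantile(confidences, np.linspace(0.0, 1.0, n_bins + 1))
    else:
        # Equal-width bins on [0, 1].
        edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # First bin is closed on the left so the minimum confidence is counted.
        mask = (confidences >= lo) if i == 0 else (confidences > lo)
        mask &= confidences <= hi
        if mask.any():
            gap = abs(accuracies[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap ** norm
    return ece if norm == 1 else ece ** (1.0 / norm)
```

Swapping `adaptive` or `norm` here corresponds directly to the axes of variation the abstract studies; measuring all class probabilities rather than only the maximum would require flattening `probs` per class before binning.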