Search Results for author: Newsha Ardalani

Found 9 papers, 1 paper with code

Time and the Value of Data

no code implementations • 17 Mar 2022 • Ehsan Valavi, Joel Hestness, Newsha Ardalani, Marco Iansiti

In addition, we argue that increasing the stock of data by including older datasets may, in fact, damage the model's accuracy.

Time Dependency, Data Flow, and Competitive Advantage

no code implementations • 17 Mar 2022 • Ehsan Valavi, Joel Hestness, Marco Iansiti, Newsha Ardalani, Feng Zhu, Karim R. Lakhani

Relating the text topics to various business areas of interest, we argue that competing in a business area in which data value decays rapidly alters strategies to acquire competitive advantage.

Model Architecture Controls Gradient Descent Dynamics: A Combinatorial Path-Based Formula

no code implementations • 25 Sep 2019 • Xin Zhou, Newsha Ardalani

However, our theoretical understanding of how model architecture affects performance or accuracy is limited.

Beyond Human-Level Accuracy: Computational Challenges in Deep Learning

1 code implementation • 3 Sep 2019 • Joel Hestness, Newsha Ardalani, Greg Diamos

However, recent prior work shows that as dataset sizes grow, DL model accuracy and model size grow predictably.

A Static Analysis-based Cross-Architecture Performance Prediction Using Machine Learning

no code implementations • 18 Jun 2019 • Newsha Ardalani, Urmish Thakker, Aws Albarghouthi, Karu Sankaralingam

Porting code from CPU to GPU is costly and time-consuming; unless much time is invested in development and optimization, it is not obvious, a priori, how much speed-up is achievable or how much room is left for improvement.
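A minimal sketch of the general idea behind learned cross-architecture performance prediction: train a regression model on program features and use it to estimate GPU-over-CPU speed-up before porting. The feature names, model choice, and training data below are hypothetical placeholders for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch: predict GPU speed-up from static program features.
# Features and training data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [arithmetic intensity, branch density, memory-access regularity]
X_train = np.array([
    [4.0, 0.02, 0.90],
    [1.5, 0.10, 0.40],
    [8.0, 0.01, 0.95],
    [0.8, 0.20, 0.30],
])
y_train = np.array([12.0, 2.5, 30.0, 1.1])  # measured GPU-over-CPU speed-ups

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Estimate the achievable speed-up for an unseen program from its features alone.
new_program = np.array([[3.2, 0.05, 0.70]])
print(f"Predicted speed-up: {model.predict(new_program)[0]:.1f}x")
```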

Empirically Characterizing Overparameterization Impact on Convergence

no code implementations • ICLR 2019 • Newsha Ardalani, Joel Hestness, Gregory Diamos

A long-held conventional wisdom states that larger models train more slowly when using gradient descent.

A Proposed Hierarchy of Deep Learning Tasks

no code implementations • 27 Sep 2018 • Joel Hestness, Sharan Narang, Newsha Ardalani, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, Yanqi Zhou, Gregory Diamos, Kenneth Church

As the pace of deep learning innovation accelerates, it becomes increasingly important to organize the space of problems by relative difficulty.

Deep Learning Scaling is Predictable, Empirically

no code implementations • 1 Dec 2017 • Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, Yanqi Zhou

As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art.

Machine Translation • Neural Architecture Search • +1
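A minimal sketch of the power-law learning-curve fit this line of scaling work studies, assuming generalization error scales roughly as err(m) ≈ α · m^β with training-set size m. The data points and fitted exponent below are synthetic illustrations, not results from the papers listed above.

```python
# Hypothetical sketch: fit a power-law learning curve err(m) = alpha * m**beta
# by linear regression in log-log space. Values are synthetic, for illustration only.
import numpy as np

m = np.array([1e4, 3e4, 1e5, 3e5, 1e6])          # training-set sizes
err = np.array([0.30, 0.22, 0.16, 0.12, 0.085])  # validation error at each size

# log(err) = log(alpha) + beta * log(m); polyfit returns [slope, intercept]
beta, log_alpha = np.polyfit(np.log(m), np.log(err), 1)
alpha = np.exp(log_alpha)
print(f"Fitted curve: err ≈ {alpha:.2f} * m^{beta:.2f}")

# Extrapolate to a larger dataset -- the kind of prediction scaling studies make.
print(f"Predicted error at m = 1e7: {alpha * (1e7) ** beta:.3f}")
```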
