4 code implementations • 25 Feb 2019 • Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, Frank Hutter
Recent advances in neural architecture search (NAS) demand tremendous computational resources, which makes it difficult to reproduce experiments and imposes a barrier to entry for researchers without access to large-scale computation.
1 code implementation • 24 Jan 2019 • Yang You, Jonathan Hseu, Chris Ying, James Demmel, Kurt Keutzer, Cho-Jui Hsieh
LEGW makes the Sqrt Scaling scheme useful in practice and, as a result, achieves much better results than the Linear Scaling learning rate scheme.
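A minimal sketch of the two scaling rules being compared, assuming a LEGW-style warmup whose length in epochs grows with the batch size; the base learning rate, base batch size, and warmup constant below are illustrative values, not the paper's settings.

```python
# Illustrative sketch (not the authors' code): Linear vs. Sqrt learning-rate
# scaling when the batch size grows, plus a LEGW-style warmup length that
# grows linearly with batch size. All constants are assumptions.
import math

def scaled_lr(base_lr, base_batch, batch, rule="sqrt"):
    """Scale the learning rate when the batch size grows from base_batch to batch."""
    k = batch / base_batch
    return base_lr * (k if rule == "linear" else math.sqrt(k))

def legw_warmup_epochs(base_warmup_epochs, base_batch, batch):
    """LEGW idea: the warmup length (in epochs) grows linearly with batch size."""
    return base_warmup_epochs * (batch / base_batch)

if __name__ == "__main__":
    base_lr, base_batch = 0.1, 256  # hypothetical baseline
    for batch in (256, 1024, 4096, 16384):
        print(batch,
              round(scaled_lr(base_lr, base_batch, batch, "linear"), 4),
              round(scaled_lr(base_lr, base_batch, batch, "sqrt"), 4),
              legw_warmup_epochs(1.0, base_batch, batch))
```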
no code implementations • 16 Nov 2018 • Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, Youlong Cheng
Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters.
no code implementations • 1 Jan 2018 • Chris Ying, Katerina Fragkiadaki
Current convolutional neural network algorithms for video object tracking spend the same amount of computation on each object and video frame.
3 code implementations • ICLR 2018 • Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le
We can further reduce the number of parameter updates by increasing the learning rate $\epsilon$ and scaling the batch size $B \propto \epsilon$.
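A minimal sketch of the idea behind $B \propto \epsilon$: rather than decaying the learning rate at fixed epochs, keep it constant and increase the batch size by the same factor, so the ratio $\epsilon / B$ (and hence the gradient noise scale) is preserved. The epoch boundaries, decay factor, and base values below are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's code): a conventional step-decay
# learning-rate schedule vs. an equivalent schedule that holds the learning
# rate fixed and steps the batch size up instead. Constants are assumptions.

def decay_lr_schedule(base_lr, base_batch, boundaries, factor, epoch):
    """Conventional schedule: divide the learning rate by `factor` at each boundary."""
    drops = sum(epoch >= b for b in boundaries)
    return base_lr / (factor ** drops), base_batch

def increase_batch_schedule(base_lr, base_batch, boundaries, factor, epoch):
    """Equivalent schedule: keep the learning rate fixed, multiply the batch size instead."""
    drops = sum(epoch >= b for b in boundaries)
    return base_lr, base_batch * (factor ** drops)

if __name__ == "__main__":
    for epoch in (0, 30, 60, 90):
        print(epoch,
              decay_lr_schedule(0.1, 128, (30, 60, 80), 5, epoch),
              increase_batch_schedule(0.1, 128, (30, 60, 80), 5, epoch))
```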