no code implementations • 14 Sep 2024 • Chengxi Ye, Grace Chu, Yanfeng Liu, Yichi Zhang, Lukasz Lew, Andrew Howard
The discontinuous operations inherent in quantization and sparsification introduce obstacles to backpropagation.
no code implementations • 8 May 2024 • Matt Schoenbauer, Daniele Moro, Lukasz Lew, Andrew Howard
Quantization-aware training comes with a fundamental challenge: the derivatives of quantization functions such as rounding are zero almost everywhere and nonexistent elsewhere.
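The rounding problem can be made concrete. A common workaround in the quantization literature (shown here as an illustration of the challenge, not necessarily this paper's proposed method) is the straight-through estimator: use the rounding function in the forward pass, but pretend it was the identity when propagating gradients.

```python
import numpy as np

def quantize_ste_forward(x, scale):
    """Forward pass: round each value to the nearest quantization level."""
    return np.round(x / scale) * scale

def quantize_ste_backward(grad_output):
    """Backward pass (straight-through estimator): treat rounding as the
    identity, replacing its zero-almost-everywhere derivative with 1."""
    return grad_output

x = np.array([0.12, -0.47, 0.93])
y = quantize_ste_forward(x, scale=0.25)     # -> [0.0, -0.5, 1.0]
g = quantize_ste_backward(np.ones_like(x))  # gradients pass through unchanged
```

Without the straight-through trick, every upstream parameter would receive a zero gradient and training would stall.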
6 code implementations • 16 Apr 2024 • Danfeng Qin, Chas Leichner, Manolis Delakis, Marco Fornoni, Shixin Luo, Fan Yang, Weijun Wang, Colby Banbury, Chengxi Ye, Berkin Akin, Vaibhav Aggarwal, Tenghui Zhu, Daniele Moro, Andrew Howard
We present the latest generation of MobileNets, known as MobileNetV4 (MNv4), featuring universally efficient architecture designs for mobile devices.
Ranked #430 on Image Classification on ImageNet
no code implementations • CVPR 2024 • Marina Neseem, Conor McCullough, Randy Hsin, Chas Leichner, Shan Li, In Suk Chong, Andrew Howard, Lukasz Lew, Sherief Reda, Ville-Mikko Rautio, Daniele Moro
Our analysis reveals that non-quantized elementwise operations, which are prevalent in layers such as parameterized activation functions, batch normalization, and quantization scaling, dominate the inference cost of low-precision models.
1 code implementation • NeurIPS 2023 • Shuyang Sun, Weijun Wang, Qihang Yu, Andrew Howard, Philip Torr, Liang-Chieh Chen
This paper presents a new mechanism to facilitate the training of mask transformers for efficient panoptic segmentation, democratizing its deployment.
1 code implementation • 20 Jul 2022 • Elijah Cole, Kimberly Wilber, Grant van Horn, Xuan Yang, Marco Fornoni, Pietro Perona, Serge Belongie, Andrew Howard, Oisin Mac Aodha
Weakly supervised object localization (WSOL) aims to learn representations that encode object location using only image-level category labels.
1 code implementation • 22 Dec 2021 • Weijun Wang, Andrew Howard
We present a next-generation neural network architecture, MOSAIC, for efficient and accurate semantic image segmentation on mobile devices.
no code implementations • 18 Jun 2021 • Marco Fornoni, Chaochao Yan, Liangchen Luo, Kimberly Wilber, Alex Stark, Yin Cui, Boqing Gong, Andrew Howard
When interacting with objects through cameras or pictures, users often have a specific intent.
no code implementations • 7 May 2021 • Mingda Zhang, Chun-Te Chu, Andrey Zhmoginov, Andrew Howard, Brendan Jou, Yukun Zhu, Li Zhang, Rebecca Hwa, Adriana Kovashka
With early termination, the average cost can be further reduced to 198M MAdds while maintaining accuracy of 80.0% on ImageNet.
Ranked #724 on Image Classification on ImageNet
no code implementations • 11 Feb 2021 • Alexander J. Mustill, Melvyn B. Davies, Sarah Blunt, Andrew Howard
In this paper, we explore scenarios for the excitation of the eccentricity of the planet in binary systems such as this, considering planet-planet scattering, Lidov-Kozai cycles from the binary acting on a single-planet system, or Lidov-Kozai cycles acting on a two-planet system that also undergoes scattering.
Earth and Planetary Astrophysics
no code implementations • 4 Jan 2021 • Keren Ye, Adriana Kovashka, Mark Sandler, Menglong Zhu, Andrew Howard, Marco Fornoni
In this paper we address the question: can task-specific detectors be trained and represented as a shared set of weights, plus a very small set of additional weights for each task?
no code implementations • 10 Dec 2020 • Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard
Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.
1 code implementation • 28 Oct 2020 • Steve Bryson, Michelle Kunimoto, Ravi K. Kopparapu, Jeffrey L. Coughlin, William J. Borucki, David Koch, Victor Silva Aguirre, Christopher Allen, Geert Barentsen, Natalie M. Batalha, Travis Berger, Alan Boss, Lars A. Buchhave, Christopher J. Burke, Douglas A. Caldwell, Jennifer R. Campbell, Joseph Catanzarite, Hema Chandrasekharan, William J. Chaplin, Jessie L. Christiansen, Jorgen Christensen-Dalsgaard, David R. Ciardi, Bruce D. Clarke, William D. Cochran, Jessie L. Dotson, Laurance R. Doyle, Eduardo Seperuelo Duarte, Edward W. Dunham, Andrea K. Dupree, Michael Endl, James L. Fanson, Eric B. Ford, Maura Fujieh, Thomas N. Gautier III, John C. Geary, Ronald L. Gilliland, Forrest R. Girouard, Alan Gould, Michael R. Haas, Christopher E. Henze, Matthew J. Holman, Andrew Howard, Steve B. Howell, Daniel Huber, Roger C. Hunter, Jon M. Jenkins, Hans Kjeldsen, Jeffery Kolodziejczak, Kipp Larson, David W. Latham, Jie Li, Savita Mathur, Soren Meibom, Chris Middour, Robert L. Morris, Timothy D. Morton, Fergal Mullally, Susan E. Mullally, David Pletcher, Andrej Prsa, Samuel N. Quinn, Elisa V. Quintana, Darin Ragozzine, Solange V. Ramirez, Dwight T. Sanderfer, Dimitar Sasselov, Shawn E. Seader, Megan Shabram, Avi Shporer, Jeffrey C. Smith, Jason H. Steffen, Martin Still, Guillermo Torres, John Troeltzsch, Joseph D. Twicken, Akm Kamal Uddin, Jeffrey E. Van Cleve, Janice Voss, Lauren Weiss, William F. Welsh, Bill Wohler, Khadeejah A. Zamudio
We present occurrence rates for rocky planets in the habitable zones (HZ) of main-sequence dwarf stars based on the Kepler DR25 planet candidate catalog and Gaia-based stellar properties.
Earth and Planetary Astrophysics Solar and Stellar Astrophysics
no code implementations • 10 Oct 2020 • Qifei Wang, Junjie Ke, Joshua Greaves, Grace Chu, Gabriel Bender, Luciano Sbaiz, Alec Go, Andrew Howard, Feng Yang, Ming-Hsuan Yang, Jeff Gilbert, Peyman Milanfar
This approach effectively reduces the total number of parameters and FLOPS, encouraging positive knowledge transfer while mitigating negative interference across domains.
no code implementations • 18 Aug 2020 • Grace Chu, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, Andrew Howard
Hardware-aware neural architecture design has predominantly focused on optimizing model performance on a single hardware target and on model development complexity, while another important factor, model deployment complexity, has been largely ignored.
no code implementations • 7 Sep 2019 • Mark Sandler, Jonathan Baccash, Andrey Zhmoginov, Andrew Howard
We explore the question of how the resolution of the input image ("input resolution") affects the performance of a neural network when compared to the resolution of the hidden layers ("internal resolution").
5 code implementations • 12 Jun 2019 • Aakanksha Chowdhery, Pete Warden, Jonathon Shlens, Andrew Howard, Rocky Rhodes
To facilitate the development of microcontroller-friendly models, we present a new dataset, Visual Wake Words, that represents a common microcontroller vision use case, identifying whether a person is present in the image, and provides a realistic benchmark for tiny vision models.
1 code implementation • 4 Jun 2019 • Grace Chu, Brian Potetz, Weijun Wang, Andrew Howard, Yang Song, Fernando Brucher, Thomas Leung, Hartwig Adam
By leveraging geolocation information, we improve top-1 accuracy on iNaturalist from 70.1% to 79.0% for a strong baseline image-only model.
63 code implementations • ICCV 2019 • Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam
We achieve new state-of-the-art results for mobile classification, detection, and segmentation.
Ranked #9 on Dichotomous Image Segmentation on DIS-TE1
no code implementations • ICLR 2019 • Pramod Kaushik Mudrakarta, Mark Sandler, Andrey Zhmoginov, Andrew Howard
We introduce a novel method that enables parameter-efficient transfer and multitask learning.
no code implementations • 15 Apr 2019 • Sergei Alyamkin, Matthew Ardi, Alexander C. Berg, Achille Brighton, Bo Chen, Yiran Chen, Hsin-Pai Cheng, Zichen Fan, Chen Feng, Bo Fu, Kent Gauen, Abhinav Goel, Alexander Goncharenko, Xuyang Guo, Soonhoi Ha, Andrew Howard, Xiao Hu, Yuanjun Huang, Donghyun Kang, Jaeyoun Kim, Jong Gook Ko, Alexander Kondratyev, Junhyeok Lee, Seungjae Lee, Suwoong Lee, Zichao Li, Zhiyu Liang, Juzheng Liu, Xin Liu, Yang Lu, Yung-Hsiang Lu, Deeptanshu Malik, Hong Hanh Nguyen, Eunbyung Park, Denis Repin, Liang Shen, Tao Sheng, Fei Sun, David Svitov, George K. Thiruvathukal, Baiwu Zhang, Jingchi Zhang, Xiaopeng Zhang, Shaojie Zhuo
In addition to mobile phones, many autonomous systems rely on visual data for making decisions, and some of these systems have limited energy (such as unmanned aerial vehicles, also called drones, and mobile robots).
no code implementations • 3 Oct 2018 • Sergei Alyamkin, Matthew Ardi, Achille Brighton, Alexander C. Berg, Yiran Chen, Hsin-Pai Cheng, Bo Chen, Zichen Fan, Chen Feng, Bo Fu, Kent Gauen, Jongkook Go, Alexander Goncharenko, Xuyang Guo, Hong Hanh Nguyen, Andrew Howard, Yuanjun Huang, Donghyun Kang, Jaeyoun Kim, Alexander Kondratyev, Seungjae Lee, Suwoong Lee, Junhyeok Lee, Zhiyu Liang, Xin Liu, Juzheng Liu, Zichao Li, Yang Lu, Yung-Hsiang Lu, Deeptanshu Malik, Eunbyung Park, Denis Repin, Tao Sheng, Liang Shen, Fei Sun, David Svitov, George K. Thiruvathukal, Baiwu Zhang, Jingchi Zhang, Xiaopeng Zhang, Shaojie Zhuo
The Low-Power Image Recognition Challenge (LPIRC, https://rebootingcomputing.ieee.org/lpirc) is an annual competition started in 2015.
28 code implementations • CVPR 2019 • Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, Quoc V. Le
In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency.
Ranked #902 on Image Classification on ImageNet
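The multi-objective reward used in the MnasNet paper weights accuracy by a soft latency penalty, ACC(m) · (LAT(m)/T)^w. A minimal sketch (the exponent value and the sample accuracies/latencies below are illustrative, not measurements from the paper):

```python
def mnas_reward(acc, latency_ms, target_ms, alpha=-0.07, beta=-0.07):
    """Latency-aware search objective: ACC(m) * (LAT(m)/T)^w.
    The exponent w (alpha when under the target T, beta when over)
    softly rewards faster models and penalizes slower ones."""
    w = alpha if latency_ms <= target_ms else beta
    return acc * (latency_ms / target_ms) ** w

# A slightly slower but more accurate candidate can still win the trade-off:
fast = mnas_reward(acc=0.740, latency_ms=70.0, target_ms=80.0)
slow = mnas_reward(acc=0.755, latency_ms=90.0, target_ms=80.0)
```

Because the penalty is a smooth power law rather than a hard latency cutoff, the search is free to trade a small latency overshoot for a meaningful accuracy gain.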
1 code implementation • CVPR 2018 • Yin Cui, Yang Song, Chen Sun, Andrew Howard, Serge Belongie
We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure.
Ranked #33 on Fine-Grained Image Classification on CUB-200-2011
Fine-Grained Image Classification • Fine-Grained Visual Categorization +1
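The domain-similarity idea can be illustrated in a simplified one-dimensional form (the paper computes Earth Mover's Distance over image feature distributions; the scalar samples here are a toy stand-in):

```python
import numpy as np

def emd_1d(a, b):
    """Earth Mover's Distance between two equal-size 1-D samples with
    unit weights: it reduces to the mean absolute difference of the
    sorted samples."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    assert a.shape == b.shape, "this simplified form assumes equal sample sizes"
    return float(np.mean(np.abs(a - b)))

# Smaller EMD -> more similar domains -> better transfer expected.
d_close = emd_1d([0.1, 0.2, 0.3], [0.15, 0.25, 0.35])  # 0.05
d_far   = emd_1d([0.1, 0.2, 0.3], [1.1, 1.2, 1.3])     # 1.0
```

Under this measure one would pick the pre-training source domain with the smallest distance to the target domain.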
4 code implementations • ECCV 2018 • Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam
This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget.
156 code implementations • CVPR 2018 • Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks, as well as across a spectrum of different model sizes.
Ranked #7 on Retinal OCT Disease Classification on OCT2017
20 code implementations • CVPR 2018 • Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, Dmitry Kalenichenko
The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes.
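The affine quantization scheme underlying integer-only inference maps real values to 8-bit integers via r ≈ S·(q − Z), with a real scale S and an integer zero point Z. A minimal sketch (the scale and zero-point values below are illustrative):

```python
def quantize(r, scale, zero_point, qmin=0, qmax=255):
    """Affine (uniform) quantization: map real value r to an 8-bit
    integer q such that r is approximately scale * (q - zero_point)."""
    q = round(r / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the representable range

def dequantize(q, scale, zero_point):
    """Recover the real value represented by integer q."""
    return scale * (q - zero_point)

s, z = 0.02, 128               # example scale and zero point
q = quantize(0.5, s, z)        # -> 153
r = dequantize(q, s, z)        # -> 0.5 (exact here; generally within one step)
```

Because the zero point is itself an integer, the real value 0.0 is represented exactly, which matters for operations like zero-padding.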
1 code implementation • 20 Nov 2015 • Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei
Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes.
Ranked #6 on Fine-Grained Image Classification on CUB-200-2011 (using extra training data)