Search Results for author: Andrew Howard

Found 23 papers, 10 papers with code

MOSAIC: Mobile Segmentation via decoding Aggregated Information and encoded Context

no code implementations • 22 Dec 2021 • Weijun Wang, Andrew Howard

We present a next-generation neural network architecture, MOSAIC, for efficient and accurate semantic image segmentation on mobile devices.

Semantic Segmentation

Bridging the Gap Between Object Detection and User Intent via Query-Modulation

no code implementations • 18 Jun 2021 • Marco Fornoni, Chaochao Yan, Liangchen Luo, Kimberly Wilber, Alex Stark, Yin Cui, Boqing Gong, Andrew Howard

This often leads to incorrect results, such as the lack of a high-confidence detection on the object of interest, or a detection with the wrong class label.

Object Detection +1

BasisNet: Two-stage Model Synthesis for Efficient Inference

no code implementations • 7 May 2021 • Mingda Zhang, Chun-Te Chu, Andrey Zhmoginov, Andrew Howard, Brendan Jou, Yukun Zhu, Li Zhang, Rebecca Hwa, Adriana Kovashka

With early termination, the average cost can be further reduced to 198M MAdds while maintaining 80.0% accuracy on ImageNet.

Dynamical orbital evolution scenarios of the wide-orbit eccentric planet HR 5183b

no code implementations • 11 Feb 2021 • Alexander J. Mustill, Melvyn B. Davies, Sarah Blunt, Andrew Howard

In this paper, we explore scenarios for the excitation of the eccentricity of the planet in binary systems such as this, considering planet-planet scattering, Lidov-Kozai cycles from the binary acting on a single-planet system, or Lidov-Kozai cycles acting on a two-planet system that also undergoes scattering.

Earth and Planetary Astrophysics

SpotPatch: Parameter-Efficient Transfer Learning for Mobile Object Detection

no code implementations • 4 Jan 2021 • Keren Ye, Adriana Kovashka, Mark Sandler, Menglong Zhu, Andrew Howard, Marco Fornoni

In this paper we address the question: can task-specific detectors be trained and represented as a shared set of weights, plus a very small set of additional weights for each task?

Object Detection +1

Large-Scale Generative Data-Free Distillation

no code implementations • 10 Dec 2020 • Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard

Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.

Knowledge Distillation Model Compression +1
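For context, the standard Hinton-style soft-target distillation loss this line of work builds on can be sketched in a few lines. This is a generic illustration (the function names and the temperature value are my own choices), not the paper's generative data-free pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between teacher and student soft targets,
    scaled by T^2 as in the standard distillation recipe."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

A student whose logits match the teacher's yields zero loss; the T^2 factor keeps gradient magnitudes comparable across temperatures.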

The Occurrence of Rocky Habitable Zone Planets Around Solar-Like Stars from Kepler Data

1 code implementation • 28 Oct 2020 • Steve Bryson, Michelle Kunimoto, Ravi K. Kopparapu, Jeffrey L. Coughlin, William J. Borucki, David Koch, Victor Silva Aguirre, Christopher Allen, Geert Barentsen, Natalie M. Batalha, Travis Berger, Alan Boss, Lars A. Buchhave, Christopher J. Burke, Douglas A. Caldwell, Jennifer R. Campbell, Joseph Catanzarite, Hema Chandrasekharan, William J. Chaplin, Jessie L. Christiansen, Jorgen Christensen-Dalsgaard, David R. Ciardi, Bruce D. Clarke, William D. Cochran, Jessie L. Dotson, Laurance R. Doyle, Eduardo Seperuelo Duarte, Edward W. Dunham, Andrea K. Dupree, Michael Endl, James L. Fanson, Eric B. Ford, Maura Fujieh, Thomas N. Gautier III, John C. Geary, Ronald L. Gilliland, Forrest R. Girouard, Alan Gould, Michael R. Haas, Christopher E. Henze, Matthew J. Holman, Andrew Howard, Steve B. Howell, Daniel Huber, Roger C. Hunter, Jon M. Jenkins, Hans Kjeldsen, Jeffery Kolodziejczak, Kipp Larson, David W. Latham, Jie Li, Savita Mathur, Soren Meibom, Chris Middour, Robert L. Morris, Timothy D. Morton, Fergal Mullally, Susan E. Mullally, David Pletcher, Andrej Prsa, Samuel N. Quinn, Elisa V. Quintana, Darin Ragozzine, Solange V. Ramirez, Dwight T. Sanderfer, Dimitar Sasselov, Shawn E. Seader, Megan Shabram, Avi Shporer, Jeffrey C. Smith, Jason H. Steffen, Martin Still, Guillermo Torres, John Troeltzsch, Joseph D. Twicken, Akm Kamal Uddin, Jeffrey E. Van Cleve, Janice Voss, Lauren Weiss, William F. Welsh, Bill Wohler, Khadeejah A. Zamudio

We present occurrence rates for rocky planets in the habitable zones (HZ) of main-sequence dwarf stars based on the Kepler DR25 planet candidate catalog and Gaia-based stellar properties.

Earth and Planetary Astrophysics Solar and Stellar Astrophysics

Multi-path Neural Networks for On-device Multi-domain Visual Classification

no code implementations • 10 Oct 2020 • Qifei Wang, Junjie Ke, Joshua Greaves, Grace Chu, Gabriel Bender, Luciano Sbaiz, Alec Go, Andrew Howard, Feng Yang, Ming-Hsuan Yang, Jeff Gilbert, Peyman Milanfar

This approach effectively reduces the total number of parameters and FLOPS, encouraging positive knowledge transfer while mitigating negative interference across domains.

General Classification Neural Architecture Search +1

Discovering Multi-Hardware Mobile Models via Architecture Search

no code implementations • 18 Aug 2020 • Grace Chu, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, Andrew Howard

Hardware-aware neural architecture designs have predominantly focused on optimizing model performance on a single hardware target and on model development complexity, while another important factor, model deployment complexity, has been largely ignored.

Neural Architecture Search

Non-discriminative data or weak model? On the relative importance of data and model resolution

no code implementations • 7 Sep 2019 • Mark Sandler, Jonathan Baccash, Andrey Zhmoginov, Andrew Howard

We explore the question of how the resolution of the input image ("input resolution") affects the performance of a neural network when compared to the resolution of the hidden layers ("internal resolution").

Visual Wake Words Dataset

3 code implementations • 12 Jun 2019 • Aakanksha Chowdhery, Pete Warden, Jonathon Shlens, Andrew Howard, Rocky Rhodes

To facilitate the development of microcontroller-friendly models, we present a new dataset, Visual Wake Words, that represents a common microcontroller vision use case: identifying whether a person is present in the image. It provides a realistic benchmark for tiny vision models.

Geo-Aware Networks for Fine-Grained Recognition

1 code implementation • 4 Jun 2019 • Grace Chu, Brian Potetz, Weijun Wang, Andrew Howard, Yang Song, Fernando Brucher, Thomas Leung, Hartwig Adam

By leveraging geolocation information we improve top-1 accuracy in iNaturalist from 70.1% to 79.0% for a strong baseline image-only model.

Fine-Grained Image Classification General Classification

MnasNet: Platform-Aware Neural Architecture Search for Mobile

16 code implementations • CVPR 2019 • Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, Quoc V. Le

In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency.

Image Classification Neural Architecture Search +2
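The accuracy/latency trade-off in the MNAS objective can be expressed as a single scalar reward, ACC(m) x (LAT(m)/TAR)^w. The sketch below uses the soft-constraint exponent w = -0.07 reported in the MnasNet paper; the function name is illustrative:

```python
def mnas_reward(accuracy, latency_ms, target_ms, w=-0.07):
    """Latency-aware search reward: ACC * (LAT / TAR)^w.

    With w < 0, models slower than the target are penalized smoothly
    rather than rejected outright, so the search can still explore them.
    """
    return accuracy * (latency_ms / target_ms) ** w
```

A model exactly at the latency target scores its raw accuracy; doubling the latency shaves roughly 5% off the reward at this w.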

Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning

1 code implementation • CVPR 2018 • Yin Cui, Yang Song, Chen Sun, Andrew Howard, Serge Belongie

We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure.

Fine-Grained Image Classification Fine-Grained Visual Categorization +1
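For 1-D histograms of equal mass, the Earth Mover's Distance behind the proposed similarity measure reduces to a sum of absolute CDF differences. The sketch below is that generic 1-D case, not the paper's exact computation over class feature representations:

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two equal-mass 1-D histograms,
    computed as the accumulated absolute difference of their CDFs."""
    assert len(p) == len(q) and abs(sum(p) - sum(q)) < 1e-9
    cdf_diff = 0.0  # running difference of cumulative mass
    total = 0.0     # total "work" (mass times distance) moved so far
    for pi, qi in zip(p, q):
        cdf_diff += pi - qi
        total += abs(cdf_diff)
    return total
```

Moving one unit of mass by one bin costs exactly one, and identical histograms are at distance zero.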

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications

4 code implementations • ECCV 2018 • Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam

This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget.

Image Classification

MobileNetV2: Inverted Residuals and Linear Bottlenecks

120 code implementations • CVPR 2018 • Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen

In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks, as well as across a spectrum of different model sizes.

Image Classification Object Detection +4
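The inverted residual block at the core of MobileNetV2 factors a convolution into a 1x1 channel expansion, a k x k depthwise filter, and a 1x1 linear projection. A rough parameter-count sketch (batch norm and biases omitted, helper names illustrative):

```python
def inverted_residual_params(c_in, c_out, expansion=6, kernel=3):
    """Weight count of one inverted residual block:
    1x1 expansion -> k x k depthwise -> 1x1 linear projection.
    The identity shortcut (used when stride is 1 and c_in == c_out)
    adds no parameters."""
    hidden = c_in * expansion
    expand = c_in * hidden            # 1x1 pointwise expansion conv
    depthwise = hidden * kernel ** 2  # one k x k filter per channel
    project = hidden * c_out          # 1x1 linear bottleneck projection
    return expand + depthwise + project

def standard_conv_params(c_in, c_out, kernel=3):
    """Weight count of a dense k x k convolution, for comparison."""
    return c_in * c_out * kernel ** 2
```

The depthwise stage costs only `hidden * k^2` weights, growing linearly in channel count, whereas a dense convolution grows with the product `c_in * c_out`.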

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference

21 code implementations • CVPR 2018 • Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, Dmitry Kalenichenko

The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes.

General Classification Quantization
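Integer-arithmetic-only inference of this kind rests on an affine mapping between real values and integers. A minimal sketch of that mapping, assuming unsigned 8-bit storage (qmin=0, qmax=255); the function names are illustrative:

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Affine quantization: q = clamp(round(x / scale) + zero_point)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Approximate real value back from an integer: x ~ scale * (q - zp)."""
    return scale * (q - zero_point)
```

Within the representable range, the round-trip error is bounded by scale/2; values outside the range saturate at qmin or qmax.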

The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition

1 code implementation • 20 Nov 2015 • Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei

Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes.

Active Learning
