Search Results for author: Yangqing Jia

Found 20 papers, 9 papers with code

DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models

1 code implementation29 Feb 2024 Muyang Li, Tianle Cai, Jiaxin Cao, Qinsheng Zhang, Han Cai, Junjie Bai, Yangqing Jia, Ming-Yu Liu, Kai Li, Song Han

To overcome this dilemma, we observe the high similarity between the input from adjacent diffusion steps and propose displaced patch parallelism, which takes advantage of the sequential nature of the diffusion process by reusing the pre-computed feature maps from the previous timestep to provide context for the current step.
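The idea in the excerpt can be illustrated with a toy sketch: split an image into per-device patches, and at each step let a patch use its neighbors' feature maps from the *previous* timestep as boundary context instead of synchronously fetching fresh ones. The simple 1-2-1 horizontal smoothing below is a hypothetical stand-in for the real network layers; the actual system operates on diffusion U-Net activations with asynchronous GPU communication.

```python
import numpy as np

def step_with_stale_context(patches, prev_maps):
    """One toy 'layer' step per patch, using each neighbor's feature map
    from the PREVIOUS timestep as halo context (illustrative only)."""
    new_maps = []
    for i, patch in enumerate(patches):
        # Stale one-column halos from neighbors; replicate own edge at borders.
        left = prev_maps[i - 1][:, -1:] if i > 0 else patch[:, :1]
        right = prev_maps[i + 1][:, :1] if i + 1 < len(patches) else patch[:, -1:]
        padded = np.concatenate([left, patch, right], axis=1)
        # Simple 1-2-1 horizontal smoothing as a stand-in network layer.
        new_maps.append((padded[:, :-2] + 2 * padded[:, 1:-1] + padded[:, 2:]) / 4)
    return new_maps

patches = [np.ones((4, 8)) * k for k in range(3)]   # three devices, constant patches 0, 1, 2
out = step_with_stale_context(patches, [p.copy() for p in patches])
```

Only the boundary columns mix in stale neighbor values, which is why the approximation works well when adjacent diffusion steps have highly similar inputs.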

Characterizing Deep Learning Training Workloads on Alibaba-PAI

no code implementations14 Oct 2019 Mengdi Wang, Chen Meng, Guoping Long, Chuan Wu, Jun Yang, Wei Lin, Yangqing Jia

One critical issue for efficiently operating practical AI clouds is to characterize the computing and data transfer demands of these workloads, and more importantly, the training performance given the underlying software framework and hardware configurations.

ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation

1 code implementation CVPR 2019 Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Hongxu Yin, Fei Sun, Yanghan Wang, Marat Dukhan, Yunqing Hu, Yiming Wu, Yangqing Jia, Peter Vajda, Matt Uyttendaele, Niraj K. Jha

We formulate platform-aware NN architecture search in an optimization framework and propose a novel algorithm to search for optimal architectures aided by efficient accuracy and resource (latency and/or energy) predictors.
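A minimal sketch of the search loop the excerpt describes: enumerate candidate architectures, filter by a latency predictor, and pick the one the accuracy predictor ranks highest. Both predictor functions and the (depth, width) search space here are hypothetical placeholders, not ChamNet's actual learned predictors.

```python
import itertools

def predicted_accuracy(cfg):
    """Hypothetical learned accuracy predictor: saturating in model capacity."""
    depth, width = cfg
    return 1.0 - 1.0 / (1.0 + 0.1 * depth * width)

def predicted_latency_ms(cfg):
    """Hypothetical platform latency predictor (ms), linear in capacity."""
    depth, width = cfg
    return 2.0 * depth * width

def search(latency_budget_ms):
    """Return the feasible candidate with the highest predicted accuracy."""
    candidates = itertools.product(range(1, 9), range(1, 9))  # (depth, width) grid
    feasible = [c for c in candidates if predicted_latency_ms(c) <= latency_budget_ms]
    return max(feasible, key=predicted_accuracy) if feasible else None

best = search(32.0)
```

Because both predictors are cheap to evaluate, the search never trains or benchmarks a candidate network, which is the efficiency argument the abstract is making.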

Bayesian Optimization Efficient Neural Network +1

High performance ultra-low-precision convolutions on mobile devices

no code implementations6 Dec 2017 Andrew Tulloch, Yangqing Jia

Many applications of mobile deep learning, especially real-time computer vision workloads, are constrained by computation power.


Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour

70 code implementations8 Jun 2017 Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He

To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training.
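The two ingredients named in the excerpt fit in a few lines. This sketch scales the learning rate linearly with minibatch size and ramps it up from the base rate over the first few epochs; the constants (base LR 0.1, reference batch 256, 5 warmup epochs) match the paper's ImageNet setup but are exposed as parameters here.

```python
def learning_rate(epoch, batch_size, base_lr=0.1, base_batch=256, warmup_epochs=5):
    """Linear scaling rule with gradual warmup.

    The target LR scales linearly with minibatch size; during warmup it is
    ramped linearly from base_lr up to the scaled target.
    """
    target = base_lr * batch_size / base_batch
    if epoch < warmup_epochs:
        return base_lr + (target - base_lr) * epoch / warmup_epochs
    return target
```

For a batch of 8192 this reaches 0.1 * 8192/256 = 3.2 after warmup, while a batch of 256 keeps the base rate throughout.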

Stochastic Optimization

MatchNet: Unifying Feature and Metric Learning for Patch-Based Matching

2 code implementations CVPR 2015 Xufeng Han, Thomas Leung, Yangqing Jia, Rahul Sukthankar, Alexander C. Berg

We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods.

Computational Efficiency Metric Learning +1

Going Deeper with Convolutions

79 code implementations CVPR 2015 Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich

We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014).

General Classification Image Classification +2

Caffe: Convolutional Architecture for Fast Feature Embedding

2 code implementations20 Jun 2014 Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell

The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.

Clustering Dimensionality Reduction +1

One-Shot Adaptation of Supervised Deep Convolutional Models

no code implementations21 Dec 2013 Judy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, Trevor Darrell

In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be?

Domain Adaptation Image Classification

Deep Convolutional Ranking for Multilabel Image Annotation

no code implementations17 Dec 2013 Yunchao Gong, Yangqing Jia, Thomas Leung, Alexander Toshev, Sergey Ioffe

Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications.

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

8 code implementations6 Oct 2013 Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell

We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.

Clustering Domain Adaptation +3

Why Size Matters: Feature Coding as Nystrom Sampling

no code implementations15 Jan 2013 Oriol Vinyals, Yangqing Jia, Trevor Darrell

Recently, the computer vision and machine learning community has favored feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, the well-understood properties of linear classifiers, and their computational efficiency.
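The two-stage pipeline the excerpt refers to can be sketched directly: encode each input against a codebook, then fit a linear classifier on the codes. The "triangle" coding rule below (activation = max(0, mean distance − distance to codeword)) and the least-squares classifier are illustrative choices, not this paper's specific analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, codebook):
    """Toy 'triangle' coding step: each sample gets a nonnegative activation
    per codeword, larger for codewords closer than average."""
    d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)  # (n, k) distances
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

# Random data and codebook; a learned codebook would normally come from k-means.
X = rng.normal(size=(40, 5))
codebook = rng.normal(size=(16, 5))
F = encode(X, codebook)                      # coded features, one row per sample

# Linear classifier on the codes (least-squares fit with a bias column).
y = (X[:, 0] > 0).astype(float)
W, *_ = np.linalg.lstsq(np.c_[F, np.ones(len(F))], y, rcond=None)
```

The paper's Nystrom-sampling view analyzes how the codebook size (here, 16) bounds what such a pipeline can represent.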

Computational Efficiency

Learning with Recursive Perceptual Representations

no code implementations NeurIPS 2012 Oriol Vinyals, Yangqing Jia, Li Deng, Trevor Darrell

The use of random projections is key to our method, as we show in the experiments section, in which we observe a consistent improvement over previous (often more complicated) methods on several vision and speech benchmarks.
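A rough sketch of the recursive idea in the excerpt: stack linear classifiers, feeding each layer the raw input shifted by a random projection of the previous layer's scores, with a mild nonlinearity in between. This is an illustrative approximation of the snippet's description (using least squares in place of SVMs), not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def recursive_layers(X, y, n_layers=3):
    """Stack linear models; each layer sees tanh(X + random projection of
    the previous layer's prediction scores). Illustrative sketch only."""
    feats, weights = X, []
    for _ in range(n_layers):
        A = np.c_[feats, np.ones(len(feats))]        # add bias column
        w, *_ = np.linalg.lstsq(A, y, rcond=None)    # linear model for this layer
        weights.append(w)
        scores = A @ w                               # this layer's predictions
        proj = rng.normal(size=X.shape[1])           # fixed random projection direction
        feats = np.tanh(X + np.outer(scores, proj))  # shift input by projected scores
    return weights

X = np.random.default_rng(2).normal(size=(30, 4))
y = (X[:, 1] > 0).astype(float)
ws = recursive_layers(X, y)
```

The random projection is what lets later layers see a perturbed view of the input without learning extra parameters, which is the property the abstract highlights.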

Image Classification Object Recognition

Heavy-tailed Distances for Gradient Based Image Descriptors

no code implementations NeurIPS 2011 Yangqing Jia, Trevor Darrell

Many applications in computer vision measure the similarity between images or image patches based on some statistics such as oriented gradients.

Factorized Latent Spaces with Structured Sparsity

no code implementations NeurIPS 2010 Yangqing Jia, Mathieu Salzmann, Trevor Darrell

Recent approaches to multi-view learning have shown that factorizing the information into parts that are shared across all views and parts that are private to each view could effectively account for the dependencies and independencies between the different input modalities.
