no code implementations • 10 Apr 2022 • Xin Dong, Barbara De Salvo, Meng Li, Chiao Liu, Zhongnan Qu, H. T. Kung, Ziyun Li
We design deep neural networks (DNNs) and corresponding network splittings that distribute the DNN workload across camera sensors and a centralized aggregator on head-mounted devices, meeting system performance targets for inference accuracy and latency under the given hardware resource constraints.
no code implementations • 17 Feb 2022 • Jiaxu Li, Yin Xu, Ying Wang, Meng Li, Jinghan He, Chen-Ching Liu, Kevin P. Schneider
In this paper, a distribution system service restoration method considering the electricity-water-gas interdependency is proposed.
no code implementations • 21 Nov 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu
To tackle this problem, we propose a collaborative contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching, by which a unified feature space among clients is learned for better data representations.
1 code implementation • 18 Nov 2021 • Haoqi Fan, Tullie Murrell, Heng Wang, Kalyan Vasudev Alwala, Yanghao Li, Yilei Li, Bo Xiong, Nikhila Ravi, Meng Li, Haichuan Yang, Jitendra Malik, Ross Girshick, Matt Feiszli, Aaron Adcock, Wan-Yen Lo, Christoph Feichtenhofer
We introduce PyTorchVideo, an open-source deep-learning library that provides a rich set of modular, efficient, and reproducible components for a variety of video understanding tasks, including classification, detection, self-supervised learning, and low-level processing.
no code implementations • 2 Nov 2021 • Cole Hawkins, Haichuan Yang, Meng Li, Liangzhen Lai, Vikas Chandra
Low-rank tensor compression has been proposed as a promising approach to reduce the memory and compute requirements of neural networks for their deployment on edge devices.
no code implementations • 1 Nov 2021 • Jiaqi Gu, Hyoukjun Kwon, Dilin Wang, Wei Ye, Meng Li, Yu-Hsin Chen, Liangzhen Lai, Vikas Chandra, David Z. Pan
Therefore, we propose HRViT, which enhances ViTs to learn semantically-rich and spatially-precise multi-scale representations by integrating high-resolution multi-branch architectures with ViTs.
1 code implementation • 27 Oct 2021 • Kun Li, Meng Li, Yanling Li, Min Lin
Traditional trend prediction models predict short-term trends better than long-term trends.
no code implementations • 15 Oct 2021 • Haichuan Yang, Yuan Shangguan, Dilin Wang, Meng Li, Pierce Chuang, Xiaohui Zhang, Ganesh Venkatesh, Ozlem Kalinli, Vikas Chandra
From wearables to powerful smart devices, modern automatic speech recognition (ASR) models run on a variety of edge devices with different computational budgets.
no code implementations • 29 Sep 2021 • Yonggan Fu, Qixuan Yu, Meng Li, Xu Ouyang, Vikas Chandra, Yingyan Lin
Contrastive learning, which learns visual representations by enforcing feature consistency under different augmented views, has emerged as one of the most effective unsupervised learning methods.
no code implementations • 29 Sep 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu
Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.
no code implementations • ICLR 2022 • Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Qiang Liu, Vikas Chandra
In this work, we observe that the poor performance is due to a gradient conflict issue: the gradients of different sub-networks conflict with that of the supernet more severely in ViTs than CNNs, which leads to early saturation in training and inferior convergence.
no code implementations • 9 Jul 2021 • Dilin Wang, Yuan Shangguan, Haichuan Yang, Pierce Chuang, Jiatong Zhou, Meng Li, Ganesh Venkatesh, Ozlem Kalinli, Vikas Chandra
We apply noisy training to improve both dense and sparse state-of-the-art Emformer models and observe consistent WER reduction.
no code implementations • 29 Apr 2021 • Meng Li, Changyan Lin, Heng Wu, Jiasong Li, Hongshuai Cao
Since the mapping relationship between definitized intra-interventional X-ray and undefined pre-interventional Computed Tomography (CT) is uncertain, auxiliary positioning devices or body markers, such as medical implants, are commonly used to determine this relationship.
1 code implementation • 26 Apr 2021 • Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
To alleviate this problem, in this work, we introduce novel loss functions in vision transformer training to explicitly encourage diversity across patch representations for more discriminative feature extraction.
Ranked #7 on Semantic Segmentation on Cityscapes val
no code implementations • 2 Mar 2021 • Meng Li, Shiyu Zhou, Bo Xu
Experimental results on segmented speech data show that the proposed MTL framework outperforms the baseline single-task learning (STL) framework on the ASR task.
2 code implementations • 16 Feb 2021 • Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
Weight-sharing NAS builds a supernet that assembles all the architectures as its sub-networks and jointly trains the supernet with the sub-networks.
Ranked #4 on Neural Architecture Search on ImageNet
no code implementations • 14 Feb 2021 • Xiaoyan Wang, Xi Lin, Meng Li
We call such a mobility market with AV renting options the "AV crowdsourcing market".
1 code implementation • ICLR 2021 • Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin
In this paper, we attempt to explore low-precision training from a new perspective as inspired by recent findings in understanding DNN training: we conjecture that DNNs' precision might have a similar effect as the learning rate during DNN training, and advocate dynamic precision along the training trajectory for further boosting the time/energy efficiency of DNN training.
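The idea of treating precision like a learning-rate-style schedule can be illustrated with a toy sketch. This is not the paper's implementation; the fake-quantization scheme, the linear-regression task, and the linearly growing bit-width schedule are all illustrative assumptions.

```python
import numpy as np

def fake_quantize(w, bits):
    """Symmetric uniform fake quantization of weights to the given bit-width."""
    if bits >= 32:
        return w
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

def train_with_precision_schedule(X, y, schedule, lr=0.1):
    """Gradient descent where each step's forward/backward pass uses weights
    quantized to a step-dependent bit-width, mimicking a dynamic low-to-high
    precision schedule along the training trajectory."""
    w = np.zeros(X.shape[1])
    for step, bits in enumerate(schedule):
        wq = fake_quantize(w, bits)
        grad = 2 * X.T @ (X @ wq - y) / len(y)  # MSE gradient at quantized weights
        w -= lr * grad                          # full-precision master copy update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w
# Precision grows from 4 to 16 bits over training.
schedule = np.linspace(4, 16, 300).astype(int)
w = train_with_precision_schedule(X, y, schedule)
```

Early low-precision steps are cheap and noisy; as the bit-width grows, the quantization error floor shrinks and the iterate settles near the true solution.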
no code implementations • 18 Jan 2021 • Xiangzhuo Xing, Yue Sun, Xiaolei Yi, Meng Li, Jiajia Feng, Yan Meng, Yufeng Zhang, Wenchong Li, Nan Zhou, Xiude He, Jun-Yi Ge, Wei Zhou, Tsuyoshi Tamegai, Zhixiang Shi
FeSe$_{1-x}$Te$_{x}$ superconductors manifest some intriguing electronic properties depending on the value of $x$.
Superconductivity Materials Science
no code implementations • 31 Dec 2020 • Can Peng, Kun Zhao, Sam Maksoud, Meng Li, Brian C. Lovell
Incremental learning requires a model to continually learn new tasks from streaming data.
no code implementations • 11 Dec 2020 • Zhiyun Fan, Meng Li, Shiyu Zhou, Bo Xu
Then we demonstrate the effectiveness of wav2vec 2.0 on the two tasks respectively.
no code implementations • 30 Nov 2020 • Hsin-Pai Cheng, Feng Liang, Meng Li, Bowen Cheng, Feng Yan, Hai Li, Vikas Chandra, Yiran Chen
We use ScaleNAS to create high-resolution models for two different tasks, ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation.
Ranked #2 on Multi-Person Pose Estimation on CrowdPose (using extra training data)
no code implementations • 27 Nov 2020 • Zejian Liu, Meng Li
For a general class of kernels, we establish convergence rates of the posterior measure of the regression function and its derivatives, which are both minimax optimal up to a logarithmic factor for functions in certain classes.
no code implementations • CVPR 2021 • Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
Data augmentation (DA) is an essential technique for training state-of-the-art deep learning systems.
2 code implementations • CVPR 2021 • Dilin Wang, Meng Li, Chengyue Gong, Vikas Chandra
Our discovered model family, AttentiveNAS models, achieves top-1 accuracy from 77.3% to 80.7% on ImageNet, and outperforms SOTA models, including BigNAS and Once-for-All networks.
Ranked #9 on Neural Architecture Search on ImageNet
no code implementations • 28 Oct 2020 • Yongan Zhang, Yonggan Fu, Weiwen Jiang, Chaojian Li, Haoran You, Meng Li, Vikas Chandra, Yingyan Lin
Powerful yet complex deep neural networks (DNNs) have fueled a booming demand for efficient DNN solutions to bring DNN-powered intelligence into numerous applications.
2 code implementations • 25 Oct 2020 • Kailai Li, Meng Li, Uwe D. Hanebeck
LiLi-OM (Livox LiDAR-inertial odometry and mapping) is real-time capable and achieves superior accuracy over state-of-the-art systems for both LiDAR types on public data sets of mechanical LiDARs and in experiments using the Livox Horizon.
Robotics
no code implementations • 8 Jul 2020 • Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen
To preserve graph correlation information in encoding, we propose NASGEM which stands for Neural Architecture Search via Graph Embedding Method.
1 code implementation • IEEE Transactions on Medical Imaging 2020 • Meng Li, William Hsu, Xiaodong Xie, Jason Cong, Wen Gao
We combine these two methods and demonstrate their effectiveness on both CNN-based neural networks and WGAN-based neural networks with comprehensive experiments.
1 code implementation • 17 Jun 2020 • Zhengjia Wang, John Magnotti, Michael S. Beauchamp, Meng Li
In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the $L_2$ norm and resemble a phase transition phenomenon.
Methodology Statistics Theory
no code implementations • 2 Jun 2020 • Zejian Liu, Meng Li
We study the problem of estimating the derivatives of the regression function, which has a wide range of applications as a key nonparametric functional of unknown functions.
6 code implementations • 20 Mar 2020 • Fan Zhang, Meng Li, Guisheng Zhai, Yizhao Liu
Therefore, our multi-branch and multi-scale learning network (MMAL-Net) has good classification ability and robustness for images of different scales.
Ranked #3 on Fine-Grained Image Classification on FGVC Aircraft
no code implementations • 5 Mar 2020 • Qitao Shi, Ya-Lin Zhang, Longfei Li, Xinxing Yang, Meng Li, Jun Zhou
Machine learning techniques have been widely applied in Internet companies for various tasks, acting as an essential driving force, and feature engineering has been generally recognized as a crucial task when constructing machine learning systems.
no code implementations • 13 Feb 2020 • Meng Li, Yilei Li, Pierce Chuang, Liangzhen Lai, Vikas Chandra
Neural network accelerator is a key enabler for the on-device AI inference, for which energy efficiency is an important metric.
no code implementations • 13 Feb 2020 • Ke Zhang, Meng Li, Zhengchao Zhang, Xi Lin, Fang He
Multi-vehicle routing problem with soft time windows (MVRPSTW) is an indispensable constituent in urban logistics distribution systems.
no code implementations • 10 Feb 2020 • Lei Yang, Zheyu Yan, Meng Li, Hyoukjun Kwon, Liangzhen Lai, Tushar Krishna, Vikas Chandra, Weiwen Jiang, Yiyu Shi
Neural Architecture Search (NAS) has demonstrated its power on various AI accelerating platforms such as Field Programmable Gate Arrays (FPGAs) and Graphic Processing Units (GPUs).
1 code implementation • 4 Jan 2020 • Zijian Zeng, Meng Li
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting.
Applications Econometrics Methodology
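The point-estimate core of a median autoregression can be sketched in a few lines: fit AR(p) coefficients by minimizing absolute (median) loss rather than squared loss. This toy sketch is not the BayesMAR code — the full method is Bayesian, placing priors on the coefficients and sampling the posterior — and the smoothed-|r| gradient descent, AR(1) simulation, and heavy-tailed t noise are illustrative assumptions.

```python
import numpy as np

def fit_median_ar(y, p=1, lr=0.05, steps=3000, eps=1e-6):
    """Fit AR(p) coefficients under absolute (median) loss, using the smooth
    surrogate |r| ~= sqrt(r^2 + eps) so plain gradient descent applies."""
    n = len(y)
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - k - 1:n - k - 1] for k in range(p)])  # lagged design
    t = y[p:]
    beta = np.zeros(p + 1)
    for _ in range(steps):
        r = t - X @ beta
        g = -(X.T @ (r / np.sqrt(r * r + eps))) / len(t)  # grad of smoothed L1
        beta -= lr * g
    return beta  # [intercept, phi_1, ..., phi_p]

# Simulate an AR(1) series with heavy-tailed (Student-t) noise, where median
# regression is more robust than least squares.
rng = np.random.default_rng(0)
n = 800
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.2 + 0.8 * y[t - 1] + rng.standard_t(3)

beta = fit_median_ar(y, p=1)
```

With heavy-tailed noise the absolute-loss fit recovers the autoregressive coefficient robustly; the Bayesian version additionally quantifies uncertainty around it.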
1 code implementation • CVPR 2020 • Rongjie Liu, Meng Li, Li Ma
Fast and effective image compression for multi-dimensional images has become increasingly important for efficient storage and transfer of massive amounts of high-resolution images and videos.
1 code implementation • 4 Nov 2019 • Jie Zhao, Lei Dai, Mo Zhang, Fei Yu, Meng Li, Hongfeng Li, Wenjia Wang, Li Zhang
The experimental results show that PGU-net+ achieves superior accuracy to previous state-of-the-art methods on cervical nuclei segmentation.
1 code implementation • ICLR 2020 • Dilin Wang, Meng Li, Lemeng Wu, Vikas Chandra, Qiang Liu
Designing energy-efficient networks is of critical importance for enabling state-of-the-art deep learning in mobile and edge settings where the computation and energy budgets are highly limited.
no code implementations • 9 Sep 2019 • Yun Shang, Meng Li
Quantum state transfer between different sites is a significant problem for quantum networks and quantum computers.
Quantum Physics
no code implementations • 22 Aug 2019 • Jiexiang Wang, Cheng Bian, Meng Li, Xin Yang, Kai Ma, Wenao Ma, Jin Yuan, Xinghao Ding, Yefeng Zheng
Automatic and accurate segmentation for retinal and choroidal layers of Optical Coherence Tomography (OCT) is crucial for detection of various ocular diseases.
1 code implementation • 24 Jun 2019 • Meng Li, Lin Wu, Arnold Wiliem, Kun Zhao, Teng Zhang, Brian C. Lovell
Histopathology image analysis can be considered a multiple instance learning (MIL) problem, where the whole-slide histopathology image (WSI) is regarded as a bag of instances (i.e., patches) and the task is to predict a single class label for the WSI.
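The bag-of-instances formulation can be sketched with a toy max-pooling MIL classifier: a bag is positive if any instance looks positive, so the bag score is the maximum instance score. This is not the paper's model — the 2-D synthetic "patch features", logistic instance scorer, and max aggregation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_bag(positive):
    """A bag of 10 instance feature vectors; positive bags contain
    exactly one positive instance (cluster shifted to (3, 3))."""
    neg = rng.normal(0.0, 1.0, size=(9, 2))
    last = rng.normal(3.0, 1.0, size=(1, 2)) if positive \
        else rng.normal(0.0, 1.0, size=(1, 2))
    return np.vstack([neg, last])

bags = [make_bag(i % 2 == 1) for i in range(200)]
labels = np.array([i % 2 for i in range(200)])

# Train a logistic instance scorer; the bag probability is the sigmoid of the
# max instance score, and the gradient flows only to the argmax instance.
w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(50):
    for bag, y in zip(bags, labels):
        scores = bag @ w + b
        k = np.argmax(scores)                   # max-pooling over instances
        p = 1.0 / (1.0 + np.exp(-scores[k]))    # bag-level probability
        g = p - y                               # d(BCE)/d(score)
        w -= lr * g * bag[k]
        b -= lr * g

def predict(bag):
    return int(1.0 / (1.0 + np.exp(-np.max(bag @ w + b))) > 0.5)

acc = np.mean([predict(bag) == y for bag, y in zip(bags, labels)])
```

The same structure carries over to WSIs, with patches as instances and a deep feature extractor replacing the linear scorer.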
no code implementations • 1 Dec 2018 • Meng Li, Yan Zhang, Haicheng She, Jinqiong Zhou, Jia Jia, Danmei He, Li Zhang
The change of retinal vasculature is an early sign of many vascular and systematic diseases, such as diabetes and hypertension.
no code implementations • 24 Oct 2018 • Zhengchao Zhang, Meng Li, Xi Lin, Yinhai Wang, Fang He
Multistep traffic forecasting on road networks is a crucial task in successful intelligent transportation system applications.
1 code implementation • 1 Sep 2018 • Yusha Liu, Meng Li, Jeffrey S. Morris
Mass spectrometry proteomics, characterized by spiky, spatially heterogeneous functional data, can be used to identify potential cancer biomarkers.
Methodology
no code implementations • 18 Jul 2018 • Meng Li, Shiwen Shen, Wen Gao, William Hsu, Jason Cong
Computed tomography (CT) is increasingly being used for cancer screening, such as early detection of lung cancer.
1 code implementation • 2 Jun 2018 • Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, Vikas Chandra
Experiments show that accuracy can be increased by 30% for the CIFAR-10 dataset with only 5% globally shared data.
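The mechanism of mixing a small globally shared dataset into each client's local training can be sketched with a toy FedAvg loop. This is not the paper's code (which targets CIFAR-10 with CNNs); the non-IID linear-regression clients, the shared subset size, and all function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def client_data(lo, hi, n=100):
    """Non-IID clients: each one only sees a different slice of input space."""
    X = rng.uniform(lo, hi, size=(n, 2))
    return X, X @ true_w + 0.01 * rng.normal(size=n)

clients = [client_data(0, 1), client_data(2, 3), client_data(-3, -2)]
# Small globally shared dataset distributed to every client.
Xg, yg = client_data(-1, 2, n=30)

def local_update(w, X, y, lr=0.01, steps=20):
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(2)
for rnd in range(80):  # FedAvg rounds
    updates = []
    for X, y in clients:
        # Each client trains on its local data mixed with the shared subset,
        # which reduces the skew of its local objective.
        Xa, ya = np.vstack([X, Xg]), np.concatenate([y, yg])
        updates.append(local_update(w.copy(), Xa, ya))
    w = np.mean(updates, axis=0)  # server averages client models
```

Without the shared subset each client's slice under-determines the model; mixing in even a small common dataset conditions every local objective and lets the averaged model converge.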
no code implementations • SEMEVAL 2018 • Meng Li, Zhenyuan Dong, Zhihao Fan, Kongming Meng, Jinghua Cao, Guanqi Ding, Yu-Han Liu, Jiawei Shan, Binyang Li
This paper presents a UIR-Miner system for emotion and sentiment analysis evaluation in Twitter in SemEval 2018.
no code implementations • 15 Feb 2018 • Jun Lu, Meng Li, David Dunson
Dirichlet process mixture (DPM) models tend to produce many small clusters regardless of whether they are needed to accurately characterize the data - this is particularly true for large data sets.
1 code implementation • 2 Nov 2017 • Meng Li, Li Ma
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images.
no code implementations • ICLR 2018 • Meng Li, Liangzhen Lai, Naveen Suda, Vikas Chandra, David Z. Pan
Massive data exist among user local platforms that usually cannot support deep neural network (DNN) training due to computation and storage resource constraints.