no code implementations • 26 Aug 2023 • Shuzhang Zhong, Meng Li, Yun Liang, Runsheng Wang, Ru Huang
Memory-aware network scheduling is becoming increasingly important for deep neural network (DNN) inference on resource-constrained devices.
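To make the scheduling problem concrete: even for a fixed DNN graph, peak memory depends on the execution order of operators. The toy graph, tensor sizes, and brute-force search below are our own illustration, not the authors' scheduling algorithm.

```python
import itertools

# Toy graph: op -> (output tensor size in KB, input ops).
# "a" and "c" are source ops (e.g. they load weights/constants).
graph = {
    "a": (100, []),
    "b": (4, ["a"]),
    "c": (100, []),
    "d": (4, ["c"]),
    "e": (4, ["b", "d"]),
}
consumers = {op: [o for o, (_, ins) in graph.items() if op in ins] for op in graph}

def is_topological(order):
    seen = set()
    for op in order:
        if any(i not in seen for i in graph[op][1]):
            return False
        seen.add(op)
    return True

def peak_memory(order):
    """Peak sum of live output tensors while executing ops in `order`."""
    live, done, peak = {}, set(), 0
    for op in order:
        live[op] = graph[op][0]
        peak = max(peak, sum(live.values()))
        done.add(op)
        # a tensor can be freed once all of its consumers have run
        for t in list(live):
            if consumers[t] and all(c in done for c in consumers[t]):
                del live[t]
    return peak

orders = [o for o in itertools.permutations(graph) if is_topological(o)]
best = min(orders, key=peak_memory)
```

Here the order `(a, b, c, d, e)` peaks at 108 KB while `(a, c, b, d, e)` peaks at 204 KB, which is why order matters on memory-constrained devices; real schedulers avoid the factorial search.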
no code implementations • 25 Aug 2023 • Tianshi Xu, Meng Li, Runsheng Wang, Ru Huang
Efficient networks, e.g., MobileNetV2 and EfficientNet, achieve state-of-the-art (SOTA) accuracy with lightweight computation.
1 code implementation • 10 Jul 2023 • Meng Li, Yahan Yu, Yi Yang, Guanghao Ren, Jian Wang
In this paper, we propose a deep learning-based character stroke extraction method that takes semantic features and prior information of strokes into consideration.
no code implementations • 8 Jun 2023 • Ganesh Jawahar, Haichuan Yang, Yunyang Xiong, Zechun Liu, Dilin Wang, Fei Sun, Meng Li, Aasish Pappu, Barlas Oguz, Muhammad Abdul-Mageed, Laks V. S. Lakshmanan, Raghuraman Krishnamoorthi, Vikas Chandra
In addition, the proposed method achieves SOTA performance in NAS for building fast machine translation models, yielding a better latency-BLEU trade-off than HAT, the state-of-the-art NAS method for MT.
no code implementations • CVPR 2023 • Jiqing Zhang, Yuanchen Wang, Wenxi Liu, Meng Li, Jinpeng Bai, BaoCai Yin, Xin Yang
The alignment module is responsible for cross-style and cross-frame-rate alignment between frame and event modalities under the guidance of the moving cues furnished by events.
no code implementations • 19 May 2023 • Ya-Lin Zhang, Jun Zhou, Yankun Ren, Yue Zhang, Xinxing Yang, Meng Li, Qitao Shi, Longfei Li
In this paper, we consider the problem of long-tail scenario modeling under budget limitations, i.e., insufficient human resources for the model training stage and limited time and computing resources for the model inference stage.
no code implementations • 22 Mar 2023 • Renjie Wei, Shuwen Zhang, Zechun Liu, Meng Li, Yuchen Fan, Runsheng Wang, Ru Huang
While the performance of deep convolutional neural networks for image super-resolution (SR) has improved significantly, the rapid increase of memory and computation requirements hinders their deployment on resource-constrained devices.
no code implementations • 14 Jan 2023 • Meng Li, Senbo Wang, Weihao Yuan, Weichao Shen, Zhe Sheng, Zilong Dong
In this paper, we propose an end-to-end deep network for monocular panorama depth estimation on a unit spherical surface.
no code implementations • 20 Dec 2022 • Meng Li, Chaoyi Li, Can Peng, Brian Lovell
Extensive experiments on the histopathology datasets show that leveraging our synthetic augmentation framework results in significant and consistent improvements in classification performance.
no code implementations • 20 Dec 2022 • Meng Li, Brian Lovell
The teacher learns to generate curriculum to feed into the student model for data augmentation and guides the student to improve performance in a meta-learning style.
no code implementations • 12 Dec 2022 • Lemeng Wu, Dilin Wang, Meng Li, Yunyang Xiong, Raghuraman Krishnamoorthi, Qiang Liu, Vikas Chandra
PathFusion introduces a path consistency loss between shallow and deep features, which encourages the 2D backbone and its fusion path to transform 2D features in a way that is semantically aligned with the transform of the 3D backbone.
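A consistency loss of this general shape can be sketched as follows; the feature shapes, the single linear fusion map, and all names are illustrative assumptions of ours, not PathFusion's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a "shallow" 2D feature map and the 3D backbone features it
# should align with after passing through the fusion path (here, one linear map).
feat_2d_shallow = rng.normal(size=(8, 16))   # 8 locations, 16 channels
W_fusion = rng.normal(size=(16, 32)) * 0.1   # learnable fusion-path projection
feat_3d = rng.normal(size=(8, 32))           # matching 3D backbone features

def path_consistency_loss(f2d, W, f3d):
    """Mean-squared error between projected 2D features and 3D features."""
    projected = f2d @ W
    return float(np.mean((projected - f3d) ** 2))

loss = path_consistency_loss(feat_2d_shallow, W_fusion, feat_3d)
```

Minimizing such a loss with respect to both the 2D backbone and `W_fusion` pushes the 2D path to produce features that live in the same space as the 3D features, which is the semantic-alignment intuition described above.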
1 code implementation • ICCV 2023 • Wenxuan Zeng, Meng Li, Wenjie Xiong, Tong Tong, Wen-jie Lu, Jin Tan, Runsheng Wang, Ru Huang
Secure multi-party computation (MPC) enables computation directly on encrypted data and protects both data and model privacy in deep learning inference.
no code implementations • 29 Oct 2022 • Meng Li, Xue-Ping Wang
This paper deals with the resolution of fuzzy relation equations with addition-min composition.
no code implementations • 26 Oct 2022 • Junyi He, Meimei Wu, Meng Li, Xiaobo Zhu, Feng Ye
Inspired by Transformer TTS, we propose a multilevel transformer model to perform fine-grained multimodal emotion recognition.
1 code implementation • 30 Jul 2022 • Can Peng, Kun Zhao, Tianren Wang, Meng Li, Brian C. Lovell
The continual appearance of new objects in the visual world poses considerable challenges for current deep learning methods in real-world deployments.
no code implementations • 28 Jul 2022 • Meng Li, Shangyin Gao, Yihui Feng, Yibo Shi, Jing Wang
In recent years, with the development of deep neural networks, end-to-end optimized image compression has made significant progress and exceeded the classic methods in terms of rate-distortion performance.
no code implementations • 16 Jun 2022 • Vladimir Belov, Tracy Erwin-Grabner, Ali Saffet Gonul, Alyssa R. Amod, Amar Ojha, Andre Aleman, Annemiek Dols, Anouk Scharntee, Aslihan Uyar-Demir, Ben J Harrison, Benson M. Irungu, Bianca Besteher, Bonnie Klimes-Dougan, Brenda W. J. H. Penninx, Bryon A. Mueller, Carlos Zarate, Christopher G. Davey, Christopher R. K. Ching, Colm G. Connolly, Cynthia H. Y. Fu, Dan J. Stein, Danai Dima, David E. J. Linden, David M. A. Mehler, Edith Pomarol-Clotet, Elena Pozzi, Elisa Melloni, Francesco Benedetti, Frank P. MacMaster, Hans J. Grabe, Henry Völzke, Ian H. Gotlib, Jair C. Soares, Jennifer W. Evans, Kang Sim, Katharina Wittfeld, Kathryn Cullen, Liesbeth Reneman, Mardien L. Oudega, Margaret J. Wright, Maria J. Portella, Matthew D. Sacchet, Meng Li, Moji Aghajani, Mon-Ju Wu, Natalia Jaworska, Neda Jahanshad, Nic J. A. van der Wee, Nynke Groenewold, Paul J. Hamilton, Philipp Saemann, Robin Bülow, Sara Poletti, Sarah Whittle, Sophia I. Thomopoulos, Steven J. A. van der Werff, Sheri-Michelle Koopowitz, Thomas Lancaster, Tiffany C. Ho, Tony T. Yang, Zeynep Basgoze, Dick J. Veltman, Lianne Schmaal, Paul M. Thompson, Roberto Goya-Maldonado
Machine learning (ML) techniques have gained popularity in the neuroimaging field due to their potential for classifying neuropsychiatric disorders.
1 code implementation • 2 Jun 2022 • Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
Efficient deep neural network (DNN) models equipped with compact operators (e.g., depthwise convolutions) have shown great potential in reducing DNNs' theoretical complexity (e.g., the total number of weights/operations) while maintaining a decent model accuracy.
2 code implementations • 25 May 2022 • Zechun Liu, Barlas Oguz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad
Modern pre-trained transformers have rapidly advanced the state-of-the-art in machine learning, but have also grown in parameters and computational complexity, making them increasingly difficult to deploy in resource-constrained environments.
no code implementations • CVPR 2022 • Xin Dong, Barbara De Salvo, Meng Li, Chiao Liu, Zhongnan Qu, H. T. Kung, Ziyun Li
We design deep neural networks (DNNs) and corresponding network splittings to distribute the DNN workload between camera sensors and a centralized aggregator on head-mounted devices, meeting system performance targets for inference accuracy and latency under the given hardware resource constraints.
no code implementations • 17 Feb 2022 • Jiaxu Li, Yin Xu, Ying Wang, Meng Li, Jinghan He, Chen-Ching Liu, Kevin P. Schneider
In this paper, a distribution system service restoration method considering the electricity-water-gas interdependency is proposed.
no code implementations • 21 Nov 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu
To tackle this problem, we propose a collaborative contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching, by which a unified feature space among clients is learned for better data representations.
1 code implementation • 18 Nov 2021 • Haoqi Fan, Tullie Murrell, Heng Wang, Kalyan Vasudev Alwala, Yanghao Li, Yilei Li, Bo Xiong, Nikhila Ravi, Meng Li, Haichuan Yang, Jitendra Malik, Ross Girshick, Matt Feiszli, Aaron Adcock, Wan-Yen Lo, Christoph Feichtenhofer
We introduce PyTorchVideo, an open-source deep-learning library that provides a rich set of modular, efficient, and reproducible components for a variety of video understanding tasks, including classification, detection, self-supervised learning, and low-level processing.
no code implementations • 2 Nov 2021 • Cole Hawkins, Haichuan Yang, Meng Li, Liangzhen Lai, Vikas Chandra
Low-rank tensor compression has been proposed as a promising approach to reduce the memory and compute requirements of neural networks for their deployment on edge devices.
1 code implementation • CVPR 2022 • Jiaqi Gu, Hyoukjun Kwon, Dilin Wang, Wei Ye, Meng Li, Yu-Hsin Chen, Liangzhen Lai, Vikas Chandra, David Z. Pan
Therefore, we propose HRViT, which enhances ViTs to learn semantically-rich and spatially-precise multi-scale representations by integrating high-resolution multi-branch architectures with ViTs.
Ranked #20 on Semantic Segmentation on Cityscapes val
1 code implementation • 27 Oct 2021 • Kun Li, Meng Li, Yanling Li, Min Lin
Traditional trend prediction models predict short-term trends better than long-term trends.
no code implementations • 15 Oct 2021 • Haichuan Yang, Yuan Shangguan, Dilin Wang, Meng Li, Pierce Chuang, Xiaohui Zhang, Ganesh Venkatesh, Ozlem Kalinli, Vikas Chandra
From wearables to powerful smart devices, modern automatic speech recognition (ASR) models run on a variety of edge devices with different computational budgets.
Automatic Speech Recognition (ASR) +2
no code implementations • 29 Sep 2021 • Yonggan Fu, Qixuan Yu, Meng Li, Xu Ouyang, Vikas Chandra, Yingyan Lin
Contrastive learning, which learns visual representations by enforcing feature consistency under different augmented views, has emerged as one of the most effective unsupervised learning methods.
no code implementations • 29 Sep 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu
Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.
1 code implementation • ICLR 2022 • Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Qiang Liu, Vikas Chandra
In this work, we observe that the poor performance is due to a gradient conflict issue: the gradients of different sub-networks conflict with that of the supernet more severely in ViTs than CNNs, which leads to early saturation in training and inferior convergence.
Ranked #7 on Neural Architecture Search on ImageNet
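The gradient conflict the authors describe can be made concrete numerically: two gradients conflict when their cosine similarity is negative. The projection fix below is the generic PCGrad-style remedy, shown only to illustrate the idea; it is not necessarily the paper's exact solution.

```python
import numpy as np

def conflict(g_sub, g_super):
    """Gradients conflict when their cosine similarity is negative."""
    cos = g_sub @ g_super / (np.linalg.norm(g_sub) * np.linalg.norm(g_super))
    return cos < 0

def project_out_conflict(g_sub, g_super):
    """PCGrad-style fix: drop the component of g_sub that opposes g_super."""
    if conflict(g_sub, g_super):
        g_sub = g_sub - (g_sub @ g_super) / (g_super @ g_super) * g_super
    return g_sub

g_super = np.array([1.0, 0.0])   # supernet gradient
g_sub = np.array([-1.0, 1.0])    # sub-network gradient pointing against it
g_fixed = project_out_conflict(g_sub, g_super)
```

After projection the sub-network gradient `(-1, 1)` becomes `(0, 1)`: the component opposing the supernet update is removed, so the two updates no longer fight each other.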
no code implementations • 9 Jul 2021 • Dilin Wang, Yuan Shangguan, Haichuan Yang, Pierce Chuang, Jiatong Zhou, Meng Li, Ganesh Venkatesh, Ozlem Kalinli, Vikas Chandra
We apply noisy training to improve both dense and sparse state-of-the-art Emformer models and observe consistent WER reduction.
Automatic Speech Recognition
Automatic Speech Recognition (ASR)
+2
no code implementations • 29 Apr 2021 • Meng Li, Changyan Lin, Heng Wu, Jiasong Li, Hongshuai Cao
Since the mapping relationship between the definitized intra-interventional X-ray and the undefined pre-interventional computed tomography (CT) is uncertain, auxiliary positioning devices or body markers, such as medical implants, are commonly used to determine this relationship.
1 code implementation • 26 Apr 2021 • Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
To alleviate this problem, in this work, we introduce novel loss functions in vision transformer training to explicitly encourage diversity across patch representations for more discriminative feature extraction.
Ranked #16 on Semantic Segmentation on Cityscapes val
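One simple diversity objective of this flavor (our illustration, not necessarily the paper's loss) penalizes the mean pairwise cosine similarity between patch embeddings, so minimizing it spreads the patch representations apart:

```python
import numpy as np

def patch_diversity_loss(patches):
    """Mean pairwise cosine similarity across patch representations.

    Identical patches give the maximum value of 1; minimizing this
    pushes patch embeddings apart (more diverse features)."""
    x = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    sim = x @ x.T                       # cosine similarity matrix
    n = len(x)
    off_diag = sim[~np.eye(n, dtype=bool)]
    return float(off_diag.mean())

identical = np.ones((4, 8))             # fully collapsed patch embeddings
rng = np.random.default_rng(0)
random_patches = rng.normal(size=(4, 8))
```

Collapsed embeddings score the maximum of 1.0, while any set of distinct patch vectors scores strictly less, which is the behavior a diversity-encouraging regularizer relies on.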
no code implementations • 2 Mar 2021 • Meng Li, Xia Yan, Feng Lin
When an end-to-end automatic speech recognition (E2E-ASR) system is used in real-world applications, a voice activity detection (VAD) system is usually needed to improve performance and reduce computational cost by discarding the non-speech parts of the audio.
2 code implementations • 16 Feb 2021 • Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
Weight-sharing NAS builds a supernet that assembles all the architectures as its sub-networks and jointly trains the supernet with the sub-networks.
Ranked #12 on Neural Architecture Search on ImageNet
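The weight-sharing mechanism itself can be sketched in a few lines: every sub-network reuses a slice of the supernet's weights rather than owning its own copy. The layer shape and width choices below are illustrative assumptions, not the paper's search space.

```python
import numpy as np

rng = np.random.default_rng(0)
# Supernet layer: the largest weight matrix; every sub-network is a slice of it.
W_super = rng.normal(size=(64, 64))
width_choices = [16, 32, 48, 64]

def subnet_forward(x, width):
    """Run one layer of the sub-network that uses the first `width` channels."""
    W = W_super[:width, :x.shape[-1]]   # shared weights: just a slice
    return np.maximum(W @ x, 0.0)       # ReLU

x = rng.normal(size=(64,))
outs = {w: subnet_forward(x, w) for w in width_choices}
```

Because the weights are shared, the 16-channel sub-network's output is exactly the first 16 entries of the 32-channel one. In joint training, a common recipe (the "sandwich rule" from slimmable/BigNAS-style training) samples the smallest, the largest, and a few random sub-networks at each step.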
no code implementations • 14 Feb 2021 • Xiaoyan Wang, Xi Lin, Meng Li
We call such a mobility market with AV renting options the "AV crowdsourcing market".
1 code implementation • ICLR 2021 • Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin
In this paper, we attempt to explore low-precision training from a new perspective as inspired by recent findings in understanding DNN training: we conjecture that DNNs' precision might have a similar effect as the learning rate during DNN training, and advocate dynamic precision along the training trajectory for further boosting the time/energy efficiency of DNN training.
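The "precision behaves like a learning rate" idea can be sketched with a toy bit-width schedule and a uniform quantizer; the linear schedule shape and 4-to-8-bit range below are our own illustrative choices, not the paper's actual schedule.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x to the given bit-width."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / levels
    return np.round(x / scale) * scale

def precision_schedule(step, total_steps, low=4, high=8):
    """Grow precision over training, analogous to a learning-rate schedule:
    low precision early (coarse, noisy updates), high precision late."""
    return int(low + (high - low) * step / total_steps)

x = np.linspace(-1.0, 1.0, 11)
err_low = np.abs(quantize(x, 4) - x).mean()    # coarse early-training error
err_high = np.abs(quantize(x, 8) - x).mean()   # fine late-training error
```

Low bit-width injects larger rounding noise into every update, much as a large learning rate does, which is why scheduling precision upward over training can trade energy for convergence quality.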
no code implementations • 18 Jan 2021 • Xiangzhuo Xing, Yue Sun, Xiaolei Yi, Meng Li, Jiajia Feng, Yan Meng, Yufeng Zhang, Wenchong Li, Nan Zhou, Xiude He, Jun-Yi Ge, Wei Zhou, Tsuyoshi Tamegai, Zhixiang Shi
FeSe$_{1-x}$Te$_{x}$ superconductors manifest some intriguing electronic properties depending on the value of $x$.
Superconductivity • Materials Science
no code implementations • 31 Dec 2020 • Can Peng, Kun Zhao, Sam Maksoud, Meng Li, Brian C. Lovell
Incremental learning requires a model to continually learn new tasks from streaming data.
no code implementations • 11 Dec 2020 • Zhiyun Fan, Meng Li, Shiyu Zhou, Bo Xu
Then we demonstrate the effectiveness of wav2vec 2.0 on the two tasks respectively.
no code implementations • 30 Nov 2020 • Hsin-Pai Cheng, Feng Liang, Meng Li, Bowen Cheng, Feng Yan, Hai Li, Vikas Chandra, Yiran Chen
We use ScaleNAS to create high-resolution models for two different tasks, ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation.
Ranked #5 on Multi-Person Pose Estimation on COCO test-dev
no code implementations • 27 Nov 2020 • Zejian Liu, Meng Li
For a general class of kernels, we establish convergence rates of the posterior measure of the regression function and its derivatives, which are both minimax optimal up to a logarithmic factor for functions in certain classes.
1 code implementation • CVPR 2021 • Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
Data augmentation (DA) is an essential technique for training state-of-the-art deep learning systems.
2 code implementations • CVPR 2021 • Dilin Wang, Meng Li, Chengyue Gong, Vikas Chandra
Our discovered model family, AttentiveNAS models, achieves top-1 accuracy from 77.3% to 80.7% on ImageNet, and outperforms SOTA models, including BigNAS and Once-for-All networks.
Ranked #21 on Neural Architecture Search on ImageNet
no code implementations • 28 Oct 2020 • Yongan Zhang, Yonggan Fu, Weiwen Jiang, Chaojian Li, Haoran You, Meng Li, Vikas Chandra, Yingyan Lin
Powerful yet complex deep neural networks (DNNs) have fueled a booming demand for efficient DNN solutions to bring DNN-powered intelligence into numerous applications.
2 code implementations • 25 Oct 2020 • Kailai Li, Meng Li, Uwe D. Hanebeck
LiLi-OM (Livox LiDAR-inertial odometry and mapping) is real-time capable and achieves superior accuracy over state-of-the-art systems for both LiDAR types on public data sets of mechanical LiDARs and in experiments using the Livox Horizon.
Robotics
no code implementations • 8 Jul 2020 • Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen
To preserve graph correlation information in encoding, we propose NASGEM which stands for Neural Architecture Search via Graph Embedding Method.
1 code implementation • IEEE Transactions on Medical Imaging 2020 • Meng Li, William Hsu, Xiaodong Xie, Jason Cong, Wen Gao
We combine these two methods and demonstrate their effectiveness on both CNN-based neural networks and WGAN-based neural networks with comprehensive experiments.
1 code implementation • 17 Jun 2020 • Zhengjia Wang, John Magnotti, Michael S. Beauchamp, Meng Li
In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the $L_2$ norm and resemble a phase transition phenomenon.
Methodology • Statistics Theory
1 code implementation • 2 Jun 2020 • Zejian Liu, Meng Li
We study the problem of estimating the derivatives of a regression function, which has a wide range of applications as a key nonparametric functional of unknown functions.
6 code implementations • 20 Mar 2020 • Fan Zhang, Meng Li, Guisheng Zhai, Yizhao Liu
Therefore, our multi-branch and multi-scale learning network (MMAL-Net) has good classification ability and robustness for images of different scales.
Ranked #3 on Fine-Grained Image Classification on FGVC Aircraft
Fine-Grained Image Classification • Fine-Grained Image Recognition +2
no code implementations • 5 Mar 2020 • Qitao Shi, Ya-Lin Zhang, Longfei Li, Xinxing Yang, Meng Li, Jun Zhou
Machine learning techniques have been widely applied in Internet companies for various tasks, acting as an essential driving force, and feature engineering has been generally recognized as a crucial task when constructing machine learning systems.
no code implementations • 13 Feb 2020 • Meng Li, Yilei Li, Pierce Chuang, Liangzhen Lai, Vikas Chandra
Neural network accelerators are a key enabler for on-device AI inference, for which energy efficiency is an important metric.
no code implementations • 13 Feb 2020 • Ke Zhang, Meng Li, Zhengchao Zhang, Xi Lin, Fang He
Multi-vehicle routing problem with soft time windows (MVRPSTW) is an indispensable constituent in urban logistics distribution systems.
Multi-agent Reinforcement Learning • Reinforcement Learning +1
no code implementations • 10 Feb 2020 • Lei Yang, Zheyu Yan, Meng Li, Hyoukjun Kwon, Liangzhen Lai, Tushar Krishna, Vikas Chandra, Weiwen Jiang, Yiyu Shi
Neural Architecture Search (NAS) has demonstrated its power on various AI accelerating platforms such as Field Programmable Gate Arrays (FPGAs) and Graphic Processing Units (GPUs).
1 code implementation • 4 Jan 2020 • Zijian Zeng, Meng Li
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting.
Applications • Econometrics • Methodology
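A stripped-down, non-Bayesian sketch of the core idea: fit an autoregression under absolute (median) loss rather than squared loss. BayesMAR itself places priors and performs full posterior inference, which this toy grid search omits; the simulation settings are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate an AR(1) series with heavy-tailed noise: y_t = 0.7 * y_{t-1} + e_t
n, phi_true = 500, 0.7
noise = rng.standard_t(df=3, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + noise[t]

def median_ar1_fit(y, grid=np.linspace(-0.99, 0.99, 199)):
    """Fit y_t ~ phi * y_{t-1} under absolute (L1 / median) loss by grid search.

    The L1 loss targets the conditional median, making the fit robust
    to the heavy-tailed noise, unlike least squares."""
    losses = [np.abs(y[1:] - phi * y[:-1]).mean() for phi in grid]
    return grid[int(np.argmin(losses))]

phi_hat = median_ar1_fit(y)
```

Even with heavy-tailed innovations, the median-loss fit recovers a coefficient close to the true 0.7, which is the robustness motivation behind median autoregression.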
1 code implementation • CVPR 2020 • Rongjie Liu, Meng Li, Li Ma
Fast and effective image compression for multi-dimensional images has become increasingly important for efficient storage and transfer of massive amounts of high-resolution images and videos.
1 code implementation • 4 Nov 2019 • Jie Zhao, Lei Dai, Mo Zhang, Fei Yu, Meng Li, Hongfeng Li, Wenjia Wang, Li Zhang
The experimental results show that PGU-net+ achieves superior accuracy over the previous state-of-the-art methods on cervical nuclei segmentation.
1 code implementation • ICLR 2020 • Dilin Wang, Meng Li, Lemeng Wu, Vikas Chandra, Qiang Liu
Designing energy-efficient networks is of critical importance for enabling state-of-the-art deep learning in mobile and edge settings where the computation and energy budgets are highly limited.
no code implementations • 9 Sep 2019 • Yun Shang, Meng Li
Quantum state transfer between different sites is a significant problem for quantum networks and quantum computers.
Quantum Physics
no code implementations • 22 Aug 2019 • Jiexiang Wang, Cheng Bian, Meng Li, Xin Yang, Kai Ma, Wenao Ma, Jin Yuan, Xinghao Ding, Yefeng Zheng
Automatic and accurate segmentation for retinal and choroidal layers of Optical Coherence Tomography (OCT) is crucial for detection of various ocular diseases.
1 code implementation • 24 Jun 2019 • Meng Li, Lin Wu, Arnold Wiliem, Kun Zhao, Teng Zhang, Brian C. Lovell
Histopathology image analysis can be considered a multiple instance learning (MIL) problem, where the whole-slide histopathology image (WSI) is regarded as a bag of instances (i.e., patches) and the task is to predict a single class label for the WSI.
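The bag-of-instances formulation has a standard minimal form, max-pooling MIL: a slide is positive if any patch is predicted positive. This is the textbook MIL assumption, sketched here with made-up patch scores, not the paper's specific model.

```python
import numpy as np

def mil_bag_prediction(instance_probs):
    """Max-pooling MIL: the bag (slide) score is the maximum instance
    (patch) score, so one positive patch makes the whole slide positive."""
    return float(np.max(instance_probs))

benign_slide = np.array([0.05, 0.10, 0.02, 0.08])  # all patches look normal
tumor_slide = np.array([0.05, 0.92, 0.03, 0.11])   # one malignant-looking patch
```

With a 0.5 decision threshold, the benign slide is classified negative and the tumor slide positive, driven entirely by its single high-scoring patch.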
no code implementations • 1 Dec 2018 • Meng Li, Yan Zhang, Haicheng She, Jinqiong Zhou, Jia Jia, Danmei He, Li Zhang
The change of retinal vasculature is an early sign of many vascular and systematic diseases, such as diabetes and hypertension.
no code implementations • 24 Oct 2018 • Zhengchao Zhang, Meng Li, Xi Lin, Yinhai Wang, Fang He
Multistep traffic forecasting on road networks is a crucial task in successful intelligent transportation system applications.
1 code implementation • 1 Sep 2018 • Yusha Liu, Meng Li, Jeffrey S. Morris
Mass spectrometry proteomics, characterized by spiky, spatially heterogeneous functional data, can be used to identify potential cancer biomarkers.
Methodology
no code implementations • 18 Jul 2018 • Meng Li, Shiwen Shen, Wen Gao, William Hsu, Jason Cong
Computed tomography (CT) is increasingly being used for cancer screening, such as early detection of lung cancer.
1 code implementation • 2 Jun 2018 • Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, Vikas Chandra
Experiments show that accuracy can be increased by 30% for the CIFAR-10 dataset with only 5% globally shared data.
no code implementations • SEMEVAL 2018 • Meng Li, Zhenyuan Dong, Zhihao Fan, Kongming Meng, Jinghua Cao, Guanqi Ding, Yu-Han Liu, Jiawei Shan, Binyang Li
This paper presents a UIR-Miner system for emotion and sentiment analysis evaluation in Twitter in SemEval 2018.
no code implementations • 15 Feb 2018 • Jun Lu, Meng Li, David Dunson
Dirichlet process mixture (DPM) models tend to produce many small clusters regardless of whether they are needed to accurately characterize the data - this is particularly true for large data sets.
1 code implementation • 2 Nov 2017 • Meng Li, Li Ma
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images.
no code implementations • ICLR 2018 • Meng Li, Liangzhen Lai, Naveen Suda, Vikas Chandra, David Z. Pan
Massive data exist among user local platforms that usually cannot support deep neural network (DNN) training due to computation and storage resource constraints.