1 code implementation • 16 Oct 2024 • Youpeng Li, Xinda Wang, Fuxun Yu, Lichao Sun, Wenbin Zhang, Xuyu Wang
The core of FedCAP is a model update calibration mechanism to help a server capture the differences in the direction and magnitude of model updates among clients.
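The snippet does not give FedCAP's exact calibration rule, but the idea of comparing client updates by direction and magnitude can be sketched. Below is a minimal, illustrative aggregation (the `calibrate_updates` helper and the median-based reference are assumptions, not the paper's method): each client update is scored by cosine similarity (direction) and norm ratio (magnitude) against a robust reference update, and outliers are down-weighted.

```python
import numpy as np

def calibrate_updates(updates):
    """Illustrative server-side calibration: score each client update by
    direction (cosine similarity) and magnitude (norm ratio) against a
    coordinate-wise median reference, then aggregate with those weights."""
    updates = np.array(updates)
    ref = np.median(updates, axis=0)          # robust reference update
    ref_norm = np.linalg.norm(ref)
    weights = []
    for u in updates:
        u_norm = np.linalg.norm(u)
        # direction agreement: negative cosine => weight clipped to zero
        cos = float(np.dot(u, ref) / (u_norm * ref_norm + 1e-12))
        # magnitude agreement: ratio of smaller to larger norm, in (0, 1]
        mag = min(u_norm, ref_norm) / (max(u_norm, ref_norm) + 1e-12)
        weights.append(max(cos, 0.0) * mag)
    weights = np.array(weights)
    weights /= weights.sum() + 1e-12
    return np.sum(weights[:, None] * updates, axis=0)
```

With two benign clients pushing in one direction and one client pushing hard the opposite way, the calibrated aggregate stays aligned with the benign majority.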
no code implementations • 16 Sep 2024 • Yanan Jian, Fuxun Yu, Qi Zhang, William LeVine, Brandon Dubbs, Nikolaos Karianakis
This paper presents a novel way of adapting any off-the-shelf object detection model to a novel domain online, without retraining the detector.
no code implementations • 10 Jun 2024 • Simranjit Singh, Michael Fore, Andreas Karatzas, Chaehong Lee, Yanan Jian, Longfei Shangguan, Fuxun Yu, Iraklis Anagnostopoulos, Dimitrios Stamoulis
As Large Language Models (LLMs) broaden their capabilities to manage thousands of API calls, they are confronted with complex data operations across vast datasets with significant overhead to the underlying system.
no code implementations • 22 May 2024 • Chenhui Xu, Fuxun Yu, Maoliang Li, Zihao Zheng, Zirui Xu, JinJun Xiong, Xiang Chen
Past neural network design has largely focused on the dimension of the feature representation space and its capacity scaling (e.g., width, depth), but has overlooked scaling of the feature interaction space.
no code implementations • 6 May 2024 • Chenhui Xu, Xinyao Wang, Fuxun Yu, JinJun Xiong, Xiang Chen
Machine learning is evolving towards high-order models that necessitate pre-training on extensive datasets, a process associated with significant overheads.
no code implementations • 24 Mar 2024 • Chenhui Xu, Fuxun Yu, Zirui Xu, Nathan Inkawhich, Xiang Chen
Our experimental results demonstrate the superior performance of the MC Ensemble strategy in OOD detection compared to both the naive Deep Ensemble method and a standalone model of comparable size.
no code implementations • 29 Nov 2023 • Chenhui Xu, Fuxun Yu, Zirui Xu, ChenChen Liu, JinJun Xiong, Xiang Chen
Recent progress in computer vision-oriented neural network designs is mostly driven by capturing high-order neural interactions among inputs and features.
no code implementations • 21 Nov 2023 • Yanan Jian, Fuxun Yu, Simranjit Singh, Dimitrios Stamoulis
Aerial object detection is a challenging task, in which one major obstacle lies in the limitations of large-scale data collection and the long-tail distribution of certain classes.
no code implementations • 1 Apr 2022 • Zirui Xu, Fuxun Yu, JinJun Xiong, Xiang Chen
The significant success of Deep Neural Networks (DNNs) has been greatly aided by the many sophisticated DNN libraries.
no code implementations • 28 Nov 2021 • Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen
Federated learning learns from scattered data by fusing collaborative models from local nodes.
no code implementations • 28 Nov 2021 • Fuxun Yu, Di Wang, Longfei Shangguan, Minjia Zhang, Xulong Tang, ChenChen Liu, Xiang Chen
With both scaling trends, new problems and challenges emerge in DL inference serving systems, which are gradually trending towards large-scale deep learning serving systems (LDS).
no code implementations • 21 Oct 2021 • Ehsan K. Ardestani, Changkyu Kim, Seung Jae Lee, Luoshang Pan, Valmiki Rampersad, Jens Axboe, Banit Agrawal, Fuxun Yu, Ansha Yu, Trung Le, Hector Yuen, Shishir Juluri, Akshat Nanda, Manoj Wodekar, Dheevatsa Mudigere, Krishnakumar Nair, Maxim Naumov, Chris Peterson, Mikhail Smelyanskiy, Vijay Rao
Deep Learning Recommendation Models (DLRM) are widespread, account for a considerable data center footprint, and grow by more than 1.5x per year.
no code implementations • 22 Nov 2020 • Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen
This paper gives an overview of our ongoing work on the design space exploration of efficient deep neural networks (DNNs).
no code implementations • 15 Aug 2020 • Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen
Specifically, we design a feature-oriented regulation method ({$\Psi$-Net}) to ensure explicit feature information allocation in different neural network structures.
no code implementations • 14 Aug 2020 • Fuxun Yu, ChenChen Liu, Di Wang, Yanzhi Wang, Xiang Chen
Based on the neural network attention mechanism, we propose a comprehensive dynamic optimization framework including (1) testing-phase channel and column feature map pruning, as well as (2) training-phase optimization by targeted dropout.
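The testing-phase channel pruning described above can be illustrated with a minimal sketch (the `prune_channels` helper and the mean-activation score are assumptions used as a simple attention proxy, not the paper's exact criterion): channels whose feature maps carry little activation energy are masked out at inference time.

```python
import numpy as np

def prune_channels(feature_maps, keep_ratio=0.5):
    """Illustrative testing-phase channel pruning: score each channel by
    its mean absolute activation (a simple attention proxy) and zero out
    the least salient channels."""
    # feature_maps: array of shape (channels, H, W)
    scores = np.abs(feature_maps).mean(axis=(1, 2))
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[-n_keep:]        # indices of top channels
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return feature_maps * mask[:, None, None], mask
```

A strongly activated channel survives while a near-silent one is zeroed, shrinking the effective compute for downstream layers.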
1 code implementation • 17 Nov 2019 • Fuxun Yu, Di Wang, Yinpeng Chen, Nikolaos Karianakis, Tong Shen, Pei Yu, Dimitrios Lymberopoulos, Sidi Lu, Weisong Shi, Xiang Chen
In this work, we show that such adversarial-based methods can only reduce the domain style gap, but cannot address the domain content distribution gap that is shown to be important for object detectors.
no code implementations • 17 Oct 2019 • Zirui Xu, Fuxun Yu, Xiang Chen
Based on the detection result, we further propose a data recovery methodology to defend against physical adversarial attacks.
no code implementations • 25 Sep 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao
To overcome these drawbacks, alternating minimization-based methods for deep neural network optimization have recently attracted rapidly increasing attention.
no code implementations • 27 Aug 2019 • Xiaolong Ma, Geng Yuan, Sheng Lin, Caiwen Ding, Fuxun Yu, Tao Liu, Wujie Wen, Xiang Chen, Yanzhi Wang
To mitigate the challenges, the memristor crossbar array has emerged as an intrinsically suitable matrix computation and low-power acceleration framework for DNN applications.
1 code implementation • 31 May 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao
However, as an emerging domain, several challenges remain, including 1) The lack of global convergence guarantees, 2) Slow convergence towards solutions, and 3) Cubic time complexity with regard to feature dimensions.
no code implementations • 21 May 2019 • Zirui Xu, Fuxun Yu, Xiang Chen
To address this issue, we propose DoPa -- a comprehensive CNN detection methodology for various physical adversarial attacks.
no code implementations • 10 May 2019 • Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, Xiang Chen
Recently, adversarial deception has become one of the most significant threats to deep neural networks.
no code implementations • ICLR 2019 • Zhuwei Qin, Fuxun Yu, ChenChen Liu, Xiang Chen
As significant redundancies inevitably present in such a structure, many works have been proposed to prune the convolutional filters for computation cost reduction.
no code implementations • NIPS Workshop CDNNRIA 2018 • Zhuwei Qin, Fuxun Yu, ChenChen Liu, Xiang Chen
We find that the filter magnitude based method fails to eliminate the filters with repetitive functionality.
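The failure mode above is easy to see concretely: two filters with near-identical weight patterns compute (almost) the same feature, yet magnitude-based pruning cannot flag either as redundant if both have large norms. A minimal sketch of a similarity check (the `redundant_filter_pairs` helper and the cosine threshold are illustrative assumptions, not the paper's method):

```python
import numpy as np

def redundant_filter_pairs(filters, threshold=0.95):
    """Illustrative detection of functionally repetitive filters: flag
    filter pairs whose weights are nearly parallel (high cosine
    similarity), regardless of their magnitudes."""
    flat = filters.reshape(len(filters), -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    sim = flat @ flat.T                       # pairwise cosine similarity
    return [(i, j)
            for i in range(len(flat))
            for j in range(i + 1, len(flat))
            if sim[i, j] > threshold]
```

A filter and its scaled copy are flagged as a redundant pair even though the copy has twice the magnitude, which is exactly the case magnitude-only criteria miss.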
no code implementations • NIPS Workshop CDNNRIA 2018 • Fuxun Yu, Zhuwei Qin, Xiang Chen
Neural network compression and acceleration are in wide demand due to the resource constraints of most deployment targets.
no code implementations • ICLR 2019 • Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang
Motivated by dynamic programming, the proposed method reaches extremely high pruning rates by composing partial prunings with moderate pruning rates.
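The idea of reaching a high overall pruning rate through repeated moderate steps can be sketched as follows (a minimal magnitude-pruning loop with an assumed `progressive_prune` helper, not the paper's ADMM-based procedure): each pass removes a moderate fraction of the surviving smallest-magnitude weights until the cumulative sparsity reaches the target.

```python
import numpy as np

def progressive_prune(weights, target_rate=0.9, step_rate=0.5):
    """Illustrative progressive pruning: repeatedly apply a moderate
    magnitude-pruning step until the cumulative pruning rate reaches
    the target, instead of pruning in one aggressive shot."""
    w = weights.copy()
    while (w == 0).mean() < target_rate:
        alive = np.flatnonzero(w)
        n_prune = max(1, int(len(alive) * step_rate))
        # prune the smallest-magnitude surviving weights
        order = alive[np.argsort(np.abs(w[alive]))]
        w[order[:n_prune]] = 0.0
    return w
```

Each 50% step compounds: 50%, then 75%, then 87%, and so on, so a 90% target is met in a few moderate passes while the largest weights survive to the end.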
no code implementations • ICLR 2019 • Fuxun Yu, ChenChen Liu, Yanzhi Wang, Liang Zhao, Xiang Chen
One popular hypothesis of neural network generalization is that flat local minima of the loss surface in parameter space lead to good generalization.
no code implementations • 4 Sep 2018 • Zirui Xu, Fuxun Yu, ChenChen Liu, Xiang Chen
In this work, we propose HASP -- a high-performance security enhancement approach to solve this security issue on mobile devices.
no code implementations • 23 May 2018 • Fuxun Yu, Zirui Xu, Yanzhi Wang, ChenChen Liu, Xiang Chen
In recent years, neural networks have demonstrated outstanding effectiveness in a wide range of applications. However, recent works have shown that neural networks are susceptible to adversarial examples, indicating possible flaws intrinsic to the network structures.
1 code implementation • 30 Apr 2018 • Zhuwei Qin, Fuxun Yu, ChenChen Liu, Xiang Chen
Nowadays, Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision tasks, such as object detection, image recognition, and image retrieval.
no code implementations • 15 Feb 2018 • Fuxun Yu, Qide Dong, Xiang Chen
By comparing the analyzed saliency map with the adversarial perturbation distribution, we propose a new evaluation scheme to comprehensively assess adversarial attack precision and efficiency.