Search Results for author: Feng Yan

Found 27 papers, 11 papers with code

Multiple Object Tracking Challenge Technical Report for Team MT_IoT

1 code implementation • 7 Dec 2022 • Feng Yan, Zhiheng Li, Weixin Luo, Zequn Jie, Fan Liang, Xiaolin Wei, Lin Ma

This is a brief technical report on our proposed method for the Multiple Object Tracking (MOT) Challenge in Complex Environments.

Ranked #2 on Multi-Object Tracking on DanceTrack (using extra training data)

Human Detection · Multi-Object Tracking +1

PIDS: Joint Point Interaction-Dimension Search for 3D Point Cloud

no code implementations • 28 Nov 2022 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen

In this work, we establish PIDS, a novel paradigm that jointly explores point interactions and point dimensions to serve semantic segmentation on point cloud data.

Neural Architecture Search · Semantic Segmentation

Energy Efficiency Optimization of Intelligent Reflective Surface-assisted Terahertz-RSMA System

no code implementations • 21 Nov 2022 • Xiaoyu Chen, Feng Yan, Menghan Hu, Zihuai Lin

This paper examines the energy efficiency optimization problem of intelligent reflective surface (IRS)-assisted multi-user rate-splitting multiple access (RSMA) downlink systems under terahertz propagation.
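
For reference, the energy efficiency objective in such systems is conventionally the achievable sum rate per unit of total consumed power, e.g. $\mathrm{EE} = \frac{\sum_{k} R_k}{P_{\mathrm{tx}} + P_{\mathrm{circuit}}}$, where $R_k$ is the achievable rate of user $k$; the paper's exact power model (including any IRS overhead) is not shown in this snippet.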

SoccerNet 2022 Challenges Results

6 code implementations • 5 Oct 2022 • Silvio Giancola, Anthony Cioppa, Adrien Deliège, Floriane Magera, Vladimir Somers, Le Kang, Xin Zhou, Olivier Barnich, Christophe De Vleeschouwer, Alexandre Alahi, Bernard Ghanem, Marc Van Droogenbroeck, Abdulrahman Darwish, Adrien Maglo, Albert Clapés, Andreas Luyts, Andrei Boiarov, Artur Xarles, Astrid Orcesi, Avijit Shah, Baoyu Fan, Bharath Comandur, Chen Chen, Chen Zhang, Chen Zhao, Chengzhi Lin, Cheuk-Yiu Chan, Chun Chuen Hui, Dengjie Li, Fan Yang, Fan Liang, Fang Da, Feng Yan, Fufu Yu, Guanshuo Wang, H. Anthony Chan, He Zhu, Hongwei Kan, Jiaming Chu, Jianming Hu, Jianyang Gu, Jin Chen, João V. B. Soares, Jonas Theiner, Jorge De Corte, José Henrique Brito, Jun Zhang, Junjie Li, Junwei Liang, Leqi Shen, Lin Ma, Lingchi Chen, Miguel Santos Marques, Mike Azatov, Nikita Kasatkin, Ning Wang, Qiong Jia, Quoc Cuong Pham, Ralph Ewerth, Ran Song, RenGang Li, Rikke Gade, Ruben Debien, Runze Zhang, Sangrok Lee, Sergio Escalera, Shan Jiang, Shigeyuki Odashima, Shimin Chen, Shoichi Masui, Shouhong Ding, Sin-wai Chan, Siyu Chen, Tallal El-Shabrawy, Tao He, Thomas B. Moeslund, Wan-Chi Siu, Wei zhang, Wei Li, Xiangwei Wang, Xiao Tan, Xiaochuan Li, Xiaolin Wei, Xiaoqing Ye, Xing Liu, Xinying Wang, Yandong Guo, YaQian Zhao, Yi Yu, YingYing Li, Yue He, Yujie Zhong, Zhenhua Guo, Zhiheng Li

The SoccerNet 2022 challenges were the second annual video understanding challenges organized by the SoccerNet team.

Action Spotting · Camera Calibration +3

BiFeat: Supercharge GNN Training via Graph Feature Quantization

1 code implementation • 29 Jul 2022 • Yuxin Ma, Ping Gong, Jun Yi, Zhewei Yao, Minjie Wang, Cheng Li, Yuxiong He, Feng Yan

We identify the main accuracy impact factors in graph feature quantization and theoretically prove that BiFeat training converges to a network where the loss is within $\epsilon$ of the optimal loss of the uncompressed network.
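
As an illustration of the feature-quantization idea, here is a minimal sketch assuming simple per-dimension scalar quantization of node features; BiFeat's actual codec and the convergence analysis are in the paper.

    import torch

    def compress_features(feat, num_bits=8):
        # Per-dimension symmetric scalar quantization of node features.
        qmax = 2 ** (num_bits - 1) - 1
        scale = feat.abs().amax(dim=0).clamp(min=1e-8) / qmax
        q = torch.round(feat / scale).clamp(-qmax, qmax).to(torch.int8)
        return q, scale

    def decompress_features(q, scale):
        # Only the sampled mini-batch's rows need decompressing during training.
        return q.float() * scale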

Quantization

NASRec: Weight Sharing Neural Architecture Search for Recommender Systems

no code implementations • 14 Jul 2022 • Tunhou Zhang, Dehua Cheng, Yuchen He, Zhengxing Chen, Xiaoliang Dai, Liang Xiong, Feng Yan, Hai Li, Yiran Chen, Wei Wen

However, the success of recommender systems lies in delicate architecture fabrication, which calls for Neural Architecture Search (NAS) to further improve their modeling.

Click-Through Rate Prediction · Neural Architecture Search +1

SMLT: A Serverless Framework for Scalable and Adaptive Machine Learning Design and Training

no code implementations • 4 May 2022 • Ahsan Ali, Syed Zawad, Paarijaat Aditya, Istemi Ekin Akkus, Ruichuan Chen, Feng Yan

In addition, by providing an end-to-end design, SMLT solves intrinsic problems of serverless platforms, such as communication overhead, limited function execution duration, and the need for repeated initialization, and it also provides explicit fault tolerance for ML training.

BIG-bench Machine Learning · Management +1

SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training using Gradient Similarity Measurement

1 code implementation • NeurIPS 2021 • Heyang Qin, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He

In this paper, we propose a fully automated and lightweight adaptive batching methodology to enable fine-grained batch size adaptation (e.g., at a mini-batch level) that can achieve state-of-the-art performance with record-breaking batch sizes.
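
A minimal sketch of the gradient-similarity signal behind such adaptive batching; the helper names and the doubling/halving rule are assumptions, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def grad_cosine(model, loss_fn, batch_a, batch_b):
        # Cosine similarity between gradients of two halves of a batch:
        # high similarity means the gradient estimate is stable, so the
        # batch size can grow; low similarity suggests shrinking it.
        def flat_grad(batch):
            model.zero_grad()
            inputs, targets = batch
            loss_fn(model(inputs), targets).backward()
            return torch.cat([p.grad.flatten() for p in model.parameters()
                              if p.grad is not None])
        sim = F.cosine_similarity(flat_grad(batch_a), flat_grad(batch_b), dim=0)
        model.zero_grad()
        return sim.item()

    def adapt_batch_size(batch_size, similarity, target=0.5,
                         min_bs=32, max_bs=65536):
        # Simple doubling/halving rule around a target similarity.
        return (min(batch_size * 2, max_bs) if similarity > target
                else max(batch_size // 2, min_bs))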

Automated Mobile Attention KPConv Networks via A Wide & Deep Predictor

no code implementations • 29 Sep 2021 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen

MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power.
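
A rough sketch of that re-calibration step; the snippet does not give the exact attention design, so the dot-product affinity and tensor shapes below are assumptions.

    import torch

    def neighbor_kernel_attention(neighbor_feats, kernel_feats):
        # neighbor_feats: (N, C) features of neighbor points
        # kernel_feats:   (K, C) features associated with kernel points
        # A softmax over kernel points re-weights each kernel point's
        # contribution to each neighbor point.
        affinity = neighbor_feats @ kernel_feats.t()   # (N, K)
        weights = torch.softmax(affinity, dim=1)       # (N, K)
        return weights @ kernel_feats                  # (N, C) re-calibrated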

3D Point Cloud Classification · Feature Engineering +2

Demystifying Hyperparameter Optimization in Federated Learning

no code implementations • 29 Sep 2021 • Syed Zawad, Jun Yi, Minjia Zhang, Cheng Li, Feng Yan, Yuxiong He

Such data heterogeneity and privacy requirements bring unique challenges for hyperparameter optimization: training dynamics change across clients even within the same training round, and those dynamics are difficult to measure due to privacy constraints.

Federated Learning · Hyperparameter Optimization +1

Citadel: Protecting Data Privacy and Model Confidentiality for Collaborative Learning with SGX

no code implementations • 4 May 2021 • Chengliang Zhang, Junzhe Xia, Baichen Yang, Huancheng Puyang, Wei Wang, Ruichuan Chen, Istemi Ekin Akkus, Paarijaat Aditya, Feng Yan

This paper presents Citadel, a scalable collaborative ML system that protects the privacy of both the data owner and the model owner in untrusted infrastructures with the help of Intel SGX.

Federated Learning

The Age of Correlated Features in Supervised Learning based Forecasting

no code implementations • 27 Feb 2021 • MD Kamran Chowdhury Shisher, Heyang Qin, Lei Yang, Feng Yan, Yin Sun

In these applications, a neural network is trained to predict a time-varying target (e.g., solar power) based on multiple correlated features (e.g., temperature, humidity, and cloud coverage).

Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning

no code implementations • 1 Feb 2021 • Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, Feng Yan

Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.

Federated Learning

NASGEM: Neural Architecture Search via Graph Embedding Method

no code implementations • 8 Jul 2020 • Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen

To preserve graph correlation information in encoding, we propose NASGEM, which stands for Neural Architecture Search via Graph Embedding Method.

Graph Embedding · Graph Similarity +3

Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification

1 code implementation • 20 Apr 2020 • Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen

In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
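
A minimal sketch of the idea: keep each layer in factored form $U \, \mathrm{diag}(s) \, V^\top$, encourage $U$ and $V$ to stay orthogonal with a regularizer, and sparsify the singular values $s$ with an L1 penalty so the rank can be truncated after training. Penalty weights here are placeholders; see the paper for the exact formulation.

    import torch
    import torch.nn as nn

    class SVDLinear(nn.Module):
        """Linear layer parameterized as U @ diag(s) @ V^T."""
        def __init__(self, in_features, out_features, rank):
            super().__init__()
            self.U = nn.Parameter(torch.randn(out_features, rank) * 0.1)
            self.s = nn.Parameter(torch.ones(rank))
            self.V = nn.Parameter(torch.randn(in_features, rank) * 0.1)

        def forward(self, x):
            return (x @ self.V) * self.s @ self.U.t()

        def regularizer(self, ortho_w=1.0, sparse_w=1e-4):
            # Orthogonality penalty on singular vectors plus L1 on
            # singular values; add this to the task loss during training.
            eye = torch.eye(self.U.shape[1], device=self.U.device)
            ortho = ((self.U.t() @ self.U - eye) ** 2).sum() \
                  + ((self.V.t() @ self.V - eye) ** 2).sum()
            return ortho_w * ortho + sparse_w * self.s.abs().sum()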

TiFL: A Tier-based Federated Learning System

no code implementations • 25 Jan 2020 • Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng

To this end, we propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resources and data quantity.
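
A minimal sketch of the core grouping-and-selection step; TiFL additionally adapts how often each tier is chosen based on observed accuracy, which this sketch omits.

    import random

    def assign_tiers(client_latencies, num_tiers=3):
        # Group clients into tiers by observed round latency (fast to slow).
        ranked = sorted(client_latencies, key=client_latencies.get)
        size = max(1, len(ranked) // num_tiers)
        tiers = [ranked[i * size:(i + 1) * size] for i in range(num_tiers - 1)]
        tiers.append(ranked[(num_tiers - 1) * size:])
        return tiers

    def select_clients(tiers, clients_per_round):
        # All participants of a round come from one tier, so fast clients
        # never wait on a straggler from a slower tier.
        tier = random.choice([t for t in tiers if t])
        return random.sample(tier, min(clients_per_round, len(tier)))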

Federated Learning

AutoShrink: A Topology-aware NAS for Discovering Efficient Neural Architecture

1 code implementation • 21 Nov 2019 • Tunhou Zhang, Hsin-Pai Cheng, Zhenwen Li, Feng Yan, Chengyu Huang, Hai Li, Yiran Chen

Specifically, both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.

Image Classification · Neural Architecture Search

EPNAS: Efficient Progressive Neural Architecture Search

no code implementations • 7 Jul 2019 • Yanqi Zhou, Peng Wang, Sercan Arik, Haonan Yu, Syed Zawad, Feng Yan, Greg Diamos

In this paper, we propose Efficient Progressive Neural Architecture Search (EPNAS), a neural architecture search (NAS) method that efficiently handles a large search space through a novel progressive search policy with performance prediction based on REINFORCE (Williams, 1992).
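
For reference, a generic single-choice REINFORCE update for an architecture controller (the textbook rule only; EPNAS's progressive policy and performance predictor are described in the paper):

    import torch

    # Hypothetical setup: learnable logits over 5 candidate operations.
    logits = torch.nn.Parameter(torch.zeros(5))
    optimizer = torch.optim.Adam([logits], lr=0.05)

    def sample_choice():
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

    def reinforce_step(log_prob, reward, baseline):
        # Push up the log-probability of the sampled choice in
        # proportion to its advantage (reward - baseline).
        loss = -(reward - baseline) * log_prob
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Usage: action, lp = sample_choice(); evaluate the sampled
    # architecture to get a reward, then call reinforce_step(lp, reward, b).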

Neural Architecture Search

SwiftNet: Using Graph Propagation as Meta-knowledge to Search Highly Representative Neural Architectures

1 code implementation • 19 Jun 2019 • Hsin-Pai Cheng, Tunhou Zhang, Yukun Yang, Feng Yan, Shi-Yu Li, Harris Teague, Hai Li, Yiran Chen

Designing neural architectures for edge devices is subject to constraints of accuracy, inference latency, and computational cost.

Neural Architecture Search

LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

no code implementations • 27 Nov 2018 • Hsin-Pai Cheng, Patrick Yu, Haojing Hu, Feng Yan, Shi-Yu Li, Hai Li, Yiran Chen

Distributed learning systems have enabled training large-scale models over large amounts of data in significantly shorter time.

Privacy Preserving

Differentiable Fine-grained Quantization for Deep Neural Network Compression

1 code implementation • NIPS Workshop CDNNRIA 2018 • Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei HUANG, Feng Yan, Hai Li, Yiran Chen

Thus, judiciously selecting different precisions for different layers/structures can potentially produce more efficient models than traditional quantization methods by striking a better balance between accuracy and compression rate.
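
One way to make the per-layer precision choice differentiable is a soft mixture over candidate bit widths, so the selection receives gradients; this is a sketch of the general idea, and the paper's exact relaxation and granularity may differ.

    import torch
    import torch.nn as nn

    def quantize_uniform(w, num_bits):
        # Uniform symmetric quantization of a tensor to num_bits.
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    class SoftPrecisionWeight(nn.Module):
        # Effective weight is a learned softmax mixture over quantized
        # copies at different precisions; straight-through rounding lets
        # gradients reach both the weight and the mixture logits.
        def __init__(self, weight, candidate_bits=(2, 4, 8)):
            super().__init__()
            self.weight = nn.Parameter(weight)
            self.bits = candidate_bits
            self.logits = nn.Parameter(torch.zeros(len(candidate_bits)))

        def forward(self):
            probs = torch.softmax(self.logits, dim=0)
            qs = [self.weight +
                  (quantize_uniform(self.weight, b) - self.weight).detach()
                  for b in self.bits]
            return sum(p * q for p, q in zip(probs, qs))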

Neural Network Compression · Quantization

SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning

1 code implementation • 21 May 2018 • Wei Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, Hai Li

It remains an open question whether escaping sharp minima can improve generalization.
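
A minimal sketch of a perturb-and-average update in the spirit of SmoothOut, which optimizes a locally smoothed loss to bias training away from sharp minima; the noise distribution, radius, and sample count below are assumptions, not the paper's exact algorithm.

    import torch

    def smoothout_step(model, loss_fn, batch, optimizer,
                       noise_radius=0.01, num_samples=4):
        inputs, targets = batch
        optimizer.zero_grad()
        params = [p for p in model.parameters() if p.requires_grad]
        originals = [p.detach().clone() for p in params]
        for _ in range(num_samples):
            # Perturb the weights with uniform noise in [-r, r].
            for p, p0 in zip(params, originals):
                p.data = p0 + torch.empty_like(p0).uniform_(-noise_radius,
                                                            noise_radius)
            # Gradients accumulate across perturbed copies (averaged).
            (loss_fn(model(inputs), targets) / num_samples).backward()
        for p, p0 in zip(params, originals):
            p.data = p0  # restore unperturbed weights before stepping
        optimizer.step()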

EigenNet: A Bayesian hybrid of generative and conditional models for sparse learning

no code implementations • NeurIPS 2011 • Feng Yan, Yuan Qi

To overcome this limitation, we present a novel hybrid model, EigenNet, that uses the eigenstructures of data to guide variable selection.

Sparse Learning · Variable Selection

Parallel Inference for Latent Dirichlet Allocation on Graphics Processing Units

no code implementations • NeurIPS 2009 • Feng Yan, Ningyi Xu, Yuan Qi

Extensive experiments showed that our parallel inference methods consistently produced LDA models with the same predictive power as sequential training methods, but with a 26x speedup for CGS and a 196x speedup for CVB on a GPU with 30 multiprocessors; moreover, the speedup scales almost linearly with the number of multiprocessors available.
