Search Results for author: Yuecheng Li

Found 10 papers, 4 papers with code

Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off

no code implementations • 10 Feb 2024 • Yuecheng Li, Tong Wang, Chuan Chen, Jian Lou, Bin Chen, Lei Yang, Zibin Zheng

This implies that our FedCEO can effectively recover the disrupted semantic information by smoothing the global semantic space, across different privacy settings and throughout continuous training.

Federated Learning
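
For context on the setting above: differentially private federated learning typically clips each client's update and adds Gaussian noise before server aggregation, and that noise is the disruption the paper's smoothing then has to compensate for. A minimal generic sketch (not the paper's FedCEO algorithm; `clip` and `sigma` are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

def private_update(update, clip=1.0, sigma=0.5):
    # Clip to bound each client's sensitivity, then add Gaussian noise.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

client_updates = [rng.normal(size=10) for _ in range(5)]  # toy client updates
noisy = [private_update(u) for u in client_updates]
global_update = np.mean(noisy, axis=0)  # server-side aggregation of noisy updates
```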

Community-Aware Efficient Graph Contrastive Learning via Personalized Self-Training

no code implementations • 18 Nov 2023 • Yuecheng Li, YanMing Hu, Lele Fu, Chuan Chen, Lei Yang, Zibin Zheng

However, for unsupervised and structure-related tasks such as community detection, current GCL algorithms face difficulties in acquiring the necessary community-level information, resulting in poor performance.

Community Detection • Contrastive Learning • +1

Contrastive Deep Nonnegative Matrix Factorization for Community Detection

1 code implementation • 4 Nov 2023 • Yuecheng Li, Jialong Chen, Chuan Chen, Lei Yang, Zibin Zheng

Recently, nonnegative matrix factorization (NMF) has been widely adopted for community detection because of its strong interpretability.

Community Detection • Contrastive Learning • +3
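
As background for the approach above, a minimal sketch of plain NMF-based community detection (not the paper's contrastive deep variant): factorize the adjacency matrix A ≈ WH and assign each node to its strongest factor; the toy graph and k are illustrative:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy graph: two 4-node cliques joined by one bridge edge (hypothetical data).
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0  # bridge edge between the two cliques

k = 2  # number of communities, assumed known here
model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(A)   # node-by-community membership strengths
labels = W.argmax(axis=1)    # hard community assignment per node
print(labels)                # e.g. [0 0 0 0 1 1 1 1] (up to label permutation)
```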

CLR: Channel-wise Lightweight Reprogramming for Continual Learning

1 code implementation • ICCV 2023 • Yunhao Ge, Yuecheng Li, Shuo Ni, Jiaping Zhao, Ming-Hsuan Yang, Laurent Itti

Reprogramming parameters are task-specific and exclusive to each task, which makes our method immune to catastrophic forgetting.

Continual Learning • Image Classification
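
A minimal PyTorch sketch of the reprogramming idea described above: a frozen shared backbone plus small, task-exclusive channel-wise parameters and a classifier head; the layer placement and sizes are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ReprogrammedNet(nn.Module):
    def __init__(self, backbone: nn.Module, channels: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False              # shared backbone stays frozen
        # Task-specific, task-exclusive parameters: a cheap channel remix + head.
        self.reprogram = nn.Conv2d(channels, channels, kernel_size=1)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                   # (B, C, H, W) feature map
        feats = self.reprogram(feats)              # channel-wise reprogramming
        return self.head(feats.mean(dim=(2, 3)))  # global average pool + classify

# Toy usage: one frozen backbone, separate reprogramming params per task,
# so training a new task cannot overwrite an old task's parameters.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
task_a = ReprogrammedNet(backbone, channels=16, num_classes=10)
out = task_a(torch.randn(2, 3, 32, 32))            # shape: (2, 10)
```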

Lightweight Learner for Shared Knowledge Lifelong Learning

1 code implementation • 24 May 2023 • Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti

We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.

Image Classification
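
A minimal sketch of the sharing protocol described above, assuming agents learn different tasks in parallel and publish compact task modules to a shared pool; the registry, module contents, and task names are all hypothetical stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

registry = {}  # task name -> learned task module (the shared knowledge pool)

def learn_task(task_name):
    # Stand-in for one agent training independently on its own task.
    module = {"task": task_name, "params": f"weights-for-{task_name}"}
    registry[task_name] = module           # publish to the shared pool
    return task_name

tasks = [f"task-{i}" for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(learn_task, tasks))      # agents run in parallel

# Any agent can now handle any task by looking up the shared module.
print(sorted(registry))                    # ['task-0', 'task-1', 'task-2', 'task-3']
```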

Auto-CARD: Efficient and Robust Codec Avatar Driving for Real-time Mobile Telepresence

no code implementations • CVPR 2023 • Yonggan Fu, Yuecheng Li, Chenghui Li, Jason Saragih, Peizhao Zhang, Xiaoliang Dai, Yingyan Lin

Real-time and robust photorealistic avatars have been highly desired for enabling immersive telepresence in AR/VR.

Neural Architecture Search

DRESS: Dynamic REal-time Sparse Subnets

no code implementations • 1 Jul 2022 • Zhongnan Qu, Syed Shakib Sarwar, Xin Dong, Yuecheng Li, Ekin Sumbul, Barbara De Salvo

The limited and dynamically varied resources on edge devices motivate us to deploy an optimized deep neural network that can adapt its sub-networks to fit in different resource constraints.
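
A minimal sketch of the adaptive sub-network idea described above, assuming one shared dense weight tensor, magnitude-based masks at a few preset sparsity ratios, and a budget-driven switch at inference time; DRESS's actual training and mask construction differ:

```python
import torch

weight = torch.randn(256, 256)        # one set of shared dense weights
sparsities = [0.9, 0.7, 0.5]          # preset fractions of weights zeroed

def make_mask(w, sparsity):
    k = int(w.numel() * sparsity)
    threshold = w.abs().flatten().kthvalue(k).values
    return (w.abs() > threshold).float()   # keep the largest-magnitude weights

masks = {s: make_mask(weight, s) for s in sparsities}

def forward(x, budget):
    # Pick the densest subnet whose density (1 - sparsity) fits the budget.
    s = min(sp for sp in sparsities if 1.0 - sp <= budget)
    return x @ (weight * masks[s]).t()

y = forward(torch.randn(8, 256), budget=0.35)  # selects the 70%-sparse subnet
```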

Power-of-Two Quantization for Low Bitwidth and Hardware Compliant Neural Networks

no code implementations • 9 Mar 2022 • Dominika Przewlocka-Rus, Syed Shakib Sarwar, H. Ekin Sumbul, Yuecheng Li, Barbara De Salvo

The experiments showed that at low bit widths, non-uniform quantization performs better than uniform quantization, and that PoT quantization vastly reduces the computational complexity of the neural network.

Quantization
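
A minimal sketch of power-of-two (PoT) weight quantization as discussed above: each weight is rounded to the nearest signed power of two (in log space), so hardware can replace multiplications with bit shifts; the exponent range here is an assumed choice:

```python
import numpy as np

def pot_quantize(w, min_exp=-6, max_exp=0):
    sign = np.sign(w)
    mag = np.abs(w)
    # Nearest power of two in log space, clipped to the allowed exponent range.
    exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0**min_exp))), min_exp, max_exp)
    q = sign * 2.0**exp
    # Weights below the smallest representable magnitude snap to zero.
    return np.where(mag < 2.0**(min_exp - 1), 0.0, q)

w = np.array([0.27, -0.8, 0.031, -0.002])
print(pot_quantize(w))   # -> [ 0.25  -1.  0.03125  0. ]
```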

Pixel Codec Avatars

1 code implementation • CVPR 2021 • Shugao Ma, Tomas Simon, Jason Saragih, Dawei Wang, Yuecheng Li, Fernando de la Torre, Yaser Sheikh

Telecommunication with photorealistic avatars in virtual or augmented reality is a promising path for achieving authentic face-to-face communication in 3D over remote physical distances.

F-CAD: A Framework to Explore Hardware Accelerators for Codec Avatar Decoding

no code implementations • 8 Mar 2021 • Xiaofan Zhang, Dawei Wang, Pierce Chuang, Shugao Ma, Deming Chen, Yuecheng Li

Creating virtual avatars with realistic rendering is one of the most essential and challenging tasks to provide highly immersive virtual reality (VR) experiences.
