Search Results for author: Yawen Wu

Found 21 papers, 1 paper with code

EdgeOL: Efficient in-situ Online Learning on Edge Devices

no code implementations30 Jan 2024 Sheng Li, Geng Yuan, Yawen Wu, Yue Dai, Chao Wu, Alex K. Jones, Jingtong Hu, Yanzhi Wang, Xulong Tang

Emerging applications, such as robot-assisted eldercare and object recognition, generally employ deep neural networks (DNNs) and naturally require: i) handling streaming-in inference requests and ii) adapting to possible deployment scenario changes.

Object Recognition

Achieve Fairness without Demographics for Dermatological Disease Diagnosis

no code implementations16 Jan 2024 Ching-Hao Chiu, Yu-Jen Chen, Yawen Wu, Yiyu Shi, Tsung-Yi Ho

To overcome this, we propose a method enabling fair predictions for sensitive attributes during the testing phase without using such information during training.

Attribute, Fairness

Additional Positive Enables Better Representation Learning for Medical Images

no code implementations31 May 2023 Dewen Zeng, Yawen Wu, Xinrong Hu, Xiaowei Xu, Jingtong Hu, Yiyu Shi

This paper presents a new way to identify additional positive pairs for BYOL, a state-of-the-art (SOTA) self-supervised learning framework, to improve its representation learning ability.
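One plausible way to identify such additional positive pairs (an illustrative sketch, not necessarily the paper's exact method) is nearest-neighbor search in embedding space, treating each sample's closest neighbors as extra positives:

```python
import numpy as np

def additional_positives(embeddings, k=1):
    """For each sample, pick its k nearest neighbors (by cosine
    similarity) as extra positive pairs for BYOL-style training."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T
    np.fill_diagonal(sim, -np.inf)            # a sample is not its own extra positive
    return np.argsort(-sim, axis=1)[:, :k]    # indices of the top-k neighbors

emb = np.array([[1.0, 0.0],
                [0.9, 0.1],
                [0.0, 1.0],
                [0.1, 0.9]])
print(additional_positives(emb).ravel())  # [1 0 3 2]: similar directions pair up
```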

Representation Learning, Self-Supervised Learning, +1

Enabling Weakly-Supervised Temporal Action Localization from On-Device Learning of the Video Stream

no code implementations25 Aug 2022 Yue Tang, Yawen Wu, Peipei Zhou, Jingtong Hu

To enable W-TAL models to learn from a long, untrimmed streaming video, we propose an efficient video learning approach that can directly adapt to new environments.

Action Detection, Weakly-supervised Temporal Action Localization, +1

Federated Self-Supervised Contrastive Learning and Masked Autoencoder for Dermatological Disease Diagnosis

no code implementations24 Aug 2022 Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J. James, Yiyu Shi, Jingtong Hu

Self-supervised learning (SSL) methods such as contrastive learning (CL) and masked autoencoders (MAE) can leverage unlabeled data to pre-train models, followed by fine-tuning with limited labels.
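The contrastive objective referred to here is typically InfoNCE; a minimal numpy sketch (the function name and temperature value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE: each anchor must match its own positive against
    all other positives in the batch, which act as negatives."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True) # stabilize the softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # diagonal = true pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(z, z)                           # identical views
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))     # unrelated views
print(loss_aligned < loss_random)  # aligned pairs give a lower loss
```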

Contrastive Learning, Federated Learning, +1

Achieving Fairness in Dermatological Disease Diagnosis through Automatic Weight Adjusting Federated Learning and Personalization

no code implementations23 Aug 2022 Gelei Xu, Yawen Wu, Jingtong Hu, Yiyu Shi

The framework is divided into two stages: in the first in-FL stage, clients with different skin types are trained in a federated learning process to construct a global model for all skin types.

Fairness, Federated Learning

Distributed Contrastive Learning for Medical Image Segmentation

no code implementations7 Aug 2022 Yawen Wu, Dewen Zeng, Zhepeng Wang, Yiyu Shi, Jingtong Hu

However, when adopting CL in FL, the limited data diversity on each site makes federated contrastive learning (FCL) ineffective.

Contrastive Learning, Federated Learning, +4

Federated Contrastive Learning for Volumetric Medical Image Segmentation

no code implementations23 Apr 2022 Yawen Wu, Dewen Zeng, Zhepeng Wang, Yiyu Shi, Jingtong Hu

However, in medical imaging analysis, each site may only have a limited amount of data and labels, which makes learning ineffective.

Contrastive Learning, Federated Learning, +4

FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis

no code implementations4 Mar 2022 Yawen Wu, Dewen Zeng, Xiaowei Xu, Yiyu Shi, Jingtong Hu

By pruning the parameters based on this importance difference, we can reduce the accuracy difference between the privileged group and the unprivileged group to improve fairness without a large accuracy drop.
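The pruning rule described above can be sketched as follows; the per-parameter importance scores and group labels here are illustrative inputs, not the paper's exact saliency formulation:

```python
import numpy as np

def fairness_prune_mask(imp_privileged, imp_unprivileged, prune_ratio=0.2):
    """Prune the parameters whose importance to the privileged group
    most exceeds their importance to the unprivileged group, shrinking
    the accuracy gap between the two groups."""
    diff = imp_privileged - imp_unprivileged   # large -> mainly serves privileged group
    k = int(prune_ratio * diff.size)
    cutoff = np.sort(diff)[-k]                 # the k largest differences are pruned
    return diff < cutoff                       # True = keep this parameter

rng = np.random.default_rng(1)
imp_a = rng.random(100)   # importance per parameter, privileged group
imp_b = rng.random(100)   # importance per parameter, unprivileged group
mask = fairness_prune_mask(imp_a, imp_b)
print(mask.sum())  # 80: with prune_ratio=0.2, 20 of 100 parameters are pruned
```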

Fairness, Image Classification, +1

The Larger The Fairer? Small Neural Networks Can Achieve Fairness for Edge Devices

no code implementations23 Feb 2022 Yi Sheng, Junhuan Yang, Yawen Wu, Kevin Mao, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang

Results show that FaHaNa can identify a series of neural networks with higher fairness and accuracy on a dermatology dataset.

Face Recognition, Fairness, +2

Federated Contrastive Learning for Dermatological Disease Diagnosis via On-device Learning

no code implementations14 Feb 2022 Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J. James, Yiyu Shi, Jingtong Hu

The recently developed self-supervised learning approach, contrastive learning (CL), can leverage the unlabeled data to pre-train a model, after which the model is fine-tuned on limited labeled data for dermatological disease diagnosis.

Contrastive Learning, Federated Learning, +1

Synthetic Data Can Also Teach: Synthesizing Effective Data for Unsupervised Visual Representation Learning

no code implementations14 Feb 2022 Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

To tackle this problem, we propose a data generation framework with two methods to improve CL training by joint sample generation and contrastive learning.

Contrastive Learning, Representation Learning, +2

Decentralized Unsupervised Learning of Visual Representations

no code implementations21 Nov 2021 Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu

To tackle this problem, we propose a collaborative contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching, by which a unified feature space among clients is learned for better data representations.

Contrastive Learning, Federated Learning, +2

Federated Contrastive Representation Learning with Feature Fusion and Neighborhood Matching

no code implementations29 Sep 2021 Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu

Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.
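The aggregation step behind such a shared model is typically federated averaging (FedAvg); a minimal sketch with toy one-dimensional "models" (client weights and dataset sizes are made up for illustration):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model weights into a global model,
    weighting each client by its local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# three clients; the raw data never leaves them, only weights are shared
w1, w2, w3 = np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([5.0, 5.0])
global_w = fedavg([w1, w2, w3], client_sizes=[10, 10, 20])
print(global_w)  # [3.5 3.5]: the larger client pulls the average toward itself
```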

Contrastive Learning, Federated Learning, +2

Data-Efficient Contrastive Learning by Differentiable Hard Sample and Hard Positive Pair Generation

no code implementations29 Sep 2021 Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

In this way, the main model learns to cluster hard positives by pulling the representations of similar yet distinct samples together, so that similar samples are well clustered and better representations are learned.

Contrastive Learning, Self-Supervised Learning

Enabling On-Device Self-Supervised Contrastive Learning With Selective Data Contrast

no code implementations7 Jun 2021 Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

After a model is deployed on edge devices, it is desirable for these devices to learn from unlabeled data to continuously improve accuracy.

Contrastive Learning

Enabling Efficient On-Device Self-supervised Contrastive Learning by Data Selection

no code implementations1 Jan 2021 Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

In this paper, we propose a framework to automatically select the most representative data from an unlabeled input stream on-the-fly, which only requires a small data buffer for dynamic learning.
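A small-buffer selection loop of this kind can be sketched with a generic diversity heuristic (the paper's actual selection criterion is its own; this admission/eviction rule is an illustrative assumption):

```python
import numpy as np

def update_buffer(buffer, sample, capacity=4):
    """Keep a small, diverse buffer of streaming samples: when full,
    evict the most redundant entry (the one most similar, on average,
    to everything else in the candidate set)."""
    buffer = list(buffer)
    if len(buffer) < capacity:
        return buffer + [sample]
    cand = buffer + [sample]
    z = np.stack(cand)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T
    np.fill_diagonal(sim, 0.0)
    redundancy = sim.sum(axis=1)        # high = similar to the rest of the buffer
    cand.pop(int(np.argmax(redundancy)))
    return cand

rng = np.random.default_rng(2)
buf = []
for _ in range(10):                     # simulate a 10-sample input stream
    buf = update_buffer(buf, rng.normal(size=8))
print(len(buf))  # 4: the buffer never exceeds its capacity
```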

Contrastive Learning

Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning

no code implementations7 Jul 2020 Yawen Wu, Zhepeng Wang, Yiyu Shi, Jingtong Hu

For example, when training ResNet-110 on CIFAR-10, we achieve 68% computation saving while preserving full accuracy and 75% computation saving with a marginal accuracy loss of 1.3%.

Quantization

Intermittent Inference with Nonuniformly Compressed Multi-Exit Neural Network for Energy Harvesting Powered Devices

no code implementations23 Apr 2020 Yawen Wu, Zhepeng Wang, Zhenge Jia, Yiyu Shi, Jingtong Hu

This work aims to enable persistent, event-driven sensing and decision capabilities for energy-harvesting (EH) powered devices by deploying lightweight DNNs onto them.
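A multi-exit DNN of the kind described serves inference by stopping at the first intermediate classifier that is confident enough, saving energy on easy inputs; a hedged sketch (the exit functions and threshold below are illustrative, not the paper's):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_exit_infer(x, exits, threshold=0.8):
    """Run exit classifiers in order of depth; return the prediction
    of the first exit whose top-class confidence clears the threshold."""
    for depth, exit_fn in enumerate(exits, start=1):
        probs = softmax(exit_fn(x))
        if probs.max() >= threshold:
            return int(np.argmax(probs)), depth
    return int(np.argmax(probs)), depth     # fall through to the final exit

# two toy exits: the first is already confident for this input
exits = [lambda x: np.array([4.0, 0.0]),    # early exit: confident logits
         lambda x: np.array([0.1, 0.0])]    # final exit (not reached here)
pred, depth = multi_exit_infer(np.zeros(3), exits)
print(pred, depth)  # 0 1: the early exit fired at depth 1
```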
