Search Results for author: Jingtong Hu

Found 40 papers, 6 papers with code

Data-Algorithm-Architecture Co-Optimization for Fair Neural Networks on Skin Lesion Dataset

no code implementations • 18 Jul 2024 • Yi Sheng, Junhuan Yang, Jinyang Li, Alaina J. James, Xiaowei Xu, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang

As Artificial Intelligence (AI) increasingly integrates into our daily lives, fairness has emerged as a critical concern, particularly in medical AI, where datasets often reflect inherent biases due to social factors like the underrepresentation of marginalized communities and socioeconomic barriers to data collection.

Data Augmentation Fairness +1

Achieving Fairness Through Channel Pruning for Dermatological Disease Diagnosis

1 code implementation • 14 May 2024 • Qingpeng Kong, Ching-Hao Chiu, Dewen Zeng, Yu-Jen Chen, Tsung-Yi Ho, Jingtong Hu, Yiyu Shi

Numerous studies have revealed that deep learning-based medical image classification models may exhibit bias towards specific demographic attributes, such as race, gender, and age.

Fairness Image Classification +1

ETuner: A Redundancy-Aware Framework for Efficient Continual Learning Application on Edge Devices

no code implementations • 30 Jan 2024 • Sheng Li, Geng Yuan, Yawen Wu, Yue Dai, Tianyu Wang, Chao Wu, Alex K. Jones, Jingtong Hu, Yanzhi Wang, Xulong Tang

Many emerging applications, such as robot-assisted eldercare and object recognition, generally employ deep neural networks (DNNs) and require the deployment of DNN models on edge devices.

Continual Learning Object Recognition

A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models

1 code implementation • 4 Jan 2024 • Rui Ma, Qiang Zhou, Yizhu Jin, Daquan Zhou, Bangjun Xiao, Xiuyu Li, Yi Qu, Aishani Singh, Kurt Keutzer, Jingtong Hu, Xiaodong Xie, Zhen Dong, Shanghang Zhang, Shiji Zhou

Notably, models like Stable Diffusion, which excel in text-to-image synthesis, heighten the risk of copyright infringement and unauthorized distribution. Machine unlearning, which seeks to eradicate the influence of specific data or concepts from machine learning models, emerges as a promising solution by eliminating the "copyright memories" ingrained in diffusion models.

Text-to-Image Generation

Enabling On-Device Large Language Model Personalization with Self-Supervised Data Selection and Synthesis

no code implementations • 21 Nov 2023 • Ruiyang Qin, Jun Xia, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Peipei Zhou, Jingtong Hu, Yiyu Shi

While it is possible to obtain annotations locally by directly asking users to provide preferred responses, such annotations have to be sparse so as not to affect the user experience.

Language Modelling Large Language Model

Additional Positive Enables Better Representation Learning for Medical Images

no code implementations • 31 May 2023 • Dewen Zeng, Yawen Wu, Xinrong Hu, Xiaowei Xu, Jingtong Hu, Yiyu Shi

This paper presents a new way to identify additional positive pairs for BYOL, a state-of-the-art (SOTA) self-supervised learning framework, to improve its representation learning ability.

Representation Learning Self-Supervised Learning +1
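
A minimal sketch of the "additional positive pairs" idea, assuming a BYOL-style setup: `online` (encoder + projector + predictor) and `target` (its EMA copy) are placeholders, and the rule for picking the extra positive (e.g., another image of the same patient) is an assumption, since the excerpt does not state the paper's selection criterion.

```python
import torch
import torch.nn.functional as F

def byol_loss(p, z):
    """Negative cosine similarity between online prediction p and target projection z."""
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return 2 - 2 * (p * z.detach()).sum(dim=-1)

def loss_with_extra_positive(online, target, view1, view2, extra_pos):
    # Standard BYOL term: two augmented views of the same image.
    loss = byol_loss(online(view1), target(view2)).mean()
    # Additional positive term: a different but related image
    # (hypothetically, same patient or lesion) treated as another positive.
    loss = loss + byol_loss(online(view1), target(extra_pos)).mean()
    return loss
```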

BiTrackGAN: Cascaded CycleGANs to Constraint Face Aging

no code implementations • 22 Apr 2023 • Tsung-Han Kuo, Zhenge Jia, Tei-Wei Kuo, Jingtong Hu

With the increased accuracy of modern computer vision technology, many access control systems are equipped with face recognition functions for faster identification.

Face Recognition Generative Adversarial Network +2

Development of A Real-time POCUS Image Quality Assessment and Acquisition Guidance System

no code implementations • 16 Dec 2022 • Zhenge Jia, Yiyu Shi, Jingtong Hu, Lei Yang, Benjamin Nti

Point-of-care ultrasound (POCUS) is one of the most commonly applied tools for cardiac function imaging in the clinical routine of the emergency department and pediatric intensive care unit.

Image Quality Assessment

Enabling Weakly-Supervised Temporal Action Localization from On-Device Learning of the Video Stream

no code implementations • 25 Aug 2022 • Yue Tang, Yawen Wu, Peipei Zhou, Jingtong Hu

To enable W-TAL models to learn from a long, untrimmed streaming video, we propose an efficient video learning approach that can directly adapt to new environments.

Action Detection Weakly-supervised Temporal Action Localization +1

Federated Self-Supervised Contrastive Learning and Masked Autoencoder for Dermatological Disease Diagnosis

no code implementations • 24 Aug 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J. James, Yiyu Shi, Jingtong Hu

Self-supervised learning (SSL) methods, such as contrastive learning (CL) and masked autoencoders (MAE), can leverage unlabeled data to pre-train models, followed by fine-tuning with limited labels.

Contrastive Learning Federated Learning +1

Achieving Fairness in Dermatological Disease Diagnosis through Automatic Weight Adjusting Federated Learning and Personalization

no code implementations • 23 Aug 2022 • Gelei Xu, Yawen Wu, Jingtong Hu, Yiyu Shi

The framework is divided into two stages: In the first in-FL stage, clients with different skin types are trained in a federated learning process to construct a global model for all skin types.

Fairness Federated Learning
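
As a rough illustration of automatic weight adjusting, the sketch below reweights federated averaging toward clients (skin-type groups) whose validation accuracy lags; the softmax rule and function names are assumptions, not the paper's actual update.

```python
import torch

def aggregate(client_states, client_accs, temperature=1.0):
    """Weighted average of client state dicts; lower accuracy -> higher weight."""
    accs = torch.tensor(client_accs, dtype=torch.float)
    # Softmax over negative accuracy: underperforming clients pull harder.
    weights = torch.softmax(-accs / temperature, dim=0)
    global_state = {}
    for key in client_states[0]:
        stacked = torch.stack([s[key].float() for s in client_states])
        shape = (-1,) + (1,) * (stacked.dim() - 1)
        global_state[key] = (weights.view(shape) * stacked).sum(dim=0)
    return global_state
```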

Distributed Contrastive Learning for Medical Image Segmentation

no code implementations • 7 Aug 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yiyu Shi, Jingtong Hu

However, when adopting CL in FL, the limited data diversity on each site makes federated contrastive learning (FCL) ineffective.

Contrastive Learning Federated Learning +4

Sustainable AI Processing at the Edge

no code implementations • 4 Jul 2022 • Sébastien Ollivier, Sheng Li, Yue Tang, Chayanika Chaudhuri, Peipei Zhou, Xulong Tang, Jingtong Hu, Alex K. Jones

In particular, we explore the use of processing-in-memory (PIM) approaches, mobile GPU accelerators, and recently released FPGAs, and compare them with novel Racetrack memory PIM.

BIG-bench Machine Learning Edge-computing

H2H: Heterogeneous Model to Heterogeneous System Mapping with Computation and Communication Awareness

1 code implementation • 29 Apr 2022 • Xinyi Zhang, Cong Hao, Peipei Zhou, Alex Jones, Jingtong Hu

The heterogeneity in ML models comes from multi-sensor perceiving and multi-task learning, i.e., multi-modality multi-task (MMMT), resulting in diverse deep neural network (DNN) layers and computation patterns.

Multi-Task Learning

Federated Contrastive Learning for Volumetric Medical Image Segmentation

no code implementations • 23 Apr 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yiyu Shi, Jingtong Hu

However, in medical imaging analysis, each site may only have a limited amount of data and labels, which makes learning ineffective.

Contrastive Learning Federated Learning +4

FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis

no code implementations • 4 Mar 2022 • Yawen Wu, Dewen Zeng, Xiaowei Xu, Yiyu Shi, Jingtong Hu

By pruning the parameters based on this importance difference, we can reduce the accuracy difference between the privileged group and the unprivileged group to improve fairness without a large accuracy drop.

Fairness Image Classification +1
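
A minimal sketch of pruning by per-group importance difference, in the spirit of the abstract above. Importance is approximated here by the squared gradient of each group's loss (the paper's exact saliency metric is not given in this excerpt), and parameters whose importance skews most toward the privileged group are zeroed out.

```python
import torch

def group_importance(model, loss):
    """Squared-gradient importance of each parameter for one group's loss."""
    model.zero_grad()
    loss.backward(retain_graph=True)
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def prune_by_difference(model, loss_privileged, loss_unprivileged, ratio=0.05):
    imp_priv = group_importance(model, loss_privileged)
    imp_unpriv = group_importance(model, loss_unprivileged)
    for name, param in model.named_parameters():
        diff = (imp_priv[name] - imp_unpriv[name]).flatten()
        k = int(ratio * diff.numel())
        if k == 0:
            continue
        # Zero the k parameters that matter far more to the privileged group.
        idx = torch.topk(diff, k).indices
        with torch.no_grad():
            param.view(-1)[idx] = 0.0
```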

The Larger The Fairer? Small Neural Networks Can Achieve Fairness for Edge Devices

no code implementations • 23 Feb 2022 • Yi Sheng, Junhuan Yang, Yawen Wu, Kevin Mao, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang

Results show that FaHaNa can identify a series of neural networks with higher fairness and accuracy on a dermatology dataset.

Face Recognition Fairness +2

EF-Train: Enable Efficient On-device CNN Training on FPGA Through Data Reshaping for Online Adaptation or Personalization

no code implementations • 18 Feb 2022 • Yue Tang, Xinyi Zhang, Peipei Zhou, Jingtong Hu

In this work, we design EF-Train, an efficient DNN training accelerator with a unified channel-level parallelism-based convolution kernel that can achieve end-to-end training on resource-limited low-power edge-level FPGAs.

Domain Adaptation

Synthetic Data Can Also Teach: Synthesizing Effective Data for Unsupervised Visual Representation Learning

no code implementations • 14 Feb 2022 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

To tackle this problem, we propose a data generation framework with two methods to improve CL training by joint sample generation and contrastive learning.

Contrastive Learning Representation Learning +2

Federated Contrastive Learning for Dermatological Disease Diagnosis via On-device Learning

no code implementations • 14 Feb 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J. James, Yiyu Shi, Jingtong Hu

The recently developed self-supervised learning approach, contrastive learning (CL), can leverage the unlabeled data to pre-train a model, after which the model is fine-tuned on limited labeled data for dermatological disease diagnosis.

Contrastive Learning Federated Learning +1

Decentralized Unsupervised Learning of Visual Representations

no code implementations • 21 Nov 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu

To tackle this problem, we propose a collaborative contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching, by which a unified feature space among clients is learned for better data representations.

Contrastive Learning Federated Learning +2
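
The excerpt names two mechanisms without detail; as a hedged guess at "feature fusion," the sketch below folds features shared by other clients into a local InfoNCE loss as extra negatives so that all clients shape one feature space. This is an illustrative reading, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(q, k, remote_feats, tau=0.1):
    """q, k: (B, D) two views from the local client; remote_feats: (M, D) shared features."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    remote = F.normalize(remote_feats, dim=-1)
    pos = (q * k).sum(dim=-1, keepdim=True) / tau   # (B, 1) local positive
    neg = q @ remote.t() / tau                      # (B, M) fused remote negatives
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```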

Federated Contrastive Representation Learning with Feature Fusion and Neighborhood Matching

no code implementations • 29 Sep 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu

Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.

Contrastive Learning Federated Learning +2

Data-Efficient Contrastive Learning by Differentiable Hard Sample and Hard Positive Pair Generation

no code implementations • 29 Sep 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

In this way, the main model learns to cluster hard positives by pulling the representations of similar yet distinct samples together, so that similar samples are well clustered and better representations can be learned.

Contrastive Learning Self-Supervised Learning
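
A minimal sketch of how a generated hard positive can enter an InfoNCE-style loss, matching the "pull similar yet distinct samples together" description above; the generator that produces `hard_pos` is assumed to exist elsewhere.

```python
import torch
import torch.nn.functional as F

def nce_with_hard_positive(anchor, positive, hard_pos, negatives, tau=0.1):
    """anchor/positive/hard_pos: (B, D); negatives: (B, N, D)."""
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(torch.stack([positive, hard_pos], dim=1), dim=-1)  # (B, 2, D)
    neg = F.normalize(negatives, dim=-1)                                 # (B, N, D)
    pos_sim = torch.einsum("bd,bkd->bk", anchor, pos) / tau
    neg_sim = torch.einsum("bd,bnd->bn", anchor, neg) / tau
    log_prob = torch.cat([pos_sim, neg_sim], dim=1).log_softmax(dim=1)
    # Both positives (the ordinary one and the hard one) are pulled in.
    return -(log_prob[:, 0] + log_prob[:, 1]).mean() / 2
```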

Enabling On-Device Self-Supervised Contrastive Learning With Selective Data Contrast

no code implementations • 7 Jun 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

After a model is deployed on edge devices, it is desirable for these devices to learn from unlabeled data to continuously improve accuracy.

Contrastive Learning

Enabling Efficient On-Device Self-supervised Contrastive Learning by Data Selection

no code implementations • 1 Jan 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

In this paper, we propose a framework to automatically select the most representative data from an unlabeled input stream on the fly, requiring only a small data buffer for dynamic learning.

Contrastive Learning
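
A minimal sketch of on-the-fly selection with a small buffer, consistent with the abstract above: a new sample replaces the stored sample it is most similar to whenever doing so lowers feature redundancy. The feature-similarity criterion is an illustrative choice, not the paper's exact rule.

```python
import torch
import torch.nn.functional as F

class SelectiveBuffer:
    """Keeps a fixed-size, diversity-oriented subset of a streaming input."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.samples, self.feats = [], []

    def offer(self, sample, feat):
        feat = F.normalize(feat, dim=-1)
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
            self.feats.append(feat)
            return
        feats = torch.stack(self.feats)          # (C, D)
        sims = feats @ feat                      # similarity of new sample to buffer
        i = int(sims.argmax())
        stored_redundancy = (feats @ feats[i]).sum() - 1.0  # exclude self-similarity
        if sims.sum() < stored_redundancy:       # new sample is less redundant
            self.samples[i], self.feats[i] = sample, feat
```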

FGNAS: FPGA-Aware Graph Neural Architecture Search

no code implementations • 1 Jan 2021 • Qing Lu, Weiwen Jiang, Meng Jiang, Jingtong Hu, Sakyasingha Dasgupta, Yiyu Shi

The success of graph neural networks (GNNs) in the past years has aroused growing interest and effort in designing the best models to handle graph-structured data.

Neural Architecture Search

Personalized Deep Learning for Ventricular Arrhythmias Detection on Medical IoT Systems

no code implementations • 18 Aug 2020 • Zhenge Jia, Zhepeng Wang, Feng Hong, Lichuan Ping, Yiyu Shi, Jingtong Hu

We equip the system with real-time inference on both intracardiac and surface rhythm monitors.

Towards Cardiac Intervention Assistance: Hardware-aware Neural Architecture Exploration for Real-Time 3D Cardiac Cine MRI Segmentation

no code implementations • 17 Aug 2020 • Dewen Zeng, Weiwen Jiang, Tianchen Wang, Xiaowei Xu, Haiyun Yuan, Meiping Huang, Jian Zhuang, Jingtong Hu, Yiyu Shi

Experimental results on the ACDC MICCAI 2017 dataset demonstrate that our hardware-aware multi-scale NAS framework can reduce the latency by up to 3.5 times and satisfy the real-time constraints, while still achieving competitive segmentation accuracy, compared with the state-of-the-art NAS segmentation framework.

MRI segmentation Neural Architecture Search +1

Standing on the Shoulders of Giants: Hardware and Neural Architecture Co-Search with Hot Start

1 code implementation • 17 Jul 2020 • Weiwen Jiang, Lei Yang, Sakyasingha Dasgupta, Jingtong Hu, Yiyu Shi

To tackle this issue, HotNAS builds a chain of tools to design hardware to support compression, based on which a global optimizer is developed to automatically co-search all the involved search spaces.

Neural Architecture Search

Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning

no code implementations • 7 Jul 2020 • Yawen Wu, Zhepeng Wang, Yiyu Shi, Jingtong Hu

For example, when training ResNet-110 on CIFAR-10, we achieve 68% computation saving while preserving full accuracy, and 75% computation saving with a marginal accuracy loss of 1.3%.

Quantization

Intermittent Inference with Nonuniformly Compressed Multi-Exit Neural Network for Energy Harvesting Powered Devices

no code implementations • 23 Apr 2020 • Yawen Wu, Zhepeng Wang, Zhenge Jia, Yiyu Shi, Jingtong Hu

This work aims to enable persistent, event-driven sensing and decision capabilities for energy-harvesting (EH)-powered devices by deploying lightweight DNNs onto them.
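
As a rough illustration of intermittent inference with a multi-exit network, the sketch below runs backbone blocks for a single input until an early-exit head is confident enough or the harvested-energy budget is exhausted; the per-block cost accounting and confidence threshold are assumptions.

```python
import torch.nn.functional as F

def intermittent_forward(x, blocks, exits, costs, budget, conf_thresh=0.9):
    """blocks/exits/costs: aligned lists; budget: remaining energy units."""
    pred = None
    for block, exit_head, cost in zip(blocks, exits, costs):
        if cost > budget:                      # not enough energy for this block
            break
        budget -= cost
        x = block(x)
        pred = F.softmax(exit_head(x), dim=-1)
        if pred.max().item() >= conf_thresh:   # confident early exit
            break
    return pred                                # best prediction reached so far
```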

On Neural Architecture Search for Resource-Constrained Hardware Platforms

no code implementations • 31 Oct 2019 • Qing Lu, Weiwen Jiang, Xiaowei Xu, Yiyu Shi, Jingtong Hu

With 30,000 LUTs, a lightweight design is found to achieve 82.98% accuracy and 1,293 images/second throughput; under the same constraints, the traditional method fails to find a valid solution at all.

Neural Architecture Search Quantization +1

Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators

no code implementations • 31 Oct 2019 • Weiwen Jiang, Qiuwen Lou, Zheyu Yan, Lei Yang, Jingtong Hu, Xiaobo Sharon Hu, Yiyu Shi

In this paper, we are the first to bring the computing-in-memory architecture, which can easily transcend the memory wall, to interplay with the neural architecture search, aiming to find the most efficient neural architectures with high network accuracy and maximized hardware efficiency.

Neural Architecture Search

Accuracy vs. Efficiency: Achieving Both through FPGA-Implementation Aware Neural Architecture Search

no code implementations • 31 Jan 2019 • Weiwen Jiang, Xinyi Zhang, Edwin H.-M. Sha, Lei Yang, Qingfeng Zhuge, Yiyu Shi, Jingtong Hu

In addition, with a performance abstraction model to analyze the latency of neural architectures without training, our framework can quickly prune architectures that do not satisfy the specification, leading to higher efficiency.

Neural Architecture Search Reinforcement Learning
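
A minimal sketch of specification-driven pruning with a latency abstraction model, as the abstract describes; `estimate_latency` and the per-layer fields are hypothetical stand-ins for the paper's performance model.

```python
def estimate_latency(arch):
    # Hypothetical analytic model: sum per-layer compute divided by throughput.
    return sum(layer["macs"] / layer["throughput"] for layer in arch)

def prune_candidates(candidates, latency_spec):
    """Keep only architectures whose estimated latency meets the specification."""
    return [a for a in candidates if estimate_latency(a) <= latency_spec]
```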

An Area and Energy Efficient Design of Domain-Wall Memory-Based Deep Convolutional Neural Networks using Stochastic Computing

no code implementations • 3 Feb 2018 • Xiaolong Ma, Yi-Peng Zhang, Geng Yuan, Ao Ren, Zhe Li, Jie Han, Jingtong Hu, Yanzhi Wang

However, these works neglect memory design optimization for weight storage, which inevitably results in a large hardware cost.
