no code implementations • 18 Jul 2024 • Yi Sheng, Junhuan Yang, Jinyang Li, Alaina J. James, Xiaowei Xu, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang
As Artificial Intelligence (AI) increasingly integrates into our daily lives, fairness has emerged as a critical concern, particularly in medical AI, where datasets often reflect inherent biases due to social factors like the underrepresentation of marginalized communities and socioeconomic barriers to data collection.
1 code implementation • 14 May 2024 • Qingpeng Kong, Ching-Hao Chiu, Dewen Zeng, Yu-Jen Chen, Tsung-Yi Ho, Jingtong Hu, Yiyu Shi
Numerous studies have revealed that deep learning-based medical image classification models may exhibit bias towards specific demographic attributes, such as race, gender, and age.
no code implementations • 30 Jan 2024 • Sheng Li, Geng Yuan, Yawen Wu, Yue Dai, Tianyu Wang, Chao Wu, Alex K. Jones, Jingtong Hu, Yanzhi Wang, Xulong Tang
Many emerging applications, such as robot-assisted eldercare and object recognition, generally employ deep neural networks (DNNs) and require the deployment of DNN models on edge devices.
1 code implementation • 4 Jan 2024 • Rui Ma, Qiang Zhou, Yizhu Jin, Daquan Zhou, Bangjun Xiao, Xiuyu Li, Yi Qu, Aishani Singh, Kurt Keutzer, Jingtong Hu, Xiaodong Xie, Zhen Dong, Shanghang Zhang, Shiji Zhou
Notably, models like Stable Diffusion, which excel in text-to-image synthesis, heighten the risk of copyright infringement and unauthorized distribution. Machine unlearning, which seeks to eradicate the influence of specific data or concepts from machine learning models, emerges as a promising solution by eliminating the "copyright memories" ingrained in diffusion models.
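The abstract does not spell out the unlearning procedure; a minimal generic sketch of loss-based concept unlearning (a common baseline, not necessarily this paper's method) is a fine-tuning step that raises the loss on the data to forget while a retain set anchors everything else:

```python
# Generic loss-based unlearning sketch (illustrative, not the paper's method):
# gradient ascent on the forget set, ordinary descent on the retain set.
import torch
import torch.nn as nn

def unlearn_step(model, forget_x, forget_y, retain_x, retain_y,
                 optimizer, forget_weight=1.0):
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    # Negated loss on the forget set pushes the model away from the concept...
    loss_forget = -forget_weight * criterion(model(forget_x), forget_y)
    # ...while descent on the retain set preserves everything else.
    loss_retain = criterion(model(retain_x), retain_y)
    (loss_forget + loss_retain).backward()
    optimizer.step()

# Toy usage with a hypothetical classifier.
model = nn.Linear(32, 10)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
unlearn_step(model,
             torch.randn(8, 32), torch.randint(0, 10, (8,)),
             torch.randn(8, 32), torch.randint(0, 10, (8,)),
             opt)
```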
no code implementations • 21 Nov 2023 • Ruiyang Qin, Jun Xia, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Peipei Zhou, Jingtong Hu, Yiyu Shi
While it is possible to obtain annotations locally by directly asking users to provide preferred responses, such annotations must remain sparse so as not to degrade the user experience.
no code implementations • 31 May 2023 • Dewen Zeng, Yawen Wu, Xinrong Hu, Xiaowei Xu, Jingtong Hu, Yiyu Shi
This paper presents a new way to identify additional positive pairs for BYOL, a state-of-the-art (SOTA) self-supervised learning framework, to improve its representation learning ability.
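As a rough illustration of what "additional positive pairs" means in a BYOL-style setup (the pairing rule and the alpha weighting below are assumptions, not the paper's exact formulation), an extra positive image for each anchor simply contributes a second regression term to the standard BYOL loss:

```python
# Sketch: BYOL loss extended with one extra positive per anchor. The online
# net, target net, and predictor are standard BYOL components.
import torch
import torch.nn.functional as F

def byol_loss(p, z):
    # Standard BYOL regression loss: negative cosine similarity with
    # stop-gradient on the target branch.
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

def byol_loss_with_extra_positives(online, target, predictor,
                                   view1, view2, extra_pos, alpha=0.5):
    p1 = predictor(online(view1))
    z2 = target(view2)                 # usual augmented view of the anchor
    z_extra = target(extra_pos)        # additionally identified positive
    return byol_loss(p1, z2) + alpha * byol_loss(p1, z_extra)

# Toy usage with hypothetical linear encoders.
online, target = torch.nn.Linear(16, 8), torch.nn.Linear(16, 8)
predictor = torch.nn.Linear(8, 8)
x1, x2, xp = (torch.randn(4, 16) for _ in range(3))
loss = byol_loss_with_extra_positives(online, target, predictor, x1, x2, xp)
```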
no code implementations • 22 Apr 2023 • Tsung-Han Kuo, Zhenge Jia, Tei-Wei Kuo, Jingtong Hu
With the increased accuracy of modern computer vision technology, many access control systems are equipped with face recognition functions for faster identification.
no code implementations • 16 Dec 2022 • Zhenge Jia, Yiyu Shi, Jingtong Hu, Lei Yang, Benjamin Nti
Point-of-care ultrasound (POCUS) is one of the most commonly applied tools for cardiac function imaging in the clinical routine of the emergency department and pediatric intensive care unit.
no code implementations • 2 Dec 2022 • Jiahe Shi, Yawen Wu, Dewen Zeng, Jun Tao, Jingtong Hu, Yiyu Shi
The ubiquity of edge devices has led to a growing amount of unlabeled data produced at the edge.
no code implementations • 25 Aug 2022 • Yue Tang, Yawen Wu, Peipei Zhou, Jingtong Hu
To enable W-TAL models to learn from a long, untrimmed streaming video, we propose an efficient video learning approach that can directly adapt to new environments.
no code implementations • 24 Aug 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J. James, Yiyu Shi, Jingtong Hu
Self-supervised learning (SSL) methods such as contrastive learning (CL) and masked autoencoders (MAE) can leverage unlabeled data to pre-train models, which are then fine-tuned with limited labels.
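The pre-train-then-fine-tune workflow named here can be sketched in a few lines; below, a SimCLR-style contrastive loss stands in for the CL pre-training phase (the encoder, shapes, and loss choice are illustrative assumptions):

```python
# Hedged skeleton: SSL pre-training on unlabeled data, then supervised
# fine-tuning with a small labeled set.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    # SimCLR-style contrastive loss over two views of a batch.
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))       # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

encoder = nn.Linear(64, 32)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Phase 1: self-supervised pre-training on unlabeled data.
for _ in range(3):
    v1, v2 = torch.randn(16, 64), torch.randn(16, 64)  # two augmented views
    loss = nt_xent(encoder(v1), encoder(v2))
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tuning with limited labels.
head = nn.Linear(32, 5)
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
x, y = torch.randn(16, 64), torch.randint(0, 5, (16,))
loss = F.cross_entropy(head(encoder(x)), y)
ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```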
no code implementations • 23 Aug 2022 • Gelei Xu, Yawen Wu, Jingtong Hu, Yiyu Shi
The framework is divided into two stages: In the first in-FL stage, clients with different skin types are trained in a federated learning process to construct a global model for all skin types.
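For the first (in-FL) stage, a standard federated-averaging round is the natural reference point; the sketch below shows FedAvg with each client standing in for one skin-type cohort (the local model, data shapes, and round counts are assumptions):

```python
# Minimal FedAvg sketch: each client trains locally, and the server
# averages the resulting weights into one global model.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_update(global_model, x, y, epochs=1, lr=0.01):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fed_avg(states):
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

global_model = nn.Linear(32, 3)
# Four clients, e.g., one per skin-type cohort (toy data).
clients = [(torch.randn(8, 32), torch.randint(0, 3, (8,))) for _ in range(4)]
for rnd in range(2):                    # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
```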
no code implementations • 7 Aug 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yiyu Shi, Jingtong Hu
However, when adopting CL in FL, the limited data diversity on each site makes federated contrastive learning (FCL) ineffective.
no code implementations • 4 Jul 2022 • Sébastien Ollivier, Sheng Li, Yue Tang, Chayanika Chaudhuri, Peipei Zhou, Xulong Tang, Jingtong Hu, Alex K. Jones
In particular, we explore the use of processing-in-memory (PIM) approaches, mobile GPU accelerators, and recently released FPGAs, and compare them with novel Racetrack memory PIM.
1 code implementation • 29 Apr 2022 • Xinyi Zhang, Cong Hao, Peipei Zhou, Alex Jones, Jingtong Hu
The heterogeneity in ML models comes from multi-sensor perception and multi-task learning, i.e., multi-modality multi-task (MMMT), resulting in diverse deep neural network (DNN) layers and computation patterns.
no code implementations • 23 Apr 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yiyu Shi, Jingtong Hu
However, in medical imaging analysis, each site may only have a limited amount of data and labels, which makes learning ineffective.
no code implementations • 4 Mar 2022 • Yawen Wu, Dewen Zeng, Xiaowei Xu, Yiyu Shi, Jingtong Hu
By pruning the parameters based on this importance difference, we can reduce the accuracy difference between the privileged group and the unprivileged group to improve fairness without a large accuracy drop.
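A minimal sketch of this idea follows, assuming a first-order saliency measure (|weight x gradient|) as the importance estimate; the exact criterion and pruning ratio here are illustrative, not the authors' published ones:

```python
# Fairness-aware pruning sketch: estimate per-weight saliency on the
# privileged and unprivileged groups, then zero out the weights with the
# largest saliency gap between groups.
import torch
import torch.nn as nn
import torch.nn.functional as F

def group_saliency(model, x, y):
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, model.weight)[0]
    return (model.weight * grads).abs()     # first-order importance estimate

model = nn.Linear(20, 2)
x_priv, y_priv = torch.randn(32, 20), torch.randint(0, 2, (32,))
x_unpriv, y_unpriv = torch.randn(32, 20), torch.randint(0, 2, (32,))

gap = (group_saliency(model, x_priv, y_priv)
       - group_saliency(model, x_unpriv, y_unpriv)).abs()
threshold = gap.flatten().kthvalue(int(0.9 * gap.numel())).values
mask = (gap <= threshold).float()           # prune the top-10% gap weights
with torch.no_grad():
    model.weight.mul_(mask)
```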
no code implementations • 23 Feb 2022 • Yi Sheng, Junhuan Yang, Yawen Wu, Kevin Mao, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang
Results show that FaHaNa can identify a series of neural networks with higher fairness and accuracy on a dermatology dataset.
no code implementations • 18 Feb 2022 • Yue Tang, Xinyi Zhang, Peipei Zhou, Jingtong Hu
In this work, we design EF-Train, an efficient DNN training accelerator with a unified channel-level parallelism-based convolution kernel that can achieve end-to-end training on resource-limited low-power edge-level FPGAs.
no code implementations • 14 Feb 2022 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu
To tackle this problem, we propose a data generation framework with two methods to improve CL training by joint sample generation and contrastive learning.
no code implementations • 14 Feb 2022 • Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J. James, Yiyu Shi, Jingtong Hu
The recently developed self-supervised learning approach, contrastive learning (CL), can leverage the unlabeled data to pre-train a model, after which the model is fine-tuned on limited labeled data for dermatological disease diagnosis.
no code implementations • 21 Nov 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu
To tackle this problem, we propose a collaborative contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching, by which a unified feature space among clients is learned for better data representations.
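An illustrative reading of the two components is sketched below: feature fusion enlarges the local contrastive comparison set with features shared by remote clients, and neighborhood matching additionally pulls each local sample toward its nearest remote feature. The concrete loss composition is an assumption:

```python
# Sketch of federated contrastive learning with feature fusion and
# neighborhood matching (details are illustrative assumptions).
import torch
import torch.nn.functional as F

def fcl_loss(local_z1, local_z2, remote_z, tau=0.5):
    z1, z2 = F.normalize(local_z1, dim=1), F.normalize(local_z2, dim=1)
    r = F.normalize(remote_z, dim=1)
    # Feature fusion: comparison set = local second views + remote features.
    candidates = torch.cat([z2, r])                 # (n + m, d)
    logits = z1 @ candidates.t() / tau
    targets = torch.arange(z1.size(0))              # positive = matching view
    contrastive = F.cross_entropy(logits, targets)
    # Neighborhood matching: pull each sample toward its closest remote feature.
    nearest = (z1 @ r.t()).argmax(dim=1)
    matching = (1 - F.cosine_similarity(z1, r[nearest], dim=1)).mean()
    return contrastive + matching

loss = fcl_loss(torch.randn(8, 16), torch.randn(8, 16), torch.randn(32, 16))
```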
no code implementations • 29 Sep 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Meng Li, Yiyu Shi, Jingtong Hu
Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.
no code implementations • 29 Sep 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu
In this way, the main model learns to cluster hard positives by pulling the representations of similar yet distinct samples together, so that similar samples are well-clustered and better representations are learned.
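The "pull similar yet distinct samples together" mechanism can be sketched by treating each sample's nearest non-identical neighbor in embedding space as a hard positive; this is a simplification under that assumption, not the paper's full method:

```python
# Hard-positive clustering sketch: minimize the distance between each
# embedding and its most similar *distinct* sample in the batch.
import torch
import torch.nn.functional as F

def hard_positive_loss(z):
    z = F.normalize(z, dim=1)
    sim = z @ z.t()
    sim.fill_diagonal_(float('-inf'))       # exclude the sample itself
    hard_pos = sim.argmax(dim=1)            # most similar distinct sample
    return (1 - F.cosine_similarity(z, z[hard_pos], dim=1)).mean()

embeddings = torch.randn(16, 32, requires_grad=True)
loss = hard_positive_loss(embeddings)
loss.backward()
```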
no code implementations • 14 Sep 2021 • Dewen Zeng, Yukun Ding, Haiyun Yuan, Meiping Huang, Xiaowei Xu, Jian Zhuang, Jingtong Hu, Yiyu Shi
At data acquisition time, the operator cannot know the quality of the segmentation results.
1 code implementation • 16 Jun 2021 • Dewen Zeng, Yawen Wu, Xinrong Hu, Xiaowei Xu, Haiyun Yuan, Meiping Huang, Jian Zhuang, Jingtong Hu, Yiyu Shi
The success of deep learning heavily depends on the availability of large labeled training sets.
no code implementations • 7 Jun 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu
After a model is deployed on edge devices, it is desirable for these devices to learn from unlabeled data to continuously improve accuracy.
no code implementations • 1 Jan 2021 • Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu
In this paper, we propose a framework to automatically select the most representative data from unlabeled input stream on-the-fly, which only requires the use of a small data buffer for dynamic learning.
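One way to picture such a small dynamic buffer is a fixed-capacity store that admits a streaming sample only if it is sufficiently different from what is already held, evicting the most redundant entry when full; the diversity criterion below is an illustrative assumption, not the paper's selection rule:

```python
# On-the-fly representative-buffer sketch for an unlabeled input stream.
import torch
import torch.nn.functional as F

class RepresentativeBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []                     # stored feature vectors

    def offer(self, feat):
        if len(self.items) < self.capacity:
            self.items.append(feat)
            return
        stack = torch.stack(self.items)
        sims = F.cosine_similarity(stack, feat.unsqueeze(0))
        # Admit only samples unlike the buffer contents; evict the most
        # redundant existing item (most similar to the rest).
        if sims.max() < 0.9:
            redundancy = (stack @ stack.t()).sum(dim=1)
            self.items[redundancy.argmax()] = feat

buf = RepresentativeBuffer(capacity=16)
for _ in range(100):                        # simulated unlabeled stream
    buf.offer(F.normalize(torch.randn(32), dim=0))
```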
no code implementations • 1 Jan 2021 • Qing Lu, Weiwen Jiang, Meng Jiang, Jingtong Hu, Sakyasingha Dasgupta, Yiyu Shi
The success of graph neural networks (GNNs) in the past years has aroused growing interest and effort in designing the best models to handle graph-structured data.
no code implementations • 18 Aug 2020 • Zhenge Jia, Zhepeng Wang, Feng Hong, Lichuan Ping, Yiyu Shi, Jingtong Hu
We equip the system with real-time inference on both intracardiac and surface rhythm monitors.
no code implementations • 17 Aug 2020 • Dewen Zeng, Weiwen Jiang, Tianchen Wang, Xiaowei Xu, Haiyun Yuan, Meiping Huang, Jian Zhuang, Jingtong Hu, Yiyu Shi
Experimental results on the ACDC MICCAI 2017 dataset demonstrate that our hardware-aware multi-scale NAS framework can reduce latency by up to 3.5 times and satisfy real-time constraints, while still achieving competitive segmentation accuracy compared with the state-of-the-art NAS segmentation framework.
1 code implementation • 17 Jul 2020 • Weiwen Jiang, Lei Yang, Sakyasingha Dasgupta, Jingtong Hu, Yiyu Shi
To tackle this issue, HotNAS builds a chain of tools to design hardware to support compression, based on which a global optimizer is developed to automatically co-search all the involved search spaces.
no code implementations • 7 Jul 2020 • Yawen Wu, Zhepeng Wang, Yiyu Shi, Jingtong Hu
For example, when training ResNet-110 on CIFAR-10, we achieve 68% computation saving while preserving full accuracy, and 75% computation saving with a marginal accuracy loss of 1.3%.
no code implementations • 23 Apr 2020 • Yawen Wu, Zhepeng Wang, Zhenge Jia, Yiyu Shi, Jingtong Hu
This work aims to enable persistent, event-driven sensing and decision capabilities for energy-harvesting (EH) powered devices by deploying lightweight DNNs onto them.
no code implementations • 31 Oct 2019 • Qing Lu, Weiwen Jiang, Xiaowei Xu, Yiyu Shi, Jingtong Hu
With 30,000 LUTs, a lightweight design is found that achieves 82.98% accuracy and 1,293 images/second throughput, whereas under the same constraints the traditional method fails to find a valid solution.
no code implementations • 31 Oct 2019 • Weiwen Jiang, Qiuwen Lou, Zheyu Yan, Lei Yang, Jingtong Hu, Xiaobo Sharon Hu, Yiyu Shi
In this paper, we are the first to bring the computing-in-memory architecture, which can easily transcend the memory wall, to interplay with neural architecture search, aiming to find the most efficient neural architectures with high network accuracy and maximized hardware efficiency.
1 code implementation • 6 Jul 2019 • Weiwen Jiang, Lei Yang, Edwin Sha, Qingfeng Zhuge, Shouzhen Gu, Sakyasingha Dasgupta, Yiyu Shi, Jingtong Hu
We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS).
no code implementations • 31 Jan 2019 • Weiwen Jiang, Xinyi Zhang, Edwin H. -M. Sha, Lei Yang, Qingfeng Zhuge, Yiyu Shi, Jingtong Hu
In addition, with a performance abstraction model to analyze the latency of neural architectures without training, our framework can quickly prune architectures that do not satisfy the specification, leading to higher efficiency.
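Pruning by a training-free latency estimate can be illustrated with a simple analytical per-layer cost model; the MAC-based model and throughput constant below are assumptions standing in for the paper's performance abstraction:

```python
# Latency-based candidate pruning sketch: estimate each architecture's
# latency analytically and discard candidates over the specification,
# before any training happens.
def conv_macs(c_in, c_out, k, h, w):
    # Multiply-accumulate count of one convolution layer.
    return c_in * c_out * k * k * h * w

def estimated_latency_ms(arch, macs_per_ms=5e6):
    return sum(conv_macs(*layer) for layer in arch) / macs_per_ms

# Candidate architectures: lists of (c_in, c_out, kernel, height, width).
candidates = [
    [(3, 16, 3, 32, 32), (16, 32, 3, 16, 16)],
    [(3, 64, 5, 32, 32), (64, 128, 5, 16, 16)],
]
budget_ms = 5.0
feasible = [a for a in candidates if estimated_latency_ms(a) <= budget_ms]
print(f"{len(feasible)}/{len(candidates)} candidates satisfy the budget")
```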
no code implementations • 1 Sep 2018 • Xiaowei Xu, Xinyi Zhang, Bei Yu, X. Sharon Hu, Christopher Rowen, Jingtong Hu, Yiyu Shi
The 55th Design Automation Conference (DAC) held its first System Design Contest (SDC) in 2018.
no code implementations • 3 Feb 2018 • Xiaolong Ma, Yi-Peng Zhang, Geng Yuan, Ao Ren, Zhe Li, Jie Han, Jingtong Hu, Yanzhi Wang
However, these works neglect memory design optimization for weight storage, which inevitably results in a large hardware cost.