Search Results for author: Chun-Fu Chen

Found 24 papers, 12 papers with code

MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective

no code implementations23 May 2024 Yizhuo Chen, Chun-Fu Chen, Hsiang Hsu, Shaohan Hu, Marco Pistoia, Tarek Abdelzaher

The growing richness of large-scale datasets has been crucial in driving the rapid advancement and wide adoption of machine learning technologies.

Attribute

Model-Agnostic Utility-Preserving Biometric Information Anonymization

no code implementations23 May 2024 Chun-Fu Chen, Bill Moriarty, Shaohan Hu, Sean Moran, Marco Pistoia, Vincenzo Piuri, Pierangela Samarati

The recent rapid advancements in both sensing and machine learning technologies have given rise to the universal collection and utilization of people's biometrics, such as fingerprints, voices, retina/facial scans, or gait/motion/gestures data, enabling a wide range of applications including authentication, health monitoring, and more sophisticated analytics.

OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning

no code implementations6 Feb 2024 Wei-Cheng Huang, Chun-Fu Chen, Hsiang Hsu

We illustrate that a simplified prompt-based method can achieve results comparable to previous state-of-the-art (SOTA) methods equipped with a prompt pool, using far fewer learnable parameters and incurring lower inference cost.

Class Incremental Learning, Incremental Learning

Machine Unlearning for Image-to-Image Generative Models

2 code implementations1 Feb 2024 Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu

This paper serves as a bridge, addressing the gap by providing a unifying framework of machine unlearning for image-to-image generative models.

Machine Unlearning

Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models

no code implementations22 Dec 2023 Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu

The rapid growth of machine learning has spurred legislative initiatives such as "the Right to be Forgotten," allowing users to request data removal.

Machine Unlearning

Procedural Image Programs for Representation Learning

1 code implementation29 Nov 2022 Manel Baradad, Chun-Fu Chen, Jonas Wulff, Tongzhou Wang, Rogerio Feris, Antonio Torralba, Phillip Isola

Learning image representations using synthetic data allows training neural networks without some of the concerns associated with real images, such as privacy and bias.

Representation Learning

MaSS: Multi-attribute Selective Suppression

no code implementations18 Oct 2022 Chun-Fu Chen, Shaohan Hu, Zhonghao Shi, Prateek Gulati, Bill Moriarty, Marco Pistoia, Vincenzo Piuri, Pierangela Samarati

The recent rapid advances in machine learning technologies largely depend on the vast richness of data available today, in terms of both the quantity and the rich content contained within.

Attribute

Generating Realistic Physical Adversarial Examples by Patch Transformer Network

no code implementations29 Sep 2021 Quanfu Fan, Kaidi Xu, Chun-Fu Chen, Sijia Liu, Gaoyuan Zhang, David Daniel Cox, Xue Lin

Physical adversarial attacks apply carefully crafted adversarial perturbations onto real objects to maliciously alter the prediction of object classifiers or detectors.

Object

Dynamic Network Quantization for Efficient Video Inference

1 code implementation ICCV 2021 Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Aude Oliva, Rogerio Feris, Kate Saenko

Deep convolutional networks have recently achieved great success in video recognition, yet their practical realization remains a challenge due to the large amount of computational resources required to achieve robust recognition.

Quantization, Video Recognition

Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data

1 code implementation NeurIPS 2021 Ashraful Islam, Chun-Fu Chen, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Richard J. Radke

As the base dataset and unlabeled dataset are from different domains, projecting the target images in the class-domain of the base dataset with a fixed pretrained model might be sub-optimal.

cross-domain few-shot learning

RegionViT: Regional-to-Local Attention for Vision Transformers

3 code implementations ICLR 2022 Chun-Fu Chen, Rameswar Panda, Quanfu Fan

The regional-to-local attention includes two steps: first, the regional self-attention extracts global information among all regional tokens; then, the local self-attention exchanges information between each regional token and its associated local tokens.

Action Recognition, Image Classification, +2
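The two-step attention described in the excerpt above can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration (single head, no projections or residuals), not the paper's implementation: regional tokens first attend to each other globally, then each region runs local self-attention over its regional token plus its associated local tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def regional_to_local(regional, local):
    """regional: (R, d), one token per region; local: (R, L, d), L local tokens per region."""
    # Step 1: regional self-attention across all regional tokens (global information).
    regional = attention(regional, regional, regional)
    # Step 2: per region, local self-attention over the regional token
    # plus its associated local tokens.
    out_regional = np.empty_like(regional)
    out_local = np.empty_like(local)
    for r in range(regional.shape[0]):
        seq = np.vstack([regional[r:r + 1], local[r]])  # (1 + L, d)
        seq = attention(seq, seq, seq)
        out_regional[r], out_local[r] = seq[0], seq[1:]
    return out_regional, out_local
```

Because step 2 only mixes tokens within a region, the quadratic cost of full self-attention is replaced by one small global step plus R small local steps.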

AdaMML: Adaptive Multi-Modal Learning for Efficient Video Recognition

1 code implementation ICCV 2021 Rameswar Panda, Chun-Fu Chen, Quanfu Fan, Ximeng Sun, Kate Saenko, Aude Oliva, Rogerio Feris

Specifically, given a video segment, a multi-modal policy network is used to decide what modalities should be used for processing by the recognition model, with the goal of improving both accuracy and efficiency.

Video Recognition
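The policy idea in the AdaMML excerpt can be illustrated with a toy gating function. Everything here (the linear policy weights, the sigmoid threshold, the modality count) is a hypothetical stand-in, not the paper's actual policy network: a score per modality is computed from a pooled segment feature, and only modalities that clear a threshold are processed.

```python
import numpy as np

def select_modalities(segment_feature, policy_weights, threshold=0.5):
    """Hypothetical per-segment policy: a linear layer scores each
    modality (e.g. RGB, audio, flow); modalities whose sigmoid score
    clears the threshold are processed by the recognition model, the
    rest are skipped to save computation."""
    logits = policy_weights @ segment_feature   # (num_modalities,)
    probs = 1.0 / (1.0 + np.exp(-logits))       # sigmoid gate per modality
    return probs > threshold                    # boolean keep/skip mask

rng = np.random.default_rng(0)
feat = rng.normal(size=(32,))     # pooled feature of one video segment
W = rng.normal(size=(3, 32))      # policy weights for 3 modalities
mask = select_modalities(feat, W)
```

In the paper the selection is learned jointly with the recognition model; the sketch only shows the shape of the decision being made per segment.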

CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification

14 code implementations ICCV 2021 Chun-Fu Chen, Quanfu Fan, Rameswar Panda

To this end, we propose a dual-branch transformer to combine image patches (i.e., tokens in a transformer) of different sizes to produce stronger image features.

General Classification, Image Classification
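One way to fuse two patch scales, as the CrossViT excerpt suggests, is to let each branch's CLS token attend to the other branch's patch tokens. The following is a minimal sketch under assumed shapes (64 small-patch tokens, 16 large-patch tokens, d=16), not the paper's full module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(cls_tok, other_tokens):
    """Let one branch's CLS token (1, d) query the other branch's
    patch tokens (N, d); the CLS token acts as a summary that carries
    information across the two patch scales."""
    d = cls_tok.shape[-1]
    w = softmax(cls_tok @ other_tokens.T / np.sqrt(d))  # (1, N) attention weights
    return w @ other_tokens                             # updated CLS token, (1, d)

# Hypothetical setup: a small-patch branch with many tokens and a
# large-patch branch with fewer tokens, both projected to d=16.
rng = np.random.default_rng(0)
small_tokens = rng.normal(size=(64, 16))
large_tokens = rng.normal(size=(16, 16))
cls_small = rng.normal(size=(1, 16))
cls_large = rng.normal(size=(1, 16))

fused_small = cross_attend(cls_small, large_tokens)
fused_large = cross_attend(cls_large, small_tokens)
```

Using only the CLS token as the query keeps the cross-branch exchange linear in the number of patch tokens rather than quadratic.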

Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths

no code implementations2 Mar 2021 Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Naigang Wang, Bowen Pan, Kailash Gopalakrishnan, Aude Oliva, Rogerio Feris, Kate Saenko

Second, to effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network.

Image Classification, Quantization, +2
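The block swapping step described in the excerpt is easy to sketch in isolation. Treating blocks abstractly as interchangeable units (the real method swaps network blocks of different bit-widths during training), a hedged illustration:

```python
import random

def swap_blocks(student_blocks, teacher_blocks, swap_prob=0.3, rng=None):
    """Randomly replace lower-precision student blocks with the
    corresponding higher-precision teacher blocks, producing a mixed
    network for one training step (a simplified sketch of the
    described dynamic block swapping)."""
    rng = rng or random.Random(0)
    return [t if rng.random() < swap_prob else s
            for s, t in zip(student_blocks, teacher_blocks)]
```

Resampling the swap each step means every student block is periodically trained in the context of higher-precision neighbors, which is how knowledge is transferred without matching outputs directly.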

Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition

1 code implementation CVPR 2021 Chun-Fu Chen, Rameswar Panda, Kandan Ramakrishnan, Rogerio Feris, John Cohn, Aude Oliva, Quanfu Fan

In recent years, a number of approaches based on 2D or 3D convolutional neural networks (CNN) have emerged for video action recognition, achieving state-of-the-art results on several large-scale benchmark datasets.

Action Recognition, Temporal Action Localization

NASTransfer: Analyzing Architecture Transferability in Large Scale Neural Architecture Search

no code implementations23 Jun 2020 Rameswar Panda, Michele Merler, Mayoore Jaiswal, Hui Wu, Kandan Ramakrishnan, Ulrich Finkler, Chun-Fu Chen, Minsik Cho, David Kung, Rogerio Feris, Bishwaranjan Bhattacharjee

The typical way of conducting large scale NAS is to search for an architectural building block on a small dataset (either using a proxy set from the large dataset or a completely different small scale dataset) and then transfer the block to a larger dataset.

Neural Architecture Search

Efficient Fusion of Sparse and Complementary Convolutions

no code implementations7 Aug 2018 Chun-Fu Chen, Quanfu Fan, Marco Pistoia, Gwo Giun Lee

We propose a new method to create compact convolutional neural networks (CNNs) by exploiting sparse convolutions.

General Classification, Object, +2

Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition

3 code implementations ICLR 2019 Chun-Fu Chen, Quanfu Fan, Neil Mallinar, Tom Sercu, Rogerio Feris

The proposed approach demonstrates improvement of model efficiency and performance on both object recognition and speech recognition tasks, using popular architectures including ResNet and ResNeXt.

Object, Object Recognition, +2

NISP: Pruning Networks using Neuron Importance Score Propagation

no code implementations CVPR 2018 Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I. Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, Larry S. Davis

In contrast, we argue that it is essential to prune neurons in the entire neural network jointly based on a unified goal: minimizing the reconstruction error of important responses in the "final response layer" (FRL), which is the second-to-last layer before classification, for a pruned network to retain its predictive power.

Network Pruning
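The propagation idea in the NISP excerpt can be sketched as a backward pass of importance scores from the final response layer through absolute weights. This is a simplified reading (fully connected layers, no biases or nonlinearity), with all names hypothetical:

```python
import numpy as np

def propagate_importance(weights, final_scores):
    """Propagate importance backward from the final response layer:
    each neuron's score is the absolute-weight-weighted sum of the
    scores of the neurons it feeds (a simplified sketch of the
    propagation rule)."""
    scores = [final_scores]
    for W in reversed(weights):        # W: (n_out, n_in)
        scores.append(np.abs(W).T @ scores[-1])
    return scores[::-1]                # per-layer scores, input layer first

def prune_mask(score, keep_ratio):
    # Keep the top-k neurons of a layer by propagated importance.
    k = max(1, int(len(score) * keep_ratio))
    thresh = np.sort(score)[-k]
    return score >= thresh
```

Because scores are computed jointly from a single objective at the FRL, a neuron is kept only if it contributes, through the weights, to responses that matter at the end of the network.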
