Search Results for author: Shao-Lun Huang

Found 33 papers, 11 papers with code

Exploring Iterative Refinement with Diffusion Models for Video Grounding

1 code implementation • 26 Oct 2023 • Xiao Liang, Tao Shi, Yaoyuan Liang, Te Tao, Shao-Lun Huang

In this paper, we propose DiffusionVG, a novel framework with diffusion models that formulates video grounding as a conditional generation task, where the target span is generated from Gaussian noise inputs and iteratively refined in the reverse diffusion process.

Sentence Video Grounding

SSLCL: An Efficient Model-Agnostic Supervised Contrastive Learning Framework for Emotion Recognition in Conversations

1 code implementation • 25 Oct 2023 • Tao Shi, Xiao Liang, Yaoyuan Liang, Xinyi Tong, Shao-Lun Huang

To address these challenges, we propose an efficient and model-agnostic SCL framework named Supervised Sample-Label Contrastive Learning with Soft-HGR Maximal Correlation (SSLCL), which eliminates the need for a large batch size and can be seamlessly integrated with existing ERC models without introducing any model-specific assumptions.

Contrastive Learning Emotion Recognition

Personalized Federated Learning with Feature Alignment and Classifier Collaboration

3 code implementations • 20 Jun 2023 • Jian Xu, Xinyi Tong, Shao-Lun Huang

Data heterogeneity is one of the most challenging issues in federated learning, which motivates a variety of approaches to learn personalized models for participating clients.

Personalized Federated Learning Representation Learning

Stabilizing and Improving Federated Learning with Non-IID Data and Client Dropout

no code implementations • 11 Mar 2023 • Jian Xu, Meiling Yang, Wenbo Ding, Shao-Lun Huang

Data heterogeneity induced by label distribution skew has been shown to be a significant obstacle that limits model performance in federated learning, which is designed for collaborative model training over decentralized data sources while preserving user privacy.

Federated Learning

An Information-Theoretic Approach to Transferability in Task Transfer Learning

no code implementations • 20 Dec 2022 • Yajie Bao, Yang Li, Shao-Lun Huang, Lin Zhang, Lizhong Zheng, Amir Zamir, Leonidas Guibas

Task transfer learning is a popular technique in image processing applications that uses pre-trained models to reduce the supervision cost of related tasks.

Model Selection Transfer Learning

Revisiting Sparse Convolutional Model for Visual Recognition

1 code implementation • 24 Oct 2022 • Xili Dai, Mingyang Li, Pengyuan Zhai, Shengbang Tong, Xingjian Gao, Shao-Lun Huang, Zhihui Zhu, Chong You, Yi Ma

We show that such models have equally strong empirical performance on CIFAR-10, CIFAR-100, and ImageNet datasets when compared to conventional neural networks.

Image Classification

Transferability-Guided Cross-Domain Cross-Task Transfer Learning

no code implementations • 12 Jul 2022 • Yang Tan, Enming Zhang, Yang Li, Shao-Lun Huang, Xiao-Ping Zhang

We propose two novel transferability metrics F-OTCE (Fast Optimal Transport based Conditional Entropy) and JC-OTCE (Joint Correspondence OTCE) to evaluate how much the source model (task) can benefit the learning of the target task and to learn more transferable representations for cross-domain cross-task transfer learning.

Transfer Learning

Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge

no code implementations • 11 Jul 2022 • Jingge Wang, Liyan Xie, Yao Xie, Shao-Lun Huang, Yang Li

Domain generalization aims at learning a universal model that performs well on unseen target domains, incorporating knowledge from multiple source domains.

Domain Generalization Rotated MNIST +1

Regularization Penalty Optimization for Addressing Data Quality Variance in OoD Algorithms

no code implementations • 12 Jun 2022 • Runpeng Yu, Hong Zhu, Kaican Li, Lanqing Hong, Rui Zhang, Nanyang Ye, Shao-Lun Huang, Xiuqiang He

Due to the poor generalization performance of traditional empirical risk minimization (ERM) under distributional shift, Out-of-Distribution (OoD) generalization algorithms have received increasing attention.


A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning

no code implementations • NeurIPS 2021 • Xinyi Tong, Xiangxiang Xu, Shao-Lun Huang, Lizhong Zheng

Current transfer learning algorithm designs mainly focus on the similarities between source and target tasks, while the impacts of the sample sizes of these tasks are often not sufficiently addressed.

Image Classification Transfer Learning

Transferability Estimation for Semantic Segmentation Task

no code implementations • 30 Sep 2021 • Yang Tan, Yang Li, Shao-Lun Huang

Recent analytical transferability metrics are mainly designed for the image classification problem, and there is currently no specific investigation into transferability estimation for the semantic segmentation task, an essential problem in autonomous driving, medical image analysis, etc.

Autonomous Driving Image Classification +3

PAC-Bayes Information Bottleneck

1 code implementation • ICLR 2022 • Zifeng Wang, Shao-Lun Huang, Ercan E. Kuruoglu, Jimeng Sun, Xi Chen, Yefeng Zheng

We then build an IIW-based information bottleneck, namely PIB, on the trade-off between the accuracy and the information complexity of NNs.

Performance-Guaranteed ODE Solvers with Complexity-Informed Neural Networks

no code implementations • NeurIPS Workshop DLDE 2021 • Feng Zhao, Xiang Chen, Jun Wang, Zuoqiang Shi, Shao-Lun Huang

Traditionally, we provide technical parameters for ODE solvers, such as the order, the step size, and the local error threshold.

On Distributed Learning with Constant Communication Bits

no code implementations • 14 Sep 2021 • Xiangxiang Xu, Shao-Lun Huang

Specifically, we consider the distributed hypothesis testing (DHT) problem where two distributed nodes are constrained to transmit a constant number of bits to a central decoder.

Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering

3 code implementations • 13 Sep 2021 • Jian Xu, Shao-Lun Huang, Linqi Song, Tian Lan

To this end, previous work either makes use of auxiliary data at the parameter server to verify the received gradients (e.g., by computing the validation error rate) or leverages statistic-based methods (e.g., median and Krum) to identify and remove malicious gradients from Byzantine clients.

Federated Learning Model Poisoning +2
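The statistic-based aggregation rules mentioned above, coordinate-wise median and Krum, can be sketched in a few lines of NumPy. This is a minimal illustration of those baselines only, not the collaborative gradient-filtering scheme proposed in the paper:

```python
import numpy as np

def coordinate_median(grads):
    """Aggregate client gradients by the coordinate-wise median."""
    return np.median(np.stack(grads), axis=0)

def krum(grads, n_byzantine):
    """Krum: select the gradient closest to its n - f - 2 nearest neighbors."""
    g = np.stack(grads)
    n = len(g)
    # Pairwise squared Euclidean distances between client gradients.
    dists = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=-1) ** 2
    k = n - n_byzantine - 2  # number of neighbors each candidate is scored on
    scores = []
    for i in range(n):
        d = np.sort(dists[i])[1:k + 1]  # skip the zero distance to self
        scores.append(d.sum())
    return g[int(np.argmin(scores))]
```

With four honest clients near the true gradient and one client submitting a huge malicious update, both rules return an aggregate close to the honest gradients, whereas a plain mean would be dragged toward the outlier.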

Maximum Likelihood Estimation for Multimodal Learning with Missing Modality

no code implementations • 24 Aug 2021 • Fei Ma, Xiangxiang Xu, Shao-Lun Huang, Lin Zhang

Moreover, we develop a generalized form of the softmax function to effectively implement maximum likelihood estimation in an end-to-end manner.

DQ-SGD: Dynamic Quantization in SGD for Communication-Efficient Distributed Learning

no code implementations • 30 Jul 2021 • Guangfeng Yan, Shao-Lun Huang, Tian Lan, Linqi Song

Gradient quantization is an emerging technique in reducing communication costs in distributed learning.


Practical Transferability Estimation for Image Classification Tasks

no code implementations • 19 Jun 2021 • Yang Tan, Yang Li, Shao-Lun Huang

Transferability estimation is an essential problem in transfer learning: predicting the performance obtained when transferring a source model (or source task) to a target task.

Classification Image Classification +2

OTCE: A Transferability Metric for Cross-Domain Cross-Task Representations

1 code implementation • CVPR 2021 • Yang Tan, Yang Li, Shao-Lun Huang

Specifically, we use optimal transport to estimate domain difference and the optimal coupling between source and target distributions, which is then used to derive the conditional entropy of the target task (task difference).

Model Selection Transfer Learning
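The OT-then-conditional-entropy recipe described above can be sketched as follows, assuming entropy-regularized (Sinkhorn) optimal transport as the coupling solver and discrete labels. This is a simplified illustration of the general idea, not a reproduction of the exact OTCE score:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport; returns the coupling matrix P."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)  # uniform weights on source samples
    b = np.full(m, 1.0 / m)  # uniform weights on target samples
    K = np.exp(-cost / reg)
    v = np.ones(m)
    for _ in range(n_iter):   # alternating Sinkhorn scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def negative_conditional_entropy(P, ys, yt):
    """-H(Y_t | Y_s) under the joint label distribution induced by coupling P."""
    classes_s, classes_t = np.unique(ys), np.unique(yt)
    joint = np.zeros((len(classes_s), len(classes_t)))
    for i, cs in enumerate(classes_s):
        for j, ct in enumerate(classes_t):
            joint[i, j] = P[np.ix_(ys == cs, yt == ct)].sum()
    p_s = joint.sum(axis=1, keepdims=True)
    cond = joint / np.maximum(p_s, 1e-12)
    return float(np.sum(joint * np.log(np.maximum(cond, 1e-12))))
```

A higher (closer to zero) negative conditional entropy indicates that target labels are more predictable from source labels under the optimal coupling, i.e., a smaller task difference.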

Lifelong Learning based Disease Diagnosis on Clinical Notes

1 code implementation • 27 Feb 2021 • Zifeng Wang, Yifan Yang, Rui Wen, Xi Chen, Shao-Lun Huang, Yefeng Zheng

Current deep learning based disease diagnosis systems usually suffer from catastrophic forgetting, i.e., directly fine-tuning the disease diagnosis model on new tasks usually leads to an abrupt decay of performance on previous tasks.


no code implementations • 1 Jan 2021 • Guangfeng Yan, Shao-Lun Huang, Tian Lan, Linqi Song

This paper addresses this issue by proposing a novel dynamic quantized SGD (DQSGD) framework, which enables us to optimize the quantization strategy for each gradient descent step by exploring the trade-off between communication cost and modeling error.
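The communication/error trade-off behind this kind of dynamic quantization can be illustrated with a standard unbiased stochastic quantizer (a generic sketch, not the paper's DQSGD scheme): fewer bits mean fewer transmitted levels but a larger rounding error.

```python
import numpy as np

def stochastic_quantize(g, n_bits, rng):
    """Stochastically round g onto 2**n_bits uniform levels spanning [min, max].

    Unbiased: the dequantized value equals g in expectation, and the
    rounding error shrinks as n_bits grows.
    """
    levels = 2 ** n_bits - 1
    lo, hi = g.min(), g.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    x = (g - lo) / scale               # position measured in quantization levels
    floor = np.floor(x)
    prob_up = x - floor                # round up with this probability
    q = floor + (rng.random(g.shape) < prob_up)
    return q.astype(np.uint32), lo, scale

def dequantize(q, lo, scale):
    return lo + q * scale
```

Each coordinate is transmitted with n_bits instead of 32, and the worst-case reconstruction error is bounded by one quantization level (the scale), which makes the cost-versus-error trade-off explicit for a dynamic bit-allocation policy.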


Predicting Events in MOBA Games: Prediction, Attribution, and Evaluation

no code implementations • 17 Dec 2020 • Zelong Yang, Yan Wang, Piji Li, Shaobin Lin, Shuming Shi, Shao-Lun Huang, Wei Bi

The multiplayer online battle arena (MOBA) games have become increasingly popular in recent years.

Finding Influential Instances for Distantly Supervised Relation Extraction

no code implementations • COLING 2022 • Zifeng Wang, Rui Wen, Xi Chen, Shao-Lun Huang, Ningyu Zhang, Yefeng Zheng

Distant supervision (DS) is an effective way to expand datasets for enhancing relation extraction (RE) models, but it often suffers from high label noise.

Relation Relation Extraction

Online Disease Self-diagnosis with Inductive Heterogeneous Graph Convolutional Networks

no code implementations • 6 Sep 2020 • Zifeng Wang, Rui Wen, Xi Chen, Shilei Cao, Shao-Lun Huang, Buyue Qian, Yefeng Zheng

We propose a Healthcare Graph Convolutional Network (HealGCN) to offer disease self-diagnosis service for online users based on Electronic Healthcare Records (EHRs).

Graph Representation Learning Retrieval

Information Theoretic Counterfactual Learning from Missing-Not-At-Random Feedback

1 code implementation • NeurIPS 2020 • Zifeng Wang, Xi Chen, Rui Wen, Shao-Lun Huang, Ercan E. Kuruoglu, Yefeng Zheng

Counterfactual learning for dealing with missing-not-at-random data (MNAR) is an intriguing topic in the recommendation literature since MNAR data are ubiquitous in modern recommender systems.

counterfactual Recommendation Systems

Interpretable Real-Time Win Prediction for Honor of Kings, a Popular Mobile MOBA Esport

no code implementations • 14 Aug 2020 • Zelong Yang, Zhufeng Pan, Yan Wang, Deng Cai, Xiaojiang Liu, Shuming Shi, Shao-Lun Huang

With the rapid prevalence and explosive development of MOBA esports (Multiplayer Online Battle Arena electronic sports), much research effort has been devoted to automatically predicting game results (win predictions).


On the Fairness of Randomized Trials for Recommendation with Heterogeneous Demographics and Beyond

no code implementations • 25 Jan 2020 • Zifeng Wang, Xi Chen, Rui Wen, Shao-Lun Huang

Observed events in recommendation are consequences of the decisions made by a policy, and thus they are usually selectively labeled, namely the data are Missing Not At Random (MNAR), which often introduces large bias into the estimate of the true outcome risk.

counterfactual Fairness
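A classic way to correct this MNAR selection bias, which this line of work builds on, is inverse propensity scoring (IPS). The sketch below contrasts a naive observed-only average with an IPS-weighted one; it is a generic illustration of the debiasing idea, not the paper's fairness method:

```python
import numpy as np

def naive_estimate(outcomes, observed):
    """Average over observed entries only -- biased under MNAR."""
    return float(outcomes[observed].mean())

def ips_estimate(outcomes, observed, propensity):
    """Inverse-propensity-scored mean -- unbiased if propensities are correct.

    Each observed outcome is up-weighted by 1 / P(observed), compensating
    for the fact that some outcomes are more likely to be logged.
    """
    n = outcomes.size
    return float(np.sum(outcomes[observed] / propensity[observed]) / n)
```

In a simulation where items with higher outcomes are more likely to be observed, the naive average overestimates the population mean while the IPS estimate recovers it, which is exactly the selection-bias effect discussed in the abstract.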

Less Is Better: Unweighted Data Subsampling via Influence Function

1 code implementation • 3 Dec 2019 • Zifeng Wang, Hong Zhu, Zhenhua Dong, Xiuqiang He, Shao-Lun Huang

In the era of Big Data, training complex models on large-scale datasets is challenging, making it appealing to reduce the data volume by subsampling to save computation resources.

General Classification Image Classification +2

On Universal Features for High-Dimensional Learning and Inference

no code implementations • 20 Nov 2019 • Shao-Lun Huang, Anuran Makur, Gregory W. Wornell, Lizhong Zheng

We consider the problem of identifying universal low-dimensional features from high-dimensional data for inference tasks in settings involving learning.

Collaborative Filtering regression +1
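This universal-feature framework is closely tied to the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation, which for discrete variables can be computed from the joint pmf by a singular value decomposition. A minimal sketch:

```python
import numpy as np

def hgr_maximal_correlation(joint):
    """HGR maximal correlation of a discrete joint pmf P(x, y).

    Computed as the second-largest singular value of the canonical
    dependence matrix B[x, y] = P(x, y) / sqrt(P(x) P(y)); the top
    singular value is always 1 and corresponds to constant functions.
    """
    px = joint.sum(axis=1)  # marginal of X
    py = joint.sum(axis=0)  # marginal of Y
    B = joint / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(B, compute_uv=False)
    return float(s[1])
```

Independent variables give a value of 0, a deterministic one-to-one relation gives 1, and a noisy binary symmetric channel lands in between, matching the interpretation of the singular values as correlations of the optimal low-dimensional feature functions.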

An Information-theoretic Approach to Unsupervised Feature Selection for High-Dimensional Data

no code implementations • 8 Oct 2019 • Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng

In this paper, we propose an information-theoretic approach to design the functional representations to extract the hidden common structure shared by a set of random variables.

feature selection

An Information Theoretic Interpretation to Deep Neural Networks

no code implementations • 16 May 2019 • Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng, Gregory W. Wornell

It is commonly believed that the hidden layers of deep neural networks (DNNs) attempt to extract informative features for learning tasks.

feature selection

An Information-Theoretic Metric of Transferability for Task Transfer Learning

1 code implementation • ICLR 2019 • Yajie Bao, Yang Li, Shao-Lun Huang, Lin Zhang, Amir R. Zamir, Leonidas J. Guibas

An important question in task transfer learning is to determine task transferability, i.e., given a common input domain, estimating to what extent representations learned from a source task can help in learning a target task.

General Classification Scene Understanding +1
