Search Results for author: Long-Kai Huang

Found 14 papers, 6 papers with code

Communication-Efficient Distributed PCA by Riemannian Optimization

no code implementations ICML 2020 Long-Kai Huang, Jialin Pan

In this paper, we study the leading eigenvector problem in a statistically distributed setting and propose a communication-efficient algorithm based on Riemannian optimization, which trades local computation for global communication.

Riemannian optimization
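The core subproblem in this paper is computing the leading eigenvector of a (distributed) covariance matrix. The sketch below shows plain power iteration for that subproblem in pure Python; it is an illustrative baseline only, not the communication-efficient Riemannian algorithm the authors propose, and the matrix `A` and all function names are hypothetical.

```python
# Power iteration for the leading eigenvector of a symmetric matrix.
# Illustrative sketch of the underlying eigenvector problem only --
# NOT the paper's communication-efficient Riemannian method.

def matvec(A, v):
    # Multiply matrix A (list of rows) by vector v.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    # Scale v to unit Euclidean norm.
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def leading_eigenvector(A, iters=100):
    # Repeatedly apply A and renormalize; converges to the
    # eigenvector of the largest-magnitude eigenvalue.
    v = normalize([1.0] * len(A))
    for _ in range(iters):
        v = normalize(matvec(A, v))
    return v

# Symmetric 2x2 example with leading eigenvalue 3 and
# leading eigenvector proportional to [1, 1].
A = [[2.0, 1.0], [1.0, 2.0]]
v = leading_eigenvector(A)
```

In a distributed setting each machine holds only part of the data, so the interesting question (which the paper addresses on the Riemannian manifold) is how to run such an iteration with few communication rounds.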

Invariant Test-Time Adaptation for Vision-Language Model Generalization

1 code implementation 1 Mar 2024 Huan Ma, Yan Zhu, Changqing Zhang, Peilin Zhao, Baoyuan Wu, Long-Kai Huang, Qinghua Hu, Bingzhe Wu

Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired datasets.

Fine-Grained Image Classification Language Modelling +1

Concept-wise Fine-tuning Matters in Preventing Negative Transfer

no code implementations ICCV 2023 Yunqiao Yang, Long-Kai Huang, Ying Wei

A multitude of prevalent pre-trained models mark a major milestone in the development of artificial intelligence, while fine-tuning has been a common practice that enables pre-trained models to perform well on a wide array of target datasets.

Improving Generalizability of Graph Anomaly Detection Models via Data Augmentation

1 code implementation 18 Jun 2023 Shuang Zhou, Xiao Huang, Ninghao Liu, Huachi Zhou, Fu-Lai Chung, Long-Kai Huang

In this paper, building on this phenomenon, we propose the general and novel research problem of generalized graph anomaly detection, which aims to effectively identify anomalies on both the training-domain graph and unseen testing graphs to eliminate potential dangers.

Data Augmentation Graph Anomaly Detection

Can Pre-trained Models Really Learn Better Molecular Representations for AI-aided Drug Discovery?

no code implementations21 Aug 2022 Ziqiao Zhang, Yatao Bian, Ailin Xie, Pengju Han, Long-Kai Huang, Shuigeng Zhou

Self-supervised pre-training is gaining increasing popularity in AI-aided drug discovery, leading to more and more pre-trained models that promise to extract better feature representations for molecules.

Drug Discovery

Learning to generate imaginary tasks for improving generalization in meta-learning

no code implementations 9 Jun 2022 Yichen Wu, Long-Kai Huang, Ying Wei

The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers meta-testing tasks.

Image Classification Memorization +2

Fine-Tuning Graph Neural Networks via Graph Topology induced Optimal Transport

1 code implementation 20 Mar 2022 Jiying Zhang, Xi Xiao, Long-Kai Huang, Yu Rong, Yatao Bian

In this paper, we present a novel optimal transport-based fine-tuning framework called GTOT-Tuning, namely, Graph Topology induced Optimal Transport fine-Tuning, for GNN style backbones.

Graph Classification Graph Learning +2

Functionally Regionalized Knowledge Transfer for Low-resource Drug Discovery

no code implementations NeurIPS 2021 Huaxiu Yao, Ying Wei, Long-Kai Huang, Ding Xue, Junzhou Huang, Zhenhui (Jessie) Li

More recently, there has been a surge of interest in employing machine learning approaches to expedite the drug discovery process where virtual screening for hit discovery and ADMET prediction for lead optimization play essential roles.

Drug Discovery Meta-Learning +1

Frustratingly Easy Transferability Estimation

no code implementations 17 Jun 2021 Long-Kai Huang, Ying Wei, Yu Rong, Qiang Yang, Junzhou Huang

Transferability estimation has been an essential tool for selecting a pre-trained model, and the layers within it, for transfer learning, so as to maximize performance on a target task and prevent negative transfer.

Mutual Information Estimation Transfer Learning

Improving Generalization in Meta-learning via Task Augmentation

1 code implementation 26 Jul 2020 Huaxiu Yao, Long-Kai Huang, Linjun Zhang, Ying Wei, Li Tian, James Zou, Junzhou Huang, Zhenhui Li

Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.

Meta-Learning
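The abstract excerpt above names MetaMix, a mixup-style task-augmentation method. The following is a generic mixup interpolation sketch in that spirit; it is not the paper's exact procedure, and all names are hypothetical (real mixup typically draws the coefficient from a Beta distribution rather than a uniform one).

```python
import random

# Generic mixup-style interpolation of two labeled examples, in the
# spirit of task-augmentation methods like MetaMix. Illustrative
# sketch only; not the paper's exact procedure.

def mixup(x1, x2, y1, y2):
    # Real mixup draws lam from Beta(alpha, alpha); a uniform draw
    # is used here for simplicity.
    lam = random.random()
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

random.seed(0)
# Mix two one-hot examples; features and labels interpolate together.
x, y = mixup([1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0])
```

In meta-learning, such interpolated examples densify the task distribution so that meta-training tasks better cover meta-testing tasks.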

Online Hashing

no code implementations6 Apr 2017 Long-Kai Huang, Qiang Yang, Wei-Shi Zheng

Specifically, a new loss function is proposed to measure the similarity loss between a pair of data samples in Hamming space.
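Since the loss above is measured in Hamming space, a minimal sketch of the Hamming distance between two binary hash codes may clarify the setting; the actual online hashing loss in the paper is more involved, and the names here are hypothetical.

```python
# Hamming distance between two binary hash codes: the number of bit
# positions at which they differ. Minimal sketch of the metric only;
# the paper's similarity loss built on it is more involved.

def hamming_distance(code1, code2):
    # codes are equal-length sequences of 0/1 bits
    return sum(b1 != b2 for b1, b2 in zip(code1, code2))

d = hamming_distance([1, 0, 1, 1], [1, 1, 0, 1])  # differs in 2 positions
```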
