no code implementations • 2 May 2025 • Hanping Zhang, Yuhong Guo
In this paper, we propose a novel Safe Skill Planning (SSkP) approach that enhances safe RL by exploiting auxiliary offline demonstration data.
no code implementations • 3 Apr 2025 • Hanping Zhang, Yuhong Guo
Safe Reinforcement Learning (Safe RL) aims to train an RL agent to maximize its performance in real-world environments while adhering to safety constraints, as exceeding safety violation limits can result in severe consequences.
no code implementations • 3 Apr 2025 • Hao Yan, Yuhong Guo
Domain generalization aims to develop learning algorithms on source training domains such that the learned model can generalize well to a different, unseen test domain.
no code implementations • 11 Mar 2025 • Abdullah Alchihabi, Hanping Zhang, Yuhong Guo
The action representation learning module extracts discriminative embeddings of actions from limited observations, while the policy learning module leverages the learned action representations, along with augmented synthetic action representations, to learn a policy capable of handling tasks with unseen actions.
no code implementations • 8 Mar 2025 • Hao Yan, Marzi Heidari, Yuhong Guo
To maintain a diverse and representative feature memory bank, we introduce an adversarial feature generation method that creates features extending beyond the training domain distribution.
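As a rough illustration of the adversarial feature generation idea, the sketch below perturbs features by gradient ascent on a classification loss so they drift away from the training distribution; the classifier, step size, and number of steps are hypothetical stand-ins, not the paper's actual components.

```python
# Hedged sketch: generic adversarial feature generation by gradient ascent,
# not the paper's exact formulation; `classifier`, `epsilon`, `steps` are placeholders.
import torch
import torch.nn.functional as F

def generate_adversarial_features(features, labels, classifier, epsilon=0.1, steps=3):
    """Push features away from the training distribution by maximizing
    the classification loss with a few gradient-ascent steps."""
    adv = features.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(classifier(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv + epsilon * grad.sign()).detach().requires_grad_(True)
    return adv.detach()

# Usage sketch with random stand-in features
classifier = torch.nn.Linear(128, 10)
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
adv_feats = generate_adversarial_features(feats, labels, classifier)
```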
no code implementations • 22 Feb 2025 • Marzi Heidari, Yuhong Guo
Single Domain Generalization (SDG) remains a formidable challenge in the field of machine learning, particularly when models are deployed in environments that differ significantly from their training domains.
no code implementations • CVPR 2025 • Marzi Heidari, Abdullah Alchihabi, Hao Yan, Yuhong Guo
In this work, we introduce a novel problem setup termed as Heterogeneous Semi-Supervised Learning (HSSL), which presents unique challenges by bridging the semi-supervised learning (SSL) task and the unsupervised domain adaptation (UDA) task, and expanding standard semi-supervised learning to cope with heterogeneous training data.
no code implementations • 31 Dec 2024 • Abdullah Alchihabi, Yuhong Guo
To effectively diffuse unfairness in the input data, we introduce additional adversary bias perturbations to the subgraphs during the forward diffusion process, and train score-based models to predict these applied perturbations, enabling them to learn the underlying dynamics of the biases present in the data.
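For intuition only, here is a minimal sketch of a DDPM-style forward noising step with an extra additive perturbation and a score network trained to predict the applied perturbation; the paper's subgraph-level adversary bias construction is not reproduced, and every name below is a placeholder.

```python
# Hedged sketch of a forward diffusion step with an extra perturbation term;
# the actual subgraph-level adversary bias in the paper is not reproduced here.
import torch

def forward_diffuse(x0, bias, alpha_bar_t):
    """Noise clean features x0 at step t and add an extra perturbation `bias`."""
    eps = torch.randn_like(x0)
    xt = alpha_bar_t.sqrt() * x0 + (1 - alpha_bar_t).sqrt() * eps + bias
    return xt, eps

score_model = torch.nn.Linear(16, 16)       # stand-in for a GNN score network
x0 = torch.randn(8, 16)                     # node features of a subgraph
bias = 0.05 * torch.randn_like(x0)          # stand-in adversary bias perturbation
alpha_bar_t = torch.tensor(0.6)

xt, eps = forward_diffuse(x0, bias, alpha_bar_t)
loss = ((score_model(xt) - (eps + bias)) ** 2).mean()  # predict the applied perturbation
loss.backward()
```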
no code implementations • 30 Dec 2024 • Abdullah Alchihabi, Hao Yan, Yuhong Guo
Class imbalance is pervasive in real-world graph datasets, where the majority of annotated nodes belong to a small set of classes (majority classes), leaving many other classes (minority classes) with only a handful of labeled nodes.
no code implementations • 9 Dec 2024 • Hanping Zhang, Yuhong Guo
SeRLA introduces a skill-level adversarial Positive-Unlabeled (PU) learning model to extract useful skill prior knowledge by enabling learning from both limited expert data and general low-cost demonstration data in the offline prior learning stage.
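For context, a standard non-negative PU (nnPU) risk estimator looks like the sketch below, treating expert data as positives and general demonstrations as unlabeled; SeRLA's skill-level adversarial formulation likely differs in detail, and the class prior `pi` is an assumed value.

```python
# Hedged sketch of the standard nnPU risk estimator, not SeRLA's exact model.
import torch
import torch.nn.functional as F

def nnpu_risk(scores_pos, scores_unl, pi=0.3):
    """scores_* are discriminator logits for positive (expert) and unlabeled
    (general demonstration) samples; higher means "expert-like"."""
    loss_pos = F.softplus(-scores_pos).mean()      # positives classified as positive
    loss_pos_neg = F.softplus(scores_pos).mean()   # positives classified as negative
    loss_unl_neg = F.softplus(scores_unl).mean()   # unlabeled classified as negative
    neg_risk = loss_unl_neg - pi * loss_pos_neg
    return pi * loss_pos + torch.clamp(neg_risk, min=0.0)

# Usage sketch with a stand-in discriminator
disc = torch.nn.Linear(32, 1)
risk = nnpu_risk(disc(torch.randn(64, 32)), disc(torch.randn(256, 32)))
risk.backward()
```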
1 code implementation • 1 Oct 2024 • Yi Xiong, Hao Wu, Changxu Shao, Ziqing Wang, Rui Zhang, Yuhong Guo, Junping Zhao, Ke Zhang, Zhenxuan Pan
The expanding context windows in large language models (LLMs) have greatly enhanced their capabilities in various applications, but they also introduce significant challenges in maintaining low latency, particularly in Time to First Token (TTFT).
1 code implementation • 22 Jul 2024 • Jiale Xu, Rui Zhang, Cong Guo, Weiming Hu, Zihan Liu, Feiyang Wu, Yu Feng, Shixuan Sun, Changxu Shao, Yuhong Guo, Junping Zhao, Ke Zhang, Minyi Guo, Jingwen Leng
This study introduces the vTensor, an innovative tensor structure for LLM inference based on GPU virtual memory management (VMM).
no code implementations • 2 May 2024 • Marzi Heidari, Hanping Zhang, Yuhong Guo
In recent years, semi-supervised learning (SSL) has gained significant attention due to its ability to leverage both labeled and unlabeled data to improve model performance, especially when labeled data is scarce.
no code implementations • 18 Apr 2024 • Qing En, Yuhong Guo
In this paper, we introduce a novel Cross-model Mutual learning framework for Exemplar-based Medical image Segmentation (CMEMS), which leverages two models to mutually excavate implicit information from unlabeled data at multiple granularities.
no code implementations • 17 Apr 2024 • Marzi Heidari, Hanping Zhang, Yuhong Guo
In this paper, we present a novel approach termed Prompt-Driven Feature Diffusion (PDFD) within a semi-supervised learning framework for Open World Semi-Supervised Learning (OW-SSL).
no code implementations • 17 Apr 2024 • Qing En, Yuhong Guo
It can learn statistical information and capture spatial correlations between image and text attributes in the embedding space, iteratively refining the mask to enhance segmentation.
no code implementations • 17 Apr 2024 • Hao Yan, Yuhong Guo
To address these two inherent challenges in supervised federated learning, we propose a novel lightweight unsupervised federated learning approach that leverages unlabeled data on each client to perform lightweight model training and communication by harnessing pretrained vision-language models, such as CLIP.
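As one hedged illustration of how a frozen vision-language model can label client data without supervision, the sketch below performs CLIP zero-shot pseudo-labeling (assuming the openai `clip` package is installed); the paper's actual lightweight training and communication scheme is not reproduced, and the class names and dummy image are placeholders.

```python
# Hedged sketch: zero-shot pseudo-labeling with a frozen CLIP model,
# not the paper's full federated procedure.
import numpy as np
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
class_names = ["cat", "dog", "car"]   # hypothetical client label space
text_tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

dummy = Image.fromarray((np.random.rand(224, 224, 3) * 255).astype("uint8"))  # stand-in image
image = preprocess(dummy).unsqueeze(0).to(device)
with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text_tokens)
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    pseudo_label = (image_feat @ text_feat.T).argmax(dim=-1)  # most similar prompt
```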
no code implementations • 6 Dec 2023 • Abdullah Alchihabi, Marzi Heidari, Yuhong Guo
Due to the availability of only a few labeled instances for the novel target prediction task and the significant domain shift between the well annotated source domain and the target domain, cross-domain few-shot learning (CDFSL) induces a very challenging adaptation problem.
no code implementations • 18 Sep 2023 • Abdullah Alchihabi, Qing En, Yuhong Guo
As a result, instead of using the dense adjacency matrix directly, ELR-GNN can learn a low-rank and sparse estimate of it in a simple, efficient, and easy-to-optimize manner.
no code implementations • 18 Sep 2023 • Abdullah Alchihabi, Yuhong Guo
In this work, we propose a novel mixup-based graph augmentation method, Graph Dual Mixup (GDM), that leverages both functional and structural information of the graph instances to generate new labeled graph samples.
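For reference, plain mixup on pooled graph-level embeddings looks like the sketch below; Graph Dual Mixup operates on the functional and structural information of the graphs themselves, which this generic sketch does not capture.

```python
# Hedged sketch: vanilla mixup on pooled graph embeddings and one-hot labels,
# not the paper's Graph Dual Mixup procedure.
import torch

def mixup(embeddings, one_hot_labels, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(embeddings.size(0))
    mixed_x = lam * embeddings + (1 - lam) * embeddings[perm]
    mixed_y = lam * one_hot_labels + (1 - lam) * one_hot_labels[perm]
    return mixed_x, mixed_y

# Usage sketch with hypothetical pooled graph representations
emb = torch.randn(16, 64)
y = torch.nn.functional.one_hot(torch.randint(0, 3, (16,)), 3).float()
mx, my = mixup(emb, y)
```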
no code implementations • 3 May 2023 • Xuejun Han, Yuhong Guo
In view of this, in this paper we tackle a challenging and practical continual learning scenario named few-shot class-incremental learning (FSCIL), in which labeled data are given for classes in a base session but very limited labeled instances are available for new incremental classes.
no code implementations • CVPR 2023 • Taoseef Ishtiak, Qing En, Yuhong Guo
Moreover, a new exemplar embedding contrastive module is designed to enhance the discriminative capability of the segmentation model by exploiting the contrastive exemplar-based guidance knowledge in the embedding space.
no code implementations • 17 Dec 2022 • Qing En, Yuhong Guo
The proposed method trains the base segmentation network by using a novel contrastive variance (CV) loss to exploit the unlabeled pixels and a partial cross-entropy loss on the labeled pixels.
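The partial cross-entropy term can be sketched as a standard cross-entropy that simply ignores unlabeled pixels, as below; the contrastive variance (CV) loss on the unlabeled pixels is the paper's contribution and is not reproduced here, and the unlabeled-pixel mask is hypothetical.

```python
# Hedged sketch of the partial cross-entropy term only.
import torch
import torch.nn.functional as F

logits = torch.randn(2, 4, 64, 64)        # (batch, classes, H, W)
labels = torch.randint(0, 4, (2, 64, 64))
labels[:, ::2, :] = -1                    # mark unlabeled pixels (hypothetical mask)

partial_ce = F.cross_entropy(logits, labels, ignore_index=-1)
```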
no code implementations • 15 Dec 2022 • Hao Yan, Yuhong Guo
We first split the unlabeled training set in the target domain into a pseudo-labeled confident subset and an unlabeled less-confident subset according to the prediction confidence scores from the pre-trained source model.
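A minimal sketch of this confidence-based split, assuming a frozen source model and an arbitrary threshold of 0.9 (the paper's actual criterion may differ):

```python
# Hedged sketch: split unlabeled target data by prediction confidence.
import torch
import torch.nn.functional as F

@torch.no_grad()
def split_by_confidence(source_model, target_inputs, threshold=0.9):
    probs = F.softmax(source_model(target_inputs), dim=-1)
    conf, pseudo_labels = probs.max(dim=-1)
    confident = conf >= threshold
    return (target_inputs[confident], pseudo_labels[confident],  # pseudo-labeled confident subset
            target_inputs[~confident])                           # less-confident subset

# Usage sketch with a stand-in source model
model = torch.nn.Linear(16, 5)
x_conf, y_pseudo, x_unconf = split_by_confidence(model, torch.randn(100, 16))
```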
no code implementations • 10 Sep 2022 • Hanping Zhang, Yuhong Guo
As safety violations can lead to severe consequences in real-world robotic applications, the increasing deployment of Reinforcement Learning (RL) in robotic domains has propelled the study of safe exploration for reinforcement learning (safe RL).
no code implementations • 3 Apr 2022 • Qing En, Yuhong Guo
Medical image annotation typically requires expert knowledge and hence incurs time-consuming and expensive data annotation costs.
no code implementations • 3 Dec 2021 • Xuejun Han, Yuhong Guo
To address this shortcoming, continual machine learners are designed to learn a stream of tasks with domain and class shifts among different tasks.
no code implementations • 29 Jun 2021 • Abdullah Alchihabi, Yuhong Guo
In this paper, we propose a novel Dual GNN learning framework to address this challenge task.
no code implementations • 29 Jun 2021 • Hanping Zhang, Yuhong Guo
In this work, we propose a novel policy-aware adversarial data augmentation method to augment the standard policy learning method with automatically generated trajectory data.
no code implementations • 7 Dec 2020 • Bingyu Liu, Yuhong Guo, Jieping Ye, Weihong Deng
Inspired by the effectiveness of pseudo-labels in domain adaptation, we propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
no code implementations • 3 Dec 2020 • Zhenpeng Li, Jianan Jiang, Yuhong Guo, Tiantian Tang, Chengxiang Zhuo, Jieping Ye
In the proposed model, we design a data imputation module to fill the missing feature values based on the partial observations in the target domain, while aligning the two domains via deep adversarial adaption.
no code implementations • 14 Nov 2020 • Zhen Zhao, Yuhong Guo, Jieping Ye
Recently the problem of cross-domain object detection has started drawing attention in the computer vision community.
1 code implementation • 8 Jun 2020 • Jianan Jiang, Zhenpeng Li, Yuhong Guo, Jieping Ye
The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) method [2] by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network.
1 code implementation • 8 Jun 2020 • Zhen Zhao, Bingyu Liu, Yuhong Guo, Jieping Ye
In this paper, we present our proposed ensemble model with batch spectral regularization and data blending mechanisms for the Track 2 problem of the cross-domain few-shot learning (CD-FSL) challenge.
no code implementations • 18 May 2020 • Bingyu Liu, Zhen Zhao, Zhenpeng Li, Jianan Jiang, Yuhong Guo, Jieping Ye
In this paper, we propose a feature transformation ensemble model with batch spectral regularization for the Cross-domain few-shot learning (CD-FSL) challenge.
no code implementations • 11 May 2020 • Yan Yan, Yuhong Guo
Partial label (PL) learning tackles the problem where each training instance is associated with a set of candidate labels that include both the true label and irrelevant noise labels.
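To make the setting concrete, a naive partial-label baseline simply takes the lowest-loss candidate label for each instance, as sketched below; this is a common baseline, not the specific method proposed in the paper.

```python
# Hedged sketch of a naive partial-label objective (min over candidate labels).
import torch
import torch.nn.functional as F

def min_candidate_loss(logits, candidate_mask):
    """candidate_mask[i, c] = 1 if class c is in instance i's candidate set."""
    log_probs = F.log_softmax(logits, dim=-1)
    losses = (-log_probs).masked_fill(candidate_mask == 0, float("inf"))
    # non-candidates get infinite loss so the min only ranges over candidates
    return losses.min(dim=-1).values.mean()

# Usage sketch with random candidate sets
logits = torch.randn(8, 5)
mask = (torch.rand(8, 5) > 0.5).float()
mask[torch.arange(8), torch.randint(0, 5, (8,))] = 1.0  # ensure non-empty candidate sets
loss = min_candidate_loss(logits, mask)
```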
no code implementations • 3 Apr 2020 • Kevin Hua, Yuhong Guo
Domain adaptation aims to exploit a label-rich source domain for learning classifiers in a different label-scarce target domain.
no code implementations • ECCV 2020 • Zhen Zhao, Yuhong Guo, Haifeng Shen, Jieping Ye
In this paper, we propose a novel end-to-end unsupervised deep domain adaptation model for adaptive object detection by exploiting multi-label object recognition as a dual auxiliary task.
no code implementations • 29 Mar 2020 • Zhenpeng Li, Zhen Zhao, Yuhong Guo, Haifeng Shen, Jieping Ye
However, in practice the labeled data can come from multiple source domains with different distributions.
1 code implementation • ICML 2020 • Vasileios Lioutas, Yuhong Guo
Some of these models use all the available sequence tokens to generate an attention distribution, which results in a time complexity of $O(n^2)$.
Ranked #12 on Machine Translation on WMT2014 English-French
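For reference, the quadratic cost noted above comes from materializing the full n-by-n score matrix in standard scaled dot-product attention, as in this minimal sketch:

```python
# Hedged sketch of standard full attention; the n-by-n score matrix is the
# source of the O(n^2) time and memory cost.
import math
import torch

def full_attention(q, k, v):
    # q, k, v: (n, d); scores is (n, n), hence quadratic in sequence length n
    scores = q @ k.T / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

n, d = 512, 64
out = full_attention(torch.randn(n, d), torch.randn(n, d), torch.randn(n, d))
```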
no code implementations • ICLR 2020 • Yan Yan, Yuhong Guo
Partial multi-label learning (PML), which tackles the problem of learning multi-label prediction models from instances with overcomplete noisy annotations, has recently started gaining attention from the research community.
no code implementations • 18 Sep 2019 • Yuan Wu, Yuhong Guo
In this paper we propose a novel dual adversarial co-learning approach for multi-domain text classification (MDTC).
no code implementations • 17 Sep 2019 • Yaser Alwattar, Yuhong Guo
In this paper, we propose a novel deep multi-level attention model to address inverse visual question answering.
no code implementations • 17 Sep 2019 • Xinyuan Lu, Yuhong Guo
Automatic question generation is an important problem in natural language processing.
no code implementations • 15 Sep 2019 • Yan Yan, Yuhong Guo
Partial multi-label learning (PML), which tackles the problem of learning multi-label prediction models from instances with overcomplete noisy annotations, has recently started gaining attention from the research community.
1 code implementation • 13 May 2019 • Zhengxia Zou, Keyan Chen, Zhenwei Shi, Yuhong Guo, Jieping Ye
Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years.
no code implementations • 7 Aug 2018 • Meng Ye, Yuhong Guo
The approach projects the label embedding vectors into a low-dimensional space to induce better inter-label relationships and explicitly facilitate information transfer from seen labels to unseen labels, while simultaneously learning a max-margin multi-label classifier with the projected label embeddings.
no code implementations • CVPR 2019 • Meng Ye, Yuhong Guo
The ensemble network is built by learning multiple image classification functions with a shared feature extraction network but different label embedding representations, which enhance the diversity of the classifiers and facilitate information transfer to unlabeled classes.
1 code implementation • 27 Apr 2018 • Kongming Liang, Yuhong Guo, Hong Chang, Xilin Chen
In this paper, we propose a novel framework, called Deep Structural Ranking, for visual relationship detection.
1 code implementation • 19 Apr 2018 • Meng Ye, Yuhong Guo
Despite the breakthroughs achieved by deep learning models in conventional supervised learning scenarios, their dependence on sufficient labeled training data in each class prevents effective applications of these deep models in situations where labeled training instances for a subset of novel classes are very sparse -- in the extreme case only one instance is available for each class.
no code implementations • CVPR 2017 • Meng Ye, Yuhong Guo
The proposed approach aims to identify a set of common high-level semantic components across the two domains via non-negative sparse matrix factorization, while enforcing the representation vectors of the images in this common component-based space to be discriminatively aligned with the attribute-based label representation vectors.
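As a rough stand-in for the component-discovery step, the sketch below runs plain multiplicative-update NMF; the paper's sparsity constraint and discriminative alignment with attribute-based label representations are omitted.

```python
# Hedged sketch: generic multiplicative-update NMF, not the paper's full model.
import numpy as np

def nmf(X, k, iters=200, eps=1e-9):
    """Factor non-negative X (m x n) into W (m x k) and H (k x n)."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage sketch on a random non-negative matrix
W, H = nmf(np.random.rand(100, 50), k=10)
```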
no code implementations • ICCV 2015 • Xin Li, Yuhong Guo, Dale Schuurmans
Most existing zero-shot learning methods require a user to first provide a set of semantic visual attributes for each class as side information before applying a two-step prediction procedure that introduces an intermediate attribute prediction problem.
no code implementations • NeurIPS 2013 • Min Xiao, Yuhong Guo
In this paper, we propose a two-step representation learning method to bridge the feature spaces of different languages by exploiting a set of parallel bilingual documents.
no code implementations • NeurIPS 2013 • Yuhong Guo
Our method is based on the assumption that useful information for the recovery of a corrupted data matrix can be gained from an uncorrupted related data matrix.
no code implementations • CVPR 2013 • Xin Li, Yuhong Guo
Recently, active learning has attracted a lot of attention in the computer vision field, as it is time-consuming and costly to prepare a good set of labeled images for vision data analysis.
no code implementations • NeurIPS 2010 • Yuhong Guo
Recently, batch-mode active learning has attracted a lot of attention.
no code implementations • NeurIPS 2008 • Yuhong Guo
Recently, supervised dimensionality reduction has been gaining attention, owing to the realization that data labels are often available and strongly suggest important underlying structures in the data.
no code implementations • NeurIPS 2007 • Yuhong Guo, Dale Schuurmans
Most previous studies in active learning have focused on selecting one unlabeled instance at a time and retraining in each iteration.