Search Results for author: Jindong Wang

Found 98 papers, 56 papers with code

Stratified Transfer Learning for Cross-domain Activity Recognition

no code implementations25 Dec 2017 Jindong Wang, Yiqiang Chen, Lisha Hu, Xiaohui Peng, Philip S. Yu

The proposed framework, referred to as Stratified Transfer Learning (STL), can dramatically improve the classification accuracy for cross-domain activity recognition.

Cross-Domain Activity Recognition General Classification +1

Cross-position Activity Recognition with Stratified Transfer Learning

no code implementations26 Jun 2018 Yiqiang Chen, Jindong Wang, Meiyu Huang, Han Yu

STL consists of two components: Stratified Domain Selection (STL-SDS), which selects the source domain most similar to the target domain, and Stratified Activity Transfer (STL-SAT), which performs accurate knowledge transfer.

Human Activity Recognition Position +1

Balanced Distribution Adaptation for Transfer Learning

no code implementations2 Jul 2018 Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, Zhiqi Shen

To tackle the distribution adaptation problem, in this paper we propose a novel transfer learning approach named Balanced Distribution Adaptation (BDA), which can adaptively leverage the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.

Transfer Learning
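
The BDA snippet above describes a single balance factor that trades off the marginal and conditional distribution discrepancies. A sketch of the balanced discrepancy in LaTeX, following the paper's description (the notation here is mine):

D(\mathcal{D}_s, \mathcal{D}_t) \approx (1-\mu)\, D\big(P(\mathbf{x}_s), P(\mathbf{x}_t)\big) + \mu \sum_{c=1}^{C} D\big(P(\mathbf{x}_s \mid y_s = c),\, P(\mathbf{x}_t \mid y_t = c)\big), \qquad \mu \in [0, 1].

Setting \mu = 0 leaves only the marginal term and \mu = 1 only the conditional term, which is how several existing methods arise as special cases of BDA.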

Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning

1 code implementation25 Mar 2019 Chaohui Yu, Jindong Wang, Yiqiang Chen, Zijing Wu

In this paper, we propose a unified Transfer Channel Pruning (TCP) approach for accelerating UDA models.

Transfer Learning Unsupervised Domain Adaptation

Easy Transfer Learning By Exploiting Intra-domain Structures

1 code implementation2 Apr 2019 Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang

In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.

Computational Efficiency Domain Adaptation +2

Transfer Learning with Dynamic Distribution Adaptation

1 code implementation17 Sep 2019 Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang

Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.

Domain Adaptation Image Classification +2

Transfer Learning with Dynamic Adversarial Adaptation Network

no code implementations18 Sep 2019 Chaohui Yu, Jindong Wang, Yiqiang Chen, Meiyu Huang

In this paper, we propose a novel Dynamic Adversarial Adaptation Network (DAAN) to dynamically learn domain-invariant representations while quantitatively evaluating the relative importance of global and local domain distributions.

Domain Adaptation Transfer Learning

Joint Partial Optimal Transport for Open Set Domain Adaptation

no code implementations11 Jul 2020 Renjun Xu, Pelen Liu, Yin Zhang, Fang Cai, Jindong Wang, Shuoying Liang, Heting Ying, Jianwei Yin

However, in the more general setting where the target domain contains classes that are never observed in the source domain, namely Open Set Domain Adaptation (OSDA), existing DA methods fail to work because of the interference of the extra unknown classes.

Domain Adaptation

Learning to Match Distributions for Domain Adaptation

1 code implementation17 Jul 2020 Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu

However, it remains challenging to determine which method is suitable for a given application, since each is built with certain priors or biases.

Domain Adaptation Inductive Bias

Learning Causal Semantic Representation for Out-of-Distribution Prediction

1 code implementation NeurIPS 2021 Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu

Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output.

Domain Adaptation

Boosting Adversarial Attacks on Neural Networks with Better Optimizer

no code implementations1 Dec 2020 Heng Yin, Hengwei Zhang, Jindong Wang, Ruiyu Dou

However, the success rate of adversarial attacks can be further improved in black-box environments.

Cross-domain Activity Recognition via Substructural Optimal Transport

1 code implementation29 Jan 2021 Wang Lu, Yiqiang Chen, Jindong Wang, Xin Qin

In this paper, we propose substructure-level matching for domain adaptation (SSDA) to better utilize the locality information of activity data for accurate and efficient knowledge transfer.

Clustering Cross-Domain Activity Recognition +3
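
The SSDA snippet describes matching at the substructure level rather than the whole-domain level. Below is a minimal Python sketch of that idea, assuming k-means as a stand-in substructure extractor and entropic optimal transport from the POT library; the function name, `k`, and `reg` are illustrative, not the paper's actual procedure:

import numpy as np
import ot  # POT: Python Optimal Transport
from sklearn.cluster import KMeans

def substructure_transport(Xs, Xt, k=5, reg=0.1):
    """Couple source/target substructures via entropic OT (illustrative)."""
    # Summarize each domain by k cluster centers ("substructures").
    cs = KMeans(n_clusters=k, n_init=10).fit(Xs).cluster_centers_
    ct = KMeans(n_clusters=k, n_init=10).fit(Xt).cluster_centers_
    # Uniform mass over substructures; squared-Euclidean transport cost.
    a = np.full(k, 1.0 / k)
    b = np.full(k, 1.0 / k)
    M = ot.dist(cs, ct)
    # Entropic-regularized OT coupling between substructures.
    G = ot.sinkhorn(a, b, M, reg)
    return G, cs, ct

Matching substructures instead of entire domains is what exploits the locality of activity data: each mode of the source distribution is transported to its closest counterpart in the target.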

Adversarial example generation with AdaBelief Optimizer and Crop Invariance

no code implementations7 Feb 2021 Bo Yang, Hengwei Zhang, Yuchen Zhang, Kaiyong Xu, Jindong Wang

ABI-FGM and CIM can be readily integrated to build a strong gradient-based attack to further boost the success rates of adversarial examples for black-box attacks.

MixSpeech: Data Augmentation for Low-resource Automatic Speech Recognition

no code implementations25 Feb 2021 Linghui Meng, Jin Xu, Xu Tan, Jindong Wang, Tao Qin, Bo Xu

In this paper, we propose MixSpeech, a simple yet effective data augmentation method based on mixup for automatic speech recognition (ASR).

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2
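
Since the snippet names mixup as the underlying augmentation, here is a minimal NumPy sketch of the mixing step, assuming equal-length (padded) feature sequences; all names and the Beta parameter are illustrative, not the paper's code:

import numpy as np

def mixspeech_batch(feat_a, feat_b, alpha=0.5):
    # Standard mixup: sample a mixing weight from Beta(alpha, alpha)
    # and linearly interpolate two acoustic feature sequences.
    lam = np.random.beta(alpha, alpha)
    mixed = lam * feat_a + (1.0 - lam) * feat_b
    return mixed, lam

# Training would then combine the recognition losses of both transcripts:
#   loss = lam * asr_loss(mixed, text_a) + (1 - lam) * asr_loss(mixed, text_b)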

Generalizing to Unseen Domains: A Survey on Domain Generalization

1 code implementation2 Mar 2021 Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu

Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.

Domain Generalization Out-of-Distribution Generalization +1

Learning Invariant Representations across Domains and Tasks

no code implementations3 Mar 2021 Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu

Since it is expensive and time-consuming to collect massive COVID-19 image samples to train deep classification models, transfer learning is a promising approach that transfers knowledge from abundant typical pneumonia datasets for COVID-19 image classification.

Domain Adaptation Image Classification +1

Exploiting Adapters for Cross-lingual Low-resource Speech Recognition

2 code implementations18 May 2021 Wenxin Hou, Han Zhu, Yidong Wang, Jindong Wang, Tao Qin, Renjun Xu, Takahiro Shinozaki

Based on our previous MetaAdapter, which implicitly leverages adapters, we propose a novel algorithm called SimAdapter for explicitly learning knowledge from adapters.

Cross-Lingual ASR General Knowledge +3

Deep Subdomain Adaptation Network for Image Classification

1 code implementation17 Jun 2021 Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu Chen, Jiang Bian, Hui Xiong, Qing He

The adaptation can be achieved easily with most feed-forward network models by extending them with LMMD loss, which can be trained efficiently via back-propagation.

Classification Domain Adaptation +4
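
The snippet says adaptation reduces to adding an LMMD loss to a feed-forward network. Below is a compact PyTorch sketch of an LMMD-style term, assuming a single Gaussian kernel for brevity (the official DSAN implementation uses a multi-bandwidth kernel and differs in details):

import torch
import torch.nn.functional as F

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between rows of x and rows of y.
    d = torch.cdist(x, y) ** 2
    return torch.exp(-d / (2.0 * sigma ** 2))

def lmmd(fs, ft, ys, pt, num_classes, sigma=1.0):
    # Local MMD: per-class weighted MMD between source features fs
    # (hard labels ys) and target features ft (soft predictions pt).
    ws = F.one_hot(ys, num_classes).float()
    ws = ws / ws.sum(0).clamp(min=1e-6)   # normalize weights per class
    wt = pt / pt.sum(0).clamp(min=1e-6)
    Kss = gaussian_kernel(fs, fs, sigma)
    Ktt = gaussian_kernel(ft, ft, sigma)
    Kst = gaussian_kernel(fs, ft, sigma)
    loss = fs.new_zeros(())
    for c in range(num_classes):
        a, b = ws[:, c:c + 1], wt[:, c:c + 1]
        loss = loss + (a.T @ Kss @ a + b.T @ Ktt @ b - 2.0 * a.T @ Kst @ b).squeeze()
    return loss / num_classes

# Total objective: classification loss plus the weighted subdomain term,
#   loss = F.cross_entropy(logits_s, ys) + lam * lmmd(fs, ft, ys, pt, C)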

Unsupervised Deep Anomaly Detection for Multi-Sensor Time-Series Signals

no code implementations27 Jul 2021 Yuxin Zhang, Yiqiang Chen, Jindong Wang, Zhiwen Pan

We empirically compare the proposed approach with several state-of-the-art anomaly detection methods on HAR and HC datasets.

Human Activity Recognition Time Series +2

AdaRNN: Adaptive Learning and Forecasting of Time Series

2 code implementations10 Aug 2021 Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, Chongjun Wang

This paper proposes Adaptive RNNs (AdaRNN) to tackle the TCS problem by building an adaptive model that generalizes well on the unseen test data.

Human Activity Recognition Time Series +1

Wav2vec-S: Semi-Supervised Pre-Training for Low-Resource ASR

no code implementations9 Oct 2021 Han Zhu, Li Wang, Jindong Wang, Gaofeng Cheng, Pengyuan Zhang, Yonghong Yan

In this work, in order to build a better pre-trained model for low-resource ASR, we propose a pre-training approach called wav2vec-S, which uses task-specific semi-supervised pre-training to refine the self-supervised pre-trained model for the ASR task, thus more effectively utilizing the capacity of the pre-trained model to generate task-specific representations for ASR.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling

2 code implementations NeurIPS 2021 BoWen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, Takahiro Shinozaki

However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select the unlabeled data that contribute to training, thus failing to consider the different learning statuses and learning difficulties of different classes.

Semi-Supervised Image Classification
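
The snippet contrasts FixMatch's single constant threshold with class-dependent learning difficulty; curriculum pseudo labeling addresses this with per-class thresholds. A minimal NumPy sketch under the linear-mapping variant (names are illustrative, and the paper's full procedure also handles warm-up and unused samples):

import numpy as np

def flexible_thresholds(confidences, preds, num_classes, tau=0.95):
    # Estimate per-class learning status: how many unlabeled samples of
    # each predicted class already pass the fixed threshold tau.
    sigma = np.array([np.sum((preds == c) & (confidences > tau))
                      for c in range(num_classes)], dtype=float)
    beta = sigma / max(sigma.max(), 1.0)  # normalized learning effect
    return beta * tau                      # lower thresholds for hard classes

# A sample is pseudo-labeled only if it beats its class's threshold:
#   thresholds = flexible_thresholds(confidences, preds, C)
#   mask = confidences > thresholds[preds]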

Margin Calibration for Long-Tailed Visual Recognition

1 code implementation14 Dec 2021 Yidong Wang, BoWen Zhang, Wenxin Hou, Zhen Wu, Jindong Wang, Takahiro Shinozaki

The long-tailed class distribution in visual recognition tasks poses great challenges for neural networks in handling the biased predictions between head and tail classes, i.e., the model tends to classify tail classes as head classes.

Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection

no code implementations3 Jan 2022 Yuxin Zhang, Jindong Wang, Yiqiang Chen, Han Yu, Tao Qin

In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability in unsupervised anomaly detection.

Self-Supervised Learning Sleep Stage Detection +3

Multi-Representation Adaptation Network for Cross-domain Image Classification

1 code implementation4 Jan 2022 Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Jingwu Chen, Zhiping Shi, Wenjuan Wu, Qing He

Based on this, we present Multi-Representation Adaptation Network (MRAN) to accomplish the cross-domain image classification task via multi-representation alignment which can capture the information from different aspects.

Classification Domain Adaptation +2

FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning

4 code implementations15 May 2022 Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, Bernt Schiele, Xing Xie

Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization.

Fairness Semi-Supervised Image Classification

Semantic-Discriminative Mixup for Generalizable Sensor-based Cross-domain Activity Recognition

no code implementations14 Jun 2022 Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Jialin Pan, Chunyu Hu, Xin Qin

Training on existing data often makes the model biased towards the distribution of the training data, thus the model might perform terribly on test data with different distributions.

Cross-Domain Activity Recognition Domain Adaptation +2

MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare

2 code implementations17 Jun 2022 Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie

Federated learning has attracted increasing attention to building models without accessing the raw user data, especially in healthcare.

Federated Learning Knowledge Distillation

Boosting Cross-Domain Speech Recognition with Self-Supervision

1 code implementation20 Jun 2022 Han Zhu, Gaofeng Cheng, Jindong Wang, Wenxin Hou, Pengyuan Zhang, Yonghong Yan

The cross-domain performance of automatic speech recognition (ASR) could be severely hampered due to the mismatch between training and testing distributions.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4

Memory-Guided Multi-View Multi-Domain Fake News Detection

1 code implementation26 Jun 2022 Yongchun Zhu, Qiang Sheng, Juan Cao, Qiong Nan, Kai Shu, Minghui Wu, Jindong Wang, Fuzhen Zhuang

In this paper, we propose a Memory-guided Multi-view Multi-domain Fake News Detection Framework (M³FEND) to address these two challenges.

Fake News Detection

Domain Generalization for Activity Recognition via Adaptive Feature Fusion

1 code implementation21 Jul 2022 Xin Qin, Jindong Wang, Yiqiang Chen, Wang Lu, Xinlong Jiang

To this end, we propose Adaptive Feature Fusion for Activity Recognition (AFFAR), a domain generalization approach that learns to fuse the domain-invariant and domain-specific representations to improve the model's generalization performance.

Domain Generalization Human Activity Recognition
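
The AFFAR snippet describes fusing domain-invariant and domain-specific representations. One plausible PyTorch reading of such a fusion head is sketched below; the gating structure and layer shapes are my assumptions, not the paper's architecture:

import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    # Combine a shared (domain-invariant) feature with per-domain
    # (domain-specific) features via learned softmax weights.
    def __init__(self, dim, num_domains):
        super().__init__()
        self.gate = nn.Linear(dim, num_domains)

    def forward(self, shared_feat, domain_feats):
        # shared_feat: (batch, dim); domain_feats: (batch, num_domains, dim)
        w = torch.softmax(self.gate(shared_feat), dim=-1)       # (batch, D)
        specific = (w.unsqueeze(-1) * domain_feats).sum(dim=1)  # (batch, dim)
        return shared_feat + specific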

Domain-invariant Feature Exploration for Domain Generalization

1 code implementation25 Jul 2022 Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie

Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., the property within a domain, which is agnostic to other domains.

Domain Generalization Knowledge Distillation +2

Equivariant Disentangled Transformation for Domain Generalization under Combination Shift

no code implementations3 Aug 2022 Yivan Zhang, Jindong Wang, Xing Xie, Masashi Sugiyama

To formally analyze this issue, we provide a unique algebraic formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.

Disentanglement Domain Generalization

Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets

no code implementations15 Aug 2022 Hao Chen, Ran Tao, Han Zhang, Yidong Wang, Xiang Li, Wei Ye, Jindong Wang, Guosheng Hu, Marios Savvides

Beyond classification, Conv-Adapter can generalize to detection and segmentation tasks with more than 50% reduction of parameters but comparable performance to the traditional full fine-tuning.

Transfer Learning

Domain-Specific Risk Minimization for Out-of-Distribution Generalization

1 code implementation18 Aug 2022 Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, DaCheng Tao, Xing Xie

Our bound motivates two strategies to reduce the gap: the first is ensembling multiple classifiers to enrich the hypothesis space; the second is proposing effective gap estimation methods to guide the selection of a better hypothesis for the target.

Domain Generalization Out-of-Distribution Generalization

Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution

no code implementations1 Sep 2022 Wang Lu, Jindong Wang, Yidong Wang, Xing Xie

For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction, and we optimize with Pareto optimization.

Domain Generalization Model Optimization +2

FIXED: Frustratingly Easy Domain Generalization with Mixup

1 code implementation7 Nov 2022 Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie

Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.

Domain Generalization Image Classification +2

GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective

1 code implementation15 Nov 2022 Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang

Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.

Natural Language Understanding Out-of-Distribution Generalization

An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning

no code implementations20 Nov 2022 Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, Bhiksha Raj

While standard SSL assumes uniform data distribution, we consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.

Pseudo Label

SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning

4 code implementations26 Jan 2023 Hao Chen, Ran Tao, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Bhiksha Raj, Marios Savvides

The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance.

imbalanced classification

On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective

1 code implementation22 Feb 2023 Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie

In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.

Adversarial Robustness Chatbot +1

FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning

1 code implementation27 Feb 2023 Wang Lu, Xixu Hu, Jindong Wang, Xing Xie

Concretely, we design an attention-based adapter for the large model, CLIP, and the remaining operations merely depend on adapters.

Federated Learning Privacy Preserving

Exploring Vision-Language Models for Imbalanced Learning

1 code implementation4 Apr 2023 Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, Shikun Zhang

However, their performance on imbalanced datasets is relatively poor: the distribution of classes in the training dataset is skewed, leading to poor performance in predicting minority classes.

Zero-Shot Learning

Out-of-Distribution Generalization in Text Classification: Past, Present, and Future

no code implementations23 May 2023 Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang

Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution.

Out-of-Distribution Generalization text-classification +1

Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup

no code implementations26 May 2023 Damien Teney, Jindong Wang, Ehsan Abbasnejad

We have found a new equivalence between two successful methods: selective mixup and resampling.

Binary Classification

PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization

2 code implementations8 Jun 2023 Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang

To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences.

Language Modelling Large Language Model

A Survey on Evaluation of Large Language Models

1 code implementation6 Jul 2023 Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie

Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.

Ethics

Large Language Models Understand and Can be Enhanced by Emotional Stimuli

no code implementations14 Jul 2023 Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie

In addition to those deterministic tasks that can be automatically evaluated using existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks using both vanilla and emotional prompts.

Emotional Intelligence Informativeness
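
To make "emotional stimuli" concrete, here is an illustrative Python example of the prompting pattern; the stimulus sentence follows the style reported in the paper, while the task prompt itself is invented:

base_prompt = "Determine whether the following movie review is positive or negative."
stimulus = "This is very important to my career."  # emotional stimulus
emotion_prompt = base_prompt + " " + stimulus

The paper's finding is that appending such stimuli tends to improve LLM performance on both deterministic and generative tasks.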

Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning

1 code implementation ICCV 2023 Kaijie Zhu, Jindong Wang, Xixu Hu, Xing Xie, Ge Yang

The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module.

Adversarial Robustness

From Instructions to Intrinsic Human Values -- A Survey of Alignment Goals for Big Models

no code implementations23 Aug 2023 Jing Yao, Xiaoyuan Yi, Xiting Wang, Jindong Wang, Xing Xie

Big models, exemplified by Large Language Models (LLMs), are models typically pre-trained on massive data and comprising enormous numbers of parameters, which not only obtain significantly improved performance across diverse tasks but also present emergent capabilities absent in smaller models.

DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks

1 code implementation29 Sep 2023 Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie

Moreover, DyVal-generated samples are not only evaluation sets, but also helpful data for fine-tuning to improve the performance of LLMs on existing benchmarks.

Logical Reasoning

Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks

no code implementations29 Sep 2023 Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj

This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.

ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning

1 code implementation8 Oct 2023 Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji

When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.

Personalized Federated Learning

A Survey of Heterogeneous Transfer Learning

1 code implementation12 Oct 2023 Runxue Bao, Yiming Sun, Yuhe Gao, Jindong Wang, Qiang Yang, Haifeng Chen, Zhi-Hong Mao, Ye Ye

These methods typically presuppose identical feature spaces and label spaces in both domains, known as homogeneous transfer learning, which, however, is not always a practical assumption.

Transfer Learning

CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents

no code implementations26 Oct 2023 Qinlin Zhao, Jindong Wang, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, Xing Xie

Large language models (LLMs) have been widely used as agents to complete different tasks, such as personal assistance or event planning.

Language Modelling Large Language Model

Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition

1 code implementation28 Oct 2023 Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, Hongxin Wei

However, the high computational cost of optimization-based TTA algorithms makes it intractable to run on resource-constrained edge devices.

Computational Efficiency Human Activity Recognition +2

Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models

1 code implementation NeurIPS 2023 Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation.

Data Augmentation Domain Generalization +2
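
The snippet names a combination of knowledge distillation and data augmentation. Below is a generic PyTorch sketch of the distillation term applied to augmented views, under standard soft-label KD assumptions (the paper's exact objective may differ):

import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Soft-label distillation: KL divergence between temperature-scaled
    # teacher and student distributions, scaled by T^2 as usual.
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

# With augmentation, both models see the same augmented view:
#   x_aug = augment(x)
#   loss = distill_loss(student(x_aug), teacher(x_aug).detach())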

Enhancing Few-shot CLIP with Semantic-Aware Fine-Tuning

no code implementations8 Nov 2023 Yao Zhu, Yuefeng Chen, Wei Wang, Xiaofeng Mao, Xiu Yan, Yue Wang, Zhigang Li, Wang Lu, Jindong Wang, Xiangyang Ji

Hence, we propose fine-tuning the parameters of the attention pooling layer during the training process to encourage the model to focus on task-specific semantics.

How Well Does GPT-4V(ision) Adapt to Distribution Shifts? A Preliminary Investigation

1 code implementation12 Dec 2023 Zhongyi Han, Guanglin Zhou, Rundong He, Jindong Wang, Tailin Wu, Yilong Yin, Salman Khan, Lina Yao, Tongliang Liu, Kun Zhang

We further investigate its adaptability to controlled data perturbations and examine the efficacy of in-context learning as a tool to enhance its adaptation.

Anomaly Detection Autonomous Driving +6

PromptBench: A Unified Library for Evaluation of Large Language Models

1 code implementation13 Dec 2023 Kaijie Zhu, Qinlin Zhao, Hao Chen, Jindong Wang, Xing Xie

The evaluation of large language models (LLMs) is crucial to assess their performance and mitigate potential security risks.

Prompt Engineering

The Good, The Bad, and Why: Unveiling Emotions in Generative AI

no code implementations18 Dec 2023 Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie

Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.

Logical Reasoning

SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization

no code implementations2 Jan 2024 Xixu Hu, Runkai Zheng, Jindong Wang, Cheuk Hang Leung, Qi Wu, Xing Xie

In this study, we address this gap by introducing SpecFormer, specifically designed to enhance ViTs' resilience against adversarial attacks, with support from carefully derived theoretical guarantees.

Computational Efficiency

Large Language Model Evaluation via Matrix Entropy

1 code implementation30 Jan 2024 Lai Wei, Zhiquan Tan, Chenghai Li, Jindong Wang, Weiran Huang

Large language models (LLMs) have revolutionized the field of natural language processing, extending their strong capabilities into multi-modal domains.

Data Compression Language Modelling +1

On Catastrophic Inheritance of Large Foundation Models

no code implementations2 Feb 2024 Hao Chen, Bhiksha Raj, Xing Xie, Jindong Wang

Large foundation models (LFMs) claim incredible performance.

A General Framework for Learning from Weak Supervision

1 code implementation2 Feb 2024 Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj

Weakly supervised learning generally faces challenges in applicability to various scenarios with diverse weak supervision, and in scalability due to the complexity of existing algorithms, thereby hindering practical deployment.

Weakly-supervised Learning

Position Paper: What Can Large Language Models Tell Us about Time Series Analysis

3 code implementations5 Feb 2024 Ming Jin, Yifan Zhang, Wei Chen, Kexin Zhang, Yuxuan Liang, Bin Yang, Jindong Wang, Shirui Pan, Qingsong Wen

Time series analysis is essential for comprehending the complexities inherent in various real-world systems and applications.

Decision Making Position +3

Open-Vocabulary Calibration for Vision-Language Models

no code implementations7 Feb 2024 Shuoyuan Wang, Jindong Wang, Guoqing Wang, Bob Zhang, Kaiyang Zhou, Hongxin Wei

Vision-language models (VLMs) have emerged as formidable tools, showing their strong capability in handling various open-vocabulary tasks in image recognition, text-driven visual content generation, and visual chatbots, to name a few.

DyVal 2: Dynamic Evaluation of Large Language Models by Meta Probing Agents

no code implementations21 Feb 2024 Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, Xing Xie

Our multifaceted analysis demonstrated the strong correlation between the basic abilities and an implicit Matthew effect on model size, i.e., larger models possess stronger correlations of the abilities.

Data Augmentation

MM-Soc: Benchmarking Multimodal Large Language Models in Social Media Platforms

no code implementations21 Feb 2024 Yiqiao Jin, MinJe Choi, Gaurav Verma, Jindong Wang, Srijan Kumar

Social media platforms are hubs for multimodal information exchange, encompassing text, images, and videos, making it challenging for machines to comprehend the information or emotions associated with interactions in online spaces.

Benchmarking Hate Speech Detection +1

KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models

2 code implementations23 Feb 2024 Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Wei Ye, Jindong Wang, Xing Xie, Yue Zhang, Shikun Zhang

Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness.

Adversarial example soups: averaging multiple adversarial examples improves transferability without increasing additional generation time

no code implementations27 Feb 2024 Bo Yang, Hengwei Zhang, Chenwei Li, Jindong Wang

For transfer-based attacks, the adversarial examples are crafted on the surrogate model, which can be implemented to mislead the target model effectively.

ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models

1 code implementation8 Mar 2024 Jio Oh, Soyeon Kim, Junseok Seo, Jindong Wang, Ruochen Xu, Xing Xie, Steven Euijong Whang

Our key idea is to construct questions using the database schema, records, and functional dependencies such that they can be automatically verified.

Hallucination Prompt Engineering
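
The snippet explains that questions are built from database records and functional dependencies so that answers can be checked mechanically. A toy Python illustration of that recipe follows; the record and checker are invented for exposition, not taken from the benchmark:

# Functional dependency title -> director pins down a single gold answer.
record = {"title": "Inception", "director": "Christopher Nolan", "year": 2010}

question = f"Who directed the movie '{record['title']}'?"
gold = record["director"]

def verify(llm_answer: str) -> bool:
    # Verification reduces to checking the answer against the record.
    return gold.lower() in llm_answer.lower()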

Learning with Noisy Foundation Models

no code implementations11 Mar 2024 Hao Chen, Jindong Wang, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj

Foundation models are usually pre-trained on large-scale datasets and then adapted to downstream tasks through tuning.

Detoxifying Large Language Models via Knowledge Editing

1 code implementation21 Mar 2024 Mengru Wang, Ningyu Zhang, Ziwen Xu, Zekun Xi, Shumin Deng, Yunzhi Yao, Qishen Zhang, Linyi Yang, Jindong Wang, Huajun Chen

This paper investigates using knowledge editing techniques to detoxify Large Language Models (LLMs).

knowledge editing

FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models

2 code implementations9 Apr 2024 Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang

The rapid development of large language model (LLM) evaluation methodologies and datasets has led to a profound challenge: integrating state-of-the-art evaluation techniques cost-effectively while ensuring reliability, reproducibility, and efficiency.

Fairness Language Modelling +1
