Search Results for author: XiaoLi Li

Found 60 papers, 29 papers with code

Frame Semantic-Enhanced Sentence Modeling for Sentence-level Extractive Text Summarization

no code implementations EMNLP 2021 Yong Guan, Shaoru Guo, Ru Li, XiaoLi Li, Hongye Tan

In this paper, we propose a novel Frame Semantic-Enhanced Sentence Modeling for Extractive Summarization, which leverages Frame semantics to model sentences at both the intra-sentence and inter-sentence levels, facilitating the text summarization task.

Extractive Summarization Extractive Text Summarization +1

TSLANet: Rethinking Transformers for Time Series Representation Learning

1 code implementation 12 Apr 2024 Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, XiaoLi Li

Time series data, characterized by its intrinsic long and short-range dependencies, poses a unique challenge across analytical applications.

Anomaly Detection Computational Efficiency +4

An Experiment with the Use of ChatGPT for LCSH Subject Assignment on Electronic Theses and Dissertations

no code implementations 25 Mar 2024 Eric H. C. Chow, TJ Kao, XiaoLi Li

This study delves into the potential use of Large Language Models (LLMs) for generating Library of Congress Subject Headings (LCSH).

Specificity

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond

1 code implementation 21 Mar 2024 Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, XiaoLi Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across various domains.

K-Link: Knowledge-Link Graph from LLMs for Enhanced Representation Learning in Multivariate Time-Series Data

no code implementations 6 Mar 2024 Yucheng Wang, Ruibing Jin, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

To capture these dependencies, Graph Neural Networks (GNNs) have emerged as powerful tools, yet their effectiveness is restricted by the quality of graph construction from MTS data.

General Knowledge graph construction +2

Generative Semi-supervised Graph Anomaly Detection

1 code implementation 19 Feb 2024 Hezhe Qiao, Qingsong Wen, XiaoLi Li, Ee-Peng Lim, Guansong Pang

This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the unsupervised setting in most GAD studies with a fully unlabeled graph.

Graph Anomaly Detection One-class classifier

Self-evolving Autoencoder Embedded Q-Network

no code implementations 18 Feb 2024 J. Senthilnath, Bangjian Zhou, Zhen Wei Ng, Deeksha Aggarwal, Rajdeep Dutta, Ji Wei Yoon, Aye Phyu Phyu Aung, Keyu Wu, Min Wu, XiaoLi Li

During the evolution of the autoencoder architecture, a bias-variance regulatory strategy is employed to elicit the optimal response from the RL agent.

Decision Making Reinforcement Learning (RL)

Evolving Restricted Boltzmann Machine-Kohonen Network for Online Clustering

no code implementations 14 Feb 2024 J. Senthilnath, Adithya Bhattiprolu, Ankur Singh, Bangjian Zhou, Min Wu, Jón Atli Benediktsson, XiaoLi Li

A novel online clustering algorithm is presented where an Evolving Restricted Boltzmann Machine (ERBM) is embedded with a Kohonen Network called ERBM-KNet.

Clustering Online Clustering

CompeteSMoE - Effective Training of Sparse Mixture of Experts via Competition

no code implementations 4 Feb 2024 Quang Pham, Giang Do, Huy Nguyen, TrungTin Nguyen, Chenghao Liu, Mina Sartipi, Binh T. Nguyen, Savitha Ramasamy, XiaoLi Li, Steven Hoi, Nhat Ho

Sparse mixture of experts (SMoE) offers an appealing solution to scale up model complexity beyond the means of increasing the network's depth or width.

A Change Point Detection Integrated Remaining Useful Life Estimation Model under Variable Operating Conditions

no code implementations 9 Jan 2024 Anushiya Arunan, Yan Qin, XiaoLi Li, Chau Yuen

During online monitoring, the temporal correlation dynamics of a query device is monitored for breach of the control limit derived in offline training.

Change Point Detection

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts

1 code implementation 12 Dec 2023 Giang Do, Khiem Le, Quang Pham, TrungTin Nguyen, Thanh-Nam Doan, Binh T. Nguyen, Chenghao Liu, Savitha Ramasamy, XiaoLi Li, Steven Hoi

By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models.
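The routing step described above can be illustrated with a generic top-k gate (a sketch of standard sparse-MoE routing, not HyperRouter's specific router; the function name and shapes are assumptions):

```python
import numpy as np

def top_k_route(token_logits, k=2):
    """Route each token to its k highest-scoring experts and
    softmax-renormalize the gate weights over just those experts.
    token_logits: (n_tokens, n_experts) router scores."""
    idx = np.argsort(token_logits, axis=-1)[:, -k:]         # chosen expert indices
    gates = np.take_along_axis(token_logits, idx, axis=-1)  # their raw scores
    gates = np.exp(gates) / np.exp(gates).sum(-1, keepdims=True)
    return idx, gates
```

Each token's activations would then be sent only to the `k` selected experts, which is what makes training and inference sparse.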

Decomposed Prompt Tuning via Low-Rank Reparameterization

1 code implementation 16 Oct 2023 Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, XiaoLi Li

While prompt tuning approaches have achieved competitive performance with high efficiency, we observe that they invariably employ the same initialization process, wherein the soft prompt is either randomly initialized or derived from an existing embedding vocabulary.
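The low-rank reparameterization named in the title can be sketched as factoring the soft prompt into a product of two small matrices, so far fewer parameters are trained than a full prompt matrix would need (the sizes, rank, and Gaussian initialization here are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def low_rank_prompt(prompt_len=20, dim=768, rank=4, seed=0):
    """Reparameterize a (prompt_len, dim) soft prompt as A @ B, so only
    prompt_len*rank + rank*dim parameters are trained instead of
    prompt_len*dim."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.02, size=(prompt_len, rank))
    B = rng.normal(scale=0.02, size=(rank, dim))
    return A @ B  # full-size prompt, but rank-limited by construction
```

With these example sizes the factorization trains 20*4 + 4*768 = 3,152 parameters instead of 20*768 = 15,360.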

Large Models for Time Series and Spatio-Temporal Data: A Survey and Outlook

5 code implementations 16 Oct 2023 Ming Jin, Qingsong Wen, Yuxuan Liang, Chaoli Zhang, Siqiao Xue, Xue Wang, James Zhang, Yi Wang, Haifeng Chen, XiaoLi Li, Shirui Pan, Vincent S. Tseng, Yu Zheng, Lei Chen, Hui Xiong

In this survey, we offer a comprehensive and up-to-date review of large models tailored (or adapted) for time series and spatio-temporal data, spanning four key facets: data types, model categories, model scopes, and application areas/tasks.

Time Series Time Series Analysis

Graph-Aware Contrasting for Multivariate Time-Series Classification

1 code implementation 11 Sep 2023 Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

As MTS data typically originate from multiple sensors, ensuring spatial consistency becomes essential for the overall performance of contrastive learning on MTS data.

Classification Contrastive Learning +3

Fully-Connected Spatial-Temporal Graph for Multivariate Time-Series Data

1 code implementation 11 Sep 2023 Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

For graph construction, we design a decay graph to connect sensors across all timestamps based on their temporal distances, enabling us to fully model the ST dependencies by considering the correlations between DEDT.

graph construction Time Series
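The decay graph described above, which connects sensor nodes across all timestamps with weights that shrink as temporal distance grows, could be sketched as follows (the exponential form and its rate are assumptions; the paper's exact decay function is not given in this snippet):

```python
import numpy as np

def decay_adjacency(n_sensors, n_steps, decay=0.5):
    """Fully-connected adjacency over (sensor, timestamp) nodes where
    the edge weight decays exponentially with temporal distance."""
    # timestamp assigned to each of the n_sensors * n_steps nodes
    t = np.repeat(np.arange(n_steps), n_sensors)
    dist = np.abs(t[:, None] - t[None, :])  # pairwise temporal distance
    return np.exp(-decay * dist)            # weight 1 at distance 0, decaying after
```

Nodes at the same timestamp keep weight 1, so spatial correlations are preserved while distant timestamps contribute progressively less.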

Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks

2 code implementations ICCV 2023 Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min Wu, XiaoLi Li, Weisi Lin

On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively.

Combinatorial Optimization

Source-Free Domain Adaptation with Temporal Imputation for Time Series Data

1 code implementation 14 Jul 2023 Mohamed Ragab, Emadeldeen Eldele, Min Wu, Chuan-Sheng Foo, XiaoLi Li, Zhenghua Chen

The existing SFDA methods that are mainly designed for visual applications may fail to handle the temporal dynamics in time series, leading to impaired adaptation performance.

Imputation Source-Free Domain Adaptation +1

Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data

1 code implementation 7 Jul 2023 Qing Xu, Min Wu, XiaoLi Li, Kezhi Mao, Zhenghua Chen

More specifically, a feature-domain discriminator is employed to align teacher's and student's representations for universal knowledge transfer.

Knowledge Distillation Model Compression +2

MS-DETR: Natural Language Video Localization with Sampling Moment-Moment Interaction

1 code implementation 30 May 2023 Jing Wang, Aixin Sun, Hao Zhang, XiaoLi Li

Given a query, the task of Natural Language Video Localization (NLVL) is to localize a temporal moment in an untrimmed video that semantically matches the query.

SemiGNN-PPI: Self-Ensembling Multi-Graph Neural Network for Efficient and Generalizable Protein-Protein Interaction Prediction

no code implementations 15 May 2023 Ziyuan Zhao, Peisheng Qian, Xulei Yang, Zeng Zeng, Cuntai Guan, Wai Leong Tam, XiaoLi Li

Protein-protein interactions (PPIs) are crucial in various biological processes and their study has significant implications for drug development and disease diagnosis.

Graph Learning

A Federated Learning-based Industrial Health Prognostics for Heterogeneous Edge Devices using Matched Feature Extraction

no code implementations 13 May 2023 Anushiya Arunan, Yan Qin, XiaoLi Li, Chau Yuen

The algorithm searches across the heterogeneous locally trained models and matches neurons with probabilistically similar feature extraction functions first, before selectively averaging them to form the federated model parameters.

Federated Learning Privacy Preserving

Shall We Trust All Relational Tuples by Open Information Extraction? A Study on Speculation Detection

1 code implementation 7 May 2023 Kuicai Dong, Aixin Sun, Jung-jae Kim, XiaoLi Li

We formally define the research problem of tuple-level speculation detection and conduct a detailed data analysis on the LSOIE dataset which contains labels for speculative tuples.

Open Information Extraction Sentence +1

Open Information Extraction via Chunks

1 code implementation 5 May 2023 Kuicai Dong, Aixin Sun, Jung-jae Kim, XiaoLi Li

Accordingly, we propose a simple BERT-based model for sentence chunking, and propose Chunk-OIE for tuple extraction on top of SaC.

Chunking Open Information Extraction +1

Augmenting and Aligning Snippets for Few-Shot Video Domain Adaptation

no code implementations ICCV 2023 Yuecong Xu, Jianfei Yang, Yunjiao Zhou, Zhenghua Chen, Min Wu, XiaoLi Li

We thus consider a more realistic Few-Shot Video-based Domain Adaptation (FSVDA) scenario where we adapt video models with only a few target video samples.

Action Recognition Unsupervised Domain Adaptation

Label-efficient Time Series Representation Learning: A Review

no code implementations 13 Feb 2023 Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li

The scarcity of labeled data is one of the main challenges of applying deep learning models on time series data in the real world.

Representation Learning Self-Supervised Learning +3

Syntactic Multi-view Learning for Open Information Extraction

1 code implementation 5 Dec 2022 Kuicai Dong, Aixin Sun, Jung-jae Kim, XiaoLi Li

In this paper, we model both constituency and dependency trees into word-level graphs, and enable neural OpenIE to learn from the syntactic structures.

Multi-view Learning Open Information Extraction

Contrastive Domain Adaptation for Time-Series via Temporal Mixup

1 code implementation 3 Dec 2022 Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li

Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains.

Contrastive Learning Time Series +2
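A minimal sketch of such a temporal mixup, producing intermediate views between a source window and a target window (the Beta-distributed mixing ratio is an assumption, not necessarily the paper's exact scheme):

```python
import numpy as np

def temporal_mixup(x_src, x_tgt, alpha=0.2, rng=None):
    """Blend a source and a target time-series window into two
    intermediate augmented views: one dominated by the source domain,
    one by the target domain."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                  # mixing ratio in (0, 1)
    view_src = lam * x_src + (1 - lam) * x_tgt    # source-dominant view
    view_tgt = lam * x_tgt + (1 - lam) * x_src    # target-dominant view
    return view_src, view_tgt
```

Because both views are convex combinations of the two domains, contrasting against them pulls source and target representations toward a shared intermediate region.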

Video Unsupervised Domain Adaptation with Deep Learning: A Comprehensive Survey

no code implementations 17 Nov 2022 Yuecong Xu, Haozhi Cao, Zhenghua Chen, XiaoLi Li, Lihua Xie, Jianfei Yang

To uniformly tackle performance degradation and the high cost of video annotation, video unsupervised domain adaptation (VUDA) is introduced to adapt video models from a labeled source domain to an unlabeled target domain by alleviating video domain shift, improving the generalizability and portability of video models.

Action Recognition Unsupervised Domain Adaptation

Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification

2 code implementations 13 Aug 2022 Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li, Cuntai Guan

Specifically, we propose time-series specific weak and strong augmentations and use their views to learn robust temporal relations in the proposed temporal contrasting module, besides learning discriminative representations by our proposed contextual contrasting module.

Contrastive Learning Data Augmentation +5

Leveraging Endo- and Exo-Temporal Regularization for Black-box Video Domain Adaptation

no code implementations 10 Aug 2022 Yuecong Xu, Jianfei Yang, Haozhi Cao, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

To enable video models to be applied seamlessly across video tasks in different environments, various Video Unsupervised Domain Adaptation (VUDA) methods have been proposed to improve the robustness and transferability of video models.

Action Recognition Unsupervised Domain Adaptation

A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges

no code implementations 8 May 2022 Zhenghua Chen, Min Wu, Alvin Chan, XiaoLi Li, Yew-Soon Ong

We believe that this technical review can help to promote a sustainable development of AI R&D activities for the research community.

Fairness

Type-aware Embeddings for Multi-Hop Reasoning over Knowledge Graphs

1 code implementation 2 May 2022 Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, XiaoLi Li, Ru Li, Jeff Z. Pan

Multi-hop reasoning over real-life knowledge graphs (KGs) is a highly challenging problem, as traditional subgraph matching methods are not capable of dealing with noise and missing information.

Knowledge Graphs Vocal Bursts Type Prediction

Slow-varying Dynamics Assisted Temporal Capsule Network for Machinery Remaining Useful Life Estimation

no code implementations 30 Mar 2022 Yan Qin, Chau Yuen, Yimin Shao, Bo Qin, XiaoLi Li

Similarly, the estimation accuracy of the milling machine has been improved by 23.57% compared to LSTM and 19.54% compared to CapsNet.

Time Series Time Series Analysis

ADATIME: A Benchmarking Suite for Domain Adaptation on Time Series Data

1 code implementation 15 Mar 2022 Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li

Our evaluation includes adapting state-of-the-art visual domain adaptation methods to time series data as well as the recent methods specifically developed for time series data.

Benchmarking Time Series +2

Going Deeper into Recognizing Actions in Dark Environments: A Comprehensive Benchmark Study

no code implementations 19 Feb 2022 Yuecong Xu, Jianfei Yang, Haozhi Cao, Jianxiong Yin, Zhenghua Chen, XiaoLi Li, Zhengguo Li, Qianwen Xu

While action recognition (AR) has gained large improvements with the introduction of large-scale video datasets and the development of deep neural networks, AR models robust to challenging environments in real-world scenarios are still under-explored.

Action Recognition Autonomous Driving

Self-supervised Autoregressive Domain Adaptation for Time Series Data

1 code implementation 29 Nov 2021 Mohamed Ragab, Emadeldeen Eldele, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li

Second, we propose a novel autoregressive domain adaptation technique that incorporates temporal dependency of both source and target features during domain alignment.

Self-Supervised Learning Time Series +2

A Systematic Evaluation of Domain Adaptation Algorithms On Time Series Data

no code implementations 29 Sep 2021 Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee Kwoh, XiaoLi Li

Our evaluation includes adaptations of state-of-the-art visual domain adaptation methods to time series data in addition to recent methods specifically developed for time series data.

Benchmarking Model Selection +3

A Knowledge-Guided Framework for Frame Identification

no code implementations ACL 2021 Xuefeng Su, Ru Li, XiaoLi Li, Jeff Z. Pan, Hu Zhang, Qinghua Chai, Xiaoqi Han

In this paper, we propose a Knowledge-Guided Frame Identification framework (KGFI) that integrates three types of frame knowledge, including frame definitions, frame elements and frame-to-frame relations, to learn better frame representations. This guides the KGFI to jointly map target words and frames into the same embedding space and then identify the best frame by computing dot-product similarity scores between the target word embedding and all of the frame embeddings.

Semantic Parsing Sentence
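The frame-scoring step described above reduces to a dot product between the target word embedding and each candidate frame embedding, followed by an argmax; a minimal sketch (the embeddings and frame names below are hypothetical placeholders, not values from the paper):

```python
import numpy as np

def best_frame(word_vec, frame_vecs, frame_names):
    """Return the frame whose embedding has the highest dot-product
    similarity with the target-word embedding.
    word_vec: (d,)  frame_vecs: (n_frames, d)  frame_names: list of n_frames."""
    scores = frame_vecs @ word_vec            # one similarity score per frame
    return frame_names[int(np.argmax(scores))]
```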

A Decentralized Federated Learning Framework via Committee Mechanism with Convergence Guarantee

2 code implementations 1 Aug 2021 Chunjiang Che, XiaoLi Li, Chuan Chen, Xiaoyu He, Zibin Zheng

In addition, we theoretically analyze and prove the convergence of CMFL under different election and selection strategies, which coincides with the experimental results.

Federated Learning

ADAST: Attentive Cross-domain EEG-based Sleep Staging Framework with Iterative Self-Training

1 code implementation 9 Jul 2021 Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li, Cuntai Guan

Second, we design an iterative self-training strategy to improve the classification performance on the target domain via target domain pseudo labels.

Automatic Sleep Stage Classification Domain Adaptation +2

A Conditional Splitting Framework for Efficient Constituency Parsing

no code implementations ACL 2021 Thanh-Tung Nguyen, Xuan-Phi Nguyen, Shafiq Joty, XiaoLi Li

We introduce a generic seq2seq parsing framework that casts constituency parsing problems (syntactic and discourse parsing) into a series of conditional splitting decisions.

Constituency Parsing Discourse Segmentation +1

Time-Series Representation Learning via Temporal and Contextual Contrasting

1 code implementation 26 Jun 2021 Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, XiaoLi Li, Cuntai Guan

In this paper, we propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC), to learn time-series representation from unlabeled data.

Automatic Sleep Stage Classification Contrastive Learning +9

Multilinear Dirichlet Processes

no code implementations 16 Jun 2021 XiaoLi Li

Dependent Dirichlet processes (DDP) have been widely applied to model data from distributions over collections of measures which are correlated in some way.

CODA: Constructivism Learning for Instance-Dependent Dropout Architecture Construction

no code implementations 15 Jun 2021 XiaoLi Li

To solve this issue, we propose Constructivism learning for instance-dependent Dropout Architecture (CODA), which is inspired by a philosophical theory, constructivism learning.

RST Parsing from Scratch

1 code implementation NAACL 2021 Thanh-Tung Nguyen, Xuan-Phi Nguyen, Shafiq Joty, XiaoLi Li

We introduce a novel top-down end-to-end formulation of document-level discourse parsing in the Rhetorical Structure Theory (RST) framework.

Discourse Segmentation Segmentation

An Attention-Based Deep Learning Approach for Sleep Stage Classification With Single-Channel EEG

1 code implementation 28 Apr 2021 Emadeldeen Eldele, Zhenghua Chen, Chengyu Liu, Min Wu, Chee-Keong Kwoh, XiaoLi Li, Cuntai Guan

The MRCNN can extract low and high frequency features and the AFR is able to improve the quality of the extracted features by modeling the inter-dependencies between the features.

Automatic Sleep Stage Classification EEG +1

DO-GAN: A Double Oracle Framework for Generative Adversarial Networks

no code implementations CVPR 2022 Aye Phyu Phyu Aung, Xinrun Wang, Runsheng Yu, Bo An, Senthilnath Jayavelu, XiaoLi Li

In this paper, we propose a new approach to train Generative Adversarial Networks (GANs) where we deploy a double-oracle framework using the generator and discriminator oracles.

Continual Learning

FFConv: Fast Factorized Convolutional Neural Network Inference on Encrypted Data

no code implementations 6 Feb 2021 Yuxiao Lu, Jie Lin, Chao Jin, Zhe Wang, Min Wu, Khin Mi Mi Aung, XiaoLi Li

Despite the faster HECNN inference, the mainstream packing schemes Dense Packing (DensePack) and Convolution Packing (ConvPack) introduce expensive rotation overhead, which prolongs the inference latency of HECNN for deeper and wider CNN architectures.

Privacy Preserving

Incorporating Syntax and Frame Semantics in Neural Network for Machine Reading Comprehension

no code implementations COLING 2020 Shaoru Guo, Yong Guan, Ru Li, XiaoLi Li, Hongye Tan

Machine reading comprehension (MRC) is one of the most critical yet challenging tasks in natural language understanding (NLU), where both syntactic and semantic information of text are essential components for text understanding.

Machine Reading Comprehension Natural Language Understanding

Variable-Length Hashing

no code implementations 17 Mar 2016 Honghai Yu, Pierre Moulin, Hong Wei Ng, XiaoLi Li

In particular, we propose a block K-means hashing (B-KMH) method to obtain significantly improved retrieval performance with no increase in storage and marginal increase in computational cost.

Code Search Retrieval
