Search Results for author: Rui Xie

Found 27 papers, 12 papers with code

PlugAT: A Plug and Play Module to Defend against Textual Adversarial Attack

no code implementations COLING 2022 Rui Zheng, Rong Bao, Qin Liu, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu

To reduce the potential side effects of the defense module, we further propose a novel forgetting-restricted adversarial training scheme, which filters out harmful adversarial examples that would impair performance on the original examples.
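
The snippet does not spell out the filtering criterion; one plausible reading, sketched below under that assumption (illustrative PyTorch, not the paper's actual procedure), is to trial-update on each adversarial example and keep it only if the loss on its clean counterpart does not grow.

```python
# Illustrative sketch only; the "forgetting restricted" criterion is an assumption here:
# trial-update on the adversarial example and keep it only if the loss on its clean
# counterpart does not increase by more than `tol`.
import copy
import torch
import torch.nn.functional as F

def keep_adversarial_example(model, optimizer_cls, clean_x, adv_x, y, tol=0.0):
    trial = copy.deepcopy(model)                      # probe on a throwaway copy
    opt = optimizer_cls(trial.parameters(), lr=1e-3)
    with torch.no_grad():
        loss_before = F.cross_entropy(trial(clean_x), y).item()
    opt.zero_grad()
    F.cross_entropy(trial(adv_x), y).backward()       # one step on the adversarial example
    opt.step()
    with torch.no_grad():
        loss_after = F.cross_entropy(trial(clean_x), y).item()
    return (loss_after - loss_before) <= tol          # discard it if clean loss got worse
```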

Adversarial Attack Domain Adaptation +2

Making Parameter-efficient Tuning More Efficient: A Unified Framework for Classification Tasks

1 code implementation COLING 2022 Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu

Specifically, we re-formulate both token and sentence classification tasks into a unified language modeling task, and map label spaces of different tasks into the same vocabulary space.
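
A minimal sketch of this idea, with hypothetical templates and label words (not the paper's): token- and sentence-level labels are verbalized into one shared vocabulary and predicted at a mask position.

```python
# Hypothetical verbalizers: each task's label space is mapped into one shared vocabulary,
# and both token- and sentence-level classification become cloze-style LM prediction.
LABEL_WORDS = {
    "sentiment": {"positive": "good", "negative": "bad"},         # sentence-level task
    "ner":       {"PER": "person", "LOC": "place", "O": "none"},  # token-level task
}

def to_lm_example(task, text, target_span=None):
    if task == "sentiment":
        prompt = f"{text} Overall it was [MASK]."
    elif task == "ner":
        prompt = f"{text} Here, '{target_span}' refers to a [MASK]."
    else:
        raise ValueError(task)
    return prompt, LABEL_WORDS[task]  # an LM scores the label words at the [MASK] position

prompt, verbalizer = to_lm_example("ner", "Rui Xie works in Beijing.", target_span="Beijing")
print(prompt)      # ... 'Beijing' refers to a [MASK].
print(verbalizer)  # {'PER': 'person', 'LOC': 'place', 'O': 'none'}
```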

Language Modelling Sentence +2

An Effective and Efficient Entity Alignment Decoding Algorithm via Third-Order Tensor Isomorphism

1 code implementation ACL 2022 Xin Mao, Meirong Ma, Hao Yuan, Jianchao Zhu, ZongYu Wang, Rui Xie, Wei Wu, Man Lan

Entity alignment (EA) aims to discover the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs. For a long time, most researchers have regarded EA as a pure graph representation learning task, focusing on improving graph encoders while paying little attention to the decoding process. In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Specifically, we derive two sets of isomorphism equations: (1) adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI can effectively exploit the adjacency and inner-correlation isomorphisms of KGs to enhance the decoding process of EA. Extensive experiments on public datasets indicate that our decoding algorithm delivers significant performance improvements even for the most advanced EA methods, while requiring less than 3 seconds of extra time.
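
A rough NumPy illustration of the general idea, using assumed shapes and a simple combination rule rather than the paper's actual isomorphism equations: an embedding-based similarity matrix is refined with adjacency (structural) and Gramian (inner-correlation) information before greedy decoding.

```python
# Rough illustration, not the paper's equations: refine an embedding similarity matrix
# with adjacency (structural) and Gramian (inner-correlation) information before decoding.
import numpy as np

def refine_and_decode(E_s, E_t, A_s, A_t, alpha=0.5):
    S = E_s @ E_t.T                        # initial similarity from entity embeddings
    S_adj = A_s @ S @ A_t.T                # propagate similarity along graph structure
    G_s, G_t = E_s @ E_s.T, E_t @ E_t.T    # Gramian (inner-product) matrices
    S_gram = G_s @ S @ G_t
    S_ref = alpha * S_adj + (1 - alpha) * S_gram
    return S_ref.argmax(axis=1)            # greedy decoding: best target per source entity

rng = np.random.default_rng(0)
E_s, E_t = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
A_s = A_t = np.eye(5)                      # toy adjacency matrices
print(refine_and_decode(E_s, E_t, A_s, A_t))
```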

Entity Alignment Graph Representation Learning

AddSR: Accelerating Diffusion-based Blind Super-Resolution with Adversarial Diffusion Distillation

1 code implementation 2 Apr 2024 Rui Xie, Ying Tai, Kai Zhang, Zhenyu Zhang, Jun Zhou, Jian Yang

Blind super-resolution methods based on stable diffusion showcase formidable generative capabilities in reconstructing clear high-resolution images with intricate details from low-resolution inputs.

Blind Super-Resolution Super-Resolution

CodeShell Technical Report

no code implementations 23 Mar 2024 Rui Xie, Zhengran Zeng, Zhuohao Yu, Chang Gao, Shikun Zhang, Wei Ye

Through this process, we have curated 100 billion tokens of high-quality pre-training data from GitHub.


Knockoff-Guided Feature Selection via A Single Pre-trained Reinforced Agent

1 code implementation 6 Mar 2024 Xinyuan Wang, Dongjie Wang, Wangyang Ying, Rui Xie, Haifeng Chen, Yanjie Fu

A deep Q-network, pre-trained with the original features and their corresponding pseudo labels, is employed to improve the efficacy of the exploration process in feature selection.
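
A minimal sketch of such a pre-training step, under assumptions not stated in the snippet (state = binary mask of currently selected features, action = index of a feature to add, reward = utility estimated with knockoff-derived pseudo labels).

```python
# Minimal sketch under assumed conventions (not stated in the snippet): state = binary mask
# of currently selected features, action = index of a feature to add, reward = utility
# estimated with knockoff-derived pseudo labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureQNet(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),      # one Q-value per candidate feature
        )

    def forward(self, mask):                    # mask: (batch, n_features) in {0, 1}
        return self.net(mask)

def pretrain_step(qnet, opt, mask, action, reward, next_mask, gamma=0.9):
    """One DQN update on a transition whose reward came from pseudo labels."""
    q_sa = qnet(mask).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + gamma * qnet(next_mask).max(dim=1).values
    loss = F.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```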

feature selection Pseudo Label

Can Large Language Models Recall Reference Location Like Humans?

no code implementations 26 Feb 2024 Ye Wang, Xinrun Xu, Rui Xie, Wenxin Hu, Wei Ye

When completing knowledge-intensive tasks, humans sometimes need not just an answer but also a corresponding reference passage for auxiliary reading.

Position Retrieval

Addressing Distribution Shift in Time Series Forecasting with Instance Normalization Flows

no code implementations 30 Jan 2024 Wei Fan, Shun Zheng, Pengyang Wang, Rui Xie, Jiang Bian, Yanjie Fu

Due to the non-stationarity of time series, the distribution shift problem largely hinders the performance of time series forecasting.
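
For context, a generic per-instance normalization sketch for forecasting (this is the plain normalize/denormalize idea, not the paper's normalization-flow formulation):

```python
# Generic per-instance normalization for forecasting (illustrative; not the paper's
# normalization-flow formulation): remove each window's own mean/scale before the model
# and restore it on the prediction, so the forecaster sees a more stationary input.
import torch

def normalize(x, eps=1e-5):                  # x: (batch, length, channels)
    mean = x.mean(dim=1, keepdim=True)
    std = x.std(dim=1, keepdim=True) + eps
    return (x - mean) / std, (mean, std)

def denormalize(y, stats):                   # y: (batch, horizon, channels)
    mean, std = stats
    return y * std + mean

x = torch.randn(2, 96, 3) * 10 + 100         # toy non-stationary-looking input
x_norm, stats = normalize(x)
y_hat = x_norm[:, -24:, :]                   # stand-in for a forecaster's output
print(denormalize(y_hat, stats).shape)       # torch.Size([2, 24, 3])
```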

Time Series Time Series Forecasting

Exploiting Duality in Open Information Extraction with Predicate Prompt

1 code implementation 20 Jan 2024 Zhen Chen, Jingping Liu, Deqing Yang, Yanghua Xiao, Huimin Xu, ZongYu Wang, Rui Xie, Yunsen Xian

Open information extraction (OpenIE) aims to extract schema-free triplets in the form of (subject, predicate, object) from a given sentence.

Open Information Extraction Sentence

A Multi-timescale and Chance-Constrained Energy Dispatching Strategy of Integrated Heat-Power Community with Shared Hybrid Energy Storage

no code implementations 23 Oct 2023 Wenyi Zhang, Yue Chen, Rui Xie, Yunjian Xu

In integrated heat-power systems with coupled heat and power generation and demand, the key challenges lie in the interaction between heat and power, the inherent uncertainty of renewable energy and consumer demand, and the multi-timescale scheduling of heat and power.

Scheduling

Towards Visual Taxonomy Expansion

1 code implementation 12 Sep 2023 Tinghui Zhu, Jingping Liu, Jiaqing Liang, Haiyun Jiang, Yanghua Xiao, ZongYu Wang, Rui Xie, Yunsen Xian

Specifically, on the Chinese taxonomy dataset, our method significantly improves accuracy by 8.75%.

Taxonomy Expansion

AOG-LSTM: An adaptive attention neural network for visual storytelling

no code implementations Neurocomputing 2023 Hanqing Liu, Jiacheng Yang, Chia-Hao Chang, Wei Wang, Hai-Tao Zheng, Yong Jiang, Hui Wang, Rui Xie, Wei Wu

Moreover, the existing approach to alleviating error accumulation by replacing reference words does not take into account the different effect of each word.

Visual Storytelling

PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization

2 code implementations 8 Jun 2023 Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang

To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences.

Language Modelling Large Language Model

Exploiting Pseudo Image Captions for Multimodal Summarization

no code implementations 9 May 2023 Chaoya Jiang, Rui Xie, Wei Ye, Jinan Sun, Shikun Zhang

Cross-modal contrastive learning in vision language pretraining (VLP) faces the challenge of (partial) false negatives.

Common Sense Reasoning Contrastive Learning +1

Causality-aware Concept Extraction based on Knowledge-guided Prompting

1 code implementation 3 May 2023 Siyu Yuan, Deqing Yang, Jinxi Liu, Shuyu Tian, Jiaqing Liang, Yanghua Xiao, Rui Xie

The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts.
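
A toy illustration of this kind of topic-conditioned prompt; the KG lookup and template wording below are hypothetical, not the paper's.

```python
# Toy topic-conditioned prompt; the KG lookup and template wording are hypothetical.
KG_TOPICS = {"Lionel Messi": "sports", "BERT": "natural language processing"}

def build_prompt(entity, description):
    topic = KG_TOPICS.get(entity, "general")
    return (f"Topic: {topic}\n"
            f"Text: {description}\n"
            f"Question: In the context of {topic}, what concept does '{entity}' belong to?")

print(build_prompt("BERT", "BERT is a pre-trained language representation model."))
```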

Knowledge Graphs Natural Language Understanding

Exploring Vision-Language Models for Imbalanced Learning

1 code implementation 4 Apr 2023 Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, Shikun Zhang

However, their performance on imbalanced datasets, where the class distribution of the training data is skewed, is relatively poor, particularly when predicting minority classes.

Zero-Shot Learning

Optimal Sampling Designs for Multi-dimensional Streaming Time Series with Application to Power Grid Sensor Data

no code implementations 14 Mar 2023 Rui Xie, Shuyang Bai, Ping Ma

When applied to European power grid consumption data, the proposed leverage score based sampling methods outperform the benchmark sampling method in online estimation and prediction.
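
A small sketch of textbook (static) leverage-score row sampling for a linear model; the paper's streaming and online-estimation machinery is omitted.

```python
# Textbook (static) leverage-score row sampling for a linear model; the paper's streaming
# and online-estimation machinery is omitted.
import numpy as np

def leverage_scores(X):
    Q, _ = np.linalg.qr(X)              # thin QR: leverage score of row i is ||Q_i||^2
    return np.sum(Q ** 2, axis=1)

def leverage_sample(X, y, m, rng):
    p = leverage_scores(X)
    p = p / p.sum()
    idx = rng.choice(len(X), size=m, replace=False, p=p)
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=1000)
X_sub, y_sub = leverage_sample(X, y, m=100, rng=rng)
beta_hat, *_ = np.linalg.lstsq(X_sub, y_sub, rcond=None)
print(beta_hat)                         # should be close to the all-ones coefficient vector
```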

Computational Efficiency Time Series +1

Sizing Grid-Connected Wind Power Generation and Energy Storage with Wake Effect and Endogenous Uncertainty: A Distributionally Robust Method

no code implementations 30 Dec 2022 Rui Xie, Wei Wei, Yue Chen

In this paper, a bi-objective distributionally robust optimization (DRO) model is proposed to determine the capacities of wind power generation and ESSs considering the wake effect.

A Curriculum Learning Approach for Multi-domain Text Classification Using Keyword Weight Ranking

no code implementations 27 Oct 2022 Zilin Yuan, Yinghui Li, Yangning Li, Rui Xie, Wei Wu, Hai-Tao Zheng

We note that domain-specific features differ in how distinct they are, so in this paper we propose a curriculum learning strategy based on keyword weight ranking to improve the performance of multi-domain text classification models.
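
A toy sketch of one way such a curriculum could be built; the keyword-weight scoring below (TF-IDF mass over a hypothetical keyword list) and the ordering direction are assumptions, not the paper's exact ranking.

```python
# Toy sketch; the scoring (TF-IDF mass over a hypothetical keyword list) and the ordering
# direction are assumptions, not the paper's exact keyword weight ranking.
from sklearn.feature_extraction.text import TfidfVectorizer

def curriculum_order(texts, domain_keywords):
    vec = TfidfVectorizer(vocabulary=domain_keywords)
    weights = vec.fit_transform(texts).sum(axis=1).A1   # keyword weight per example
    return sorted(range(len(texts)), key=lambda i: -weights[i])

texts = ["the stock market rallied", "patients received the new drug",
         "shares and bonds fell", "a great movie with a weak plot"]
print(curriculum_order(texts, ["stock", "shares", "bonds", "market"]))
# finance-heavy examples come first in the curriculum
```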

text-classification Text Classification

Focus Is What You Need For Chinese Grammatical Error Correction

no code implementations 23 Oct 2022 Jingheng Ye, Yinghui Li, Shirong Ma, Rui Xie, Wei Wu, Hai-Tao Zheng

Chinese Grammatical Error Correction (CGEC) aims to automatically detect and correct grammatical errors contained in Chinese text.

Grammatical Error Correction Sentence

Large-scale Multi-granular Concept Extraction Based on Machine Reading Comprehension

1 code implementation 30 Aug 2022 Siyu Yuan, Deqing Yang, Jiaqing Liang, Jilun Sun, Jingyue Huang, Kaiyan Cao, Yanghua Xiao, Rui Xie

In order to supply existing KGs with more fine-grained and new concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities.

Descriptive Knowledge Graphs +1

Learning What You Need from What You Did: Product Taxonomy Expansion with User Behaviors Supervision

1 code implementation 28 Mar 2022 Sijie Cheng, Zhouhong Gu, Bang Liu, Rui Xie, Wei Wu, Yanghua Xiao

Specifically, i) to fully exploit user behavioral information, we extract candidate hyponymy relations that match user interests from query-click concepts; ii) to enhance the semantic information of new concepts and better detect hyponymy relations, we model concepts and relations through both user-generated content and structural information in existing taxonomies and user click logs, leveraging Pre-trained Language Models and Graph Neural Networks combined with Contrastive Learning; iii) to reduce the cost of dataset construction and overcome data skew, we construct a high-quality and balanced training dataset from the existing taxonomy with no supervision.

Contrastive Learning Taxonomy Expansion

Can Pre-trained Language Models Interpret Similes as Smart as Human?

1 code implementation ACL 2022 Qianyu He, Sijie Cheng, Zhixu Li, Rui Xie, Yanghua Xiao

In this paper, we investigate the ability of PLMs in simile interpretation by designing a novel task named Simile Property Probing, i.e., to let the PLMs infer the shared properties of similes.

Sentiment Analysis Sentiment Classification

Graph Enhanced Dual Attention Network for Document-Level Relation Extraction

no code implementations COLING 2020 Bo Li, Wei Ye, Zhonghao Sheng, Rui Xie, Xiangyu Xi, Shikun Zhang

Document-level relation extraction requires inter-sentence reasoning capabilities to capture local and global contextual information for multiple relational facts.

Document-level Relation Extraction Relation +1

Leveraging Code Generation to Improve Code Retrieval and Summarization via Dual Learning

no code implementations 24 Feb 2020 Wei Ye, Rui Xie, Jinglei Zhang, Tianxiang Hu, Xiaoyin Wang, Shikun Zhang

Since both tasks aim to model the association between natural language and programming language, recent studies have combined these two tasks to improve their performance.

Code Generation Code Summarization +3
