Search Results for author: Kang Xu

Found 12 papers, 3 papers with code

Large Scale Foundation Models for Intelligent Manufacturing Applications: A Survey

no code implementations11 Dec 2023 Haotian Zhang, Semujju Stuart Dereck, Zhicheng Wang, Xianwei Lv, Kang Xu, Liang Wu, Ye Jia, Jing Wu, Zhuo Long, Wensheng Liang, X. G. Ma, Ruiyan Zhuang

Although applications of artificial intelligence, especially deep learning, have greatly improved various aspects of intelligent manufacturing, they still face challenges to wide deployment due to the poor generalization ability of deep learning methods, the difficulty of establishing high-quality training datasets, and unsatisfactory performance.

Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning

1 code implementation NeurIPS 2023 Haoran He, Chenjia Bai, Kang Xu, Zhuoran Yang, Weinan Zhang, Dong Wang, Bin Zhao, Xuelong Li

Specifically, we propose the Multi-Task Diffusion Model (MTDiff), a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis in multi-task offline settings.

Reinforcement Learning (RL)
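Diffusion-based planners like the one described train a denoiser to invert a fixed noising process over trajectories. A rough, hedged sketch of that core step in toy NumPy (hypothetical names such as `q_sample` and a crude stand-in predictor; this is not the authors' MTDiff code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "trajectory" is a flat vector. This only illustrates the
# generic DDPM-style noising step that diffusion planners build on.
T = 10                                  # number of diffusion steps
betas = np.linspace(1e-4, 0.2, T)       # noise schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative product \bar{alpha}_t

def q_sample(x0, t, eps):
    """Forward process: noise a clean trajectory x0 to diffusion step t."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.normal(size=8)                 # a toy clean "trajectory"
eps = rng.normal(size=8)                # the injected Gaussian noise
x_t = q_sample(x0, T - 1, eps)

# A real denoiser is trained to predict eps from (x_t, t, task prompt);
# its loss is the mean squared error between predicted and true noise.
pred_eps = x_t / np.sqrt(1.0 - alphas_bar[T - 1])  # crude stand-in predictor
loss = float(np.mean((pred_eps - eps) ** 2))
```

In the multi-task setting the denoiser would additionally condition on a task prompt, which this sketch omits.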

On the Value of Myopic Behavior in Policy Reuse

no code implementations28 May 2023 Kang Xu, Chenjia Bai, Shuang Qiu, Haoran He, Bin Zhao, Zhen Wang, Wei Li, Xuelong Li

Leveraging learned strategies in unfamiliar scenarios is fundamental to human intelligence.

CUR Transformer: A Convolutional Unbiased Regional Transformer for Image Denoising

1 code implementation journal 2023 Kang Xu, Weixin Li, Xia Wang, Xiaoyan Hu, Ke Yan, Xiaojie Wang, Xuan Dong

Based on the prior that, for each pixel, its similar pixels are usually spatially close, our insights are that (1) we partition the image into non-overlapped windows and perform regional self-attention to reduce the search range of each pixel, and (2) we encourage pixels across different windows to communicate with each other.

Image Denoising JPEG Compression Artifact Reduction +1
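The windowed regional self-attention the abstract describes can be illustrated with a minimal NumPy sketch (hypothetical helpers `window_partition` and `regional_self_attention`, untrained identity projections; not the paper's implementation):

```python
import numpy as np

def window_partition(img, ws):
    """Split an (H, W, C) image into non-overlapping (ws*ws, C) windows."""
    H, W, C = img.shape
    assert H % ws == 0 and W % ws == 0
    x = img.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def regional_self_attention(windows):
    """Plain softmax self-attention applied independently per window,
    so each pixel only searches its spatially close neighbors."""
    q = k = v = windows                               # identity projections
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(windows.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

img = np.random.default_rng(1).normal(size=(8, 8, 4))
wins = window_partition(img, 4)                       # 4 windows, 16 pixels each
out = regional_self_attention(wins)
```

Restricting attention to each window reduces the per-pixel search range; the paper's cross-window communication step is not shown here.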

Open-Ended Diverse Solution Discovery with Regulated Behavior Patterns for Cross-Domain Adaptation

no code implementations24 Sep 2022 Kang Xu, Yan Ma, Bingsheng Wei, Wei Li

While Reinforcement Learning can achieve impressive results for complex tasks, the learned policies are generally prone to fail in downstream tasks with even minor model mismatch or unexpected perturbations.

Domain Adaptation

Quantification before Selection: Active Dynamics Preference for Robust Reinforcement Learning

no code implementations23 Sep 2022 Kang Xu, Yan Ma, Wei Li

Our key insight is that dynamic systems with different parameters provide different levels of difficulty for the policy, and the difficulty of behaving well in a system is constantly changing due to the evolution of the policy.

Informativeness Reinforcement Learning (RL) +1
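One plausible reading of an "active dynamics preference" is sampling dynamics parameters in proportion to how hard they currently are for the policy. The sketch below is an assumption-laden illustration (made-up candidate parameters and returns, a simple softmax preference), not the paper's actual quantification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical candidate dynamics settings (e.g. friction values) and the
# current policy's return under each. Settings where the policy does worst
# are treated as most informative and sampled more often; as the policy
# evolves, the returns -- and hence the preference -- change.
params = np.array([0.5, 1.0, 1.5, 2.0])      # candidate dynamics parameters
returns = np.array([0.9, 0.6, 0.3, 0.1])     # current policy performance

difficulty = returns.max() - returns          # higher = harder for the policy
prefs = np.exp(difficulty) / np.exp(difficulty).sum()  # softmax preference

chosen = rng.choice(params, size=1000, p=prefs)
```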

Neural Topic Modeling with Deep Mutual Information Estimation

no code implementations12 Mar 2022 Kang Xu, Xiaoqiu Lu, Yuan-Fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou

NTM-DMIE is a neural network method for topic learning which maximizes the mutual information between the input documents and their latent topic representation.

Mutual Information Estimation Text Clustering +1
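Mutual information between documents and their topic representations is commonly lower-bounded with an InfoNCE-style estimator. The sketch below (hypothetical `info_nce`, random embeddings; not the NTM-DMIE code) shows the basic computation:

```python
import numpy as np

def info_nce(doc_emb, topic_emb):
    """InfoNCE lower bound on the mutual information between paired
    embeddings: row i of each matrix forms a positive pair, all other
    rows serve as negatives. A generic stand-in estimator."""
    scores = doc_emb @ topic_emb.T                  # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(np.mean(np.diag(log_softmax)) + np.log(len(doc_emb)))

rng = np.random.default_rng(3)
docs = rng.normal(size=(16, 8))
aligned = info_nce(docs, docs)                  # perfectly aligned pairs
shuffled = info_nce(docs, rng.permutation(docs))  # broken pairing
```

Maximizing such a bound pushes each document's latent topic representation to be predictive of that document and not of the others.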

Evolutionary Action Selection for Gradient-based Policy Learning

no code implementations12 Jan 2022 Yan Ma, Tianxing Liu, Bingsheng Wei, Yi Liu, Kang Xu, Wei Li

Evolutionary Algorithms (EAs) and Deep Reinforcement Learning (DRL) have recently been integrated to take advantage of both methods for better exploration and exploitation. The evolutionary part of these hybrid methods maintains a population of policy networks. However, existing methods focus on optimizing the parameters of the policy network, a space that is usually high-dimensional and tricky for an EA. In this paper, we shift the target of evolution from the high-dimensional parameter space to the low-dimensional action space. We propose Evolutionary Action Selection-Twin Delayed Deep Deterministic Policy Gradient (EAS-TD3), a novel hybrid of EA and DRL. In EAS, we focus on optimizing the action chosen by the policy network and attempt to obtain high-quality actions through an evolutionary algorithm to promote policy learning.

Continuous Control Evolutionary Algorithms
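The core idea of evolving in action space rather than parameter space can be sketched in a few lines: mutate the policy's proposed action and keep whichever candidate a critic scores highest. This is a hedged toy version with a stand-in quadratic critic `q_value`, not the EAS-TD3 implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def q_value(state, action):
    """Stand-in critic: peaks when the action equals -state.
    In EAS-TD3 this role is played by the learned TD3 Q-network."""
    return -np.sum((action + state) ** 2)

def evolve_action(state, policy_action, pop=32, sigma=0.2, gens=5):
    """Evolve in the low-dimensional action space: mutate the policy's
    action and keep the candidate the critic scores highest."""
    best = policy_action
    for _ in range(gens):
        cands = best + sigma * rng.normal(size=(pop, best.size))
        scores = np.array([q_value(state, a) for a in cands])
        cands_best = cands[scores.argmax()]
        if q_value(state, cands_best) > q_value(state, best):
            best = cands_best
    return best

state = np.array([0.5, -0.3])
a0 = np.zeros(2)                       # the policy's proposed action
a_star = evolve_action(state, a0)      # evolved, critic-preferred action
```

Because actions are low-dimensional, a simple mutation loop like this is far cheaper than evolving full network parameters.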

Generating Pertinent and Diversified Comments with Topic-aware Pointer-Generator Networks

no code implementations9 May 2020 Junheng Huang, Lu Pan, Kang Xu, Weihua Peng, Fayuan Li

In this paper, we propose a novel generation model based on Topic-aware Pointer-Generator Networks (TPGN), which can utilize the topic information hidden in the articles to guide the generation of pertinent and diversified comments.

Comment Generation Text Generation
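Pointer-generator networks mix a vocabulary distribution with a copy distribution given by attention over source tokens. A minimal sketch of that mixing step (generic pointer-generator arithmetic, not TPGN's topic-aware variant):

```python
import numpy as np

def pointer_generator_dist(p_vocab, attn, src_ids, p_gen):
    """Final output distribution of a pointer-generator:
    p_gen * P_vocab + (1 - p_gen) * copy distribution, where the copy
    mass for a token is the attention it receives in the source."""
    final = p_gen * p_vocab                 # generation part (new array)
    for a, tok in zip(attn, src_ids):
        final[tok] += (1.0 - p_gen) * a     # accumulate copy mass
    return final

vocab_size = 6
p_vocab = np.full(vocab_size, 1.0 / vocab_size)   # uniform toy generator
attn = np.array([0.7, 0.2, 0.1])                  # attention over 3 source tokens
src_ids = [2, 4, 2]                               # token id 2 appears twice
dist = pointer_generator_dist(p_vocab, attn, src_ids, p_gen=0.5)
```

Repeated source tokens accumulate copy probability, which is what lets such models reproduce article-specific words when generating comments.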

Attention-Mechanism-based Tracking Method for Intelligent Internet of Vehicles

no code implementations29 Oct 2018 Kang Xu, Song Bin, Guo Jie, Du Xiaojiang, Guizani Mohsen

To address the problem that traditional convolutional neural networks are vulnerable to background interference, this paper proposes a vehicle tracking method based on a human attention mechanism that self-selects deep features through an inter-channel fully connected layer.
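An inter-channel fully connected layer for feature self-selection is commonly realized as squeeze-and-excitation-style channel attention; the following is a generic sketch under that assumption (random weights `w1`/`w2`, not the paper's exact layer):

```python
import numpy as np

rng = np.random.default_rng(5)

def channel_attention(feat, w1, w2):
    """SE-style channel reweighting: global-average-pool each channel,
    pass the descriptors through a small fully connected bottleneck
    across channels, and rescale channels by sigmoid gates."""
    pooled = feat.mean(axis=(0, 1))               # (C,) channel descriptors
    hidden = np.maximum(pooled @ w1, 0.0)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid gates in (0, 1)
    return feat * gates, gates                    # broadcast over H, W

C = 8
feat = rng.normal(size=(16, 16, C))               # toy feature map
w1 = rng.normal(size=(C, C // 2)) * 0.1
w2 = rng.normal(size=(C // 2, C)) * 0.1
out, gates = channel_attention(feat, w1, w2)
```

Channels whose gate is near zero are effectively suppressed, which is one way such a layer can discount background-dominated features.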

Formulating Semantics of Probabilistic Argumentation by Characterizing Subgraphs: Theory and Empirical Results

no code implementations1 Aug 2016 Beishui Liao, Kang Xu, Huaxin Huang

The results show that our approach not only dramatically decreases the time for computing p(E^\sigma), but also has an attractive property, contrary to that of existing approaches: the denser the edges of a PrAG are, or the bigger the size of a given extension E is, the more efficiently our approach computes p(E^\sigma).
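For scale, p(E^\sigma) can be computed by exhaustively enumerating subgraphs of the PrAG and summing the probabilities of those in which E is a \sigma-extension, which is exponential in the number of arguments. A toy brute-force version under stable semantics (made-up three-argument PrAG with independent argument probabilities; the paper's subgraph characterization is designed to avoid exactly this enumeration):

```python
from itertools import combinations

# Toy PrAG: independent argument inclusion probabilities, fixed attacks.
prob = {"a": 0.9, "b": 0.8, "c": 0.5}
attacks = {("a", "b"), ("b", "c")}

def is_stable(E, args):
    """E is stable in the induced subgraph: conflict-free and
    attacking every argument outside E."""
    if any((x, y) in attacks for x in E for y in E):
        return False
    return all(any((x, y) in attacks for x in E) for y in args - E)

def p_extension(E):
    """Brute force: sum probabilities of all subgraphs where E is stable."""
    total = 0.0
    universe = list(prob)
    for r in range(len(universe) + 1):
        for sub in combinations(universe, r):
            args = set(sub)
            if not E <= args or not is_stable(E, args):
                continue
            p = 1.0
            for a in universe:
                p *= prob[a] if a in args else 1.0 - prob[a]
            total += p
    return total

p_ac = p_extension({"a", "c"})   # subgraphs {a,c} and {a,b,c} qualify
```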
