Search Results for author: Chenguang Wang

Found 18 papers, 7 papers with code

基于风格化嵌入的中文文本风格迁移 (Chinese text style transfer based on stylized embedding)

no code implementations CCL 2021 Chenguang Wang, Hongfei Lin, Liang Yang

"Conversational style reflects attributes of the interlocutor, such as emotion, gender, and educational background. In a dialogue system, understanding a user's conversational style enables better user modeling; likewise, when facing users from different backgrounds, a chatbot should adopt different language styles to communicate with them. Linguistic style is an intrinsic property of text, yet most existing text style transfer research focuses on English, with comparatively little work on Chinese. This paper constructs three datasets for Chinese text style transfer research and applies several existing style transfer methods to them. It also proposes a style transfer model based on the DeepStyle algorithm and the Transformer, which obtains hidden-vector representations of different styles through pre-training. A Transformer-based generator is built on top; at the decoding stage, the content of the generated text is preserved by reconstructing the source text, and an opposite-style embedding is introduced so that the model can generate text in different styles. Experimental results show that the proposed model outperforms existing models on the constructed Chinese datasets."

Style Transfer Text Style Transfer

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark

1 code implementation5 Dec 2021 Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang

Nowadays, thanks to breakthroughs in natural language generation (NLG), including machine translation, document summarization, and image captioning, NLG models have been encapsulated in cloud APIs that serve over half a billion people worldwide and process over one hundred billion word generations per day.
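The core idea of a lexical watermark — replacing selected output words with synonyms chosen by a secret key, so an API owner can later test whether a suspect model was trained on its outputs — can be sketched as follows. The synonym table, the keyed-hash selection rule, and both function names are illustrative assumptions, not the paper's exact algorithm:

```python
import hashlib

# Toy synonym table; a hypothetical stand-in for the watermark vocabulary.
SYNONYMS = {"big": "large", "quick": "fast", "smart": "clever"}

def watermark(text: str, key: str) -> str:
    """Replace selected words with synonyms chosen by a secret key."""
    out = []
    for word in text.split():
        h = hashlib.sha256((key + word).encode()).digest()[0]
        # Only substitute when the keyed hash selects this word.
        if word in SYNONYMS and h % 2 == 0:
            out.append(SYNONYMS[word])
        else:
            out.append(word)
    return " ".join(out)

def detect(text: str, key: str) -> float:
    """Fraction of watermark-eligible positions carrying the mark."""
    hits = total = 0
    inverse = {v: k for k, v in SYNONYMS.items()}
    for word in text.split():
        if word in inverse:
            src = inverse[word]
            h = hashlib.sha256((key + src).encode()).digest()[0]
            total += 1
            if h % 2 == 0:
                hits += 1
    return hits / total if total else 0.0
```

Because substitution and detection share the same keyed hash, any text carrying the mark scores 1.0 under the owner's key, while an unrelated key yields roughly chance-level agreement.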

Document Summarization Image Captioning +3

A Game-Theoretic Approach for Improving Generalization Ability of TSP Solvers

no code implementations28 Oct 2021 Chenguang Wang, Yaodong Yang, Oliver Slumbers, Congying Han, Tiande Guo, Haifeng Zhang, Jun Wang

In this paper, we introduce a two-player zero-sum framework between a trainable \emph{Solver} and a \emph{Data Generator} to improve the generalization ability of deep learning-based solvers for Traveling Salesman Problem (TSP).
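The zero-sum interaction can be illustrated with a toy version: the Solver is a greedy heuristic, and the Data Generator searches for instances that maximize the Solver's optimality gap. The nearest-neighbour heuristic, the candidate-sampling generator, and all function names here are simplifying assumptions standing in for the paper's trainable components:

```python
import itertools
import math
import random

def tour_len(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points):
    """A stand-in 'Solver': greedy nearest-neighbour TSP heuristic."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

def optimal(points):
    """Brute-force optimum (only feasible for tiny instances)."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_len(points, (0,) + p))
    return (0,) + best

def generator(n_cities=6, candidates=30, rng=random.Random(0)):
    """A stand-in 'Data Generator': among random instances, return the one
    where the Solver's optimality gap is largest (the zero-sum objective)."""
    def gap(pts):
        return tour_len(pts, nearest_neighbor(pts)) - tour_len(pts, optimal(pts))
    instances = [[(rng.random(), rng.random()) for _ in range(n_cities)]
                 for _ in range(candidates)]
    return max(instances, key=gap)
```

In the paper both players are trained; here the adversarial pressure is only simulated by selection, but the objective — generate instances on which the current solver generalizes worst — is the same.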

Traveling Salesman Problem

Generating Multivariate Load States Using a Conditional Variational Autoencoder

no code implementations21 Oct 2021 Chenguang Wang, Ensieh Sharifnia, Zhi Gao, Simon H. Tindemans, Peter Palensky

In this paper, a multivariate load state generation model based on a conditional variational autoencoder (CVAE) neural network is proposed.
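At generation time, a CVAE samples a latent vector from a standard normal prior and decodes it together with the conditioning variables. A minimal sketch of that sampling step, with a fixed two-dimensional condition and hypothetical hard-coded decoder weights in place of a trained network:

```python
import random

def sample_load_state(condition, n_dims=3, rng=random.Random(42)):
    """Toy stand-in for a CVAE decoder: draw a latent z ~ N(0, I) and
    decode the concatenation [z; condition] with a fixed linear layer.
    A real CVAE learns the decoder weights from historical load data;
    `condition` is assumed to be a 2-dimensional context vector here."""
    z = [rng.gauss(0.0, 1.0) for _ in range(2)]
    inp = z + list(condition)          # concatenate latent and condition
    # Hypothetical decoder weights: each output dim mixes z and condition.
    weights = [[0.1, -0.2, 1.0, 0.5],
               [0.3,  0.1, 0.8, 1.0],
               [-0.1, 0.2, 0.6, 0.9]][:n_dims]
    return [sum(w * x for w, x in zip(row, inp)) for row in weights]
```

Changing `condition` (e.g., season or hour of day) shifts the distribution of generated states, while the latent `z` supplies the stochastic variation across samples.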

Learning Graph Representation by Aggregating Subgraphs via Mutual Information Maximization

no code implementations24 Mar 2021 Chenguang Wang, Ziwen Liu

For this purpose, we propose a universal framework to generate subgraphs in an auto-regressive way and then use these subgraphs to guide the learning of graph representations by Graph Neural Networks.
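Mutual information maximization in such contrastive frameworks is typically estimated with an InfoNCE-style lower bound: the representation of a graph should score higher with its own subgraphs (positives) than with subgraphs of other graphs (negatives). A self-contained sketch on plain vectors, using cosine similarity — the scoring function and temperature are generic contrastive-learning choices, not details from this paper:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.5):
    """InfoNCE contrastive loss: a standard mutual-information
    lower-bound estimator, shown here on plain float vectors."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    logits = ([cos(anchor, positive) / temperature]
              + [cos(anchor, n) / temperature for n in negatives])
    m = max(logits)  # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return -(logits[0] - log_z)   # -log softmax of the positive pair
```

Minimizing this loss pushes the anchor (graph) embedding toward its positive (subgraph) embedding and away from the negatives, which maximizes the bound on their mutual information.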

Contrastive Learning Graph Representation Learning +1

Language Models are Open Knowledge Graphs

2 code implementations22 Oct 2020 Chenguang Wang, Xiao Liu, Dawn Song

This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision.
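The pipeline's match step pairs entities in a sentence and harvests the tokens between them as a candidate relation, ranked by salience scores that the paper derives from the language model's attention. A heavily simplified sketch of that step — the scores are supplied as input rather than computed from a model, and the threshold is a hypothetical placeholder:

```python
def extract_triple(tokens, head, tail, scores):
    """Toy version of the match step in LM-based open KG construction:
    given a head/tail entity pair and per-token salience scores (in the
    real system these come from the LM's attention; here they are just
    passed in), keep the highest-scoring tokens between the two
    entities as the relation phrase."""
    i, j = tokens.index(head), tokens.index(tail)
    span = range(i + 1, j)
    # Keep tokens whose score clears a (hypothetical) threshold.
    relation = [tokens[k] for k in span if scores[k] > 0.5]
    return (head, " ".join(relation), tail)
```

The real system also maps the resulting triples onto an existing KG schema where possible; this sketch covers only the unsupervised candidate-extraction idea.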

Knowledge Graphs

Training Strategies for Autoencoder-based Detection of False Data Injection Attacks

no code implementations14 May 2020 Chenguang Wang, Kaikai Pan, Simon Tindemans, Peter Palensky

The security of energy supply in a power grid critically depends on the ability to accurately estimate the state of the system.
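An autoencoder-based detector flags a measurement vector as suspicious when its reconstruction error exceeds a threshold calibrated on normal operating data. A minimal sketch of that decision rule — the toy "autoencoder" below (which simply reconstructs every entry as the mean) and the threshold value are illustrative assumptions, standing in for a trained network:

```python
def reconstruction_error(x, reconstruct):
    """Squared error between a measurement vector and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, reconstruct(x)))

def detect_attack(measurement, reconstruct, threshold):
    """Flag a measurement as a possible false data injection when its
    reconstruction error exceeds a threshold calibrated on normal data.
    `reconstruct` stands in for a trained autoencoder's encode-decode pass."""
    return reconstruction_error(measurement, reconstruct) > threshold

def toy_reconstruct(x):
    """Toy 'autoencoder' that assumes normal states lie near their mean."""
    mean = sum(x) / len(x)
    return [mean] * len(x)
```

Vectors resembling the training distribution reconstruct well and pass; injected values the model never learned to compress produce large errors and are flagged.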

Detection of False Data Injection Attacks Using the Autoencoder Approach

no code implementations4 Mar 2020 Chenguang Wang, Simon Tindemans, Kaikai Pan, Peter Palensky

State estimation is of considerable significance for the power system operation and control.

Transformer on a Diet

1 code implementation14 Feb 2020 Chenguang Wang, Zihao Ye, Aston Zhang, Zheng Zhang, Alexander J. Smola

Transformer has been widely used thanks to its ability to capture sequence information in an efficient way.

Language Modelling

Meta-Path Constrained Random Walk Inference for Large-Scale Heterogeneous Information Networks

no code implementations2 Dec 2019 Chenguang Wang

It is impractical for users to provide the meta-path(s) needed to support large-scale inference, and biased examples lead to incorrect meta-path based inference, which limits the power of meta-paths.
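The constrained random walk at the heart of meta-path based inference only follows edges whose endpoint matches the next node type in the meta-path. A toy sketch on a small heterogeneous graph — the graph, node types, and function name are invented for illustration:

```python
import random

def metapath_walk(graph, node_types, start, metapath, rng=random.Random(0)):
    """Random walk on a heterogeneous graph (adjacency dict) that only
    follows edges whose next node matches the next type in the meta-path,
    e.g. ('author', 'paper', 'venue')."""
    walk = [start]
    for want_type in metapath[1:]:
        candidates = [n for n in graph.get(walk[-1], [])
                      if node_types[n] == want_type]
        if not candidates:
            return walk        # dead end: constraint cannot be satisfied
        walk.append(rng.choice(candidates))
    return walk
```

Counting how often such constrained walks reach a target node yields the meta-path based relevance scores used for inference; the paper's contribution is learning which meta-paths to constrain on, rather than requiring users to supply them.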

PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction

no code implementations COLING 2020 Yichun Yin, Chenguang Wang, Ming Zhang

Dependency context-based word embedding jointly learns the representations of word and dependency context, and has been proved effective in aspect term extraction.

POS Term Extraction

Language Models with Transformers

1 code implementation arXiv 2019 Chenguang Wang, Mu Li, Alexander J. Smola

In this paper, we explore effective Transformer architectures for language models, including adding additional LSTM layers to better capture sequential context while still keeping the computation efficient.

Ranked #2 on Language Modelling on Penn Treebank (Word Level) (using extra training data)

Language Modelling Neural Architecture Search

World Knowledge as Indirect Supervision for Document Clustering

no code implementations30 Jul 2016 Chenguang Wang, Yangqiu Song, Dan Roth, Ming Zhang, Jiawei Han

We provide three ways to specify the world knowledge to domains by resolving the ambiguity of the entities and their types, and represent the data with world knowledge as a heterogeneous information network.
