Search Results for author: Zhuoren Jiang

Found 21 papers, 12 papers with code

Cross-language Citation Recommendation via Hierarchical Representation Learning on Heterogeneous Graph

1 code implementation31 Dec 2018 Zhuoren Jiang, Yue Yin, Liangcai Gao, Yao Lu, Xiaozhong Liu

While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming useful candidate papers in very large digital libraries has become an essential and challenging task for scholars.

Citation Recommendation Representation Learning

Automatic Generation of Headlines for Online Math Questions

1 code implementation27 Nov 2019 Ke Yuan, Dafang He, Zhuoren Jiang, Liangcai Gao, Zhi Tang, C. Lee Giles

Compared to conventional summarization tasks, this task has two extra and essential constraints: 1) Detailed math questions consist of text and math equations which require a unified framework to jointly model textual and mathematical information; 2) Unlike text, math equations contain semantic and structural features, and both of them should be captured together.

Math
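The two constraints above suggest a joint encoder over question text and equation structure. The sketch below is only an illustrative guess at such a design, assuming a GRU text encoder plus a recursive child-sum encoder over equation trees; all module names and dimensions are assumptions, not the paper's architecture.

```python
# Hypothetical joint text + equation encoder; the names and sizes here are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class JointQuestionEncoder(nn.Module):
    def __init__(self, vocab_size, sym_vocab_size, dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.text_rnn = nn.GRU(dim, dim, batch_first=True)
        self.sym_emb = nn.Embedding(sym_vocab_size, dim)
        self.tree_cell = nn.Linear(2 * dim, dim)   # combines a node with its children
        self.fuse = nn.Linear(2 * dim, dim)        # merges text and equation vectors

    def encode_equation(self, node):
        # node = (symbol_id, [child nodes]); child-sum over subtrees captures structure
        sym = self.sym_emb(torch.tensor(node[0]))
        child_sum = torch.zeros_like(sym)
        for child in node[1]:
            child_sum = child_sum + self.encode_equation(child)
        return torch.tanh(self.tree_cell(torch.cat([sym, child_sum])))

    def forward(self, token_ids, equation_tree):
        _, h = self.text_rnn(self.word_emb(token_ids))   # textual semantics
        eq_vec = self.encode_equation(equation_tree)     # equation semantics + structure
        return torch.tanh(self.fuse(torch.cat([h[-1, 0], eq_vec])))

# Toy usage: a 5-token question and the equation tree for "a + b"
enc = JointQuestionEncoder(vocab_size=100, sym_vocab_size=50)
tokens = torch.randint(0, 100, (1, 5))
tree = (1, [(2, []), (3, [])])
print(enc(tokens, tree).shape)  # torch.Size([128])
```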

Read Beyond the Lines: Understanding the Implied Textual Meaning via a Skim and Intensive Reading Model

no code implementations3 Jan 2020 Guoxiu He, Zhe Gao, Zhuoren Jiang, Yangyang Kang, Changlong Sun, Xiaozhong Liu, Wei Lu

The nonliteral interpretation of a text is hard for machine models to understand due to its high context sensitivity and heavy use of figurative language.

Reading Comprehension Sentence

MedSRGAN: medical images super-resolution using generative adversarial networks

1 code implementation Springer 2020 Yuchong Gu, Zitao Zen, Haibin Chen, Jun Wei, Yaqin Zhang, Binghui Chen, Yingqin Li, Yujuan Qin, Qing Xie, Zhuoren Jiang, Yao Lu

Super-resolution (SR) is an emerging application in medical imaging, driven by the need for high-quality images acquired with limited radiation dose, such as low-dose computed tomography (CT) and low-field magnetic resonance imaging (MRI).

Super-Resolution

Camouflaged Chinese Spam Content Detection with Semi-supervised Generative Active Learning

no code implementations ACL 2020 Zhuoren Jiang, Zhe Gao, Yu Duan, Yangyang Kang, Changlong Sun, Qiong Zhang, Xiaozhong Liu

We propose a Semi-supervIsed GeNerative Active Learning (SIGNAL) model to address the imbalance, efficiency, and text camouflage problems of the Chinese text spam detection task.

Active Learning Chinese Spam Detection +2
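As a rough illustration of the active-learning ingredient only, the loop below does generic uncertainty sampling over toy Chinese text; the `augment` stub merely stands in for SIGNAL's generative component and is entirely hypothetical.

```python
# Minimal uncertainty-sampling active learning loop for text spam detection;
# a generic sketch, not the SIGNAL model itself.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def augment(texts):
    # Placeholder for a generator producing camouflaged spam variants.
    return texts

labeled = ["免费 赢取 大奖", "今晚一起吃饭吗"]      # toy spam / ham seed set
labels = [1, 0]
unlabeled = ["点 击 领 取 现 金", "明天开会改到三点", "加 微 信 送 红 包"]

vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
for _ in range(2):                                   # a couple of AL rounds
    X = vec.fit_transform(labeled)
    clf = LogisticRegression().fit(X, labels)
    probs = clf.predict_proba(vec.transform(unlabeled))[:, 1]
    idx = int(np.argmin(np.abs(probs - 0.5)))        # most uncertain sample
    text = unlabeled.pop(idx)
    labeled += augment([text]); labels.append(1)     # pretend an oracle labeled it spam
    if not unlabeled:
        break
print(clf.predict(vec.transform(["领 取 大 奖"])))
```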

Detecting User Community in Sparse Domain via Cross-Graph Pairwise Learning

no code implementations6 Sep 2020 Zheng Gao, Hongsong Li, Zhuoren Jiang, Xiaozhong Liu

In this paper, our model, Pairwise Cross-graph Community Detection (PCCD), is proposed to cope with the sparse graph problem by involving external graph knowledge to learn user pairwise community closeness instead of detecting direct communities.

Community Detection
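A minimal sketch of the pairwise idea, assuming user embeddings transferred from an external, richer graph; `PairwiseCloseness` and its dimensions are illustrative, not the PCCD architecture.

```python
# Score how "close" two users are instead of assigning communities directly;
# embeddings are assumed to come from an external graph.
import torch
import torch.nn as nn

class PairwiseCloseness(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid()
        )

    def forward(self, emb_u, emb_v):
        # Concatenate the two user embeddings and map to a closeness in (0, 1)
        return self.scorer(torch.cat([emb_u, emb_v], dim=-1)).squeeze(-1)

external_emb = torch.randn(10, 64)           # embeddings transferred from the external graph
model = PairwiseCloseness()
score = model(external_emb[0], external_emb[3])
print(float(score))                          # closeness of users 0 and 3 in the sparse graph
```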

Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling

1 code implementation14 Dec 2020 Yicheng Zou, Lujun Zhao, Yangyang Kang, Jun Lin, Minlong Peng, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, Xiaozhong Liu

In a customer service system, dialogue summarization can boost service efficiency by automatically creating summaries for long spoken dialogues in which customers and agents try to address issues about specific topics.

Semi-Supervised Active Learning for Semi-Supervised Models: Exploit Adversarial Examples With Graph-Based Virtual Labels

no code implementations ICCV 2021 Jiannan Guo, Haochen Shi, Yangyang Kang, Kun Kuang, Siliang Tang, Zhuoren Jiang, Changlong Sun, Fei Wu, Yueting Zhuang

Although current mainstream methods begin to combine SSL and AL (SSL-AL) to excavate the diverse expressions of unlabeled samples, these methods' fully supervised task models are still trained only with labeled data.

Active Learning

Community-based Cyberreading for Information Understanding

no code implementations27 Mar 2021 Zhuoren Jiang, Xiaozhong Liu, Liangcai Gao, Zhi Tang

Although the content of scientific publications is increasingly challenging, it is necessary to investigate another important problem: scientific information understanding.

Learning-To-Rank

A Role-Selected Sharing Network for Joint Machine-Human Chatting Handoff and Service Satisfaction Analysis

1 code implementation EMNLP 2021 Jiawei Liu, Kaisong Song, Yangyang Kang, Guoxiu He, Zhuoren Jiang, Changlong Sun, Wei Lu, Xiaozhong Liu

Chatbots are increasingly thriving in different domains; however, because of unexpected discourse complexity and training data sparseness, distrust in their responses remains a vital concern.

Chatbot Multi-Task Learning

H2CGL: Modeling Dynamics of Citation Network for Impact Prediction

1 code implementation16 Apr 2023 Guoxiu He, Zhikai Xue, Zhuoren Jiang, Yangyang Kang, Star Zhao, Wei Lu

Then, a novel graph neural network, Hierarchical and Heterogeneous Contrastive Graph Learning Model (H2CGL), is proposed to incorporate heterogeneity and dynamics of the citation network.

Contrastive Learning Graph Learning
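For the contrastive ingredient, a generic InfoNCE loss between two augmented views of paper-node embeddings looks roughly like this; it is a standard formulation, not the hierarchical and heterogeneous encoder the paper builds around it.

```python
# InfoNCE-style graph contrastive loss between two views of the same nodes.
import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, temperature=0.2):
    # Row i of view_a should match row i of view_b (same paper, two views)
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(a.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

z1 = torch.randn(32, 128)   # embeddings from augmented snapshot 1
z2 = torch.randn(32, 128)   # embeddings from augmented snapshot 2
print(info_nce(z1, z2))     # lower loss = views of the same paper agree more
```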

Disentangling the Potential Impacts of Papers into Diffusion, Conformity, and Contribution Values

no code implementations15 Nov 2023 Zhikai Xue, Guoxiu He, Zhuoren Jiang, Sichen Gu, Yangyang Kang, Star Zhao, Wei Lu

In this study, we propose a novel graph neural network to Disentangle the Potential impacts of Papers into Diffusion, Conformity, and Contribution values (called DPPDCC).
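The stated decomposition can be illustrated with three prediction heads whose outputs sum to an overall impact estimate; this mirrors only the diffusion/conformity/contribution split, with all layer names and sizes assumed.

```python
# Illustrative decomposition head, not the paper's actual architecture.
import torch
import torch.nn as nn

class DisentangledImpactHead(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.diffusion = nn.Linear(dim, 1)
        self.conformity = nn.Linear(dim, 1)
        self.contribution = nn.Linear(dim, 1)

    def forward(self, paper_repr):
        parts = {
            "diffusion": self.diffusion(paper_repr),
            "conformity": self.conformity(paper_repr),
            "contribution": self.contribution(paper_repr),
        }
        parts["impact"] = sum(parts.values())   # total predicted impact
        return parts

head = DisentangledImpactHead()
out = head(torch.randn(4, 128))                 # a batch of 4 paper embeddings
print({k: v.shape for k, v in out.items()})
```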

Empowering Dual-Level Graph Self-Supervised Pretraining with Motif Discovery

1 code implementation19 Dec 2023 Pengwei Yan, Kaisong Song, Zhuoren Jiang, Yangyang Kang, Tianqianjin Lin, Changlong Sun, Xiaozhong Liu

While self-supervised graph pretraining techniques have shown promising results in various domains, their application still faces challenges of limited topology learning, dependence on human knowledge, and inadequate multi-level interactions.

Representation Learning Transfer Learning

Tree-Based Hard Attention with Self-Motivation for Large Language Models

no code implementations14 Feb 2024 Chenxi Lin, Jiayu Ren, Guoxiu He, Zhuoren Jiang, Haiyan Yu, Xiaomin Zhu

Moreover, TEAROOM comprises a self-motivation strategy for another LLM equipped with a trainable adapter and a linear layer.

Hard Attention
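The "trainable adapter plus linear layer on top of an LLM" pattern mentioned above can be sketched generically as below; the backbone here is a small stand-in module, and nothing in this snippet is TEAROOM's actual code.

```python
# Frozen backbone + trainable bottleneck adapter + linear head (generic pattern).
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))   # residual adapter update

backbone = nn.Sequential(nn.Embedding(1000, 256), nn.Linear(256, 256))  # stand-in for an LLM
for p in backbone.parameters():
    p.requires_grad = False                             # keep the backbone frozen

adapter = BottleneckAdapter(256)
classifier = nn.Linear(256, 2)                          # the trainable linear layer
tokens = torch.randint(0, 1000, (4, 12))
hidden = backbone(tokens).mean(dim=1)                   # pooled hidden states
logits = classifier(adapter(hidden))
print(logits.shape)                                     # torch.Size([4, 2])
```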

Evolving Knowledge Distillation with Large Language Models and Active Learning

no code implementations11 Mar 2024 Chengyuan Liu, Yangyang Kang, Fubang Zhao, Kun Kuang, Zhuoren Jiang, Changlong Sun, Fei Wu

In this paper, we propose EvoKD: Evolving Knowledge Distillation, which leverages the concept of active learning to interactively enhance the process of data generation using large language models, simultaneously improving the task capabilities of the small domain model (the student model).

Active Learning Knowledge Distillation +5
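A hedged sketch of the interactive loop this describes: probe the student model, feed its hardest cases to a teacher LLM to synthesize new data, and retrain. `ask_llm_to_generate` and `student_confidence` are hypothetical stubs, not EvoKD APIs.

```python
# Generic active-learning-style distillation loop; all functions are stand-ins.
import random

def ask_llm_to_generate(hard_examples, n=4):
    # Stand-in for prompting a teacher LLM with the student's failure cases.
    return [f"variant of: {x}" for x in hard_examples for _ in range(n // len(hard_examples) or 1)]

def student_confidence(example):
    return random.random()          # placeholder for the small domain model's confidence

train_pool = ["seed example 1", "seed example 2"]
for round_id in range(3):
    # 1) find the examples the student is least confident about
    hard = sorted(train_pool, key=student_confidence)[:2]
    # 2) let the teacher LLM generate targeted new data around those weaknesses
    train_pool += ask_llm_to_generate(hard)
    # 3) (re)train the student on the grown pool -- omitted here
    print(f"round {round_id}: pool size = {len(train_pool)}")
```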
