Search Results for author: Zelin Dai

Found 7 papers, 6 papers with code

SQUIRE: A Sequence-to-sequence Framework for Multi-hop Knowledge Graph Reasoning

1 code implementation • 17 Jan 2022 • Yushi Bai, Xin Lv, Juanzi Li, Lei Hou, Yincen Qu, Zelin Dai, Feiyu Xiong

Multi-hop knowledge graph (KG) reasoning has been widely studied in recent years to provide interpretable predictions on missing links with evidential paths.

Navigate • Reinforcement Learning (RL)
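The "evidential paths" mentioned above are multi-hop relation chains that connect a query entity to a predicted answer. A minimal sketch of what such paths look like over a toy graph follows; the graph, the entities, and the `evidential_paths` helper are invented for illustration and are not SQUIRE's sequence-to-sequence decoder.

```python
from collections import defaultdict

# Toy knowledge graph: head entity -> list of (relation, tail entity) edges.
# All entities and relations here are made up for illustration.
kg = defaultdict(list)
for h, r, t in [
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "paris"),
    ("alice", "friend_of", "bob"),
    ("bob", "lives_in", "paris"),
]:
    kg[h].append((r, t))

def evidential_paths(source, target, max_hops=3):
    """Enumerate relation paths from source to target within max_hops.

    Each returned path is a list of (relation, entity) steps and serves as
    human-readable evidence for a predicted missing link (source, ?, target).
    """
    paths, frontier = [], [(source, [])]
    for _ in range(max_hops):
        next_frontier = []
        for node, path in frontier:
            for rel, nxt in kg[node]:
                new_path = path + [(rel, nxt)]
                if nxt == target:
                    paths.append(new_path)
                else:
                    next_frontier.append((nxt, new_path))
        frontier = next_frontier
    return paths

# Evidence for a hypothetical missing link ("alice", lives_in?, "paris"):
print(evidential_paths("alice", "paris"))
# [[('works_at', 'acme'), ('located_in', 'paris')],
#  [('friend_of', 'bob'), ('lives_in', 'paris')]]
```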

Commonsense Knowledge Salience Evaluation with a Benchmark Dataset in E-commerce

1 code implementation • 22 May 2022 • Yincen Qu, Ningyu Zhang, Hui Chen, Zelin Dai, Zezhong Xu, Chengming Wang, Xiaoyu Wang, Qiang Chen, Huajun Chen

In addition to formulating the new task, we also release a new Benchmark dataset of Salience Evaluation in E-commerce (BSEE) and hope to promote related research on commonsense knowledge salience evaluation.

Multiple Generative Models Ensemble for Knowledge-Driven Proactive Human-Computer Dialogue Agent

4 code implementations • 8 Jul 2019 • Zelin Dai, Weitang Liu, Guanhua Zhan

Multiple sequence-to-sequence models were used to build an end-to-end, multi-turn proactive dialogue generation agent, with the aid of data augmentation techniques and variant encoder-decoder structure designs.

Data Augmentation • Dialogue Generation
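A minimal sketch of the ensemble idea described above: several seq2seq models each propose a reply conditioned on the dialogue history and background knowledge, and the reply the ensemble agrees on most strongly is returned. The `Seq2SeqStub` class, its interface, and the consensus rule are placeholders assumed for illustration, not the paper's actual models or ranking strategy.

```python
import random

class Seq2SeqStub:
    """Stand-in for a trained encoder-decoder model (hypothetical interface)."""

    def __init__(self, name, replies):
        self.name = name
        self.replies = replies

    def generate(self, history, knowledge):
        # A real model would decode a reply token by token; here we just sample.
        return random.choice(self.replies)

    def score(self, reply, history, knowledge):
        # A real model would return log P(reply | history, knowledge).
        return 1.0 if reply in self.replies else 0.0

def ensemble_reply(models, history, knowledge):
    """Pick the candidate reply that the whole ensemble rates highest."""
    candidates = [m.generate(history, knowledge) for m in models]
    def consensus(reply):
        return sum(m.score(reply, history, knowledge) for m in models) / len(models)
    return max(candidates, key=consensus)

# Usage with made-up replies: the answer shared by all models wins.
models = [
    Seq2SeqStub("m1", ["Sure, it stars Tom Hanks.", "It was released in 1994."]),
    Seq2SeqStub("m2", ["It was released in 1994.", "I am not sure."]),
    Seq2SeqStub("m3", ["It was released in 1994."]),
]
print(ensemble_reply(models, history=["When did it come out?"], knowledge={}))
```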

Is Multi-Hop Reasoning Really Explainable? Towards Benchmarking Reasoning Interpretability

1 code implementation • EMNLP 2021 • Xin Lv, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Yichi Zhang, Zelin Dai

However, we find in experiments that many paths given by these models are actually unreasonable, while little work has been done on evaluating their interpretability.

Benchmarking • Link Prediction

Interpretable and Low-Resource Entity Matching via Decoupling Feature Learning from Decision Making

1 code implementation • ACL 2021 • Zijun Yao, Chengjiang Li, Tiansi Dong, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Yichi Zhang, Zelin Dai

Using a set of comparison features and a limited amount of annotated data, KAT Induction learns an efficient decision tree that can be interpreted by generating entity matching rules whose structure is advocated by domain experts.

Attribute • Decision Making +2
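A minimal sketch of decision-tree entity matching over pairwise comparison features, assuming scikit-learn and made-up features and labels; it shows how such a tree can be read off as human-interpretable matching rules, but it is not the paper's KAT Induction procedure.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row compares one pair of records; label 1 means "same entity".
# Feature names and values are invented for illustration.
feature_names = ["name_similarity", "same_brand", "price_diff"]
X = [
    [0.95, 1, 0.02],
    [0.90, 1, 0.10],
    [0.40, 0, 0.50],
    [0.30, 1, 0.60],
    [0.85, 0, 0.05],
    [0.20, 0, 0.80],
]
y = [1, 1, 0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# The learned tree prints directly as entity-matching rules a domain
# expert can inspect and adjust.
print(export_text(tree, feature_names=feature_names))
```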

How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?

no code implementations • Findings (ACL) 2022 • Hailong Jin, Tiansi Dong, Lei Hou, Juanzi Li, Hui Chen, Zelin Dai, Yincen Qu

Cross-lingual Entity Typing (CLET) aims at improving the quality of entity type prediction by transferring semantic knowledge learned from rich-resourced languages to low-resourced languages.

Entity Typing • Transfer Learning +1
