Search Results for author: Yaxin Zhu

Found 6 papers, 2 papers with code

ICXML: An In-Context Learning Framework for Zero-Shot Extreme Multi-Label Classification

1 code implementation · 16 Nov 2023 · Yaxin Zhu, Hamed Zamani

This paper focuses on the task of Extreme Multi-Label Classification (XMC) whose goal is to predict multiple labels for each instance from an extremely large label space.

Extreme Multi-Label Classification · In-Context Learning

Deep Feature Fusion via Graph Convolutional Network for Intracranial Artery Labeling

no code implementations · 22 May 2022 · Yaxin Zhu, Peisheng Qian, Ziyuan Zhao, Zeng Zeng

Intracranial arteries are critical blood vessels that supply the brain with oxygenated blood.

Data Augmentation with Adversarial Training for Cross-Lingual NLI

no code implementations · ACL 2021 · Xin Dong, Yaxin Zhu, Zuohui Fu, Dongkuan Xu, Gerard de Melo

Due to recent pretrained multilingual representation models, it has become feasible to exploit labeled data from one language to train a cross-lingual model that can then be applied to multiple new languages.

Cross-Lingual Natural Language Inference · Data Augmentation

Faithfully Explainable Recommendation via Neural Logic Reasoning

1 code implementation · NAACL 2021 · Yaxin Zhu, Yikun Xian, Zuohui Fu, Gerard de Melo, Yongfeng Zhang

Knowledge graphs (KG) have become increasingly important to endow modern recommender systems with the ability to generate traceable reasoning paths to explain the recommendation process.

Decision Making · Explainable Recommendation · +3

COOKIE: A Dataset for Conversational Recommendation over Knowledge Graphs in E-commerce

no code implementations · 21 Aug 2020 · Zuohui Fu, Yikun Xian, Yaxin Zhu, Yongfeng Zhang, Gerard de Melo

In this work, we present a new dataset for conversational recommendation over knowledge graphs in e-commerce platforms called COOKIE.

Knowledge Graphs

Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification

no code implementations · 29 Jul 2020 · Xin Dong, Yaxin Zhu, Yupeng Zhang, Zuohui Fu, Dongkuan Xu, Sen Yang, Gerard de Melo

The resulting model then serves as a teacher to induce labels for unlabeled target language samples that can be used during further adversarial training, allowing us to gradually adapt our model to the target language.

General Classification · intent-classification · +4
