no code implementations • NLP4ConvAI (ACL) 2022 • JianGuo Zhang, Kazuma Hashimoto, Yao Wan, Zhiwei Liu, Ye Liu, Caiming Xiong, Philip Yu
Pre-trained Transformer-based models have been reported to be robust in intent classification.
no code implementations • 23 Apr 2024 • Chao Chen, Chenghua Guo, Rui Xu, Xiangwen Liao, Xi Zhang, Sihong Xie, Hui Xiong, Philip Yu
Graphical models, including Graph Neural Networks (GNNs) and Probabilistic Graphical Models (PGMs), have demonstrated their exceptional capabilities across numerous fields.
no code implementations • 29 Mar 2024 • Yucheng Jin, Yun Xiong, Juncheng Fang, Xixi Wu, Dongxiao He, Xing Jia, Bingchen Zhao, Philip Yu
Inter-class correlations are subsequently eliminated by the prototypical attention network, leading to distinctive representations for different classes.
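The snippet leaves the mechanism implicit; as a rough sketch (the names and the attention form below are assumptions, not the paper's architecture), one way a prototypical attention step can yield class-distinctive representations is to subtract from each embedding its attention-weighted mixture of class prototypes:

    import torch
    import torch.nn.functional as F

    def prototypes(emb: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
        # Mean embedding per class: (n_classes, dim).
        return torch.stack([emb[labels == c].mean(dim=0) for c in range(n_classes)])

    def prototypical_attention(query: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
        # Attend from each query to the class prototypes, then remove the
        # attention-weighted mixture, keeping the class-distinctive residual.
        scores = query @ protos.T / protos.shape[-1] ** 0.5      # (batch, n_classes)
        shared = F.softmax(scores, dim=-1) @ protos              # cross-class component
        return F.normalize(query - shared, dim=-1)

    emb = torch.randn(12, 64)
    labels = torch.tensor([0] * 4 + [1] * 4 + [2] * 4)
    out = prototypical_attention(emb, prototypes(emb, labels, 3))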
1 code implementation • 2 Jan 2024 • Li Sun, Zhenhao Huang, Zixi Wang, Feiyang Wang, Hao Peng, Philip Yu
In light of the issues above, we propose the problem of Motif-aware Riemannian Graph Representation Learning, seeking a numerically stable encoder to capture motif regularity in a diverse-curvature manifold without labels.
no code implementations • 22 Aug 2023 • Zheng Liu, Xiaohan Li, Philip Yu
The fairness of clinical data modeling, especially on Electronic Health Records (EHRs), is of utmost importance due to the complex latent structure of EHRs and their potential selection bias.
no code implementations • 16 Nov 2022 • Xiaohan Li, Zheng Liu, Luyi Ma, Kaushiki Nag, Stephen Guo, Philip Yu, Kannan Achan
Considering the influence of historical purchases on users' future interests, the user and item representations can be viewed as unobserved confounders in the causal diagram.
1 code implementation • 4 Nov 2022 • Yibo Wang, Congying Xia, Guan Wang, Philip Yu
To handle new entities in product titles and address the distinctive language style of product titles in the e-commerce domain, we propose a textual entailment model with continuous-prompt-tuning-based hypotheses and fusion embeddings for e-commerce entity typing.
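The continuous prompts and fusion embeddings are specific to the paper, but the underlying entailment-as-typing idea can be sketched with a discrete hypothesis template and an off-the-shelf NLI model (the model choice and template here are illustrative assumptions):

    from transformers import pipeline

    # Each candidate type is scored by how strongly the NLI model judges
    # "This product is a <type>." to be entailed by the product title.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    title = "Stainless Steel 12-Cup Programmable Coffee Maker"
    types = ["appliance", "cookware", "furniture", "electronics accessory"]

    result = classifier(title, candidate_labels=types,
                        hypothesis_template="This product is a {}.")
    print(result["labels"][0], result["scores"][0])   # best-scoring type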
no code implementations • 28 Oct 2022 • Zheng Liu, Xiaohan Li, Philip Yu
First, these methods usually entail a trade-off between the model's performance and fairness.
2 code implementations • EMNLP 2021 • JianGuo Zhang, Trung Bui, Seunghyun Yoon, Xiang Chen, Zhiwei Liu, Congying Xia, Quan Hung Tran, Walter Chang, Philip Yu
In this work, we focus on a more challenging few-shot intent detection scenario where many intents are fine-grained and semantically similar.
no code implementations • 3 May 2021 • Congying Xia, Caiming Xiong, Philip Yu
PSN consists of two subnetworks with the same structure but different weights: an action network and an object network.
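A minimal sketch of that two-branch layout (the class and helper names are hypothetical, and the real model encodes utterances rather than raw feature vectors):

    import torch
    import torch.nn as nn

    def make_encoder() -> nn.Module:
        # Same structure on every call; each call gets independently
        # initialized weights.
        return nn.Sequential(nn.Linear(32, 64), nn.ReLU())

    class PSN(nn.Module):
        def __init__(self, n_actions: int, n_objects: int):
            super().__init__()
            self.action_net = make_encoder()   # one branch predicts the action ...
            self.object_net = make_encoder()   # ... the other predicts the object
            self.action_head = nn.Linear(64, n_actions)
            self.object_head = nn.Linear(64, n_objects)

        def forward(self, x: torch.Tensor):
            return self.action_head(self.action_net(x)), self.object_head(self.object_net(x))

    action_logits, object_logits = PSN(5, 7)(torch.randn(4, 32))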
1 code implementation • NAACL 2021 • Congying Xia, Wenpeng Yin, Yihao Feng, Philip Yu
Two major challenges exist in this new task: (i) For the learning process, the system should incrementally learn new classes round by round without re-training on the examples of preceding classes; (ii) For the performance, the system should perform well on new classes without much loss on preceding classes.
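Challenge (i) can be made concrete with a nearest-prototype sketch (an assumption for illustration, not necessarily the paper's method): each round registers only the new classes' mean features, so examples from earlier rounds are never revisited:

    import numpy as np

    class IncrementalPrototypeClassifier:
        def __init__(self):
            self.protos = {}                       # class label -> mean feature

        def add_round(self, feats: np.ndarray, labels: np.ndarray) -> None:
            # Register the new round's classes without touching old data.
            for c in np.unique(labels):
                self.protos[int(c)] = feats[labels == c].mean(axis=0)

        def predict(self, feats: np.ndarray) -> np.ndarray:
            labels = np.array(list(self.protos))
            mat = np.stack([self.protos[int(c)] for c in labels])
            d = ((feats[:, None, :] - mat[None, :, :]) ** 2).sum(-1)
            return labels[d.argmin(axis=1)]

    clf = IncrementalPrototypeClassifier()
    clf.add_round(np.random.randn(10, 8), np.array([0] * 5 + [1] * 5))  # round 1
    clf.add_round(np.random.randn(5, 8), np.array([2] * 5))             # round 2
    print(clf.predict(np.random.randn(3, 8)))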
no code implementations • EACL 2021 • Ye Liu, Yao Wan, JianGuo Zhang, Wenting Zhao, Philip Yu
In this paper, we claim that the syntactic and semantic structure of natural language is critical for non-autoregressive machine translation and can further improve performance.
no code implementations • Joint Conference on Lexical and Computational Semantics 2020 • Shaika Chowdhury, Philip Yu, Yuan Luo
Domain knowledge is important for understanding both the lexical and relational associations of words in natural language text, especially for domain-specific tasks like Natural Language Inference (NLI) in the medical domain, where, due to the lack of a large annotated dataset, such knowledge cannot be implicitly learned during training.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Congying Xia, Caiming Xiong, Philip Yu, Richard Socher
In this paper, we focus on generating training examples for few-shot intents in the realistic imbalanced scenario.
no code implementations • 30 Jul 2020 • He Huang, Yuanwei Chen, Wei Tang, Wenhao Zheng, Qing-Guo Chen, Yao Hu, Philip Yu
On the other hand, there is a large semantic gap between seen and unseen classes in existing multi-label classification datasets.
no code implementations • 4 Apr 2020 • Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, Philip Yu
In this paper, we formulate a more realistic and difficult problem setup for the intent detection task in natural language understanding, namely Generalized Few-Shot Intent Detection (GFSID).
no code implementations • COLING 2020 • Tao Zhang, Congying Xia, Chun-Ta Lu, Philip Yu
Named entity typing (NET) is the classification task of assigning semantic types to an entity mention in its context.
no code implementations • 24 Mar 2020 • Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, Kui Ren
To this end, machine unlearning has become a popular research topic, which allows users to eliminate the memorization of their private data from a trained machine learning model. In this paper, we propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method.
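The snippet does not define the metric, so the following is a purely hypothetical proxy rather than the paper's formula: it uses a membership-inference attack as the probe and reports the fraction of deleted records that flip from "member" to "non-member" after unlearning:

    def forgetting_rate(member_before: list, member_after: list) -> float:
        # Hypothetical proxy, not the paper's definition: fraction of deleted
        # samples flagged as members before unlearning but not afterwards.
        forgotten = sum(b and not a for b, a in zip(member_before, member_after))
        members = sum(member_before)
        return forgotten / members if members else 1.0

    print(forgetting_rate([True, True, True, False],
                          [False, True, False, False]))   # 2/3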
no code implementations • 27 Feb 2020 • Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, Caiming Xiong
An increasing amount of literature claims that deep neural networks are brittle when dealing with maliciously created adversarial examples.
1 code implementation • ACL 2019 • Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, Philip Yu
This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition where multiple entities or entity mentions in a sentence could be non-overlapping or totally nested.
Ranked #5 on Nested Mention Recognition on ACE 2005
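The "multi-grained" aspect can be illustrated with a generic span-enumeration view (not MGNER's actual pipeline): every span up to a maximum width is a candidate mention, so nested mentions survive alongside flat ones:

    def candidate_spans(tokens: list, max_width: int = 4):
        # Enumerate all spans of up to max_width tokens; nested spans such as
        # "Bank of China" and "China" are both kept as candidates.
        for i in range(len(tokens)):
            for j in range(i + 1, min(i + max_width, len(tokens)) + 1):
                yield (i, j, " ".join(tokens[i:j]))

    for span in candidate_spans("the Bank of China branch".split()):
        print(span)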