Mental disease detection (MDD) from social media suffers from poor generalizability and interpretability due to a lack of symptom modeling.
The Natural Language Inference Generation task is to generate a text hypothesis given a text premise and a logical relation between the two.
Depression is a prominent global health challenge, and early risk detection (ERD) of depression from online posts is a promising technique for combating the threat.
News recommendation for anonymous readers is a useful but challenging task for many news portals, where interactions between readers and articles are limited to a temporary login session.
Current text-image approaches (e.g., CLIP) typically adopt a dual-encoder architecture using pre-trained vision-language representations.
Previous dialogue summarization techniques adapt large language models pretrained on narrative text by injecting dialogue-specific features into the models.
Current metrics correlate poorly with human annotations on these datasets.
Due to the variety of possible user backgrounds and use cases, the information need can be quite diverse yet specific to a detailed topic; previous works, however, assume generating one CQ per context, and the results tend to be generic.
Recent work has indicated that many natural language understanding and reasoning datasets contain statistical cues that may be exploited by NLP models, whose capability may thus be grossly overestimated.
In this paper, we propose the task of relation classification of interlocutors based on their dialogues.
In this paper, we propose a dialogue extraction algorithm to transform a dialogue history into threads based on their dependency relations.
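To make the idea of turning a flat dialogue history into dependency-based threads concrete, here is a minimal sketch (not the paper's algorithm): each utterance carries a hypothetical `reply_to` link, and a thread is a root-to-leaf path through those links.

```python
def extract_threads(utterances):
    """utterances: list of dicts with 'id' and 'reply_to' (None for roots).
    Returns one thread (list of ids) per root-to-leaf reply chain."""
    children = {}
    for u in utterances:
        children.setdefault(u["reply_to"], []).append(u["id"])

    def walk(uid, path, threads):
        path = path + [uid]
        kids = children.get(uid, [])
        if not kids:                      # leaf: a complete thread
            threads.append(path)
        for k in kids:
            walk(k, path, threads)

    threads = []
    for root in children.get(None, []):   # utterances replying to nothing
        walk(root, [], threads)
    return threads

dialogue = [
    {"id": 1, "reply_to": None},
    {"id": 2, "reply_to": 1},
    {"id": 3, "reply_to": None},
    {"id": 4, "reply_to": 2},
]
print(extract_threads(dialogue))   # [[1, 2, 4], [3]]
```

In practice the reply-to links would themselves be predicted by a model rather than given, which is the harder part of the task.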
Matching question-answer relations between two turns in conversations is not only the first step in analyzing dialogue structures, but also valuable for training dialogue systems.
In this framework, models not only strive to classify query instances, but also seek underlying knowledge about the support instances to obtain better instance representations.
Text style transfer aims to paraphrase a sentence from one style into another while preserving its content.
In this paper, we propose a novel configurable framework to automatically generate distractive choices for open-domain cloze-style multiple-choice questions, which incorporates a general-purpose knowledge base to effectively create a small distractor candidate set, and a feature-rich learning-to-rank model to select distractors that are both plausible and reliable.
Besides the commonly used feature importance as a global interpretation, feature contribution is a local measure that reveals the relationship between a specific instance and the related output.
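The global-versus-local distinction can be sketched with a linear model (hypothetical weights, purely illustrative): global importance depends only on the learned weights, while a feature's contribution is signed and specific to one instance.

```python
import numpy as np

w = np.array([0.8, -0.5, 0.1])   # learned weights (assumed for illustration)
x = np.array([1.0, 2.0, -3.0])   # one specific input instance

global_importance = np.abs(w)    # same ranking for every instance
local_contribution = w * x       # signed, instance-specific

print(global_importance)         # [0.8 0.5 0.1]
print(local_contribution)        # [ 0.8 -1.  -0.3]
print(local_contribution.sum())  # the instance's score (bias omitted): -0.5
```

Here feature 2 is ranked second globally, yet for this particular instance it is the dominant (negative) contributor, which is exactly the kind of relationship a local measure reveals.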
However, user needs in e-commerce are still not well defined, and none of the existing ontologies has sufficient depth and breadth for universal user-needs understanding.
Previous work on cross-lingual sequence labeling tasks either requires parallel data or bridges the two languages through word-by-word matching.
This paper targets a novel but practical recommendation problem named exact-K recommendation.
This paper studies the problem of automatically extracting a short title from a manually written longer description of E-commerce products for display on mobile devices.
Slot filling is a critical task in natural language understanding (NLU) for dialog systems.
In neural machine translation, the attention mechanism facilitates the translation process by producing a soft alignment between the source sentence and the target sentence.
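A minimal dot-product sketch of such a soft alignment (illustrative only; real NMT systems learn these scores jointly with the encoder and decoder): each target position receives a probability distribution over source positions.

```python
import numpy as np

def soft_alignment(target_states, source_states):
    """Dot-product attention: rows are target positions, columns are
    source positions; each row is a probability distribution."""
    scores = target_states @ source_states.T          # (T, S) similarity scores
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

# Toy example: 2 target tokens attending over 3 source tokens, dim 4.
rng = np.random.default_rng(0)
src = rng.standard_normal((3, 4))
tgt = rng.standard_normal((2, 4))
align = soft_alignment(tgt, src)

print(align.shape)        # (2, 3): one distribution per target token
print(align.sum(axis=1))  # each row sums to 1.0
```

Because every source position gets nonzero weight, the alignment is "soft": the decoder blends information from all source words rather than committing to a single hard link.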
Recognizing metaphors and identifying their source-target mappings are important tasks, as metaphorical text poses a major challenge for machine reading.