Search Results for author: Longjun Cai

Found 12 papers, 2 papers with code

Seq2Path: Generating Sentiment Tuples as Paths of a Tree

no code implementations · Findings (ACL) 2022 · Yue Mao, Yi Shen, Jingchao Yang, Xiaoying Zhu, Longjun Cai

A tree can represent "1-to-n" relations (e.g., an aspect term may correspond to multiple opinion terms), and the paths of a tree are independent and unordered.

Aspect-Based Sentiment Analysis · Aspect-Based Sentiment Analysis (ABSA) · +3
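To make the tree-path idea in the excerpt above concrete, here is a minimal sketch (not the paper's Seq2Path model): the aspect term acts as an internal node, each opinion/polarity pair is a leaf, and every sentiment tuple corresponds to one independent, unordered root-to-leaf path. The example tuples are made up.

```python
# Minimal sketch of sentiment tuples as tree paths (illustrative only).
from collections import defaultdict

def tuples_to_tree(tuples):
    """Group (aspect, opinion, polarity) tuples under their aspect node."""
    tree = defaultdict(list)
    for aspect, opinion, polarity in tuples:
        tree[aspect].append((opinion, polarity))
    return tree

def tree_to_paths(tree):
    """Enumerate the independent root->leaf paths; their order is irrelevant."""
    return [(aspect, opinion, polarity)
            for aspect, leaves in tree.items()
            for opinion, polarity in leaves]

tuples = [("battery", "long-lasting", "positive"),
          ("battery", "heavy", "negative"),     # one aspect, two opinions
          ("screen", "bright", "positive")]
tree = tuples_to_tree(tuples)
assert sorted(tree_to_paths(tree)) == sorted(tuples)
```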

Constrained Sequence-to-Tree Generation for Hierarchical Text Classification

no code implementations · 2 Apr 2022 · Chao Yu, Yi Shen, Yue Mao, Longjun Cai

Hierarchical Text Classification (HTC) is a challenging task where a document can be assigned to multiple hierarchically structured categories within a taxonomy.

Multi-Label Classification · text-classification · +1
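As a rough illustration of what "hierarchically structured categories" means here (a generic sketch with a made-up taxonomy, not the paper's constrained sequence-to-tree decoder), assigning a category implies all of its ancestors in the taxonomy:

```python
# Toy taxonomy: child -> parent; the root has parent None.
parent = {"laptop": "electronics", "phone": "electronics",
          "electronics": "products", "products": None}

def with_ancestors(labels):
    """Expand a label set with all of its taxonomy ancestors."""
    expanded = set()
    for label in labels:
        while label is not None:
            expanded.add(label)
            label = parent[label]
    return expanded

print(sorted(with_ancestors({"laptop"})))  # ['electronics', 'laptop', 'products']
```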

Reducing the Covariate Shift by Mirror Samples in Cross Domain Alignment

1 code implementation · NeurIPS 2021 · Yin Zhao, Minquan Wang, Longjun Cai

Eliminating the covariate shift across domains is one of the common approaches to dealing with domain shift in visual unsupervised domain adaptation.

Unsupervised Domain Adaptation
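For readers unfamiliar with the term, the covariate shift between domains is often quantified by the discrepancy between feature distributions. The snippet below computes a simple linear-kernel MMD between source and target features as a generic illustration of such a statistic, not the paper's mirror-sample construction:

```python
# Generic covariate-shift measurement via linear-kernel MMD (illustrative only).
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Squared distance between the mean feature embeddings of the two domains."""
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, size=(256, 64))   # source-domain features
target = rng.normal(loc=0.5, size=(256, 64))   # shifted target-domain features
print(linear_mmd(source, target))  # larger value => larger covariate shift
```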

Pairwise Emotional Relationship Recognition in Drama Videos: Dataset and Benchmark

1 code implementation · 23 Sep 2021 · Xun Gao, Yin Zhao, Jie Zhang, Longjun Cai

We expect ERATO, as well as our proposed SMTA, to open up a new direction for the PERR task in video understanding and to further advance research on multi-modal fusion methodology.

Video Understanding

A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis

no code implementations · 4 Jan 2021 · Yue Mao, Yi Shen, Chao Yu, Longjun Cai

Some recent work has focused on solving a combination of two subtasks, e.g., extracting aspect terms together with their sentiment polarities, or extracting aspect and opinion terms as pairs.

Aspect-Based Sentiment Analysis · Aspect-oriented Opinion Extraction · +6
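As a toy illustration of "solving a combination of two subtasks" (hypothetical inputs and helpers, not the paper's dual-MRC framework), the outputs of an aspect extractor, an opinion extractor, and a polarity classifier can be assembled into (aspect, polarity) and (aspect, opinion) pairs:

```python
# Assemble combined ABSA subtask outputs from toy, hand-written predictions.
def combine_subtasks(aspect_terms, opinions_for, polarity_of):
    """Pair each aspect with its polarity and with its opinion terms."""
    aspect_polarity = [(a, polarity_of[a]) for a in aspect_terms]
    aspect_opinion = [(a, o) for a in aspect_terms for o in opinions_for.get(a, [])]
    return aspect_polarity, aspect_opinion

aspects = ["battery life"]
opinions = {"battery life": ["amazing"]}
polarities = {"battery life": "positive"}
print(combine_subtasks(aspects, opinions, polarities))
# ([('battery life', 'positive')], [('battery life', 'amazing')])
```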

Video Affective Impact Prediction with Multimodal Fusion and Long-Short Temporal Context

no code implementations · 25 Sep 2019 · Yin Zhao, Longjun Cai, Chaoping Tu, Jie Zhang, Wu Wei

Feature extraction, multi-modal fusion, and temporal context fusion are crucial stages for predicting valence and arousal values of emotional impact, but they have not yet been fully exploited.
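The three stages named in the excerpt can be sketched schematically as follows; the features, dimensions, and simple moving-average context below are placeholders for illustration, not the paper's architecture:

```python
# Schematic pipeline: precomputed per-modality features -> late fusion ->
# temporal context via a centered moving average over shots (illustrative only).
import numpy as np

def fuse_modalities(visual, audio):
    """Concatenate per-shot visual and audio features along the feature axis."""
    return np.concatenate([visual, audio], axis=-1)

def temporal_context(features, window=5):
    """Smooth each shot's features with a centered moving average."""
    padded = np.pad(features, ((window // 2, window // 2), (0, 0)), mode="edge")
    return np.stack([padded[i:i + window].mean(axis=0)
                     for i in range(features.shape[0])])

visual = np.random.rand(100, 128)   # 100 shots of visual features
audio = np.random.rand(100, 64)     # 100 shots of audio features
fused = temporal_context(fuse_modalities(visual, audio))
print(fused.shape)  # (100, 192)
```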

Video Affective Effects Prediction with Multi-modal Fusion and Shot-Long Temporal Context

no code implementations · 1 Sep 2019 · Jie Zhang, Yin Zhao, Longjun Cai, Chaoping Tu, Wu Wei

We select the most suitable modalities for the valence and arousal tasks respectively, and each modality's features are extracted with a modality-specific deep model pre-trained on a large generic dataset.

Predicting the Popularity of Online Videos via Deep Neural Networks

no code implementations · 29 Nov 2017 · Yue Mao, Yi Shen, Gang Qin, Longjun Cai

Predicting the popularity of online videos is important for video streaming content providers.

Multi-Task Learning · Relation Network
