3 code implementations • ACL 2022 • Huisheng Mao, Ziqi Yuan, Hua Xu, Wenmeng Yu, Yihe Liu, Kai Gao
The platform features a fully modular video sentiment analysis framework consisting of data management, feature extraction, model training, and result analysis modules.
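As a loose illustration of such a modular design, here is a minimal sketch of a four-stage pipeline; the module names and payload keys are hypothetical assumptions, not the platform's actual API.

```python
# A minimal sketch of a modular pipeline in the spirit of the framework
# described above; module names and interfaces here are hypothetical.
from typing import Any, Callable, Dict, List

Payload = Dict[str, Any]
Module = Callable[[Payload], Payload]

def data_management(p: Payload) -> Payload:
    p["clips"] = ["clip_001.mp4"]           # locate and validate raw inputs
    return p

def feature_extraction(p: Payload) -> Payload:
    # stand-in for real text/audio/vision feature extractors
    p["features"] = {c: {"text": [0.1], "audio": [0.2], "vision": [0.3]}
                     for c in p["clips"]}
    return p

def model_training(p: Payload) -> Payload:
    p["model"] = "trained-model-stub"       # fit a model on the features
    return p

def result_analysis(p: Payload) -> Payload:
    p["report"] = {"accuracy": None}        # aggregate metrics and reports
    return p

def run_pipeline(p: Payload, modules: List[Module]) -> Payload:
    for m in modules:                       # each module reads/extends the payload
        p = m(p)
    return p

result = run_pipeline({}, [data_management, feature_extraction,
                           model_training, result_analysis])
```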
2 code implementations • 19 Mar 2022 • Rui Wang, Kai Gao, Jingjin Yu, Kostas Bekris
Object rearrangement is important for many applications but remains challenging, especially in confined spaces such as shelves, where objects cannot be accessed from above and block access to one another.
1 code implementation • Findings (ACL) 2022 • Kang Zhao, Hua Xu, Jiangong Yang, Kai Gao
Specifically, supervised contrastive learning based on a memory bank is first used to train the model on each new task so that it can effectively learn relation representations.
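Below is a minimal PyTorch sketch of a supervised contrastive loss whose positives and negatives are drawn from a memory bank of stored representations; the function name, temperature, and tensor layout are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def supcon_with_memory_bank(anchors, anchor_labels, bank, bank_labels, tau=0.1):
    """Supervised contrastive loss where positives and negatives come from a
    memory bank of stored relation representations (a generic sketch)."""
    a = F.normalize(anchors, dim=1)         # (B, d) anchor embeddings
    b = F.normalize(bank, dim=1)            # (M, d) memory-bank embeddings
    logits = a @ b.t() / tau                # (B, M) scaled cosine similarities
    # bank entries sharing the anchor's label are its positives
    pos_mask = (anchor_labels.unsqueeze(1) == bank_labels.unsqueeze(0)).float()
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability over each anchor's positives in the bank;
    # anchors with no positives contribute zero loss
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()
```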
2 code implementations • ACL 2021 • Hanlei Zhang, Xiaoteng Li, Hua Xu, Panpan Zhang, Kang Zhao, Kai Gao
It is composed of two main modules: open intent detection and open intent discovery.
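The split might look roughly like the following sketch, where a detector flags utterances that no known intent explains confidently and a clustering step groups those utterances into candidate new intents; the threshold, cluster count, and helper names are hypothetical, not the toolkit's actual defaults.

```python
# Hypothetical two-stage sketch: detection of open-intent utterances,
# then discovery of new intents by clustering their embeddings.
import numpy as np
from sklearn.cluster import KMeans

def detect_open(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    # an utterance is "open" if no known intent is confident enough
    return probs.max(axis=1) < threshold

def discover_intents(embeddings: np.ndarray, n_new: int = 3) -> np.ndarray:
    # group open-intent utterances into candidate new intents
    return KMeans(n_clusters=n_new, n_init=10).fit_predict(embeddings)

probs = np.random.rand(8, 5)
probs /= probs.sum(axis=1, keepdims=True)   # mock softmax over 5 known intents
emb = np.random.randn(8, 32)                # mock utterance embeddings
open_mask = detect_open(probs)
if open_mask.any():
    new_labels = discover_intents(emb[open_mask])
```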
1 code implementation • 8 May 2021 • Kang Zhao, Hua Xu, Yue Cheng, Xiaoteng Li, Kai Gao
Joint entity and relation extraction, which aims to extract all relational triples from unstructured text, is an essential task in information extraction (a minimal illustration of this output format appears below).
Ranked #2 on Relation Extraction on SemEval-2010 Task 8
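As referenced above, here is a minimal illustration of the task's output format, with the extractor itself reduced to a stub; a real model would score entities and relation types jointly rather than pattern-match.

```python
# Toy illustration of the task's output: every relational triple
# (subject, relation, object) found in a sentence. The extractor is a
# stub standing in for a learned joint entity/relation model.
from typing import List, Tuple

Triple = Tuple[str, str, str]

def extract_triples(sentence: str) -> List[Triple]:
    # placeholder logic; a real system handles overlapping triples too
    if "founded" in sentence:
        return [("Steve Jobs", "founder_of", "Apple")]
    return []

print(extract_triples("Steve Jobs founded Apple in 1976."))
# [('Steve Jobs', 'founder_of', 'Apple')]
```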
1 code implementation • 21 Apr 2021 • Jie Lian, Jingyu Liu, Shu Zhang, Kai Gao, Xiaoqing Liu, Dingwen Zhang, Yizhou Yu
Leveraging the constant structure and disease relations extracted from domain knowledge, we propose a structure-aware relation network (SAR-Net) that extends Mask R-CNN.
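As a hedged sketch of the general pattern (a stock detector plus an extra relation-aware head over region features), the snippet below attaches a generic attention-style relation module to torchvision's Mask R-CNN; the head, its wiring, and the class count are placeholders, not SAR-Net's actual design.

```python
# Generic sketch: a relation head that mixes per-ROI features using a
# learned pairwise affinity, alongside a stock Mask R-CNN detector.
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

class RelationHead(nn.Module):
    """Mixes ROI features via a learned pairwise affinity (placeholder)."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.affinity = nn.Linear(dim, dim, bias=False)

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_rois, dim); attention encodes inter-region relations
        attn = torch.softmax(roi_feats @ self.affinity(roi_feats).t(), dim=-1)
        return roi_feats + attn @ roi_feats

detector = maskrcnn_resnet50_fpn(weights=None, num_classes=15)  # count illustrative
relation_head = RelationHead()  # would be wired into the ROI-head forward pass
```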
2 code implementations • 28 Jan 2021 • Rui Wang, Kai Gao, Daniel Nakhimovich, Jingjin Yu, Kostas E. Bekris
DFSDP (depth-first-search dynamic programming) is extended to solve single-buffer, non-monotone instances given a choice of an object and a buffer.
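A much-simplified sketch of the underlying search idea follows: a depth-first search over which object to move next, where an object may go straight to its goal only if that spot is free, and a single buffer location lets one chosen object step aside to break a deadlock. This is an illustrative toy, not the paper's implementation.

```python
def dfs_monotone(positions, goals, done, plan):
    """Move objects directly start -> goal in some order (monotone case)."""
    if len(done) == len(goals):
        return plan
    for obj, goal in goals.items():
        if obj in done:
            continue
        others = {p for o, p in positions.items() if o != obj}
        if goal in others:
            continue                        # goal spot currently blocked
        old = positions[obj]
        positions[obj] = goal               # move obj straight to its goal
        result = dfs_monotone(positions, goals, done | {obj},
                              plan + [(obj, goal)])
        if result is not None:
            return result
        positions[obj] = old                # backtrack
    return None

def solve_with_one_buffer(positions, goals, buffer_spot):
    """Try monotone first; otherwise park one chosen object at a buffer."""
    plan = dfs_monotone(dict(positions), goals, set(), [])
    if plan is not None:
        return plan
    for obj in goals:                       # choose an object to buffer
        moved = dict(positions)
        moved[obj] = buffer_spot
        plan = dfs_monotone(moved, goals, set(), [(obj, buffer_spot)])
        if plan is not None:
            return plan
    return None

# two objects swapping spots is non-monotone: a buffer is required
print(solve_with_one_buffer({"A": 0, "B": 1}, {"A": 1, "B": 0}, buffer_spot=2))
# [('A', 2), ('B', 0), ('A', 1)]
```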
1 code implementation • ACM Multimedia 2020 • Kaicheng Yang, Hua Xu, Kai Gao
In this paper, we propose the Cross-Modal BERT (CM-BERT), which relies on the interaction of the text and audio modalities to fine-tune the pre-trained BERT model (a rough sketch of the idea appears below).
Ranked #1 on Multimodal Sentiment Analysis on MOSI
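As noted above, a rough sketch of the cross-modal attention idea is given below: attention weights derived from the text stream are modulated by weights derived from word-aligned audio features before re-weighting token representations. The dimensions and the fusion rule are illustrative assumptions, not CM-BERT's exact design.

```python
# Illustrative cross-modal attention: audio-derived weights modulate
# text-derived weights before re-weighting the token representations.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, text_dim=768, audio_dim=33):  # dims illustrative
        super().__init__()
        self.text_score = nn.Linear(text_dim, 1)
        self.audio_score = nn.Linear(audio_dim, 1)

    def forward(self, text_feats, audio_feats):
        # text_feats: (B, T, text_dim); audio_feats: (B, T, audio_dim), word-aligned
        wt = torch.softmax(self.text_score(text_feats).squeeze(-1), dim=-1)
        wa = torch.softmax(self.audio_score(audio_feats).squeeze(-1), dim=-1)
        weights = torch.softmax(wt * wa, dim=-1)       # fused attention weights
        return weights.unsqueeze(-1) * text_feats      # re-weighted tokens

fused = CrossModalAttention()(torch.randn(2, 20, 768), torch.randn(2, 20, 33))
```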