Search Results for author: Xiaoguang Mao

Found 6 papers, 2 papers with code

PTA: Enhancing Multimodal Sentiment Analysis through Pipelined Prediction and Translation-based Alignment

no code implementations · 23 May 2024 · Shezheng Song, Shasha Li, Shan Zhao, Chengyu Wang, Xiaopeng Li, Jie Yu, Qian Wan, Jun Ma, Tianwei Yan, Wentao Ma, Xiaoguang Mao

In contrast, a pipeline framework first identifies aspects through MATE (Multimodal Aspect Term Extraction) and then aligns these aspects with image patches for sentiment classification via MASC (Multimodal Aspect-oriented Sentiment Classification).
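The two-stage pipeline described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual model: `mate_extract`, `align_patch`, and `masc_classify` are hypothetical placeholders standing in for trained components.

```python
# Hypothetical sketch of a pipelined MABSA flow (names are illustrative):
# MATE extracts aspect terms from the text, then MASC classifies sentiment
# for each aspect using an aligned image patch.

def mate_extract(tokens):
    """Toy MATE: treat capitalized tokens as aspect terms (placeholder heuristic)."""
    return [t for t in tokens if t[0].isupper()]

def align_patch(aspect, image_patches):
    """Toy alignment: pick the patch whose tag matches the aspect
    (falls back to the first patch if none match)."""
    return max(image_patches, key=lambda p: p["tag"] == aspect.lower())

def masc_classify(aspect, patch):
    """Toy MASC: read sentiment off the aligned patch (stand-in for a classifier)."""
    return patch.get("sentiment", "neutral")

def pipeline(tokens, image_patches):
    """Stage 1 (MATE) feeds its aspects into stage 2 (MASC)."""
    results = {}
    for aspect in mate_extract(tokens):
        patch = align_patch(aspect, image_patches)
        results[aspect] = masc_classify(aspect, patch)
    return results

patches = [{"tag": "pizza", "sentiment": "positive"},
           {"tag": "decor", "sentiment": "negative"}]
print(pipeline(["the", "Pizza", "was", "great"], patches))  # → {'Pizza': 'positive'}
```

The point of the pipeline structure is that the second stage only ever sees aspects the first stage committed to, which is what distinguishes it from joint prediction.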

Aspect-Based Sentiment Analysis · Multimodal Sentiment Analysis · +2

DWE+: Dual-Way Matching Enhanced Framework for Multimodal Entity Linking

1 code implementation · 7 Apr 2024 · Shezheng Song, Shasha Li, Shan Zhao, Xiaopeng Li, Chengyu Wang, Jie Yu, Jun Ma, Tianwei Yan, Bin Ji, Xiaoguang Mao

Multimodal entity linking (MEL) aims to utilize multimodal information (usually textual and visual) to link ambiguous mentions to unambiguous entities in a knowledge base.
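The MEL setting can be illustrated with a minimal sketch: score each knowledge-base candidate by combining a textual and a visual similarity to the mention, then link to the best-scoring entity. This is a toy illustration under assumed data structures, not the DWE+ model; `overlap`, `link`, and the equal 0.5 weights are all hypothetical.

```python
# Toy multimodal entity linking (illustrative only): an ambiguous mention
# carries text tokens and a coarse visual tag; each KB entity is scored by
# a weighted sum of textual and visual similarity.

def overlap(a, b):
    """Jaccard overlap between two token sets, a stand-in for text similarity."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def link(mention, candidates):
    """Return the name of the highest-scoring candidate entity."""
    def score(entity):
        text_sim = overlap(mention["text"], entity["text"])
        visual_sim = 1.0 if mention["visual"] == entity["visual"] else 0.0
        return 0.5 * text_sim + 0.5 * visual_sim
    return max(candidates, key=score)["name"]

kb = [{"name": "Apple Inc.", "text": ["technology", "company"], "visual": "logo"},
      {"name": "apple (fruit)", "text": ["fruit", "tree"], "visual": "fruit"}]
mention = {"text": ["company", "iphone"], "visual": "logo"}
print(link(mention, kb))  # → Apple Inc.
```

The visual signal is what disambiguates mentions whose text alone is ambiguous, which is the motivation the snippet above gives for using multimodal information.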

Contrastive Learning · Entity Linking

A Dual-way Enhanced Framework from Text Matching Point of View for Multimodal Entity Linking

1 code implementation · 19 Dec 2023 · Shezheng Song, Shan Zhao, Chengyu Wang, Tianwei Yan, Shasha Li, Xiaoguang Mao, Meng Wang

Multimodal Entity Linking (MEL) aims to link ambiguous mentions, using multimodal information, to entities in a Knowledge Graph (KG) such as Wikipedia, and plays a key role in many applications.

Entity Linking · Text Matching

How to Bridge the Gap between Modalities: A Comprehensive Survey on Multimodal Large Language Model

no code implementations · 10 Nov 2023 · Shezheng Song, Xiaopeng Li, Shasha Li, Shan Zhao, Jie Yu, Jun Ma, Xiaoguang Mao, Weimin Zhang

The study groups existing modal alignment methods in MLLMs into four categories: (1) Multimodal Converters, which transform data into representations LLMs can understand; (2) Multimodal Perceivers, which improve how LLMs perceive different types of data; (3) Tools Assistance, which converts data into one common format, usually text; and (4) Data-Driven methods, which teach LLMs to understand specific types of data in a dataset.

Language Modelling · Large Language Model · +1

StyleFlow: Disentangle Latent Representations via Normalizing Flow for Unsupervised Text Style Transfer

no code implementations · 19 Dec 2022 · Kangchen Zhu, Zhiliang Tian, Ruifeng Luo, Xiaoguang Mao

While cycle construction improves a model's style transfer ability by rebuilding transferred sentences back into original-style sentences, it also introduces content loss in unsupervised text style transfer tasks.
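The cycle-construction idea can be sketched generically: transfer a sentence to the target style, transfer it back, and measure how far the reconstruction drifts from the original. This is an illustrative toy, not StyleFlow's actual training code; `transfer` is a hypothetical word-swap stand-in for a learned model.

```python
# Generic cycle-construction sketch (illustrative, not StyleFlow):
# a forward transfer followed by a backward transfer should reconstruct
# the original sentence; any token drift is the content loss the cycle
# objective tries to penalize.

def transfer(sentence, target_style):
    """Toy style transfer: flip sentiment words (stand-in for a real model)."""
    swap = {"good": "bad", "bad": "good"}
    return [swap.get(w, w) for w in sentence]

def cycle_loss(sentence):
    """Fraction of tokens that differ after a forward + backward transfer."""
    reconstructed = transfer(transfer(sentence, "negative"), "positive")
    diffs = sum(a != b for a, b in zip(sentence, reconstructed))
    return diffs / len(sentence)

print(cycle_loss(["the", "food", "was", "good"]))  # → 0.0 (perfect cycle)
```

With a real model the backward transfer is rarely exact, so minimizing this reconstruction gap pushes the model to preserve content while changing style.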

Data Augmentation · Decoder · +5

Attention Please: Consider Mockito when Evaluating Newly Proposed Automated Program Repair Techniques

no code implementations · 13 Dec 2018 · Shangwen Wang, Ming Wen, Xiaoguang Mao, Deheng Yang

Our findings show that: 1) Mockito bugs are no more complex to repair than bugs from other projects; 2) the bugs repaired by state-of-the-art tools share repair patterns with those required to fix Mockito bugs; however, 3) the state-of-the-art tools perform poorly on Mockito bugs (Nopol correctly fixes only one bug, while SimFix and CapGen cannot fix any Mockito bug even when all buggy locations are exposed).

Software Engineering
