Search Results for author: Xiao Feng

Found 7 papers, 0 papers with code

MMGA: Multimodal Learning with Graph Alignment

no code implementations18 Oct 2022 Xuan Yang, Quanjin Tao, Xiao Feng, Donghong Cai, Xiang Ren, Yang Yang

In this paper, we propose MMGA (Multimodal learning with Graph Alignment), a novel multimodal pre-training framework that incorporates information from the graph (social network), image, and text modalities on social media to enhance user representation learning.
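The core idea of aligning modalities in a shared space can be illustrated with a contrastive objective. The sketch below is a hypothetical toy version, not the paper's actual architecture or loss: the projection matrices, dimensions, and InfoNCE-style loss are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Toy embeddings for 4 users from three modalities (dims are arbitrary).
graph_emb = rng.normal(size=(4, 16))   # e.g. from a GNN over the social graph
image_emb = rng.normal(size=(4, 32))   # e.g. from an image encoder
text_emb  = rng.normal(size=(4, 24))   # e.g. from a text encoder

# Linear projections into a shared 8-d space (learned in a real framework;
# random here for illustration).
W_g, W_i, W_t = (rng.normal(size=(d, 8)) for d in (16, 32, 24))

zg = l2_normalize(graph_emb @ W_g)
zi = l2_normalize(image_emb @ W_i)
zt = l2_normalize(text_emb  @ W_t)

def alignment_loss(za, zb, tau=0.1):
    """InfoNCE-style loss: each user's embeddings in two modalities
    should match each other rather than other users'."""
    logits = za @ zb.T / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Align image and text embeddings to the graph embeddings; the loss is a
# positive scalar that shrinks as the modalities become aligned.
loss = alignment_loss(zg, zi) + alignment_loss(zg, zt)
```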

Representation Learning

CoMo: A novel co-moving 3D camera system

no code implementations26 Jan 2021 Andrea Cavagna, Xiao Feng, Stefania Melillo, Leonardo Parisi, Lorena Postiglione, Pablo Villegas

By rotating the cameras, we overcome the limitation of standard static systems, which restrict the duration of the collected data to the short interval in which the targets lie in the cameras' common field of view. At the same time, rotation changes the external parameters of the system over time, so they have to be calibrated frame by frame.
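Frame-by-frame recovery of a camera's orientation (part of the external parameters) can be sketched with the Kabsch algorithm: given reference points in the world frame and their coordinates in the rotating camera frame, the best-fit rotation is recovered per frame via an SVD. This is an illustrative toy, not CoMo's actual calibration procedure, and the simulated rotation is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known reference points rigidly attached to the scene (world frame).
world_pts = rng.normal(size=(6, 3))

def kabsch(P, Q):
    """Best-fit rotation R with Q ≈ P @ R.T (Kabsch algorithm via SVD)."""
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    return Vt.T @ D @ U.T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Simulate a camera rotating a little more each frame and re-estimate its
# orientation (the time-varying external parameters) frame by frame.
estimates = []
for angle in np.linspace(0.0, 0.3, 4):
    R_true = rot_z(angle)
    cam_pts = world_pts @ R_true.T        # points seen in the camera frame
    R_est = kabsch(world_pts, cam_pts)    # per-frame calibration
    assert np.allclose(R_est, R_true, atol=1e-8)
    estimates.append(R_est)
```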

Stereo camera system calibration: the need of two sets of parameters

no code implementations14 Jan 2021 Riccardo Beschi, Xiao Feng, Stefania Melillo, Leonardo Parisi, Lorena Postiglione

The reconstruction of a scene via a stereo-camera system is a two-step process: first, images from the different cameras are matched to identify the set of point-to-point correspondences, which are then reconstructed in the three-dimensional real world.
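The second step (triangulating a matched pair of image points back into 3D) can be sketched with linear DLT triangulation. The camera matrices and the 3D point below are made-up values for illustration; step one (matching) is assumed done and is simulated by projecting a known point into both views.

```python
import numpy as np

# Two toy 3x4 camera projection matrices: identical intrinsics, second
# camera offset along x by a baseline (hypothetical values).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point into a camera's image plane."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Step 1 (matching) simulated: project a known 3D point into both views,
# then reconstruct it from the correspondence.
X_true = np.array([0.2, -0.1, 3.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_rec = triangulate(P1, P2, x1, x2)
assert np.allclose(X_rec, X_true, atol=1e-6)
```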

3D Reconstruction, Vocal Bursts Valence Prediction

Correction of Faulty Background Knowledge based on Condition Aware and Revise Transformer for Question Answering

no code implementations30 Jun 2020 Xinyan Zhao, Xiao Feng, Haoming Zhong, Jun Yao, Huanhuan Chen

CAR-Transformer (1) revises each condition value based on the whole conversation and the original condition values, and (2) encodes the revised conditions and uses their embeddings to select an answer.
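The two-step structure (revise conditions, then select an answer from their embeddings) can be sketched with a toy similarity model. Everything below is a hypothetical stand-in: random vectors replace learned Transformer representations, and the vocabulary, candidates, and scoring are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy embedding table for a tiny vocabulary (stand-in for learned
# Transformer representations).
vocab = ["beijing", "shanghai", "monday", "friday", "flight", "train"]
emb = {w: rng.normal(size=8) for w in vocab}

def encode(words):
    """Mean-pool word embeddings into a single vector."""
    return np.mean([emb[w] for w in words], axis=0)

def revise_condition(candidates, conversation):
    """Step 1 sketch: replace a possibly faulty condition value with the
    candidate most compatible with the whole conversation."""
    ctx = encode(conversation)
    return max(candidates, key=lambda c: emb[c] @ ctx)

def select_answer(conditions, answers):
    """Step 2 sketch: embed the revised conditions and pick the answer
    whose embedding scores highest against them."""
    cond = encode(conditions)
    return max(answers, key=lambda a: encode(a) @ cond)

revised = revise_condition(["beijing", "shanghai"], ["flight", "monday"])
answer = select_answer([revised, "friday"], [["flight"], ["train"]])
```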

Question Answering
