Search Results for author: Ying Zeng

Found 11 papers, 4 papers with code

ARS-DETR: Aspect Ratio Sensitive Oriented Object Detection with Transformer

1 code implementation • 9 Mar 2023 • Ying Zeng, Xue Yang, Qingyun Li, Yushi Chen, Junchi Yan

Existing oriented object detection methods commonly use the AP$_{50}$ metric to measure model performance.

Object Detection +1

Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations

1 code implementation • 31 Oct 2022 • Sijie Mai, Ying Zeng, Haifeng Hu

To this end, we introduce the multimodal information bottleneck (MIB), aiming to learn a powerful and sufficient multimodal representation that is free of redundancy and to filter out noisy information in unimodal representations.

Multimodal Emotion Recognition Multimodal Sentiment Analysis
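The bottleneck objective described above can be illustrated with a minimal variational-IB-style sketch: fit the task while penalizing how much information the stochastic representation retains about the input. This is a generic sketch, not the authors' MIB implementation; the function names and the `beta` weight are hypothetical.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims and averaged
    # over the batch -- the compression term that filters out noisy detail.
    return np.mean(np.sum(0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var), axis=1))

def vib_loss(task_loss, mu, log_var, beta=1e-3):
    # Bottleneck objective (hypothetical form): task fit plus a small
    # penalty keeping the representation close to an uninformative prior.
    return task_loss + beta * kl_to_standard_normal(mu, log_var)
```

With `beta = 0` this reduces to the plain task loss; raising `beta` trades predictive fit for a more compressed (less redundant) representation.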

Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis

no code implementations • 4 Sep 2021 • Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu

Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning (that is why we call it hybrid contrastive learning), with which the model can fully explore cross-modal interactions, preserve inter-class relationships and reduce the modality gap.

Contrastive Learning Multimodal Sentiment Analysis
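The cross-modal contrastive term described above can be sketched with a standard InfoNCE-style loss, where row-aligned pairs (e.g. the text and audio embeddings of the same utterance) act as positives and all other pairs in the batch as negatives. This is an illustrative sketch, not the paper's exact hybrid formulation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # L2-normalize both views, then score all anchor/positive pairs;
    # the row-aligned entry is the positive, the rest are negatives.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the correct (diagonal) pairing.
    return -np.mean(np.diag(log_probs))
```

Minimizing this pulls matched representations from different modalities together and pushes mismatched ones apart, which is the mechanism for reducing the modality gap.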

Efficient Peer Effects Estimators with Group Effects

1 code implementation • 10 May 2021 • Guido M. Kuersteiner, Ingmar R. Prucha, Ying Zeng

We show that these moment conditions can be cast in terms of a linear random group effects model and lead to a class of GMM estimators that are generally identified as long as there is sufficient variation in group size.
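The linear GMM estimator underlying such moment conditions has a closed form. Below is a generic textbook sketch (not the authors' estimator for peer effects): with instruments Z, weight matrix W, and moment condition E[Z'(y − Xβ)] = 0, the estimate is β̂ = (X'ZWZ'X)⁻¹ X'ZWZ'y.

```python
import numpy as np

def linear_gmm(y, X, Z, W=None):
    # Linear GMM with instruments Z and weight matrix W:
    #   beta_hat = (X'Z W Z'X)^{-1} X'Z W Z'y
    if W is None:
        W = np.linalg.inv(Z.T @ Z)  # 2SLS-style weighting
    XZ = X.T @ Z
    A = XZ @ W @ XZ.T
    b = XZ @ W @ (Z.T @ y)
    return np.linalg.solve(A, b)
```

In the exactly identified case Z = X, this collapses to ordinary least squares; identification in the paper's setting additionally requires sufficient variation in group size.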

Taxonomy Completion via Triplet Matching Network

1 code implementation • 6 Jan 2021 • Jieyu Zhang, Xiangchen Song, Ying Zeng, Jiaze Chen, Jiaming Shen, Yuning Mao, Lei LI

Previous approaches focus on taxonomy expansion, i.e., finding an appropriate hypernym concept from the taxonomy for a new query concept.

Taxonomy Expansion

Analyzing Unaligned Multimodal Sequence via Graph Convolution and Graph Pooling Fusion

no code implementations • 27 Nov 2020 • Sijie Mai, Songlong Xing, Jiaxuan He, Ying Zeng, Haifeng Hu

Most existing works focus on aligned fusion of the three modalities, mostly at the word level, to accomplish this task, which is impractical in real-world scenarios.

Xiaomingbot: A Multilingual Robot News Reporter

no code implementations • ACL 2020 • Runxin Xu, Jun Cao, Mingxuan Wang, Jiaze Chen, Hao Zhou, Ying Zeng, Yu-Ping Wang, Li Chen, Xiang Yin, Xijin Zhang, Songcheng Jiang, Yuxuan Wang, Lei LI

This paper presents Xiaomingbot, an intelligent, multilingual, and multimodal software robot equipped with four integral capabilities: news generation, news translation, news reading, and avatar animation.

News Generation Translation +1

Importance-Aware Learning for Neural Headline Editing

no code implementations • 25 Nov 2019 • Qingyang Wu, Lei LI, Hao Zhou, Ying Zeng, Zhou Yu

We propose to automate this headline editing process through neural network models to provide more immediate writing support for these social media news writers.

Headline Generation

Constraint-free Natural Image Reconstruction from fMRI Signals Based on Convolutional Neural Network

no code implementations • 16 Jan 2018 • Chi Zhang, Kai Qiao, Linyuan Wang, Li Tong, Ying Zeng, Bin Yan

Without semantic prior information, we present a novel method to reconstruct natural images from fMRI signals of the human visual cortex, based on the computational model of convolutional neural networks (CNNs).

Image Reconstruction

Scale Up Event Extraction Learning via Automatic Training Data Generation

no code implementations • 11 Dec 2017 • Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, Dongyan Zhao

We show that this large volume of training data not only leads to a better event extractor, but also allows us to detect multiple typed events.

Event Extraction
