Search Results for author: Yanpeng Zhao

Found 13 papers, 9 papers with code

On the Transferability of Visually Grounded PCFGs

1 code implementation • 21 Oct 2023 • Yanpeng Zhao, Ivan Titov

We consider a zero-shot transfer learning setting where a model is trained on the source domain and is directly applied to target domains, without any further training.

Transfer Learning

DynaVol: Unsupervised Learning for Dynamic Scenes through Object-Centric Voxelization

no code implementations • 30 Apr 2023 • Yanpeng Zhao, Siyu Gao, Yunbo Wang, Xiaokang Yang

The voxel features and global features are complementary and are both leveraged by a compositional NeRF decoder for volume rendering.

Neural Rendering • Novel View Synthesis • +3

Connecting the Dots between Audio and Text without Parallel Data through Visual Knowledge Transfer

1 code implementation NAACL 2022 Yanpeng Zhao, Jack Hessel, Youngjae Yu, Ximing Lu, Rowan Zellers, Yejin Choi

In a difficult zero-shot setting with no paired audio-text data, our model demonstrates state-of-the-art zero-shot performance on the ESC50 and US8K audio classification tasks, and even surpasses the supervised state of the art for Clotho caption retrieval (with audio queries) by 2.2% R@1.

Audio Classification • Audio Tagging • +3
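The zero-shot setup described above can be sketched as classification in a shared audio-text embedding space: the predicted label is the one whose text embedding lies closest to the audio clip's embedding. This is a minimal illustration with random placeholder embeddings, not the paper's model; all names and dimensions are assumptions.

```python
import numpy as np

# Minimal sketch of zero-shot audio classification via a shared
# embedding space: score each candidate label by cosine similarity
# between its text embedding and the audio embedding.
# Embeddings here are random placeholders, not trained model outputs.
rng = np.random.default_rng(3)
d = 8                                     # illustrative embedding size
labels = ["dog_bark", "rain", "siren"]    # hypothetical label set
text_emb = rng.standard_normal((len(labels), d))
audio_emb = rng.standard_normal(d)

def normalize(x, axis=-1):
    """L2-normalize so a dot product equals cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

sims = normalize(text_emb) @ normalize(audio_emb)   # one score per label
pred = labels[int(np.argmax(sims))]                  # zero-shot prediction
```

No paired audio-text supervision is needed at test time: any label set expressible as text can be scored this way.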

PCFGs Can Do Better: Inducing Probabilistic Context-Free Grammars with Many Symbols

1 code implementation NAACL 2021 Songlin Yang, Yanpeng Zhao, Kewei Tu

In this work, we present a new parameterization form of PCFGs based on tensor decomposition, which has at most quadratic computational complexity in the symbol number and therefore allows us to use a much larger number of symbols.

Constituency Grammar Induction
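The quadratic-complexity claim above rests on a low-rank (CP-style) factorization of the binary-rule tensor: the inside-pass update can be computed from the factors directly, without ever materializing the cubic tensor over symbols. The sketch below checks that equivalence on random factors; the rank, sizes, and variable names are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

# Hedged sketch: rank-r CP factorization of the binary-rule tensor
# T[A, B, C] ~ sum_k U[A,k] * V[B,k] * W[C,k].
m, r = 30, 16                 # number of symbols, decomposition rank
rng = np.random.default_rng(0)
U, V, W = (rng.random((m, r)) for _ in range(3))

b = rng.random(m)             # inside scores of the left child span
c = rng.random(m)             # inside scores of the right child span

# Naive inside update: materialize the full m^3 tensor (cubic in m).
T = np.einsum('ak,bk,ck->abc', U, V, W)
naive = np.einsum('abc,b,c->a', T, b, c)

# Factored update: never build T; cost per span pair is O(m * r).
factored = U @ ((V.T @ b) * (W.T @ c))
```

Because `naive` and `factored` agree, a much larger symbol inventory becomes affordable: the symbol count enters the update only through matrix-vector products.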

Unsupervised Natural Language Parsing (Introductory Tutorial)

no code implementations EACL 2021 Kewei Tu, Yong Jiang, Wenjuan Han, Yanpeng Zhao

Unsupervised parsing learns a syntactic parser from training sentences without parse tree annotations.

An Empirical Study of Compound PCFGs

2 code implementations EACL (AdaptNLP) 2021 Yanpeng Zhao, Ivan Titov

Compound probabilistic context-free grammars (C-PCFGs) have recently established a new state of the art for unsupervised phrase-structure grammar induction.

Sentence

Visually Grounded Compound PCFGs

1 code implementation EMNLP 2020 Yanpeng Zhao, Ivan Titov

In this work, we study visually grounded grammar induction and learn a constituency parser from both unlabeled text and its visual groundings.

Constituency Grammar Induction • Language Modelling

Unsupervised Transfer of Semantic Role Models from Verbal to Nominal Domain

1 code implementation • 1 May 2020 • Yanpeng Zhao, Ivan Titov

Nominal roles are not labeled in the training data, and the learning objective instead pushes the labeler to assign roles predictive of the arguments.

Semantic Role Labeling • Sentence

Language Style Transfer from Sentences with Arbitrary Unknown Styles

no code implementations • 13 Aug 2018 • Yanpeng Zhao, Wei Bi, Deng Cai, Xiaojiang Liu, Kewei Tu, Shuming Shi

Then, by recombining the content with the target style, we decode a sentence aligned in the target domain.

Sentence • Sentence ReWriting • +1

Gaussian Mixture Latent Vector Grammars

1 code implementation ACL 2018 Yanpeng Zhao, Liwen Zhang, Kewei Tu

We introduce Latent Vector Grammars (LVeGs), a new framework that extends latent variable grammars such that each nonterminal symbol is associated with a continuous vector space representing the set of (infinitely many) subtypes of the nonterminal.

Constituency Parsing • Part-Of-Speech Tagging
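The core idea above can be illustrated concretely: instead of a scalar weight per discrete rule, an LVeG assigns each rule a weight *function* over the continuous subtype vectors of the symbols it involves, and the Gaussian-mixture variant (GM-LVeG) makes that function a mixture of Gaussians. The sketch below evaluates such a weight function; the dimensions, a shared isotropic variance, and all names are simplifying assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

# Illustrative GM-LVeG-style rule weight: a mixture of Gaussians over
# the concatenated subtype vectors [parent; left; right] of a binary
# rule.  Dimensions, mixture size, and the shared variance are assumed.
d = 4                                   # subtype vector dimension per symbol
rng = np.random.default_rng(1)

K = 3                                   # number of mixture components
mix = rng.random(K)
mix /= mix.sum()                        # mixture weights sum to 1
means = rng.random((K, 3 * d))          # one mean per component
var = 0.5                               # shared isotropic variance for brevity

def rule_weight(parent_vec, left_vec, right_vec):
    """Gaussian-mixture weight for a binary rule at given subtype vectors."""
    x = np.concatenate([parent_vec, left_vec, right_vec])
    diffs = x - means                                      # (K, 3d)
    log_norm = -0.5 * (3 * d) * np.log(2 * np.pi * var)    # Gaussian constant
    comps = np.exp(log_norm - 0.5 * (diffs ** 2).sum(axis=1) / var)
    return float(mix @ comps)

w = rule_weight(rng.random(d), rng.random(d), rng.random(d))
```

Because Gaussians are closed under the products and integrals used in inside-outside computations, this choice keeps dynamic programming over the continuous subtypes tractable.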

Structured Attentions for Visual Question Answering

1 code implementation ICCV 2017 Chen Zhu, Yanpeng Zhao, Shuaiyi Huang, Kewei Tu, Yi Ma

In this paper, we demonstrate the importance of encoding such relations by showing the limited effective receptive field of ResNet on two datasets, and propose to model the visual attention as a multivariate distribution over a grid-structured Conditional Random Field on image regions.

Visual Question Answering
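The grid-structured CRF view of attention described above is commonly made tractable with mean-field inference: unary relevance scores on the image grid are iteratively refined so that neighbouring regions encourage each other's attention. The sketch below shows that idea with 4-connected neighbours and random unaries; the grid size, pairwise weight, and update form are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Hedged sketch of structured attention on an H x W grid: mean-field
# updates combine per-region unary scores with messages from the four
# neighbours, then renormalize into an attention distribution.
H, W, iters, pair_w = 4, 4, 10, 0.5     # assumed sizes and pairwise weight
rng = np.random.default_rng(2)
unary = rng.random((H, W))              # per-region relevance logits

def softmax(x):
    """Softmax over the whole grid (attention sums to 1)."""
    e = np.exp(x - x.max())
    return e / e.sum()

q = softmax(unary)                      # initial, fully independent attention
for _ in range(iters):
    # Sum of neighbour beliefs (zero-padded at the grid border).
    nb = np.zeros_like(q)
    nb[1:, :] += q[:-1, :]
    nb[:-1, :] += q[1:, :]
    nb[:, 1:] += q[:, :-1]
    nb[:, :-1] += q[:, 1:]
    q = softmax(unary + pair_w * nb)    # mean-field update
```

Compared with independent per-region attention, the pairwise term lets attention spread over coherent object regions rather than isolated cells, which is the motivation for moving beyond a limited effective receptive field.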
