Search Results for author: Nicholas Jing Yuan

Found 22 papers, 10 papers with code

APGN: Adversarial and Parameter Generation Networks for Multi-Source Cross-Domain Dependency Parsing

no code implementations • Findings (EMNLP) 2021 • Ying Li, Meishan Zhang, Zhenghua Li, Min Zhang, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan

Thanks to the strong representation learning capability of deep learning, especially pre-training techniques with a language model loss, dependency parsing has achieved a great performance boost in the in-domain scenario, where abundant labeled training data is available for the target domain.

Dependency Parsing • Language Modelling +1

A Coarse-to-Fine Labeling Framework for Joint Word Segmentation, POS Tagging, and Constituent Parsing

1 code implementation • CoNLL (EMNLP) 2021 • Yang Hou, Houquan Zhou, Zhenghua Li, Yu Zhang, Min Zhang, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan

In the coarse labeling stage, the joint model outputs a bracketed tree, in which each node corresponds to one of four labels (i.e., phrase, subphrase, word, subword).
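To make the four-label scheme concrete, here is a minimal sketch assuming a simple tree data structure; only the four label names come from the abstract, while the node class, its bracketed rendering, and the toy example are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class CoarseLabel(Enum):
    # The four coarse node labels named in the abstract.
    PHRASE = "phrase"
    SUBPHRASE = "subphrase"
    WORD = "word"
    SUBWORD = "subword"


@dataclass
class Node:
    label: CoarseLabel
    children: List["Node"] = field(default_factory=list)
    text: str = ""  # characters covered by a leaf node

    def bracketed(self) -> str:
        # Render the node and its subtree in bracketed form.
        if not self.children:
            return f"({self.label.value} {self.text})"
        inner = " ".join(c.bracketed() for c in self.children)
        return f"({self.label.value} {inner})"


# Hypothetical toy tree: a phrase containing one two-character word whose
# internal structure is exposed via subword nodes.
tree = Node(
    CoarseLabel.PHRASE,
    [Node(CoarseLabel.WORD,
          [Node(CoarseLabel.SUBWORD, text="电"),
           Node(CoarseLabel.SUBWORD, text="脑")])],
)
print(tree.bracketed())  # (phrase (word (subword 电) (subword 脑)))
```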

Part-Of-Speech Tagging • POS +2

Learning Profitable NFT Image Diffusions via Multiple Visual-Policy Guided Reinforcement Learning

no code implementations • 20 Jun 2023 • Huiguo He, Tianfu Wang, Huan Yang, Jianlong Fu, Nicholas Jing Yuan, Jian Yin, Hongyang Chao, Qi Zhang

The proposed framework consists of a large language model (LLM), a diffusion-based image generator, and a series of visual rewards by design.

Attribute • Image Generation +3

Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation

no code implementations • 14 Jun 2023 • Likang Wu, Zhi Li, Hongke Zhao, Zhefeng Wang, Qi Liu, Baoxing Huai, Nicholas Jing Yuan, Enhong Chen

Zero-Shot Learning (ZSL), which aims to automatically recognize unseen objects, is a promising learning paradigm for machines to continuously understand new real-world knowledge.

Attribute • Knowledge Graphs +2

MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation

1 code implementation • CVPR 2023 • Ludan Ruan, Yiyang Ma, Huan Yang, Huiguo He, Bei Liu, Jianlong Fu, Nicholas Jing Yuan, Qin Jin, Baining Guo

To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion) with two coupled denoising autoencoders.
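As a rough illustration of what "two coupled denoising autoencoders" could look like, here is a hypothetical PyTorch sketch in which an audio stream and a video stream each predict their own noise while cross-attending to the other; the module names, dimensions, and coupling mechanism are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class CoupledDenoisers(nn.Module):
    """Toy sketch: each stream (audio, video) denoises its own latent while
    attending to the other stream's features (hypothetical coupling)."""

    def __init__(self, a_dim: int = 64, v_dim: int = 128, hid: int = 128):
        super().__init__()
        self.a_enc = nn.Linear(a_dim, hid)
        self.v_enc = nn.Linear(v_dim, hid)
        # Cross-modal attention in both directions.
        self.a_from_v = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.v_from_a = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.a_out = nn.Linear(hid, a_dim)  # predicts audio noise
        self.v_out = nn.Linear(hid, v_dim)  # predicts video noise

    def forward(self, a_noisy, v_noisy):
        ha, hv = self.a_enc(a_noisy), self.v_enc(v_noisy)
        ha2, _ = self.a_from_v(ha, hv, hv)  # audio attends to video
        hv2, _ = self.v_from_a(hv, ha, ha)  # video attends to audio
        return self.a_out(ha + ha2), self.v_out(hv + hv2)


# Usage: batch of 2, 10 audio frames and 4 video frames of flattened features.
model = CoupledDenoisers()
eps_a, eps_v = model(torch.randn(2, 10, 64), torch.randn(2, 4, 128))
print(eps_a.shape, eps_v.shape)
```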

Denoising • FAD +1

Multi-modal Siamese Network for Entity Alignment

1 code implementation • KDD 2022 • Liyi Chen, Zhi Li, Tong Xu, Han Wu, Zhefeng Wang, Nicholas Jing Yuan, Enhong Chen

To deal with that problem, in this paper we propose a novel Multi-modal Siamese Network for Entity Alignment (MSNEA) to align entities in different MMKGs, in which multi-modal knowledge can be comprehensively leveraged by exploiting inter-modal effects.

Ranked #7 on Multi-modal Entity Alignment on UMVM-oea-d-w-v1 (using extra training data)

Attribute • Contrastive Learning +3

Multi-Modal Knowledge Graph Construction and Application: A Survey

no code implementations • 11 Feb 2022 • Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, Nicholas Jing Yuan

In this survey of MMKGs constructed from texts and images, we first give definitions of MMKGs, followed by preliminaries on multi-modal tasks and techniques.

graph construction • Knowledge Graphs +1

Efficient Document-level Event Extraction via Pseudo-Trigger-aware Pruned Complete Graph

1 code implementation • 11 Dec 2021 • Tong Zhu, Xiaoye Qu, Wenliang Chen, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan, Min Zhang

Most previous studies of document-level event extraction focus on building argument chains in an autoregressive way, which achieves some success but is inefficient in both training and inference.

Document-level Event Extraction • Event Extraction

Denoising Distantly Supervised Named Entity Recognition via a Hypergeometric Probabilistic Model

1 code implementation • 17 Jun 2021 • Wenkai Zhang, Hongyu Lin, Xianpei Han, Le Sun, Huidan Liu, Zhicheng Wei, Nicholas Jing Yuan

Specifically, during neural network training, we naturally model the noise samples in each batch as following a hypergeometric distribution parameterized by the noise rate.
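A minimal sketch of the batch-level noise model described above, assuming SciPy's hypergeometric distribution; the corpus size, noise rate, and batch size are illustrative numbers, not values from the paper.

```python
from scipy.stats import hypergeom

# Hypothetical numbers: a distantly supervised corpus in which a fraction
# `noise_rate` of samples carry noisy labels, and each training batch draws
# `batch_size` samples without replacement.
corpus_size, noise_rate, batch_size = 50_000, 0.3, 32
num_noisy = round(noise_rate * corpus_size)

# Noisy samples per batch ~ Hypergeometric(corpus_size, num_noisy, batch_size).
dist = hypergeom(M=corpus_size, n=num_noisy, N=batch_size)
print("expected noisy samples per batch:", dist.mean())
for k in range(6):
    print(f"P(exactly {k} noisy samples) = {dist.pmf(k):.4f}")
```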

Denoising • named-entity-recognition +2

An In-depth Study on Internal Structure of Chinese Words

1 code implementation • ACL 2021 • Chen Gong, Saihao Huang, Houquan Zhou, Zhenghua Li, Min Zhang, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan

Several previous works on syntactic parsing propose to annotate shallow word-internal structures for better utilizing character-level information.

Sentence

Knowledge-based Review Generation by Coherence Enhanced Text Planning

no code implementations • 9 May 2021 • Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, Ji-Rong Wen

For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.
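A toy sketch of two-level self-attention in the spirit of the sentence above: node-level attention pools each subgraph into a vector, and subgraph-level attention then relates the subgraph vectors to each other. The specific layers, pooling, and dimensions are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn


class HierarchicalAttention(nn.Module):
    """Hypothetical two-level attention: nodes within a subgraph first,
    then subgraphs against each other."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.node_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.subgraph_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, node_feats):
        # node_feats: (num_subgraphs, nodes_per_subgraph, dim)
        h, _ = self.node_attn(node_feats, node_feats, node_feats)
        subgraph_vecs = h.mean(dim=1)        # pool nodes -> (S, dim)
        g = subgraph_vecs.unsqueeze(0)       # (1, S, dim) for attention
        out, _ = self.subgraph_attn(g, g, g)
        return out.squeeze(0)                # (S, dim)


feats = torch.randn(5, 8, 64)                    # 5 subgraphs, 8 nodes each
print(HierarchicalAttention()(feats).shape)      # torch.Size([5, 64])
```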

Informativeness • Knowledge Graphs +3

Read, Retrospect, Select: An MRC Framework to Short Text Entity Linking

no code implementations • 7 Jan 2021 • Yingjie Gu, Xiaoye Qu, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan, Xiaolin Gui

Entity linking (EL) for rapidly growing short texts (e.g., search queries and news titles) is critical to industrial applications.

Entity Linking • Machine Reading Comprehension +1

Object-Aware Multi-Branch Relation Networks for Spatio-Temporal Video Grounding

no code implementations • 16 Aug 2020 • Zhu Zhang, Zhou Zhao, Zhijie Lin, Baoxing Huai, Nicholas Jing Yuan

Spatio-temporal video grounding aims to retrieve the spatio-temporal tube of a queried object according to the given sentence.

Object • Relation +4

FastLR: Non-Autoregressive Lipreading Model with Integrate-and-Fire

no code implementations • 6 Aug 2020 • Jinglin Liu, Yi Ren, Zhou Zhao, Chen Zhang, Baoxing Huai, Nicholas Jing Yuan

NAR lipreading is a challenging task with many difficulties: 1) the discrepancy between source and target sequence lengths makes it difficult to estimate the length of the output sequence; 2) the conditionally independent behavior of NAR generation lacks correlation across time, which leads to a poor approximation of the target distribution; 3) the feature representation ability of the encoder can be weak due to the lack of an effective alignment mechanism; and 4) the removal of the AR language model exacerbates the inherent ambiguity problem of lipreading.

Language Modelling • Lipreading

A Rigorous Study on Named Entity Recognition: Can Fine-tuning Pretrained Model Lead to the Promised Land?

no code implementations • EMNLP 2020 • Hongyu Lin, Yaojie Lu, Jialong Tang, Xianpei Han, Le Sun, Zhicheng Wei, Nicholas Jing Yuan

Specifically, we erase name regularity, mention coverage, and context diversity from the benchmarks, respectively, in order to explore their impact on the generalization ability of models.
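As an illustration of one of the three ablations, here is a hypothetical sketch of erasing name regularity by replacing every entity mention with a random string, leaving only context for the model to rely on; the function name, span format, and example sentence are assumptions, not the paper's procedure.

```python
import random
import string


def erase_name_regularity(tokens, entity_spans, seed=0):
    """Replace each entity mention with a random lowercase string of the
    same length, so mention surface forms carry no name regularity."""
    rng = random.Random(seed)
    tokens = list(tokens)
    for start, end in entity_spans:  # [start, end) token spans
        for i in range(start, end):
            tokens[i] = "".join(
                rng.choices(string.ascii_lowercase, k=len(tokens[i]))
            )
    return tokens


sent = ["Barack", "Obama", "visited", "Berlin", "."]
print(erase_name_regularity(sent, [(0, 2), (3, 4)]))
```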

named-entity-recognition • Named Entity Recognition +1

Integrating Graph Contextualized Knowledge into Pre-trained Language Models

no code implementations • 30 Nov 2019 • Bin He, Di Zhou, Jinghui Xiao, Xin Jiang, Qun Liu, Nicholas Jing Yuan, Tong Xu

Complex node interactions are common in knowledge graphs, and these interactions also carry rich knowledge.

Knowledge Graphs • Representation Learning
